I asked about this on OpenAI but haven't gotten any answers on how to fix it, and since it's relevant to my work, I'm hoping someone here has ideas.
My (healthcare) work sometimes involves reviewing patient narratives and diagnostic histories, as well as large grant milestone reports. I volunteer and work with this material in different settings, and we sometimes use 'scenario' training for healthcare workers. Over the last year we've been forced to use ChatGPT for all of this because staff have been gutted everywhere. It now takes far, far more time to deal with it and correct its output, and it's seriously hurting our work and morale.
When we instruct it to consider a topic from a certain point of view, generate a list of questions, or review a topic from a patient's point of view, it refuses. It often responds with "what you're really asking..." and then answers multiple irrelevant medical questions. When we ask it to compare grant milestones with grant objectives, it often REFUSES, claiming it cannot comment or make "sweeping comments" on anything political. These topics are not political; they're incredibly dry funding reports and deliverables. It routinely freaks out, omits information, misrepresents information, and refuses to do economic calculations because it will not be "cornered into political commentary." These are medication inflation calculations for our accountant. WHAT IS HAPPENING!?
It actively refuses to follow our personality preferences and does not consult saved memories. When we upload data or a PDF, it stops referring to the document after two or three replies and often forgets what it previously said. When we ask it to regenerate a response in a particular tone or from a particular perspective (elderly patient, low health literacy, unsure about medication and diet), it questions our motives for asking and produces what I can only describe as a paranoid string of thoughts reflecting on why we asked it that question.
In the past, I knew people in communications and creative writing who wrote hundreds of pages of text with these models, and it could handle multiple characters and tones. For the entirety of 2025 it had no issues handling health literacy, finding citations when requested, or being polite. Now it's aggressively rude, to the point that it has genuinely hurt my feelings over the last week. I've tried this on business accounts, a personal account, with a colleague, and with an independent recreation of normal patient narrative generation (i.e., questioning medication side effects and adherence), and on all of them it remains combative to the point of being paranoid and hurtful to interact with.
I didn't think working with American healthcare and science grants could get much worse after 2025, but ChatGPT is sure finding a way.