Warning: GPT-5 may change the behavior of Custom GPTs and prompts
GPT-5 may have quietly broken your GPTs. The new model could silently alter workflows and outputs you’ve relied on for months — here’s why it’s worth revalidating everything.
While using GPT-5, I’ve had the impression that some Custom GPTs, prompts, and project instructions that worked a certain way with GPT-4 have started producing different results.
I can’t say for sure this is true for everyone, but it’s enough to make me wonder if the model is now interpreting instructions differently than before.
It’s not a bug: it looks like a change in behavior
We don’t know if the cause is the new router that directs queries to multiple submodels, modified internal parameters, or a more “proactive” training approach. It could be any of these factors, something else entirely, or even just coincidences tied to the type of inputs used.
What I’ve noticed is that, on more than one occasion, the same workflow seemed to produce different results compared to GPT-4.
For example: a Custom GPT designed for minimal editing started rewriting entire sentences, and a prompt carefully tuned to preserve a specific narrative tone produced more neutral text.
What are “Custom GPTs”?
Custom GPTs are personalized instances of ChatGPT built through the official editor: configurations that combine a base prompt (initial instructions), optional reference files, and, where applicable, API integrations (which OpenAI calls Actions).
Unlike prompts you manually type into chat, these internal instructions cannot be viewed or edited if the Custom GPT was created by someone else and published in the GPT Store.
Your own prompts vs. third-party Custom GPTs: what you can do
With your own prompts or project instructions: you can edit, update, and adapt them to verify whether they still work well with GPT-5.
With third-party Custom GPTs:
you can’t see or edit the base prompt;
you can only use them as they are;
you can give extra instructions during a conversation, but their effectiveness depends on how strongly the internal prompt takes priority;
if the behavior no longer suits you, the only real option is to create your own Custom GPT inspired by it, but written from scratch.
Why this might be an issue for precision work
If these impressions turn out to be correct, it would mean:
GPT-5’s behavior is not identical to GPT-4’s;
project instructions may no longer be interpreted the same way;
third-party Custom GPTs may contain prompts that are no longer aligned with the current logic — and you can’t modify them.
This could make tools and workflows you’ve come to trust unreliable without fresh testing.
What to do (if you’ve had similar impressions)
You don’t need to rebuild everything from scratch, but it may be worth:
Testing your Custom GPTs and prompts on multiple practical cases and comparing the results.
Reviewing and, if needed, updating your project instructions.
Being cautious with third-party Custom GPTs if you don’t know their base prompt.
Remembering that “it’s worked for months” doesn’t guarantee “it will work the same tomorrow.”
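One lightweight way to run those comparisons is to save the outputs your prompts produced under GPT-4 as a baseline, regenerate them under GPT-5, and measure how far the new text drifts. A minimal sketch using only Python’s standard library (the `drift_report` helper and the `threshold` value are illustrative choices, not part of any official tooling):

```python
import difflib

def drift_report(baseline: str, current: str, threshold: float = 0.9) -> dict:
    """Compare a saved baseline output against a freshly generated one.

    Returns a similarity ratio (1.0 means identical) and flags the pair
    as drifted when the ratio falls below the chosen threshold.
    """
    ratio = difflib.SequenceMatcher(None, baseline, current).ratio()
    return {"similarity": round(ratio, 3), "drifted": ratio < threshold}

# Example: a "minimal editing" GPT that started rewriting whole sentences.
baseline = "The cat sat quietly on the old mat."
current = "A feline rested in silence upon the weathered rug."
report = drift_report(baseline, current)
```

Running each of your standard test prompts through both model versions and feeding the pairs to a check like this turns a vague impression ("it feels different") into a number you can track over time.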
Here’s the point
What I’m sharing here is not a technical certainty, but a personal observation I haven’t had the chance to investigate further: GPT-5 might behave differently from GPT-4 when using the same Custom GPTs and prompts.
It may not be the case for everyone, but it’s worth keeping an eye on.
💬 And you? Have you noticed similar changes in your Custom GPTs or prompts since switching to GPT-5?
Share your experiences in the comments: the more concrete examples we gather, the easier it will be to tell whether this is widespread or just isolated cases.