Ask HN: Is GPT-5 a regression, or is it just me?
Context: I have been using GPT-5 since its release over a month ago, within my Plus subscription. Before this release, I relied heavily on o3 for most complex tasks, with 4o for simple questions. I use it for a mix of scientific literature web search (e.g. understanding health-related topics), occasional coding assistance, and help with *nix sysadmin tasks. Note that I have not used its API or integration with an IDE.
Based on a month of GPT-5 usage, this model feels primarily like a regression:
1. It's slow: thinking mode can take ages, and sometimes gets downright stuck. Its auto-assessment of whether or not it needs to think feels poorly tuned to most tasks and defaults too easily to deep reasoning mode.
2. Hallucinations are in overdrive: I would assess that in 7 out of 10 tasks, hallucinations clutter the responses and warrant corrections, careful monitoring, and steering back. It hallucinates list items from your prompt that weren't there, software package functionalities/capabilities, CLI parameters, etc. Even thorough prompting with explicit linking to sources, e.g. within deep research, frequently goes off the rails.
3. Not self-critical: even in thinking mode, it frequently spews out incorrect answers that a blunt "this is not correct, check your answer" can directly fix.
Note: I am not a super advanced prompt engineer, and the above assessment is mainly relative to the previous generation of models. I would expect that as model capabilities progress, the need for users to apply careful prompt engineering goes down, not up.
I am very curious to hear your experiences.
I noticed the significant difference in GPT-5. For someone who was mostly or only using GPT-4 before, it might be a culture-shock type of situation.
I actively use Claude, Gemini, Perplexity, and a whole gamut of local LLMs. The personalities of the models differ, so when GPT-5 came along, it wasn't really a surprise to me.
GPT-5 was an upgrade for investors. Its primary feature is a router that decides between a stronger model and a weaker one for a given query. The goal is to reduce operating costs without regard for the user experience, while marketing it as "new and improved".
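Roughly the idea, as a toy Python sketch (the heuristic, threshold, and model names here are all made up; the real router is presumably a learned classifier, not a keyword check):

    # Hypothetical illustration of query routing between a cheap and an
    # expensive model. None of these names or heuristics are OpenAI's.

    def estimate_difficulty(query: str) -> float:
        """Toy stand-in for a learned router: score how 'hard' a query looks."""
        signals = ["prove", "debug", "derive", "step by step", "why"]
        score = sum(s in query.lower() for s in signals) / len(signals)
        # Longer prompts nudge the score up a little.
        return min(1.0, score + len(query) / 2000)

    def route(query: str, threshold: float = 0.3) -> str:
        # Below the threshold, serve the cheap model; above it, the strong one.
        if estimate_difficulty(query) >= threshold:
            return "strong-reasoning-model"
        return "cheap-fast-model"

    print(route("What's the capital of France?"))                 # cheap-fast-model
    print(route("Derive the gradient of softmax step by step."))  # strong-reasoning-model

The complaint upthread about poorly tuned auto-thinking maps onto the threshold: set it too low and every query burns reasoning compute; too high and hard queries get the weak model.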
"Ahhh, you're right, now it's clear, that explains it....."
Yeah, you're not alone. I've even been getting responses with contradictions within the same sentence. ("X won't work therefore you should use X")
You are not alone.
Another pet peeve: when asked to provide several possible solutions, it sometimes generates two that are identical but with different explanations.
Ah yes, I've had similar experiences actually. There's also the variant where I ask it to provide an alternate solution/answer to the one it gave, and it then proceeds to basically regurgitate its previous answer with slight stylistic modifications (i.e. maintaining content parity).
Hallucinations are definitely up, at least 5x compared with GPT-4 in my personal experience.
It's just you