OpenAI quietly released GPT-5.2.
According to the stated data, the model outperforms humans on 71% of difficult tasks; GPT-5.1 scored about 39%. The release also claims stronger coding capabilities, including building frontends, a several-fold increase in context length, knowledge current through August 2025, and 30% fewer hallucinations than GPT-5.1.
OpenAI released GPT-5.2 with little fanfare. The release notes include several figures that quickly capture the essence of the changes. According to the stated data, GPT-5.2 outperforms humans on 71% of difficult tasks, compared with about 39% for the previous version. The gap is large, and it matters in situations where the model must hold a chain of logic, keep the conditions straight, and avoid falling back on guessing. If the stated figures are to be believed, GPT-5.2 "drifts" less in such cases and more often carries its reasoning to a sensible conclusion.
The second notable change is coding. The model has become stronger and can reportedly assemble a polished frontend for websites and apps. Frontend work is a practical area for quality checks because it demands cohesion: structure, component logic, and interface neatness all have to hold together. If the model succeeds not in isolated pieces but assembles a coherent whole, it starts to look less like a fragment generator and more like a working tool for prototypes and rapid iteration.
The third change is context. The model is better at holding the gist of a dialogue and doesn't lose the thread even after many messages. This is the kind of improvement that is barely noticeable on short queries but critical in long discussions: when requirements are clarified on the fly, when people return to earlier agreements, when an exchange carries a lot of setup. In dialogues like these, the model's "memory" decides whether it will be a helper or a source of chaos.
There is also a knowledge update: the training data are stated to be current through August 2025. This determines how well the model fits the present context. The closer its knowledge is to current reality, the fewer situations where an answer seems logical but rests on an outdated picture.
And one more point that directly affects trust: errors and hallucinations. The published figures say GPT-5.2 produces 30% fewer hallucinations than GPT-5.1. That is no promise of perfection, but it means the model should "convincingly err" far less often, which translates into less time spent on verification and a lower chance of accepting fiction as fact.
