Based on what users were reporting in forums, the previous model in the interface had roughly 64k tokens of context available.
Now GPT-5 shows a maximum context window of 32k in the interface for Plus subscribers, and the older model option has been removed completely.
So as a paying subscriber, my context capacity was cut in half while the change was marketed as an improvement.
A long context window really matters for discussing code projects and technical topics. Even if the new model is better in other ways, losing context means it forgets important details much sooner.
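To put numbers on it, here's a quick way to see how fast project files eat a 32k window, using OpenAI's tiktoken library. The file paths are just examples, and the encoding is only a ballpark since the exact GPT-5 tokenizer isn't public:

```python
# Rough token count for a few project files; paths are hypothetical examples.
# Requires `pip install tiktoken`. cl100k_base is an older OpenAI encoding,
# used here only for a ballpark estimate.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

files = ["models.py", "views.py", "tests/test_views.py"]  # example paths
total = 0
for path in files:
    with open(path, encoding="utf-8") as f:
        n = len(enc.encode(f.read()))
    print(f"{path}: {n} tokens")
    total += n

print(f"total: {total} / 32000")
```

A few medium-sized source files plus the conversation itself blow past 32k very quickly.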
I've been a Plus subscriber since it first launched, but I decided to cancel my subscription because of this change.
This is a straight-up bait and switch. They probably figured out 64k context was burning through way more compute than expected. Halving it saves OpenAI cash while we're stuck with worse service at the same price. Super shady how they quietly swapped models without warning us about the downgrade.
Same thing happened to me mid-project while analyzing a huge codebase. The context cut completely killed my workflow - I was feeding it multiple files that needed to reference each other. What really pisses me off is they sold GPT-5 as an upgrade while secretly gutting one of the best features for developers. Zero transparency - no warning, no explanation, just a silent downgrade. I’m looking at alternatives because this feels like they’re testing how much they can screw users before we bail. They didn’t just reduce context, they completely axed the old model option. This wasn’t about choice - it was cost cutting dressed up as an improvement.
Been tracking this since the model swap and honestly it feels like OpenAI's testing how much they can get away with. The economics are brutal: serving cost grows faster than linearly with context length (attention alone scales roughly with its square), so cutting from 64k to 32k probably saved millions in infrastructure. What really bugs me is the zero communication. They could've been upfront about technical constraints or offered different pricing tiers. Instead they just screwed over paying customers. For anyone doing serious dev work, losing that context length breaks entire workflows that took months to optimize.
Hit this exact issue during a big refactor last month. Context cuts suck because they destroy your entire workflow.
I stopped fighting these arbitrary limits and just automated everything. Built flows that chunk large codebases intelligently, track context between related files, and auto-switch between AI providers based on what works best for each task.
When OpenAI cuts context like this, the system just routes to Claude or whoever has better limits that day. No more getting stuck with one provider’s BS decisions.
The whole thing runs in a visual builder, so I'm not coding custom integrations for every API change. Way better than hoping subscription services won't screw you next month.
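For anyone who'd rather roll their own, the core routing idea is pretty small. Here's a minimal sketch; the provider names, window sizes, and call_* helpers are all placeholders for whatever SDKs you actually use:

```python
# Minimal sketch of context-aware provider fallback.
# Everything here is a placeholder: swap the call_* stubs for real SDK calls.

def call_openai(prompt: str) -> str:
    raise NotImplementedError  # e.g. an OpenAI chat completion request

def call_claude(prompt: str) -> str:
    raise NotImplementedError  # e.g. an Anthropic messages request

# (name, assumed context window in tokens, handler)
PROVIDERS = [
    ("openai", 32_000, call_openai),
    ("claude", 200_000, call_claude),
]

def route(prompt: str, est_tokens: int) -> str:
    """Send the prompt to the first provider whose window fits it."""
    last_err = None
    for name, window, call in PROVIDERS:
        if est_tokens > window:
            continue  # prompt won't fit, try the next provider
        try:
            return call(prompt)
        except Exception as err:  # rate limit, outage, etc.
            last_err = err
    raise RuntimeError(f"every provider failed or was too small: {last_err}")
```

The visual tools are basically doing this, plus retries and chunking, under the hood.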
This is exactly what’s wrong with subscription services - they quietly downgrade features and hope nobody notices. Same thing happened to me when they switched model versions last year. My long conversations would suddenly lose coherence way faster than before. The 32k limit really screws over developers who need context across multiple files or long debugging sessions. What pisses me off most is how they handled it. Zero transparency. They could’ve explained why they had to reduce it or offered a higher tier with more context. Instead, they just killed the old model completely rather than keeping it as an option. Classic forced obsolescence of something we were actually using.
I understand your frustration with the significant reduction in context window size. Such changes, especially without prior notice, are disheartening when you're paying for a premium service. A larger context is essential for deep technical discussions and complex code projects. It's concerning that OpenAI removed the older model entirely instead of offering the new one alongside it. Many users rely on that extended context, and losing it impacts our ability to work effectively. It's completely reasonable to reconsider your subscription under these circumstances.
AI services pull this crap constantly. Companies change the rules whenever they want, and we’re stuck dealing with it.
I hit the same wall working on a project that processed long code reviews and docs. Context limits kept breaking everything.
So I built my own automation instead. Rather than betting on one service that'll change tomorrow, I made a system that chunks big documents intelligently and processes them in batches. When one piece needs context from another, it grabs exactly what it needs.
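The chunking half is simpler than it sounds. A bare-bones sketch, not my actual system, with made-up sizes: split on lines with overlap so neighboring chunks share some context.

```python
# Bare-bones sketch: split a large document into overlapping line-based
# chunks so each chunk carries some context from its neighbor.
# max_lines and overlap are arbitrary example values; tune per model.

def chunk_lines(text: str, max_lines: int = 200, overlap: int = 20) -> list[str]:
    lines = text.splitlines()
    chunks = []
    step = max_lines - overlap
    for start in range(0, len(lines), step):
        chunks.append("\n".join(lines[start:start + max_lines]))
        if start + max_lines >= len(lines):
            break  # final chunk already reaches the end of the document
    return chunks
```

Each chunk goes out as its own request, and when a chunk references something defined elsewhere, you pull in just that definition as extra context instead of resending whole files.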
Best part? I’m not stuck with one provider’s BS limits. OpenAI cuts context windows? Route to Claude. Claude acts up? Switch to something else. The automation handles all the switching and context stuff automatically.
This used to mean tons of custom coding and API headaches. Now you can build the whole pipeline visually - no code needed. It connects multiple AI services, handles chunking, and manages context across requests.
Way more reliable than whatever OpenAI feels like offering this month.