ChatGPT Plus users lose the option to select models - contemplating other options

Has anyone else noticed that OpenAI removed the model selection feature? As a Plus subscriber, I feel like they are steering us toward cheaper models without our agreement. My interactions don’t seem as effective as they used to be, and I can’t even see which version I’m using. It’s frustrating to spend money each month only to feel like I’m receiving a lesser service. I’m considering cancelling my subscription and looking into different AI platforms. Are there any viable alternatives that offer more control over which model you’re using?

same here - model switching vanished for me in nov and it was infuriating. i’ve been using poe.com since then. you get gpt-4, claude, and open source models all in one spot. it’s not as smooth as chatgpt, but at least you know exactly which model you’re using and can compare responses directly.

OpenAI made some terrible choices with these interface changes. I’ve had the same problems since the dashboard update - not knowing which model you’re using is ridiculous when you’re paying for this. I switched to multiple services instead of one subscription. Cohere’s Command models have been great lately, especially for technical work, and their pricing makes sense. Mistral AI’s API also works well with clear model specs. What pisses me off most is removing features without telling anyone. First they simplified the interface, then they killed advanced settings, and now they’ve hidden model selection. They’re clearly targeting casual users and screwing over power users who know the difference between models. Want alternatives? Try Together AI or Replicate - better control over parameters and they don’t hide what model you’re actually using.

The Problem:

You’re frustrated because OpenAI removed the model selection feature in ChatGPT Plus, leaving you unsure which model you’re using and concerned about receiving a lower-quality service than before. You’re considering canceling your subscription and exploring alternative AI platforms offering more model customization.

:thinking: Understanding the “Why” (The Root Cause):

OpenAI’s decision to remove the model selection feature likely stems from a desire to simplify the user interface and potentially streamline their backend operations. While this simplification might benefit casual users, it significantly impacts power users who rely on specific models for optimal performance. The lack of transparency regarding model selection adds to the frustration. Essentially, you’re paying for a service without full control over its core component: the underlying AI model.

:gear: Step-by-Step Guide:

This guide focuses on regaining control by interacting directly with multiple AI APIs instead of relying on a single platform’s interface. That lets you choose the most appropriate model for each task and keeps you from depending on any one provider’s interface decisions.

Step 1: Choose Your AI Services:

Several providers offer strong alternatives with clear model selection:

  • Cohere: Excellent for technical tasks. Their pricing model is generally transparent.
  • Mistral AI: Provides a robust API with well-defined model specifications.
  • Anthropic (Claude): Known for its reliability and clear model identification.
  • Google AI Studio (Gemini): Offers free usage tiers and provides detailed model information.

You can combine services based on your needs and budget.

Step 2: Set up API Access:

Each provider will have its own API key and setup process. Follow their respective documentation to obtain API keys and set up authentication.
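
A minimal sketch of that setup in Python, assuming you keep each key in an environment variable (the variable names below are placeholders; use whatever names your own shell setup and the providers’ docs suggest):

```python
import os

# Placeholder environment variable names -- export them in your shell first, e.g.
#   export COHERE_API_KEY="..."
#   export MISTRAL_API_KEY="..."
API_KEYS = {
    "cohere": os.environ.get("COHERE_API_KEY"),
    "mistral": os.environ.get("MISTRAL_API_KEY"),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY"),
    "google": os.environ.get("GOOGLE_API_KEY"),
}

# Fail early if anything is missing instead of discovering it mid-workflow.
missing = [name for name, key in API_KEYS.items() if not key]
if missing:
    raise RuntimeError(f"Missing API keys for: {', '.join(missing)}")
```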

Step 3: Build Your Workflow (Automation):

This is where you regain control. You’ll need to create a system (using Python or Node.js, or even a no-code tool like IFTTT if your needs are simpler) that routes your requests to different AI APIs based on the task’s complexity.

For example:

  • Simple tasks (e.g., summarizing text): Route to a cheaper model from Cohere or Google.
  • Complex tasks (e.g., code generation, in-depth analysis): Use more powerful models from Anthropic or Mistral AI.

This requires coding skills and a basic understanding of API interactions. However, the investment in learning this yields long-term control and avoids reliance on any single platform.
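
To make this concrete, here is a minimal Python sketch of the routing idea. It assumes you write your own thin `call_model(provider, model, prompt)` wrapper around each provider’s API or official client; the task categories and model names are illustrative placeholders, not recommendations:

```python
# Minimal routing sketch. `call_model` is a placeholder for your own thin
# wrapper around each provider's HTTP API or official client library.
def call_model(provider: str, model: str, prompt: str) -> str:
    raise NotImplementedError("Wrap the provider's API of your choice here.")

# Map task categories to (provider, model) pairs. The model names are
# illustrative placeholders -- check each provider's docs for current names.
ROUTES = {
    "summarize": ("cohere", "light-model-placeholder"),
    "translate": ("google", "light-model-placeholder"),
    "code": ("anthropic", "flagship-model-placeholder"),
    "analysis": ("mistral", "large-model-placeholder"),
}

def route_request(task_type: str, prompt: str) -> str:
    """Send the prompt to whichever provider/model handles this kind of task."""
    provider, model = ROUTES.get(task_type, ROUTES["analysis"])
    return call_model(provider, model, prompt)
```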

Step 4: Implement Fallbacks:

If one model returns unsatisfactory results, your automated system should automatically try another model. This helps keep output quality consistent even when a single provider has a bad day or an outage.
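
A minimal sketch of that fallback logic, reusing the hypothetical `call_model` wrapper from the routing example; the quality check is a deliberately crude placeholder you would replace with your own criteria:

```python
def is_acceptable(response: str) -> bool:
    """Crude placeholder quality check -- replace with criteria that matter to you."""
    return bool(response and len(response.strip()) > 20)

def ask_with_fallback(prompt: str, chain) -> str:
    """Try each (provider, model) pair in `chain` until one gives a usable answer."""
    last_error = None
    for provider, model in chain:
        try:
            # `call_model` is the same placeholder wrapper from the routing sketch.
            response = call_model(provider, model, prompt)
            if is_acceptable(response):
                return response
        except Exception as exc:  # e.g. timeouts or rate-limit errors
            last_error = exc
    raise RuntimeError(f"All models in the chain failed; last error: {last_error}")

# Example: prefer a cheap model, fall back to bigger ones if it disappoints.
# answer = ask_with_fallback("Summarize this report...", [
#     ("cohere", "light-model-placeholder"),
#     ("mistral", "large-model-placeholder"),
#     ("anthropic", "flagship-model-placeholder"),
# ])
```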

Step 5: Monitor and Adjust:

Track the performance of each model and adjust your routing logic as needed. Over time, you’ll build an optimized workflow tailored to your exact needs.
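
One lightweight way to do this is to log every call to a local CSV file and review it periodically. A small sketch (the file name and fields are just examples):

```python
import csv
import time
from datetime import datetime, timezone

LOG_PATH = "model_usage_log.csv"  # example local log file

def log_call(provider: str, model: str, task_type: str,
             latency_s: float, ok: bool) -> None:
    """Append one row per API call so you can compare models and costs later."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            provider, model, task_type, f"{latency_s:.2f}", ok,
        ])

# Example usage around a call:
#   start = time.time()
#   response = call_model("cohere", "light-model-placeholder", prompt)
#   log_call("cohere", "light-model-placeholder", "summarize", time.time() - start, True)
```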

:mag: Common Pitfalls & What to Check Next:

  • API Key Management: Securely store your API keys. Do not hardcode them directly into your scripts; use environment variables or dedicated secret management solutions.
  • Rate Limiting: Be aware of each provider’s rate limits to avoid exceeding allowed requests; a simple retry-with-backoff sketch is shown after this list.
  • Cost Optimization: Track your API usage and costs to ensure you’re efficiently managing your budget.
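
For the rate-limiting point, a retry wrapper with exponential backoff is usually enough. A sketch, again assuming the hypothetical `call_model` wrapper from the earlier examples raises an exception on rate-limit responses:

```python
import random
import time

def call_with_backoff(provider: str, model: str, prompt: str,
                      max_retries: int = 5) -> str:
    """Retry with exponential backoff when a provider rejects or throttles a call.

    Assumes the `call_model` placeholder from the earlier sketches raises an
    exception on rate-limit (HTTP 429) responses.
    """
    for attempt in range(max_retries):
        try:
            return call_model(provider, model, prompt)
        except Exception:
            # Back off 1s, 2s, 4s, 8s, ... plus a little jitter before retrying.
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError(f"Still failing after {max_retries} retries")
```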

:speech_balloon: Still running into issues? Share your (sanitized) config files, the exact command you ran, and any other relevant details. The community is here to help!

Been dealing with this since December and honestly the model hiding feels intentional - they’re pushing users toward whatever’s cheapest to run. I switched to Google’s AI Studio and it solved everything. It’s free for reasonable usage and shows you exactly which Gemini model you’re using. The interface displays model parameters and lets you adjust temperature settings that ChatGPT buried or killed entirely. For heavy usage I add Anthropic’s API through their console. Way cheaper than Plus and no surprise changes. This combo does everything ChatGPT Plus used to do, except I know which models I’m getting and can switch deliberately instead of hoping OpenAI isn’t quietly downgrading responses to save money.

I understand the frustration with OpenAI’s changes. Same thing happened to me when they began limiting model access.

Before jumping to another AI platform, consider building your own workflow instead. You’ll have complete control and can connect multiple AI services through APIs, allowing you to choose which model handles each task.

I route requests based on complexity: simple tasks go to cheaper models, while complex analysis uses premium ones. This way, you get better results and save money.

The best part? You’re not tied to one provider’s whims. If they change their interface or raise prices, your workflow keeps running smoothly.

You can also set up fallbacks—try one model and automatically switch to another if the response isn’t satisfactory.

This gives you way more control than any single platform can offer, plus easy integration with other tools and full automation.

Had this exact problem two months ago and ditched ChatGPT Plus for Claude Pro. The disappearing model selection was the last straw - I’m paying for GPT-4 access, so I want to know what I’m actually using. Claude’s been rock solid and always shows which model you’re getting. No guesswork needed. Quality feels just as good as early GPT-4, and Anthropic’s way more transparent about pricing. Tried Perplexity Pro too - decent for research but Claude wins for creative stuff and deeper analysis. Same monthly cost as ChatGPT Plus, except you actually know what you’re paying for.

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.