Claude Code performance changes: Max subscription vs API - A shocking comparison

I recently compared Claude Code’s performance using Max subscription and the pay-as-you-go API. The results were surprising.

After spending $400 on the API, I upgraded to Max to test it out. The differences in speed and output quality were immediately noticeable, so to confirm my suspicions I ran the same task through both on my project.

I asked both versions to wrap every console.log call in an if statement gated by a config const. For each run I tracked how many files it processed, how long it took, and how much context it used.
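
Concretely, the transformation I asked for looks something like this (a minimal sketch; `DEBUG_LOGGING` and the payload are placeholders, not my actual code):

```js
// Config const controlling whether debug logs run (name is a placeholder)
const DEBUG_LOGGING = true;

const payload = { id: 42, name: "example" };

// Before:
// console.log("user payload:", payload);

// After: the same log, gated behind the config const
if (DEBUG_LOGGING) {
  console.log("user payload:", payload);
}
```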

The results were unexpected. Max was slower but did a better job than the API version. It seems like Claude Code has changed recently, and not for the better.

I also tested aider.chat with Sonnet 3.7. It finished the task in minutes for under $2.

Max subscription:

  • Processed 8 files in 45 minutes
  • One file was broken
  • Slower but more thorough

API version:

  • Processed 2 files before running out of context
  • Faster but less effective
  • Cost $7 for partial completion

Both versions performed worse than expected. Claude Code seems to have declined in quality recently. It’s slower, less capable, and not as agentic as before.

I’m disappointed, as I’ve been a big fan of Claude Code. I hope this comparison helps others decide whether the Max subscription is worth it right now.

Wow, that’s disappointing to hear. I’ve been thinking about upgrading to Max, but now I’m not so sure. Have you tried contacting their support about the performance issues? Maybe there are optimization settings we’re missing. Either way, thanks for sharing your experience - super helpful for the rest of us trying to decide!

Your findings are certainly concerning. I’ve been using Claude Code for a while and have noticed some inconsistencies lately. It’s frustrating to see performance decline, especially when paying for a premium service. Have you considered exploring other AI coding assistants? GitHub Copilot or Amazon CodeWhisperer might be worth looking into as alternatives; they’ve been gaining traction and could offer better performance. In any case, I hope Anthropic takes note of user feedback like yours and works on improving Claude Code’s capabilities and efficiency.

I’ve been using Claude Code for several months now, and I’ve noticed similar issues. The performance decline is quite noticeable, especially when working on larger projects. Last week, I was refactoring a complex codebase, and Claude struggled with maintaining context across multiple files. It kept losing track of variable definitions and function scopes.

One workaround I’ve found is to break down tasks into smaller chunks and feed them to Claude separately. This seems to help with context management, but it’s far from ideal. It’s time-consuming and defeats the purpose of having an AI assistant.
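
For what it’s worth, my chunking is nothing fancy: one small, self-contained prompt per file instead of one repo-wide task. A rough sketch of the idea (assumes Claude Code’s non-interactive `claude -p "<prompt>"` mode; the file list and prompt wording are made up):

```js
// Feed Claude one file at a time instead of the whole task at once.
// Assumes the `claude -p "<prompt>"` non-interactive mode; adjust to your setup.
const { execSync } = require("node:child_process");

const files = ["src/api.js", "src/utils.js", "src/logger.js"]; // hypothetical list

for (const file of files) {
  const prompt = `In ${file}, wrap every console.log call in an if (DEBUG_LOGGING) check.`;
  execSync(`claude -p ${JSON.stringify(prompt)}`, { stdio: "inherit" });
}
```

Each invocation starts with a fresh context, which is the whole point; the trade-off is that you lose cross-file awareness, which is exactly what I was paying for in the first place.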

Regarding the Max subscription, I had high hopes, but your experience mirrors mine. The improved quality doesn’t compensate for the sluggish performance. I’m seriously considering switching to alternative tools. Has anyone tried GPT-4 with code completion plugins? I’ve heard good things, but I’m curious about real-world experiences.

It’s a shame to see Claude Code falter like this. I hope Anthropic is aware of these issues and working on improvements. For now, I’m keeping my options open and exploring alternatives.