I’m working with LangChain and LangSmith to build a testing setup for my language model prompts. Everything was running smoothly until I had to regenerate my OpenAI API key.
I updated my environment variables file with the new key and confirmed it works by testing direct OpenAI chat completions. I also manually updated the key in LangSmith through the playground settings under the secrets section. When I check my organization settings, the new key appears to be saved correctly.
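Roughly how I verified the new key with a direct call (assuming the v1 openai client; the model name is just an example):

```python
import os
from openai import OpenAI

# Direct chat completions call with the key from the environment,
# bypassing LangChain entirely. Model name is just an example.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```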
But when I run the arun_on_dataset method, I keep getting this authentication error:
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-O6ZSB***************************************TNxS. You can find your API key at https://platform.openai.com/account/api-keys.'
The weird part is that the error shows the old API key prefix, not the new one I updated everywhere. My code hasn’t changed at all and was working before I swapped the keys.
Where else might this old key be cached or stored? Is there another location in LangSmith where I need to update it?
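For reference, this is roughly how I'm running the evaluation (dataset name, model, and evaluator are placeholders, and the import paths may differ slightly depending on your LangChain version):

```python
import asyncio
from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.smith import RunEvalConfig, arun_on_dataset

async def main():
    client = Client()
    eval_config = RunEvalConfig(evaluators=["qa"])  # placeholder evaluator
    await arun_on_dataset(
        client=client,
        dataset_name="prompt-regression-tests",  # placeholder dataset name
        llm_or_chain_factory=lambda: ChatOpenAI(model="gpt-3.5-turbo"),
        evaluation=eval_config,
    )

asyncio.run(main())
```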
Same thing happened to me when I switched API keys. The problem was LangChain’s auto-generated config file. Check for a .langchain folder (or something similar) in your project root or home directory - it’s hidden. Delete any config files in there. LangChain creates these to skip repeated auth requests, but they don’t update when you change environment variables. Also double-check your code for any hardcoded keys. I’ve definitely pasted keys directly into notebooks during development and forgotten about them. Since you’re seeing the old prefix in the error, it’s definitely pulling from some cached spot instead of your updated environment. Clear those config files, restart everything, and it should grab your new key.
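If you want a quick way to hunt for forgotten hardcoded keys, a throwaway scan like this does the job (the extensions and root directory are just what I'd check):

```python
import os

# Walk the project and flag lines that look like an OpenAI key ("sk-").
for root, _, files in os.walk("."):
    for name in files:
        if not name.endswith((".py", ".ipynb", ".env")):
            continue
        path = os.path.join(root, name)
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if "sk-" in line:
                        print(f"{path}:{lineno}: {line.strip()[:80]}")
        except OSError:
            pass
```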
same issue here! langsmith’s terrible at syncing keys across different locations. delete your entire .langsmith config folder (should be in your home directory). the dataset runner and playground probably aren’t using the same session. also check if you’ve got LANGSMITH_API_KEY set somewhere - that’ll mess with openai auth too.
This caching issue with LangSmith is a pain because it works on multiple layers. I hit the same thing switching between dev and prod keys. Since you’re seeing your old key prefix in the error, LangChain’s internal client hasn’t refreshed. Beyond restarting Python, check for singleton patterns or class-level OpenAI clients in your code - these initialize once and stick around.

Here’s something others missed: LangSmith sometimes stores API configs in your system’s keyring or credential manager (especially on macOS/Windows). These survive restarts and environment changes.

Verify your OPENAI_API_KEY export actually worked by running echo $OPENAI_API_KEY in your terminal before launching Python. I’ve seen cases where the environment variable looked updated in the IDE but wasn’t exported to the shell.

Try creating a fresh Python script in a new directory with just basic LangChain imports. If that works with your new key, something in your existing project is holding onto the old credentials.
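A quick way to run that same check from inside Python, printing only the prefix so you never log the full key:

```python
import os

# Compare this prefix against the one shown in the 401 error.
key = os.environ.get("OPENAI_API_KEY", "")
print("key present:", bool(key))
print("prefix:", key[:8])
```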
check if you have multiple langsmith accounts or workspaces - the old key might be stuck in a different workspace config. also restart your python kernel completely. langchain caches auth stuff weirdly, and i’ve seen the client object hang onto old keys even after updating env vars.
That old key prefix in your error is the smoking gun. LangChain creates session state that sticks around even after you update environment variables.
I’ve hit this exact problem when rotating production keys. LangChain initializes its OpenAI client once and keeps it in memory. Your updated environment variables don’t matter after that.
Here’s what actually works:
Completely restart your Python interpreter (not just the kernel in Jupyter)
Check for any global LangChain client instances in your code that need recreating (see the sketch after this list)
Look for .cache directories in your project - LangChain sometimes stores auth tokens there
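Here’s the kind of pattern to look for, and a safer variant while you’re debugging (a sketch using the classic ChatOpenAI wrapper; adjust to however your chains are actually built):

```python
import os
from langchain.chat_models import ChatOpenAI

# Problematic: a module-level client created at import time keeps whatever
# key was in the environment when the module was first imported.
llm = ChatOpenAI()

# Safer while debugging: build the client where you use it and pass the key
# explicitly, so the value being used is never in doubt.
def make_llm():
    return ChatOpenAI(openai_api_key=os.environ["OPENAI_API_KEY"])
```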
Since LangSmith playground works but arun_on_dataset doesn’t, your local Python process is the problem. The dataset runner uses your local LangChain setup, not the web interface.
Hit something similar last year, and a video walkthrough on debugging OpenAI API key issues was a big help.
After restarting everything, test with a simple LangChain OpenAI call first before running your dataset evaluation. That’ll confirm the key’s actually being picked up.
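Something like this is enough as a smoke test (the model name is arbitrary; use predict instead of invoke on older LangChain versions):

```python
from langchain.chat_models import ChatOpenAI

# If this succeeds with the new key, the dataset runner's 401 is coming
# from something else in the project, not the environment.
llm = ChatOpenAI(model="gpt-3.5-turbo")
print(llm.invoke("Say hi").content)
```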
Had this exact problem last month. LangSmith has its own API key setup that’s totally separate from your environment variables. You updated the playground settings, but there’s another spot you need to check.
Go to your LangSmith project settings (not playground) and find the “Integrations” or “API Keys” section. There’s a specific OpenAI config there that needs updating separately. LangSmith caches the old key even after you update it elsewhere.
Try clearing your browser cache and do a full logout/login. I had to wait 10-15 minutes after updating before it actually worked. That old key prefix in your error? It’s definitely pulling from cache, not your environment variables.
Classic caching nightmare. Been burned by this exact thing more times than I want to count.
It’s not just LangSmith caching - your local LangChain client is stuck with the old key in memory. Even after you update environment variables, the client keeps using whatever key it grabbed at startup.
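To make the failure mode concrete, here’s a minimal sketch (the keys are obviously placeholders): the client resolves the key when it’s constructed, not on each request.

```python
import os
from langchain.chat_models import ChatOpenAI

# The key is read at construction time, so an object built before you
# rotated the key keeps the old value for the life of the process.
os.environ["OPENAI_API_KEY"] = "sk-old-placeholder"
old_llm = ChatOpenAI()  # holds the old key

os.environ["OPENAI_API_KEY"] = "sk-new-placeholder"
new_llm = ChatOpenAI()  # only this instance picks up the new value
```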
What works: kill your entire Python process, don’t just restart the kernel. Also check for any Docker containers or background services still running the old key.
Honestly though, manual key management is a huge pain. I got sick of these auth headaches and automated the whole thing.
I use Latenode to handle API key updates across all my services. It pushes new keys everywhere they need to go, including LangSmith configs. No more digging through dashboards or waiting for caches to clear.
Set it up once and forget about it. Way cleaner than updating keys manually everywhere.