I’ve built a system that uses a LangChain AI agent for analyzing pandas DataFrames. The setup includes comprehensive instructions in the system prompt, along with conversation history and column descriptions for each dataset.
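For reference, here’s a stripped-down sketch of the kind of setup I mean. The dataset, column names, prompt wording, and model are placeholders for illustration, not my actual code, and the exact kwargs may differ by langchain_experimental version:

```python
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

# Placeholder column descriptions; my real prompt carries metadata like this
# for every column in each of the three datasets.
COLUMN_NOTES = """
trade_df columns:
  country_name - full country name, e.g. "Puerto Rico", "Mexico"
  country_iso3 - ISO alpha-3 code, e.g. "PRI", "MEX"
  export_value - export value in USD (float)
"""

trade_df = pd.read_csv("trade.csv")  # stand-in for one of my datasets

llm = ChatOpenAI(model="gpt-4o", temperature=0)

agent = create_pandas_dataframe_agent(
    llm,
    trade_df,
    prefix=(
        "You are a data analyst. Match filter values exactly as they are "
        "stored in the DataFrame. Column reference:\n" + COLUMN_NOTES
    ),
    verbose=True,
    allow_dangerous_code=True,  # required opt-in in recent versions
)

# A natural-language question goes in; the agent writes and runs pandas
# code behind the scenes to answer it.
agent.invoke({"input": "What were Puerto Rico's exports last year?"})
```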
My main issue is with how the agent handles user input when converting natural language to DataFrame queries. The agent works great for answering data questions and generating charts with plotly, but sometimes it misinterprets what users are asking for.
The biggest problem is country lookups. My data stores countries in multiple formats across different columns. When someone asks about Puerto Rico, the agent should search for “PRI”, which is what the data actually contains, but sometimes it searches for “PR” instead and gets zero rows. Same with Mexico: users might type “Mexico”, “MEX”, or “MX”, and the agent doesn’t always pick the identifier that exists in the data.
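To make that concrete, here is the shape of the failure with toy data (not my real schema):

```python
import pandas as pd

df = pd.DataFrame({
    "country_name": ["Puerto Rico", "Mexico", "Canada"],
    "country_iso3": ["PRI", "MEX", "CAN"],
    "export_value": [1.2, 3.4, 5.6],
})

# What the agent sometimes generates for "show me Puerto Rico":
df[df["country_iso3"] == "PR"]    # empty DataFrame; "PR" is not a stored value

# What it needs to generate, because "PRI" is what the data contains:
df[df["country_iso3"] == "PRI"]   # one row
```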
Capitalization also causes issues, even though I added instructions about case sensitivity to my prompt. I’ve already provided detailed metadata for every column across my three datasets, enabled conversation memory, and set the temperature to 0 for consistency.
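The capitalization failures look similar, continuing with the toy frame above:

```python
# Agent-generated filter when a user types "mexico" in lowercase:
df[df["country_name"] == "mexico"]              # empty; stored value is "Mexico"

# A case-insensitive comparison would match, but the agent doesn't
# reliably write it this way despite the prompt instructions:
df[df["country_name"].str.lower() == "mexico"]  # one row
```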
I need a solution that doesn’t require hardcoding every possible variation users might enter. How can I make the agent more reliable at matching user input to actual data values?