I’m having trouble getting LangSmith to properly monitor my Gemini API calls
I’m making direct calls to the Gemini API without using LangChain. Even though I added the @traceable decorator to my function, when I check LangSmith it shows 0 tokens being used.
What steps do I need to take to make the tracing work correctly? I know there’s a wrap_openai helper function available, and I’m wondering if there’s something equivalent for Gemini that I should be using instead.
Has anyone successfully set up token tracking for direct Gemini API calls? Any guidance would be helpful.
Hit this exact issue when moving a client from OpenAI to Gemini. The problem? LangSmith’s @traceable decorator expects standardized response formats, and Gemini doesn’t play nice out of the box. Here’s what worked: build a translation layer inside your traced functions. Don’t just log tokens afterward - grab the usage_metadata from Gemini responses and map it to whatever format LangSmith wants. Think of it as a translator, not manual logging. Your traced function handles the API call, then immediately reformats the response data so LangSmith can read it. Keeps everything clean, and your token tracking actually shows up in the dashboard.
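A minimal sketch of that translation layer. The Gemini call is stubbed out here (so it runs without credentials), and the `langsmith` import is guarded with a no-op fallback; the `ask_gemini` name, the stubbed values, and the exact output keys are my assumptions - check your SDK’s response shape before relying on them:

```python
from types import SimpleNamespace

try:
    from langsmith import traceable  # real decorator when langsmith is installed
except ImportError:  # no-op fallback so this sketch runs anywhere
    def traceable(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

def call_gemini(prompt: str):
    # Stand-in for the real SDK call, e.g. model.generate_content(prompt).
    # Real Gemini responses expose usage_metadata with these field names.
    return SimpleNamespace(
        text="stubbed answer",
        usage_metadata=SimpleNamespace(
            prompt_token_count=12,
            candidates_token_count=34,
            total_token_count=46,
        ),
    )

@traceable(run_type="llm")
def ask_gemini(prompt: str) -> dict:
    response = call_gemini(prompt)
    # Translation layer: reshape Gemini's usage fields into the
    # input/output/total token keys LangSmith understands.
    return {
        "content": response.text,
        "usage_metadata": {
            "input_tokens": response.usage_metadata.prompt_token_count,
            "output_tokens": response.usage_metadata.candidates_token_count,
            "total_tokens": response.usage_metadata.total_token_count,
        },
    }
```

Swap `call_gemini` for your actual client call and the traced run should start showing token counts instead of 0.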
yeah, Gemini’s API response has usage_metadata, but you have to extract it yourself – there’s no auto wrapper like OpenAI has. Try logging response.usage_metadata.prompt_token_count and response.usage_metadata.candidates_token_count in your traceable function. That’s how I fixed this issue last month.
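In code that extraction is just two attribute reads - sketched here with a stubbed response object, since the field names come from the Gemini SDK and the helper name is mine:

```python
import logging
from types import SimpleNamespace

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("gemini-usage")

def log_token_usage(response):
    # Pull the two counts straight off Gemini's usage_metadata.
    prompt_tokens = response.usage_metadata.prompt_token_count
    output_tokens = response.usage_metadata.candidates_token_count
    log.info("prompt=%d candidates=%d", prompt_tokens, output_tokens)
    return prompt_tokens, output_tokens

# Stub standing in for a real generate_content() response:
fake_response = SimpleNamespace(
    usage_metadata=SimpleNamespace(prompt_token_count=7, candidates_token_count=21)
)
```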
Manual logging gets messy fast with multiple API calls across different projects.
I’ve been running Gemini integrations for a while - automation’s the way to go. Skip the manual token tracking and set up automated workflows that handle monitoring and logging without touching your core code.
These workflows automatically capture API responses, parse token usage, and push everything to your monitoring system. No decorators, no manual logging cluttering your functions.
Mine handles batches of API calls, retries failed requests, and generates usage reports. Takes 10 minutes to set up, then runs in the background.
You can pipe data anywhere - spreadsheets, databases, or back to Langsmith. Way cleaner than hardcoding trace updates everywhere.
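A stdlib-only sketch of that kind of wrapper: a retry loop plus a usage accumulator you could later flush to a spreadsheet, database, or LangSmith. The `call` argument stands in for whatever Gemini client function you use, and `UsageReport`/`monitored_call` are names I made up for illustration:

```python
import time
from dataclasses import dataclass, field
from types import SimpleNamespace

@dataclass
class UsageReport:
    calls: int = 0
    failures: int = 0
    input_tokens: int = 0
    output_tokens: int = 0
    rows: list = field(default_factory=list)  # per-call records for reporting

def monitored_call(call, prompt, report, retries=3, backoff=1.0):
    """Run call(prompt), retrying on failure and recording token usage."""
    for attempt in range(retries):
        try:
            response = call(prompt)
        except Exception:
            report.failures += 1
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
            continue
        usage = response.usage_metadata
        report.calls += 1
        report.input_tokens += usage.prompt_token_count
        report.output_tokens += usage.candidates_token_count
        report.rows.append(
            (prompt, usage.prompt_token_count, usage.candidates_token_count)
        )
        return response

class FlakyFakeClient:
    """Demo stub: fails on the first call, then returns a fake Gemini response."""
    def __init__(self):
        self.tries = 0
    def __call__(self, prompt):
        self.tries += 1
        if self.tries == 1:
            raise RuntimeError("transient error")
        usage = SimpleNamespace(prompt_token_count=5, candidates_token_count=9)
        return SimpleNamespace(text="ok", usage_metadata=usage)
```

Because the monitoring lives in the wrapper, your core functions stay free of logging code, and the accumulated `rows` can be piped wherever you want.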
The @traceable decorator won’t capture token usage from direct Gemini API calls - it doesn’t have built-in instrumentation like OpenAI’s wrapper. You’ll need to manually log tokens inside your decorated function. After each API call, grab the usage data from the response and pass it to the trace using update_current_trace from langsmith. I ran into this same issue and fixed it by explicitly logging input_tokens and output_tokens from response.usage_metadata. Also make sure you’re setting the trace name and metadata correctly so everything shows up properly in your LangSmith dashboard.
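For the trace name and metadata part, @traceable accepts name and metadata keyword arguments directly. A short sketch, with the model id and stubbed token values as placeholders and the langsmith import guarded so it runs standalone:

```python
try:
    from langsmith import traceable
except ImportError:  # no-op fallback so the sketch runs without langsmith
    def traceable(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

@traceable(
    run_type="llm",
    name="gemini_generate",                   # explicit trace name for the dashboard
    metadata={"model": "gemini-1.5-flash"},   # assumed model id, adjust to yours
)
def gemini_generate(prompt: str) -> dict:
    # response = model.generate_content(prompt)  # real SDK call goes here
    text, prompt_tokens, output_tokens = "stubbed", 11, 22  # stub values
    # Surface the token counts in the traced output so they reach the trace.
    return {
        "content": text,
        "usage_metadata": {
            "input_tokens": prompt_tokens,
            "output_tokens": output_tokens,
            "total_tokens": prompt_tokens + output_tokens,
        },
    }
```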