I’m building an application that needs to monitor token usage for different user projects. My initial approach was to use LangSmith traces and add the project_id to the metadata. This would let me query all runs associated with a specific project.
The problem is that users can delete their projects, which breaks the connection between user projects and the project_ids stored in LangSmith. This creates data integrity issues.
What would you suggest as an alternative approach? I’m considering storing the token_count in my local database after each API call, but I’m not sure if this is the most efficient method.
Additionally, I’m curious about handling token tracking when using agents with LangGraph. Is there a way to capture tokens consumed during tool calls within these agent workflows?
Honestly, I just store token counts locally after each call - works great for me. Been doing it for months without issues. With LangGraph agents, hook into the callback system to catch token usage from all tool executions. Way more control than external tracing.
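If it helps, here's a minimal sketch of that callback hook-in. The `llm_output["token_usage"]` field follows the OpenAI chat model integration's convention - other providers may report usage differently - and `persist_usage` is a placeholder for your own database write:

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class TokenLoggingHandler(BaseCallbackHandler):
    """Logs token usage for every LLM call, including ones made during tool use."""

    def __init__(self, project_id: str):
        self.project_id = project_id
        self.total_tokens = 0

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # llm_output is provider-specific; OpenAI chat models report a
        # "token_usage" dict with prompt/completion/total counts.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)
        # persist_usage(self.project_id, usage)  # your local DB write goes here
```

Pass it at invocation time, e.g. `agent.invoke(inputs, config={"callbacks": [TokenLoggingHandler(project_id)]})`, and it propagates to the nested runs.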
Hit the same problem scaling to thousands of users. Solution? Treat token tracking as its own service.
Don’t tie it to project lifecycles. I built a separate token consumption service that runs independently. Each API call gets tagged with a session ID that never changes, even when the project it belongs to gets deleted. Your main app just needs a lookup table mapping current projects to session IDs.
Delete a project? Token records stay put for billing and analytics. You lose the active mapping but keep all historical data.
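Rough shape of the two tables, if it helps - the names are made up, but the point is that deleting a project only removes the mapping row:

```python
import sqlite3

conn = sqlite3.connect("tokens.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS project_sessions (
    project_id TEXT PRIMARY KEY,     -- current project, can disappear
    session_id TEXT NOT NULL UNIQUE  -- immutable tracking key
);
CREATE TABLE IF NOT EXISTS token_records (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    session_id TEXT NOT NULL,        -- survives project deletion
    prompt_tokens INTEGER,
    completion_tokens INTEGER,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")


def delete_project(project_id: str) -> None:
    # Drop only the active mapping; token_records keep the history for billing.
    conn.execute("DELETE FROM project_sessions WHERE project_id = ?", (project_id,))
    conn.commit()
```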
For LangGraph agents, intercept at the HTTP client level instead of chasing callbacks everywhere. I wrapped our HTTP client to log tokens before they reach LangChain. Catches everything - model calls, tools, retries, all of it.
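A sketch of the wrapping, using httpx event hooks since the OpenAI SDK accepts a custom `http_client`. This assumes non-streaming responses that carry a `usage` object in the JSON body; streaming needs extra handling:

```python
import httpx
from openai import OpenAI


def log_usage(response: httpx.Response) -> None:
    response.read()  # the body isn't loaded yet inside a response hook
    try:
        usage = response.json().get("usage")
    except ValueError:
        return  # not a JSON body (e.g. a streaming response)
    if usage:
        print("tokens:", usage)  # swap in your token-logger write here


client = OpenAI(http_client=httpx.Client(event_hooks={"response": [log_usage]}))
```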
Architecture: API calls → Token logger → Database. Completely separate from project management. Much cleaner than syncing through external tracing services.
I faced similar challenges when building a multi-tenant API recently. What worked for me was logging token usage in a local database, keyed by UUIDs rather than project names or user IDs. When a user creates a project it gets a UUID that never changes, even through renames. I use that same UUID in LangSmith for tracing, so it doubles as a reliable foreign key for local tracking. When a project is deleted, I soft-delete or archive its records so billing data persists.

For LangGraph agents, I override callback handlers at the graph level to track token consumption at each node. Make sure your callback covers both the primary model calls and the nested tool executions. For billing accuracy, local storage has proven more reliable for me than external services.
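To make the graph-level part concrete, here's roughly how I wire it up. `graph` stands in for your compiled LangGraph, and the metadata key is whatever you want to query on in LangSmith:

```python
import uuid

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class UsageHandler(BaseCallbackHandler):
    def __init__(self):
        self.calls = []

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # Collects provider-reported usage from every nested model call.
        self.calls.append((response.llm_output or {}).get("token_usage"))


project_uuid = str(uuid.uuid4())  # minted once at project creation, never reused
handler = UsageHandler()

# Callbacks and metadata in the config propagate to every node in the graph,
# so nested model calls and tool-driven calls are all covered in one place.
result = graph.invoke(
    {"messages": [("user", "hello")]},
    config={"callbacks": [handler], "metadata": {"project_uuid": project_uuid}},
)
```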
Why not just use UUID mapping? Create permanent tracking IDs that stick around even after project deletions. I keep mine in Redis for quick lookups and in Postgres for long-term storage. For LangGraph tokens, patch the OpenAI client directly - saves you from callback hell.
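A sketch of what I mean by patching - wrapping a single client instance rather than monkeypatching the module, so the change stays local (the Redis line is illustrative):

```python
import functools

from openai import OpenAI

client = OpenAI()
_original_create = client.chat.completions.create


@functools.wraps(_original_create)
def create_with_logging(*args, **kwargs):
    response = _original_create(*args, **kwargs)
    if getattr(response, "usage", None):
        # e.g. redis.incrby(f"tokens:{tracking_id}", response.usage.total_tokens)
        print("total tokens:", response.usage.total_tokens)
    return response


client.chat.completions.create = create_with_logging
```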
Look, I’ve dealt with this exact headache at scale. Manual tracking works but gets messy fast with hundreds of projects and multiple API endpoints.
Automated pipeline saved me. I use Latenode to build workflows that log token usage to my database after every API call, manage project lifecycle events, and handle soft deletion when projects get removed.
The real win is LangGraph agents. Instead of writing custom callbacks and hoping you catch every tool execution, build a Latenode workflow that monitors your entire agent pipeline. It captures token usage from all nested calls, aggregates everything, and stores it with the right project mappings.
I set up triggers that fire when projects get created or deleted, so token tracking stays synced automatically. No more broken foreign keys or missing data when users mess with their projects.
Automation handles edge cases I never would have thought of manually, like API calls failing halfway through or backfilling missing token data.
Same problem here. I ditched the external tracing services and built a hybrid setup instead. I keep a separate tracking table that maps my internal project IDs to immutable reference keys. When projects get deleted, I just flag the token records as archived instead of wiping everything.

For LangGraph token tracking, wrap your agent execution in a custom callback that adds up token counts across all tool calls. Intercept at the LLM level - don’t try tracking individual tools. You’ll get full visibility into usage patterns without relying on external services for billing.
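For OpenAI models specifically, there's also a built-in context manager that does exactly this aggregation, if you'd rather not write the handler yourself. `agent` here stands in for your compiled graph or executor, and in my experience the totals cover the nested tool-driven calls too:

```python
from langchain_community.callbacks import get_openai_callback

with get_openai_callback() as cb:
    agent.invoke({"messages": [("user", "summarize today's tickets")]})

# Totals cover every OpenAI call made inside the block, including ones
# triggered by tool-using steps.
print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
```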