I’m facing a problem with my LangSmith tracing setup. I’ve configured LangSmith properly and I’m testing with two different ways of calling the language model: a CrewAI agent and a direct LangChain call. The direct LangChain request gets traced perfectly, but the same request made through the CrewAI agent produces no trace at all. Both calls work and return the expected output; only the CrewAI side is missing from my LangSmith dashboard.
```python
from crewai import Agent
from langchain_openai import ChatOpenAI

language_model = ChatOpenAI(
    model="gpt-4o",
    temperature=0.5,
)

def build_assistant():
    # Set up a basic response agent
    assistant = Agent(
        role='Content Helper',
        goal='Provide helpful and accurate responses',
        backstory="""You are an experienced assistant that provides useful information.
        Always give straightforward and helpful answers.""",
        llm=language_model,
        verbose=True,
    )
    return assistant

if __name__ == "__main__":
    assistant = build_assistant()

    # Call 1: through the CrewAI agent's LLM wrapper -- not traced
    response_one = assistant.llm.call("Share a funny pun with me")
    print(response_one)

    # Call 2: directly through LangChain -- traced in LangSmith
    response_two = language_model.invoke("Tell me your best pun")
    print(response_two.content)
```
Does anyone have insights on why the crewai calls aren’t being traced?
CrewAI bypasses LangChain’s callback system that LangSmith needs for tracing. When you call assistant.llm.call(), CrewAI uses its own wrapper methods instead of hitting the LangChain model directly.
I hit this same issue six months ago with a multi-agent setup. Here’s what worked: add custom callbacks in your CrewAI agent config. Pass LangSmith’s callback handler directly to the agent constructor with callbacks=[langsmith_callback_handler] when you create your Agent instance. You’ll need to import and set up the LangSmith callback handler first.
This forces CrewAI to use LangSmith’s tracing even when it routes through its internal methods. Another option - some devs patch the LLM instance before giving it to CrewAI. That way all calls keep tracing no matter how CrewAI handles them.
CrewAI’s execution flow is the problem. When you call assistant.llm.call(), CrewAI intercepts it before LangChain’s callbacks can catch it.
I hit this same issue building a customer support bot with multiple agents. Fixed it by setting up LangSmith tracing at the environment level instead of depending on LangChain’s callbacks.
Set these environment variables before importing anything:
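The variable list seems to have been dropped from the post; these are the standard LangSmith tracing variables (the project name is an example, the key is a placeholder):

```python
import os

# Must be set before importing langchain, langsmith, or crewai,
# so the tracer reads them at import time.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "crewai-debugging"          # example project name
```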
This forces LangSmith to capture all LLM calls no matter how they’re wrapped. CrewAI can do its own routing, but the underlying OpenAI calls still get traced.
Also - try agent.execute_task() instead of agent.llm.call() if you can. CrewAI’s execute methods have better instrumentation in newer versions.
I’ve hit this same tracing nightmare with agent frameworks that wrap LangChain.
CrewAI agents skip the standard LangChain tracing hooks. When you call assistant.llm.call(), CrewAI uses its own internal methods that don’t trigger LangSmith’s instrumentation.
Honestly? Skip the tracing config headaches and automate this workflow with Latenode instead. You get proper logging and monitoring built in, plus full visibility into every agent interaction.
I switched a similar setup to Latenode last year - haven’t looked back. Same agent logic, better observability, no framework compatibility nightmares.
Their monitoring dashboards show exactly what’s happening at each step. Way more useful than basic LangSmith traces.
CrewAI breaks LangSmith’s auto-tracing since it wraps LLM calls weird. Monkey patch the OpenAI client before CrewAI starts up - I did this for chat agents last month and it worked. Just override the completion method and add LangSmith hooks yourself.