Why doesn't LangSmith capture LLM requests when using CrewAI agents?

I'm having trouble getting LangSmith to trace my language model calls when they go through CrewAI. I make LLM requests in two different ways: one goes directly through LangChain, and the other uses a CrewAI agent wrapper. Both calls work fine and return responses, but LangSmith only shows the direct LangChain call in its tracing dashboard. The CrewAI agent calls seem to be invisible to LangSmith monitoring.

from crewai import Agent
from langchain_openai import ChatOpenAI

# Shared LangChain chat model, used both directly and inside the CrewAI agent
model = ChatOpenAI(
    model="gpt-4o",
    temperature=0.5
)

def build_assistant():
    assistant = Agent(
        role='Helper Bot',
        goal='Provide helpful responses to user queries',
        backstory="""You are an AI that gives useful answers to questions.
        Always be friendly and informative in your responses.""",
        llm=model,
        verbose=True
    )
    return assistant

if __name__ == "__main__":
    bot = build_assistant()

    # Path 1: via the agent's llm attribute -- this never shows up in LangSmith
    response_one = bot.llm.call("What's a funny pun?")
    print(response_one)

    # Path 2: direct LangChain invoke -- this one is traced fine
    response_two = model.invoke("Share a clever joke")
    print(response_two.content)

Has anyone figured out how to make CrewAI calls visible in LangSmith tracing?

I hit this exact problem six months ago when we put CrewAI into production. CrewAI creates its own execution context that doesn't inherit LangSmith tracing settings properly.

What fixed it for me was wrapping the agent creation in LangSmith's @traceable decorator. Do this before building your assistant:

from langsmith import traceable

# Decorating the factory records the Agent construction as a LangSmith run
@traceable
def build_assistant():
    assistant = Agent(
        role='Helper Bot',
        goal='Provide helpful responses to user queries',
        backstory="""You are an AI that gives useful answers to questions.
        Always be friendly and informative in your responses.""",
        llm=model,
        verbose=True
    )
    return assistant

Also, don't call bot.llm.call() directly - use the Agent's task execution methods instead. That way the calls go through CrewAI's pipeline, where LangSmith can actually see them.

The direct LLM invoke works because it bypasses CrewAI completely. But if you want agent behavior tracked, you need to let CrewAI handle the execution flow.
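
For illustration, here's a minimal sketch of routing the request through CrewAI's task pipeline instead of calling the LLM directly. The Task/Crew/kickoff pattern is CrewAI's documented flow, but treat the exact field names as version-dependent:

from crewai import Crew, Task

# Wrap the original prompt in a Task so it runs through CrewAI's
# execution pipeline rather than bot.llm.call()
joke_task = Task(
    description="Share a clever joke",
    expected_output="A short, family-friendly joke",
    agent=bot,
)

crew = Crew(agents=[bot], tasks=[joke_task])
result = crew.kickoff()
print(result)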

CrewAI's setup can sometimes mess with LangSmith tracking. Try setting LANGCHAIN_TRACING_V2=true before you import CrewAI modules - that helped me out last week. The agent wrapper can handle things differently from direct LangChain calls.
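
In script form, that ordering might look like the sketch below. The project name is a placeholder, and your LangSmith API key is assumed to already be in the environment:

import os

# Set the tracing vars before any CrewAI/LangChain imports so they are
# visible at import time (LANGCHAIN_API_KEY is assumed to be set already)
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "crewai-debugging"  # placeholder name

from crewai import Agent  # noqa: E402 -- deliberately imported after env setup
from langchain_openai import ChatOpenAI  # noqa: E402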

Check if you're running the latest CrewAI version - they changed how tracing works recently. Also make sure your LangSmith env vars are exported globally, not just set in your script; the agent subprocess might not be inheriting them properly.
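
A quick sanity check for both points (importlib.metadata is standard library; compare the printed version against the latest release on PyPI):

import os
from importlib.metadata import version

print("crewai:", version("crewai"))

# Confirm the tracing vars are actually visible to this process
for var in ("LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"):
    print(var, "set" if os.environ.get(var) else "missing")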

Had this same issue a few months ago. CrewAI doesn't automatically pick up LangSmith tracing when it wraps your LLM - you need to enable it manually. Add langchain.debug = True right after your imports, and make sure LANGCHAIN_API_KEY and LANGCHAIN_PROJECT are set before creating the Agent. Also, you're using bot.llm.call(), which probably bypasses the Agent's request handling. Use execute_task() or other CrewAI methods instead so requests go through their tracing pipeline.
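
Roughly, that combination might look like this. Recent LangChain versions expose the debug flag through langchain.globals.set_debug, and execute_task's exact signature varies across CrewAI releases, so treat this as a sketch:

import os
from langchain.globals import set_debug  # newer equivalent of langchain.debug = True
from crewai import Task

set_debug(True)
assert os.environ.get("LANGCHAIN_API_KEY"), "set this before creating the Agent"
os.environ.setdefault("LANGCHAIN_PROJECT", "crewai-debugging")  # placeholder

bot = build_assistant()
joke_task = Task(
    description="Share a clever joke",
    expected_output="A one-liner joke",
    agent=bot,
)
print(bot.execute_task(joke_task))  # routes the call through CrewAI's pipeline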

CrewAI overrides LangChain's default callbacks when it initializes agents. I hit this same issue in a multi-agent project - only some calls showed up in my traces. Here's what fixed it for me: manually pass the LangSmith tracer as a callback. Import LangChainTracer (it lives in LangChain's tracer modules, e.g. langchain_core.tracers, not in langsmith itself), set it up with your project settings, then pass it as a callback when you create your model or Agent. The trick is getting CrewAI to recognize the tracer before it builds its internal execution pipeline; otherwise it just gets buried in the agent's workflow.
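
A sketch of that wiring, attaching the tracer at the LLM level since CrewAI versions differ in whether the Agent constructor accepts a callbacks parameter; the import path and constructor arguments come from langchain_core and may shift between releases:

import os
from langchain_core.tracers.langchain import LangChainTracer
from langchain_openai import ChatOpenAI
from crewai import Agent

tracer = LangChainTracer(project_name=os.environ.get("LANGCHAIN_PROJECT", "default"))

model = ChatOpenAI(
    model="gpt-4o",
    temperature=0.5,
    callbacks=[tracer],  # every call through this model reports to LangSmith
)

bot = Agent(
    role='Helper Bot',
    goal='Provide helpful responses to user queries',
    backstory="You are an AI that gives useful answers to questions.",
    llm=model,
    verbose=True,
)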

This tracing mess happens because CrewAI creates isolated execution contexts that break monitoring chains. I’ve hit this exact problem with multiple agents running different workflows - needed full visibility and couldn’t get it.

The real issue isn’t configuration. You’re mixing execution patterns. CrewAI agents need structured task flows for proper tracing, not direct LLM calls.

I stopped trying to patch these integrations together. Built the whole agent workflow in Latenode instead - now I can see every step clearly. Set up your LLM calls, agent logic, and monitoring all in one visual flow. No more guessing what’s happening.

Latenode handles orchestration between different services cleanly. Your CrewAI-style agent behavior works with full tracing, and you can modify flows easily when requirements change without touching code.

I’ve been running production agent workflows this way for months. Way more reliable than forcing different frameworks to play nice.

CrewAI doesn't inherit the LangSmith client config properly. I hit the same issue recently; CrewAI spawns its own process threads that lose the tracing context. You need to initialize LangSmith directly within CrewAI's execution scope. Add this before creating your agent:

import os
from langsmith import Client

client = Client()  # initializes the LangSmith client inside this process
os.environ['LANGCHAIN_CALLBACKS'] = '["langsmith"]'

Your main problem is using bot.llm.call(); that completely bypasses CrewAI and goes straight to the model. CrewAI agents need to process requests through their internal workflow to keep tracing visible. Switch to proper agent task execution, and LangSmith should start capturing those calls.