Langsmith logging not working in Google Colab environment

I’m having issues with Langsmith tracing in my Colab notebook.

I set up a basic RAG application using LangChain but can’t see any traces showing up in my Langsmith dashboard. Here’s my setup:

from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.prompts import ChatPromptTemplate
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain

def load_pdf_content(file_path):
    # Load the PDF and split it into retrieval-sized chunks
    pdf_loader = PyPDFLoader(file_path)
    documents = pdf_loader.load()
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=500,
        chunk_overlap=50
    )
    chunks = text_splitter.split_documents(documents)
    return chunks

def setup_vector_db(document_chunks):
    # Embed the chunks and index them in an in-memory Chroma store
    embedding_model = OpenAIEmbeddings(model="text-embedding-ada-002")
    vector_db = Chroma.from_documents(document_chunks, embedding_model)
    return vector_db

def build_qa_chain(vector_database):
    # Combine the retriever and the stuff-documents chain into one QA chain
    system_prompt = ChatPromptTemplate.from_messages([
        ("system", "Use the provided context to answer questions: {context}"),
        ("human", "{input}")
    ])
    
    document_chain = create_stuff_documents_chain(llm=chat_model, prompt=system_prompt)
    search_retriever = vector_database.as_retriever(search_kwargs={"k": 2})
    qa_chain = create_retrieval_chain(search_retriever, document_chain)
    return qa_chain

def ask_question(chain, query):
    # Run one query through the retrieval chain and return the answer text
    result = chain.invoke({"input": query})
    return result["answer"]

# Main execution
docs = load_pdf_content("sample.pdf")
vector_store = setup_vector_db(docs)
qa_system = build_qa_chain(vector_store)

while True:
    question = input("Ask something: ")
    if question.lower() == "quit":
        break
    answer = ask_question(qa_system, question)
    print(f"Answer: {answer}")

The code runs fine but I don’t see any activity in Langsmith. What am I missing for proper tracking?

Yeah, fixing those environment variables and the LLM import will work, but debugging Langsmith tracing in Colab is a nightmare. I’ve wasted hours on this exact problem.

The real issue? You’re manually wiring everything together and debugging each piece. When something breaks, you’re stuck digging through logs and API calls.

I ditched that approach and automated the whole workflow instead. Built a simple automation that handles PDF processing, creates the vector store, manages LLM calls, and logs everything properly. No more environment variable headaches or missing imports.

It runs reliably every time with proper logging built-in. You can set it to retry failed API calls or switch between LLM providers when one goes down.

Best part - you can trigger it from anywhere, not just Colab. Way cleaner than debugging tracing issues in notebooks.

Check out Latenode for setting this up: https://latenode.com

Colab caches old env vars in weird ways. Restart your runtime after setting the LangChain vars, then test with print(os.environ.get('LANGCHAIN_TRACING_V2')) to check they’re loaded. Also, to be safe, build your chains AFTER setting the vars - chain objects created beforehand may not pick up tracing.
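
Quick check cell after the restart (same three variable names the other answers use):

import os

# All three should print a value, not None
for var in ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]:
    print(var, "=", os.environ.get(var))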

You’re missing the environment variables. Langsmith won’t know where to send traces without them.

Add this at the top of your notebook:

import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your_langsmith_api_key"
os.environ["LANGCHAIN_PROJECT"] = "your_project_name"  # optional but recommended

I hit this same problem when I started using Langsmith. Tracing just fails silently without these vars.

Also check how you’re setting up your LLM. You reference chat_model but never show where it’s defined. If it’s not a LangChain-compatible model, tracing won’t work.

One more thing - if you’re in a shared Colab environment, API calls sometimes get throttled. Check your Langsmith project permissions.

Your code’s missing the language model import and setup. I see you’re calling chat_model in build_qa_chain but there’s no import or initialization anywhere. Without a proper LangChain LLM, the tracing has nothing to track. You need something like from langchain_openai import ChatOpenAI then chat_model = ChatOpenAI(model="gpt-3.5-turbo") before building your chain (sketch below). I’ve made this exact mistake copying code snippets - everything looks right but you forget the LLM setup.

Also, Colab can be weird with external API calls to Langsmith’s servers. If your environment variables are set but you still don’t see traces, try restarting the runtime.
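
Minimal sketch of the missing piece, assuming you’re on OpenAI - any LangChain chat model class works the same way, and the model name is just an example:

from langchain_openai import ChatOpenAI

# Must exist before build_qa_chain() is called
chat_model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)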

Had this exact issue last month - it’s usually network-related. Colab’s runtime blocks outbound connections to Langsmith’s endpoints, especially on the free tier. Yeah, you need those environment variables, but there’s more to check.

Run a quick connectivity test before your main code. I just use a basic HTTP request to see if Langsmith’s API is reachable. Sometimes switching runtimes or using a VPN fixes it.
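
Something like this is what I mean - the /info endpoint has worked for me as a lightweight, keyless ping, but adjust the URL if you’re on the EU or a self-hosted instance:

import requests

# Can Colab reach Langsmith's API at all?
resp = requests.get("https://api.smith.langchain.com/info", timeout=10)
print(resp.status_code, resp.text[:200])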

Check your Langsmith dashboard’s project settings too. My API key had wrong permissions - could read projects but couldn’t write traces. It fails silently with no error messages, which makes debugging a pain.
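
You can smoke-test permissions from the notebook with the langsmith SDK. Rough sketch - double-check the method names against your installed version:

from itertools import islice
from langsmith import Client

client = Client()  # picks up LANGCHAIN_API_KEY from the environment

# Read check: passes even with a read-only key
print("read ok:", [p.name for p in islice(client.list_projects(), 1)])

# Write check: this is where a key without write access fails
project = client.create_project("permissions-smoke-test")
print("write ok:", project.name)
client.delete_project(project_name="permissions-smoke-test")  # clean up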