LangSmith tracing not working in Google Colab environment

Hi everyone,

I’m having trouble getting my runs to show up in LangSmith when I execute the code in Google Colab. The code itself runs fine, but nothing appears in the LangSmith dashboard.

import logging

from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores.faiss import FAISS
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import AzureOpenAIEmbeddings, AzureChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langsmith import Client

def load_web_content(webpage_url):
    logging.getLogger("langchain_text_splitters.base").setLevel(logging.ERROR)
    web_loader = WebBaseLoader(webpage_url)
    documents = web_loader.load()
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=500,
        chunk_overlap=50
    )
    split_documents = text_splitter.split_documents(documents)
    return split_documents

def setup_vector_database(documents):
    embedding_model = AzureOpenAIEmbeddings(
        model="text-embedding-ada-002",
        azure_endpoint="your-endpoint-here",
        openai_api_key="your-key-here",
        openai_api_version="2023-12-01-preview"
    )
    vector_db = FAISS.from_documents(documents, embedding_model)
    return vector_db

def build_chat_chain(vector_db):
    chat_prompt = ChatPromptTemplate.from_messages([
        ("system", "Please answer the question using the provided context: {context}"),
        MessagesPlaceholder(variable_name="conversation_history"),
        ("human", "{input}")
    ])
    
    # The chat model was never defined in the original snippet; create it here
    # with the same placeholder credentials used for the embeddings.
    chat_model = AzureChatOpenAI(
        azure_deployment="your-deployment-here",
        azure_endpoint="your-endpoint-here",
        openai_api_key="your-key-here",
        openai_api_version="2023-12-01-preview"
    )
    
    document_chain = create_stuff_documents_chain(
        llm=chat_model,
        prompt=chat_prompt
    )
    
    doc_retriever = vector_db.as_retriever(search_kwargs={"k": 5})
    final_chain = create_retrieval_chain(
        doc_retriever,
        document_chain
    )
    return final_chain

def handle_user_message(chain, user_question, history):
    # create_retrieval_chain expects the user query under the "input" key
    # and returns the response under "answer".
    result = chain.invoke({
        "input": user_question,
        "conversation_history": history
    })
    return result["answer"]

conversation_log = []

if __name__ == "__main__":
    web_docs = load_web_content("https://docs.smith.langchain.com/tutorials")
    database = setup_vector_database(web_docs)
    chat_chain = build_chat_chain(database)
    
    while True:
        question = input("Ask me: ")
        if question.lower() == "quit":
            break
        answer = handle_user_message(chat_chain, question, conversation_log)
        conversation_log.append(HumanMessage(content=question))
        conversation_log.append(AIMessage(content=answer))
        print("Assistant:", answer)

The chain runs end to end and answers questions, but no traces show up in my LangSmith project. Has anyone run into this before?

Any help would be really appreciated!

Been there - debugging LangSmith tracing in Colab is a nightmare. Environment variables fix most of it, but there’s always more mess hiding underneath.

Here’s the thing: you’re doing all this tracing setup manually when you could automate the whole monitoring pipeline. I built a workflow that handles the LangSmith integration automatically and runs everything in a far more reliable environment than Colab.

Moved my RAG pipeline to an automated workflow that:

  • Sets up environment variables correctly
  • Handles LangSmith client initialization
  • Runs document processing and retrieval automatically
  • Captures all traces without manual setup
  • Sends notifications when things break

Once it’s set up, I never worry about missing traces or environment issues. Can trigger it from anywhere and get consistent logging every time.

Latenode made this super easy to build. Just drag and drop components, connect APIs, and it handles all the reliability stuff automatically. No more debugging why traces aren’t showing up in random Colab sessions.

Had this exact problem last month. You’re missing the environment variables for LangSmith tracing. Set LANGCHAIN_TRACING_V2=true and your LANGCHAIN_API_KEY before running your code. In Google Colab, add these at the top of your notebook:

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"
os.environ["LANGCHAIN_PROJECT"] = "your-project-name"  # optional but recommended

Without these, LangChain won’t send traces to LangSmith even though your code runs fine. Make sure you’re using your LangSmith API key from the settings page, not your OpenAI key. Once I added these, traces showed up right away in my dashboard.
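If it helps, here’s a quick stdlib-only sanity check you can run in a Colab cell before building the chain, to confirm the tracing variables are actually set. The function name and variable list are my own, not part of the LangSmith SDK:

```python
import os

# Env vars LangChain checks before sending traces to LangSmith.
REQUIRED_VARS = ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY"]

def tracing_env_ready() -> bool:
    """Return True if the LangSmith tracing variables look usable."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        print("Missing LangSmith env vars:", ", ".join(missing))
        return False
    if os.environ["LANGCHAIN_TRACING_V2"].lower() != "true":
        print("LANGCHAIN_TRACING_V2 is set but not 'true'; tracing stays off.")
        return False
    return True

# Set the vars first, then verify before invoking the chain.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"
print(tracing_env_ready())  # → True
```

A common failure mode this catches is setting the variables in a different Colab cell after LangChain has already started tracing, or typoing the variable name.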

Check whether your LangSmith client is initialized properly. You imported Client, but I don’t see where you’re using it in the code. Also make sure your Colab session has internet access to reach the LangSmith servers - Colab sometimes blocks certain outbound connections.
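Building on the point above: constructing the Client explicitly is a cheap way to surface missing-key problems early, since Client() reads LANGCHAIN_API_KEY from the environment. A minimal sketch, with the helper function being my own wrapper and the try/except only there so the cell degrades gracefully if langsmith isn’t installed:

```python
import os

# Placeholder values - substitute your real LangSmith API key.
os.environ.setdefault("LANGCHAIN_TRACING_V2", "true")
os.environ.setdefault("LANGCHAIN_API_KEY", "your-langsmith-api-key")

def init_langsmith_client():
    """Build a LangSmith Client, or return None if the SDK is absent."""
    try:
        from langsmith import Client
    except ImportError:
        print("langsmith is not installed; run `pip install langsmith` first.")
        return None
    # Client() picks up LANGCHAIN_API_KEY from the environment, so this
    # fails fast if the key was never set in this Colab session.
    return Client()

client = init_langsmith_client()
```

Note that tracing itself does not require instantiating Client manually - the env vars are enough - but having a client handy lets you query your runs from the same notebook later.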