I’m working with a workflow that gets used in two different endpoints of my application. The workflow contains multiple steps and connections between them.
First usage: direct workflow via /endpoint-a
Second usage: part of a larger workflow via /endpoint-b
My issue is that I need different logging destinations. When users hit endpoint-a, I want traces to go to LangSmith workspace1. When they use endpoint-b, traces should go to workspace2.
I can’t use the @traceable decorator since the same functions get called from both places.
Is there a way to tell my workflow which LangSmith workspace to use when I run it?
os.environ["LANGCHAIN_PROJECT"] = mySettings.project_name # works for endpoint-a
I looked into RunTrees but couldn’t figure out how to pass the configuration properly.
import uuid

from langsmith.run_trees import RunTree

settings = {
    "execution_id": request.session_id,
    "max_depth": 100,
    "configurable": {
        "session_id": str(uuid.uuid4()),
        "checkpoint_path": ""
    },
}

workflow = workflow_factory.create_workflow()

trace = RunTree(
    run_type="chain",  # "workflow" isn't a valid run_type, so I switched to "chain"
    name="My Workflow",
    inputs=initial_state,
    project_name="workspace2"
)

# how do I connect the workflow with settings and get results?
trace.end(outputs=???)
trace.post()
Been there with similar routing issues in production. Environment variables fall apart fast with multiple concurrent requests hitting different endpoints.
I’d skip all the RunTrees and LangSmith config headaches. You’re basically building a workflow orchestrator that needs dynamic routing based on entry points.
Don’t wrestle with trace contexts and thread safety. Set up Latenode to handle your workflow execution instead. Configure different scenarios that auto-route to the right logging destinations based on which endpoint triggers them.
Make two Latenode scenarios - one for endpoint-a logging to workspace1, another for endpoint-b logging to workspace2. Your shared workflow logic stays the same, but Latenode handles execution context and logging routing automatically.
Best part? No need to modify your existing workflow code or mess with RunTree connections. Just call the right Latenode scenario from each endpoint. It handles execution, captures all outputs, and routes traces where they belong.
Used this pattern for several multi-tenant apps where different clients needed isolated logging. Works great and kills all the threading headaches.
Honestly, just use a factory pattern to create preconfigured LangSmith clients. Skip the RunTree complexity and inject the right client based on which endpoint calls it. Something like client_factory.get_client(workspace_name), then pass that client through your workflow. Way cleaner than fighting global vars or trace contexts.
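A rough sketch of that factory, with the client class injected so the caching logic is visible (and testable) without a live LangSmith connection. In the real app you'd pass `langsmith.Client` as `client_cls`; the class name, the per-workspace env-var convention, and the `"dummy"` fallback key are all made up for illustration:

```python
import os

class WorkspaceClientFactory:
    """Hypothetical factory: one cached tracing client per workspace."""

    def __init__(self, client_cls):
        # client_cls is injected (e.g. langsmith.Client) so the
        # caching logic works the same in tests and production
        self.client_cls = client_cls
        self._cache = {}

    def get_client(self, workspace_name):
        # build the client once per workspace, then reuse it
        if workspace_name not in self._cache:
            # assumes one API key per workspace, e.g.
            # LANGSMITH_API_KEY_WORKSPACE1 / LANGSMITH_API_KEY_WORKSPACE2
            key = os.getenv(
                f"LANGSMITH_API_KEY_{workspace_name.upper()}", "dummy"
            )
            self._cache[workspace_name] = self.client_cls(api_key=key)
        return self._cache[workspace_name]
```

Each endpoint then calls get_client("workspace1") or get_client("workspace2") and threads the result through to wherever the run is created.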
Your RunTree setup is missing the connection between tracing and workflow execution. You need to use context managers properly - don't set environment variables globally. Instead, wrap your workflow invocation with the RunTree context.

Here's what worked for me: create the RunTree with your target project name, then use it as a context manager around your workflow execution. The workflow automatically inherits the trace context from the parent RunTree. For the missing outputs, capture the final state from your workflow execution and pass it to trace.end().

I built a simple wrapper class that handles tracing setup and teardown, then instantiate it with different project names depending on which endpoint calls it. Keeps your workflow logic clean and gives you proper trace routing without the thread-safety headaches of environment variables.
Had this exact problem last year building an analytics platform. The trick is treating trace context as request-scoped instead of fighting global state.
Here’s what worked: create a TraceManager class that handles context switching per request. Initialize it with your target workspace name, then pass it through your workflow chain.
import os

from langsmith import Client, trace

class TraceManager:
    def __init__(self, project_name):
        self.project_name = project_name
        self.client = Client(api_key=os.getenv("LANGCHAIN_API_KEY"))

    def execute_workflow(self, workflow, initial_state):
        # trace() builds the RunTree for you and posts/patches it
        # on exit, so the run lifecycle is handled automatically
        with trace(
            name="workflow_execution",
            run_type="chain",
            inputs={"state": initial_state},
            project_name=self.project_name,
            client=self.client,
        ) as rt:
            result = workflow.invoke(initial_state)
            rt.end(outputs={"result": result})
            return result
For your endpoints, just spin up different TraceManager instances:
# endpoint-a
trace_mgr = TraceManager("workspace1")
result = trace_mgr.execute_workflow(workflow, state)
# endpoint-b
trace_mgr = TraceManager("workspace2")
result = trace_mgr.execute_workflow(workflow, state)
This keeps your workflow code unchanged while giving clean trace routing. No environment variable juggling or threading headaches. LangSmith's trace context manager handles the RunTree lifecycle automatically.
I hit the same issue building a multi-tenant app. Environment variables work, but they aren't thread-safe with concurrent requests to different endpoints.

Here's what fixed it for me: use LangSmith's client config directly in your workflow execution context. Don't mess with global environment variables - just initialize a specific LangSmith client with your target project name and pass it through your workflow execution.

Your RunTree approach is close, but you're missing the connection between the tree and actual execution. Set the RunTree as the parent context before invoking your workflow, then capture the final state as outputs. Make sure your workflow components inherit the trace context from the parent RunTree.

Another thing that worked well: create wrapper functions for each endpoint that establish the proper tracing context before calling the shared workflow logic. That keeps your core workflow code untouched while routing traces to different destinations based on entry point.
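The wrapper-per-endpoint idea can be sketched roughly like this. To keep it runnable without a live LangSmith connection, run_workflow stands in for whatever shared logic both endpoints call, and the tracing context is reduced to a project tag - in the real app the wrapper is where you'd open a RunTree (or trace() context) bound to the project. All names here are illustrative:

```python
def run_workflow(initial_state):
    # stand-in for the shared workflow logic both endpoints call,
    # e.g. workflow.invoke(initial_state, settings)
    return {"answer": initial_state["query"].upper()}

def make_endpoint(project_name):
    """Build an endpoint wrapper that binds every run to one project.

    In the real app, this is where you'd open a RunTree or trace()
    context for `project_name` around the shared call and end it
    with the final state as outputs.
    """
    def wrapper(initial_state):
        outputs = run_workflow(initial_state)
        return {"project": project_name, "outputs": outputs}
    return wrapper

# one wrapper per entry point, same shared logic underneath
handle_endpoint_a = make_endpoint("workspace1")
handle_endpoint_b = make_endpoint("workspace2")
```

The shared workflow never learns about workspaces; only the thin entry-point wrappers do.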