Setting up routing logic in a LangGraph workflow with conditional branching

I’m working on building an AI system that needs to route between different workflows based on user input. The system should decide whether a user wants a basic information lookup or needs complex processing through multiple stages. I’m encountering a TypeError when I attempt to access the AI response. Here’s my current setup:

import os
from typing import Annotated, TypedDict

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages

load_dotenv()

class WorkflowState(TypedDict):
    conversations: Annotated[list, add_messages]

# Setup LLM
client = ChatOpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-3.5-turbo",
    temperature=0.1,
    max_tokens=200
)

def build_agent_node(state, prompt_text):
    user_msgs = [m for m in state["conversations"] if isinstance(m, HumanMessage)]
    bot_msgs = [m for m in state["conversations"] if isinstance(m, AIMessage)]
    sys_msg = [SystemMessage(content=prompt_text)]
    all_msgs = sys_msg + user_msgs + bot_msgs
    response = client.invoke(all_msgs)
    return {"conversations": [response]}

def load_data_from_file():
    data_store = {}
    try:
        with open('tech_data.csv', 'r') as file:
            # Process file content
            pass
    except FileNotFoundError:
        print("Data file missing")
    return data_store

def routing_decision(state):
    latest_query = [m for m in state["conversations"] if isinstance(m, HumanMessage)][-1].content
    
    decision_prompt = """
    Analyze the user request and classify it as either:
    - 'info_lookup' for simple factual questions
    - 'complex_task' for development or multi-step processes
    
    Return only the classification.
    """
    
    result = client.invoke([SystemMessage(content=decision_prompt)] + [HumanMessage(content=latest_query)])
    
    # This line causes the error
    choice = result["conversations"][0].lower()
    
    if "info_lookup" in choice:
        return "simple_lookup"
    else:
        return "task_processor"

# Define processing nodes
processor_a = lambda s: build_agent_node(s, "You handle initial analysis...")
processor_b = lambda s: build_agent_node(s, "You design solutions...")
processor_c = lambda s: build_agent_node(s, "You implement code...")

# Build workflow
workflow = StateGraph(WorkflowState)
workflow.add_node("task_processor", processor_a)
workflow.add_node("solution_designer", processor_b)
workflow.add_node("code_writer", processor_c)
workflow.add_node("simple_lookup", simple_info_node)

workflow.add_conditional_edges(START, routing_decision)

The error occurs when I attempt to access the response as if it were a dictionary. How can I properly extract the content from the AI response object? I need to obtain the text content to ensure the routing decisions work correctly.

This is a classic LangChain mistake. When you use client.invoke(), it returns an AIMessage object, not a dictionary. You can’t access result["conversations"][0] because that structure doesn’t exist.

In your routing_decision function, just grab the content directly:

result = client.invoke([SystemMessage(content=decision_prompt)] + [HumanMessage(content=latest_query)])
choice = result.content.lower()

The AIMessage has a .content attribute with the actual text response. This’ll fix your TypeError and get your routing working. I made this exact mistake when I started with LangChain - I kept treating the response like a dict when it’s actually a message object.
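For reference, here’s your routing function with only the extraction changed (everything else is your original logic):

def routing_decision(state):
    latest_query = [m for m in state["conversations"] if isinstance(m, HumanMessage)][-1].content

    decision_prompt = """
    Analyze the user request and classify it as either:
    - 'info_lookup' for simple factual questions
    - 'complex_task' for development or multi-step processes

    Return only the classification.
    """

    result = client.invoke([SystemMessage(content=decision_prompt), HumanMessage(content=latest_query)])

    # result is an AIMessage, so read .content instead of indexing into it
    choice = result.content.lower()

    if "info_lookup" in choice:
        return "simple_lookup"
    else:
        return "task_processor"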

Your problem is a misunderstanding of what ChatOpenAI’s invoke method returns. It gives you an AIMessage object directly, not a dictionary with a conversations key. I hit the same issue when I built my first routing system.

Change this line in your routing_decision function:

choice = result["conversations"][0].lower()

To:

choice = result.content.lower()

Also, keep your state structure consistent across the workflow. Your build_agent_node function returns a dictionary with a conversations key, which LangGraph merges into the graph state, but your routing function tries to read that same dictionary shape off the raw LLM response, where it doesn’t exist. The invoke method returns the message object itself - just use .content to get the actual text for routing.
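To make the distinction concrete, here’s a minimal sketch of the two return shapes (example_node and example_router are made-up names; the rest uses your setup):

# A graph node returns a partial state update; add_messages merges the
# new message into state["conversations"]:
def example_node(state):
    response = client.invoke(state["conversations"])  # response is an AIMessage
    return {"conversations": [response]}              # dict keyed by the state field

# A routing function returns a plain string naming the next node:
def example_router(state):
    result = client.invoke(state["conversations"])    # also an AIMessage, never a dict
    if "info_lookup" in result.content.lower():
        return "simple_lookup"
    return "task_processor"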

You’ve got object type confusion. ChatOpenAI’s invoke method returns an AIMessage object, not a dictionary like your state format.

I hit this same issue last year building a document processing pipeline. Easy fix - just access the content property:

result = client.invoke([SystemMessage(content=decision_prompt)] + [HumanMessage(content=latest_query)])
choice = result.content.strip().lower()

Also spotted another problem - you’re missing the simple_info_node function that your workflow references. Define that before adding it to your graph.
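A minimal sketch of what that could look like (the prompt text is a placeholder - it just reuses your build_agent_node helper):

def simple_info_node(state):
    # Placeholder prompt - swap in whatever your lookup node should actually do
    return build_agent_node(state, "You answer simple factual questions concisely.")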

For complex routing like this, I test the routing logic separately first. Just create a simple test function that prints the routing decisions, then integrate it into the full workflow.
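Something like this works as a quick check (a sketch - the sample queries are made up):

if __name__ == "__main__":
    # Exercise the router on its own before wiring it into the graph
    sample_queries = [
        "What year was Python first released?",                # expect: simple_lookup
        "Build me a REST API with auth and a database layer",  # expect: task_processor
    ]
    for query in sample_queries:
        state = {"conversations": [HumanMessage(content=query)]}
        print(f"{query!r} -> {routing_decision(state)}")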

The main thing to understand is that LangGraph state management and direct LLM responses use different object structures: node return values get merged into the graph state, while client.invoke() hands you the message object itself.

The issue is you’re mixing up the LangGraph state structure with the LLM response format. client.invoke() returns an AIMessage directly - no nested dictionary. Just use choice = result.content.strip().lower() and you’re good. Also, you’re missing the simple_info_node function, which will throw another error.