MemorySaver checkpointer causing validation issues when examining LangGraph message flow

I’m working on a LangGraph chatbot that uses MemorySaver to keep conversation history. I’m following the official tutorial but added my own system message for tool guidance. The issue happens when I run the graph and check the output.

Here’s my graph setup:

def build_template():
    return (ChatPromptTemplate.from_messages(
        [
            (
                "system",
                system_message.format(columns = column_data),
            ),
            MessagesPlaceholder(variable_name="conversation_history"),
            ("human", "{user_query}"),
        ]
    ))

storage = MemorySaver()

class ConversationState(TypedDict):
    messages: Annotated[list, add_messages]

builder = StateGraph(ConversationState)

model_with_functions = model.bind_tools(available_tools)

def assistant_node(state: ConversationState):
    current_messages = state["messages"]
    
    template = build_template()
    processed_prompt = template.format(
        conversation_history=state["messages"][:-1],
        user_query=current_messages
    )
    
    result = model_with_functions.invoke(processed_prompt)
    state["messages"].append({"role": "assistant", "content": result})
    
    return {"messages": state["messages"]}

builder.add_node("assistant", assistant_node)

function_executor = ToolNode(tools=available_tools)
builder.add_node("functions", function_executor)

builder.add_conditional_edges(
    "assistant",
    function_condition,
)
builder.add_edge("functions", "assistant")
builder.add_edge(START, "assistant")

compiled_graph = builder.compile(checkpointer=storage)

When I test it like this:

test_message = "Hello! I'm Sarah."

results = compiled_graph.stream(
    {"messages": [("user", test_message)]}, configuration, stream_mode="values"
)
for result in results:
    result["messages"][-1].pretty_print()

I get a ValidationError about AIMessage content validation. The error shows it’s trying to process tuples instead of proper message objects. Even when I just print the results directly, I see the same validation error.

I think the problem might be in how I’m handling the ChatPromptTemplate since that’s different from the basic tutorial. Any ideas what I’m doing wrong with the message formatting or state handling?

you’re mixing message formats in the assistant_node. when you call template.format(), it gives you back a single string, but LangGraph needs proper message objects. skip the template formatting and pass the messages straight to the model - the checkpointer and the add_messages reducer already maintain the conversation history for you.

You’re treating messages like strings when LangGraph needs proper message objects. Hit this same issue last year building something similar.

Your assistant_node is doing way too much. Ditch the template formatting and just pass state messages straight to your model. It already knows how to handle conversation history.

Here’s what your assistant node should look like:

def assistant_node(state: ConversationState):
    result = model_with_functions.invoke(state["messages"])
    return {"messages": [result]}

That’s it. The add_messages reducer handles appending to your message list.
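To make the reducer behavior concrete, here’s a simplified toy stand-in (not the real implementation - the actual add_messages also coerces formats and merges by message id), showing why returning `{"messages": [result]}` is all a node needs to do:

```python
# Simplified stand-in for LangGraph's add_messages reducer (hypothetical
# toy -- the real reducer also coerces formats and merges by message id).
def add_messages_toy(existing: list, new: list) -> list:
    # Concatenate: the node's return value is appended, never overwritten.
    return existing + new

# What happens when a node returns {"messages": [result]}:
state = {"messages": ["human: Hello! I'm Sarah."]}
node_update = {"messages": ["ai: Hi Sarah!"]}
state["messages"] = add_messages_toy(state["messages"], node_update["messages"])
```

Because the reducer does the appending, manually calling `state["messages"].append(...)` inside the node double-writes the message.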

For the system message, add it when you initialize conversation state instead of in the template. Keeps everything as proper message objects from the start.
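Roughly like this - a sketch of the first invocation input, assuming you seed the thread once with the system message (role dicts shown here; LangChain also accepts SystemMessage objects). The `system_prompt` string is a placeholder for your `system_message.format(columns=column_data)`:

```python
# Placeholder for the question's system_message.format(columns=column_data).
system_prompt = "You are a helpful assistant. Available columns: id, name."

# Hypothetical first invocation input: seed the thread with the system
# message once; MemorySaver then carries it forward on the same thread_id.
initial_input = {
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hello! I'm Sarah."},
    ]
}
```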

Validation errors will disappear once you stop converting messages to strings and back to objects.

Your assistant_node function is double-processing messages. You’re formatting the template with conversation history, then trying to append a dictionary to the messages list - but LangGraph needs proper message objects the whole way through.

Ditch the template.format() call and let the model handle messages directly. Your append operation’s wrong too - return the new message from model invocation instead of manually appending dictionaries.

Just invoke the model with the current messages state and return that result as part of the messages list. MemorySaver checkpointer works fine when message objects keep their proper structure.

This ValidationError comes from inconsistent message types in your pipeline. In assistant_node, the template produces a string, which you then try to append as a dictionary to the messages list, but the model requires that every message stays an object.

I hit the same issue building a LangGraph chatbot with MemorySaver. The fix was to keep message objects consistent through the entire flow: instead of formatting the template first and then invoking the model, construct your messages as proper objects from the start.

Also review your function_condition; if it mishandles message types, it can trigger validation errors that propagate through your state management.
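Here’s a minimal sketch of what "objects from the start" means. I’m using a toy dataclass as a stand-in for LangChain’s message classes (real code would use SystemMessage / HumanMessage from langchain_core.messages), and the system text is a made-up placeholder:

```python
from dataclasses import dataclass

# Toy stand-in for LangChain message classes (hypothetical; real code
# would use langchain_core.messages.SystemMessage / HumanMessage).
@dataclass
class Message:
    role: str
    content: str

def build_messages(history: list, user_query: str) -> list:
    # Objects the whole way through -- no template.format() string round-trip.
    return [Message("system", "Answer using the provided columns."),
            *history,
            Message("user", user_query)]

msgs = build_messages([], "Hello! I'm Sarah.")
```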

your user_query parameter is taking the entire current_messages list, but the template needs a string. that’s causing the tuple validation errors. just use user_query=current_messages[-1].content to get the latest message content only.
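a quick sketch of that fix, with a toy message class standing in for a LangChain HumanMessage:

```python
# Toy message class standing in for a LangChain HumanMessage (hypothetical).
class Message:
    def __init__(self, role: str, content: str):
        self.role = role
        self.content = content

current_messages = [Message("user", "Hello! I'm Sarah.")]

# The template's {user_query} slot expects a plain string, not the whole list:
user_query = current_messages[-1].content
```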