Unexpected keyword argument error with 'response_format' in LangGraph's create_react_agent function

I’m running into a confusing issue with LangGraph. When I try to use the response_format parameter in the create_react_agent function, I get this error:

TypeError: create_react_agent() got an unexpected keyword argument 'response_format'

This is strange because the documentation clearly shows this parameter should work. Has anyone else encountered this problem? I’m wondering if there’s a version mismatch or something I’m missing.

Here’s my code that’s causing the issue:

from pydantic import BaseModel, Field
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from typing import Literal
from dotenv import load_dotenv

load_dotenv()

llm = ChatOpenAI(model="gpt-4o")

class StockResponse(BaseModel):
    """Format for stock price responses."""
    price: str = Field(description="Current stock price")
    trend: str = Field(description="Price trend direction")

@tool
def check_stock_price(symbol: Literal["AAPL", "GOOGL"]):
    """Tool to check stock prices."""
    if symbol == "AAPL":
        return "Apple stock is at $150, trending up"
    elif symbol == "GOOGL":
        return "Google stock is at $2800, trending down"
    else:
        raise ValueError("Unsupported stock symbol")

prompt = "Check stock prices for the requested company."

available_tools = [check_stock_price]

agent = create_react_agent(
    llm,
    tools=available_tools,
    response_format=StockResponse,  # this is the argument the TypeError points at
    state_modifier=prompt
)

query = {"messages": [("user", "What's Apple's current stock price?")]}
result = agent.invoke(query)

print('result:', result)

Any ideas what might be wrong here?

Had this exact same issue on a structured output project. It's definitely a version problem, but here's the part that got me: even after upgrading langgraph, you need to completely restart your Python environment (kernel, IDE, whatever is running your code). I spent hours debugging because pip showed the latest version while my IDE was still using cached imports from the old one. Also, response_format behaves differently depending on which model you're using with ChatOpenAI; some models handle structured output much better than others. So if you're still having problems after fixing the version, try a different model to rule out a compatibility issue.
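One way to confirm whether the stale-import problem is biting you is to check the signature your running interpreter actually sees, instead of trusting what pip reports. This is a quick diagnostic sketch using the standard library's inspect module; demo_agent_factory below is a hypothetical stand-in, so run the same check against langgraph.prebuilt.create_react_agent in your own environment:

```python
import inspect

def accepts_kwarg(func, name):
    """Return True if func's signature accepts the keyword argument `name`."""
    params = inspect.signature(func).parameters
    if name in params:
        return True
    # A **kwargs parameter accepts any keyword argument.
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

# Stand-in function for illustration only; in your environment, do:
#   from langgraph.prebuilt import create_react_agent
#   print(accepts_kwarg(create_react_agent, "response_format"))
def demo_agent_factory(model, tools, *, response_format=None):
    ...

print(accepts_kwarg(demo_agent_factory, "response_format"))  # True
print(accepts_kwarg(demo_agent_factory, "state_modifier"))   # False
```

If this prints False for response_format even after upgrading, your interpreter is still importing the old install, and a full restart (or fixing which environment your IDE uses) is the actual fix.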

check your langgraph version - the response_format param was added in newer versions. run pip install --upgrade langgraph and see if that fixes it. had the same issue last week

This happens when you're running an old LangGraph version that doesn't have the response_format parameter yet. I hit the same issue a few months back on a similar project. The parameter was only added in newer releases, so you'll need to update your install. Also check that your langchain-core version is compatible; mismatched versions cause conflicts. If you still get errors after upgrading, run pip cache purge and reinstall fresh. The docs often describe the latest features even when you're running an older version.
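To see exactly which versions your code is actually resolving (before and after the upgrade), you can query package metadata from inside the running interpreter rather than relying on pip output that may come from a different environment. A minimal sketch using the standard library's importlib.metadata:

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version string of `package`, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Print the versions the *current* interpreter resolves, which is what matters
# when pip and your IDE point at different environments.
for pkg in ("langgraph", "langchain-core", "langchain-openai"):
    print(pkg, "->", installed_version(pkg) or "not installed")
```

Run this from the same interpreter that raises the TypeError; if the langgraph version it reports is older than the one pip claims to have installed, you've found the mismatch.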