I’m using LangGraph version 0.3.x and working with the build_react_agent function to create my agent. I noticed that the function has an optional prompt parameter, but I’m confused about what happens when I don’t provide one.
Here’s what I want to know:
When I skip the prompt parameter, does build_react_agent automatically use some built-in ReAct prompt behind the scenes?
If there is a default prompt, what does it contain? How does it tell the language model to follow the ReAct pattern (Thought, Action, Action Input, Observation)?
If there’s no default prompt at all, how does the agent make sure the model follows the ReAct structure? Is there some other internal code that handles this?
Basically, I’m trying to figure out how build_react_agent makes the ReAct pattern work when I don’t give it a specific prompt.
Here’s my code example:
from langgraph.prebuilt import build_react_agent
# my_model and my_tools are set up already
# my_model = ... # Language model instance
# my_tools = [...] # Tool collection
# Building agent without specifying prompt
my_agent = build_react_agent(my_model, my_tools)
# What makes my_agent follow ReAct format here?
I’d really appreciate any explanation about how prompt handling works internally in this situation.
Yes, build_react_agent in LangGraph 0.3.x does fall back to a default prompt template when you don’t provide one. I found this out the hard way while debugging why my agent was acting weird and had to dig into what was actually happening behind the scenes. The default template is pretty bare-bones: it covers tool usage and output formatting, but that’s about it. What’s interesting is that it assumes the LLM already knows the ReAct pattern. It basically tells the model “use these tools and format your responses properly” without walking through the whole think-act-observe cycle explicitly. Checking the agent’s internal state and message history is what really helped me figure this out. The default works okay for simple stuff, but I ended up writing custom prompts for my production apps because I needed tighter control over the reasoning and more consistent outputs.
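If you want to verify this yourself, dumping the message history after a run makes it visible. Here’s a minimal sketch, reusing my_model and my_tools from the question and assuming build_react_agent returns a compiled graph whose state carries a messages list:
agent = build_react_agent(my_model, my_tools)  # no prompt argument
# Run once, then inspect every message the agent accumulated
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
for msg in result["messages"]:
    # Shows whether each step was model reasoning, a tool call, or a tool result
    print(type(msg).__name__, ":", getattr(msg, "content", msg))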
Yeah, build_react_agent definitely has a built-in prompt when you don’t provide one. I’ve used LangGraph 0.3.x quite a bit, and it automatically sets up the ReAct pattern with system instructions that guide the model through the think-act-observe cycle. The default template tells the model to show its reasoning, pick the right tools, and format everything properly. You can see this in action - even without a custom prompt, the agent naturally switches between thinking and using tools. LangGraph’s internal prompt engineering handles all this behind the scenes and works well for most situations, which is why your agent runs smoothly without any prompt setup.
Yeah, I totally understand where you’re coming from! LangGraph does use a default ReAct template if you don’t give it a prompt. It basically guides the LLM to think, act, and then work from the results, which is pretty neat! You can tweak it with your own prompt later if you want, though.
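For example, swapping in your own prompt could look like this (a sketch that reuses my_model and my_tools from the question and assumes the optional prompt parameter takes a plain string, as the question describes):
custom_prompt = (
    "You are a helpful assistant. Think about what to do next, "
    "call one of the available tools when needed, and use each "
    "tool result to decide your next step before answering."
)
# Same call as before, but with an explicit prompt instead of the default
my_agent = build_react_agent(my_model, my_tools, prompt=custom_prompt)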
Hit this same issue six months ago building an automation agent. Yeah, build_react_agent does inject a default system prompt, but it’s way more basic than you’d think.
The default prompt just says “here are your tools, use them when needed” and sets up response formatting. But here’s the kicker - GPT-4 and other modern LLMs already know the ReAct pattern from training. The agent works because the model naturally does that think-act-observe loop.
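Conceptually, the important part isn’t prompt text at all but the tool schemas the model receives. Here’s a rough sketch of that mechanism using LangChain’s bind_tools; this illustrates the idea, not LangGraph’s literal internals:
# The model gets structured tool definitions, so it can emit native
# tool calls without being taught the ReAct format in prose
model_with_tools = my_model.bind_tools(my_tools)
response = model_with_tools.invoke("What is 23 * 19?")
# If the model decided to act, the call shows up as structured data
print(response.tool_calls)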
I logged the internal messages in one of my projects to check this out. The default prompt is super short and focuses on tool instructions, not teaching ReAct steps. The model just knows to reason first, call tools, then keep going based on what it gets back.
Want to see what’s actually happening? Run your agent with debug logging. You’ll see the model follows ReAct even with that bare-bones default prompt because it’s built into how these models think now.
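Something like this is usually enough (a sketch using LangChain’s global debug flag, which dumps every model request and response, so whatever prompt gets injected should show up in the output):
from langchain.globals import set_debug
set_debug(True)  # log every LLM call and tool invocation to stdout
# Re-run the agent and watch the raw messages going to the model
my_agent.invoke({"messages": [("user", "test question")]})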
For production though, I always write custom prompts since the default is pretty generic.