How to send user information and state from LangGraph to tools in OpenAI agents?

I am using LangGraph along with LangChain to develop an agent that can operate various custom tools. My key challenge is passing essential data like user_id from the graph’s state to my tools, ensuring they function correctly.

This is the structure I’m currently working with:

agent = create_openai_tools_agent(
    llm=my_language_model,
    tools=my_custom_tools,
    prompt=my_prompt,  # create_openai_tools_agent also requires a prompt
)
agent_runner = AgentExecutor(agent=agent, tools=my_custom_tools)

@tool
def retrieve_user_goals(identifier: int):
    """
    Fetches the goals associated with a specific user.

    Args:
        identifier: Unique ID for the user

    Returns:
        List of goals for the user
    """
    # ** I NEED TO ACCESS THE real identifier (the user_id from graph state) HERE **
    print(f"Retrieving goals for user with ID: {identifier}")
    return "user goals data"


result = agent_runner.invoke({
  "input": "Can you show my goals?",
})

How can I efficiently access the LangGraph state within my tools or relay user context to ensure they are user-specific? I’ve been trying to resolve this for a while, but I can’t find a suitable solution.

All the solutions suggested here still make you manage state manually every time you add tools or change your agent structure.

I ran into this exact issue building agents that needed user context across dozens of tools. Custom middleware and context managers turn into a nightmare when your agent scales.

I ditched trying to patch LangGraph’s state management and built a workflow that handles user context injection automatically. It grabs incoming requests, pulls user info, and sends it to every tool that needs it.

No custom tool classes or schema changes needed. Your tools stay clean. The automation layer sits on top and makes sure user_id gets where it needs to go.

When I add new tools or change how user context works, the workflow updates itself. No more digging through tool definitions or debugging context issues.

Your retrieve_user_goals function works exactly as you wrote it. The automation just makes sure the identifier parameter gets filled correctly every time.

Works great across multiple agents too. Set it up once and every agent gets consistent user context handling.

Here’s where to build this: https://latenode.com

Had the same issue building multi-user agents. Easiest fix is adding user context directly to your state structure as a persistent field. Don’t bother injecting user_id into individual tools - just restructure your graph state to carry user info throughout the whole execution.

When creating your graph, define a state schema with both conversational data and user context. Your tools can then grab this from the graph’s current state without needing custom wrappers or signature changes.

Treat user context like part of your agent’s memory instead of something you pass around. Your retrieve_user_goals tool gets the identifier naturally through execution context. Really handy when multiple tools need user-specific data since they’re all pulling from the same state source.
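That shape can be sketched in plain Python (no LangGraph imports here; in a real graph the schema would be a TypedDict passed to StateGraph, and tools can receive it through LangGraph's InjectedState annotation from langgraph.prebuilt — all names below are illustrative):

```python
from typing import List, TypedDict

# Hypothetical state schema: conversational data plus user context
class AgentState(TypedDict):
    messages: List[str]
    user_id: int

def retrieve_user_goals(state: AgentState) -> str:
    # The tool reads the identifier from graph state, not from the LLM's tool call
    return f"goals for user {state['user_id']}"

state: AgentState = {"messages": ["Can you show my goals?"], "user_id": 42}
print(retrieve_user_goals(state))  # goals for user 42
```

Every tool that needs user data reads from the same state object, so adding a second user-specific tool doesn't require any new plumbing.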

You can pass user context by keeping it in the tool's function signature. When you invoke the agent, include user_id in your state dict alongside your input; the execution layer can then fill that parameter from state rather than leaving it to the model.

I solved this exact problem with context injection through middleware. Set up a context manager that catches tool calls and automatically injects user state before they run. Don't modify each tool - just wrap your whole tool collection with a context provider that handles user session data.

Here's how it works: create a tool executor that sits between LangGraph and your actual tools. When the agent calls any tool, the executor grabs the user_id from the current graph state and makes it available through a context variable. Your tools just use a simple context getter instead of needing it as a function parameter.

This keeps your tool definitions clean and ensures user context flows consistently everywhere. It's especially great when you've got lots of tools that need user data - you write the context logic once instead of changing every tool signature.
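A minimal stdlib sketch of that executor pattern using contextvars (names are illustrative; in a real setup the "executor" step would be your LangGraph tool node setting the variable from graph state):

```python
import contextvars
from functools import wraps

# Hypothetical context variable holding the current user's id
current_user_id = contextvars.ContextVar("current_user_id")

def with_user_context(tool_fn):
    """Wrapper that injects user_id from the context variable before the tool runs."""
    @wraps(tool_fn)
    def wrapper(*args, **kwargs):
        kwargs["identifier"] = current_user_id.get()
        return tool_fn(*args, **kwargs)
    return wrapper

@with_user_context
def retrieve_user_goals(identifier: int = None) -> str:
    return f"goals for user {identifier}"

# The executor sets the context from graph state before dispatching tool calls
current_user_id.set(42)
print(retrieve_user_goals())  # goals for user 42
```

Because the wrapper is generic, the same decorator covers every tool that takes an identifier - the context logic lives in one place.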

you can bind user context directly when creating tools instead of threading it through state. just do tool.bind(user_id=current_user) (or partially apply the underlying function) before adding it to your agent. way cleaner than modifying signatures or building custom wrappers.
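If .bind() doesn't behave that way in your LangChain version, the same partial-binding idea can be sketched with functools.partial on the plain function (current_user here is a hypothetical value pulled from the session):

```python
from functools import partial

def retrieve_user_goals(identifier: int) -> str:
    return f"goals for user {identifier}"

# Bind the current user's id up front, before registering the tool
current_user = 7  # hypothetical: resolved from the request/session
bound_tool = partial(retrieve_user_goals, identifier=current_user)

print(bound_tool())  # goals for user 7
```

The trade-off is that the binding is fixed at tool-creation time, so this fits per-session agents better than a single shared agent serving many users.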

This is a super common issue with stateful agents. Don’t rely just on function signatures - you need to pass the state context directly to your tool. I’ve had good luck restructuring tools to receive the full state or just the specific context params you need. Try creating a wrapper function that injects the user_id before hitting your actual tool logic. You could also use LangChain’s RunnablePassthrough to keep state flowing through the execution chain. What really worked for me was building a custom tool class that inherits from BaseTool - gives you way more control over how state gets passed around and accessed in your tools.
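The custom-tool-class idea can be sketched without the library like this (StatefulTool is a hypothetical stand-in for a langchain_core.tools.BaseTool subclass, just to show the shape of state injection at construction time):

```python
# Hypothetical stand-in for a BaseTool subclass: state is injected once,
# at construction, instead of being passed on every call
class StatefulTool:
    def __init__(self, state: dict):
        self.state = state  # shared graph state

    def run(self) -> str:
        user_id = self.state["user_id"]
        return f"goals for user {user_id}"

state = {"user_id": 99, "input": "Can you show my goals?"}
tool_instance = StatefulTool(state)
print(tool_instance.run())  # goals for user 99
```

Subclassing gives you one place to control how state is read, validated, and exposed to the tool logic, which is the extra control mentioned above.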

Been there. Managing state across multiple tools is a pain when you’re juggling user context.

Skip the LangGraph wrestling and custom tool classes - just automate the whole thing. Build a workflow that grabs user context once and feeds it to all your tools automatically.

I did this last month with multiple agents that needed user data. Created an automation that catches the initial request, pulls the user_id, dumps it in shared context, then routes everything with that context already baked in.

No more tweaking function signatures or building wrapper functions. The automation does the heavy lifting so your tools just get what they need.

Your retrieve_user_goals function stays simple. The automation layer pushes that identifier wherever it goes.

Scales way better too. Throw in more tools or change your state structure - the automation adjusts without touching your tool code.

Check out how to automate this: https://latenode.com