LangChain tool validation error: tool name not recognized by agent

Issue Description

I’m having trouble with my LangChain agent setup. My network checking tool runs perfectly when I test it alone, but when the agent tries to use it, I get a validation error saying the tool doesn’t exist.

Error Output:

user@container:/project# python agent_test.py
> Starting AgentExecutor chain...
Assistant: Should I use a tool? Yes.
Action: check_network(server='127.0.0.1', callbacks='Callbacks' = None)
Action Input: 127.0.0.1check_network(server='127.0.0.1', callbacks='Callbacks' = None) is not a valid tool, try one of [check_network].
Thought: Should I use a tool? Yes
Action: check_network(server='127.0.0.1', callbacks='Callbacks' = None)
Action Input: 127.0.0.1check_network(server='127.0.0.1', callbacks='Callbacks' = None) is not a valid tool, try one of [check_network].

My Code:

import requests
from langchain import hub
from langchain.agents import Tool, AgentExecutor, create_react_agent
from langchain_ollama.llms import OllamaLLM
from langchain.memory import ConversationBufferWindowMemory
from pydantic import BaseModel, Field
from langchain.tools import tool

class NetworkCheck(BaseModel):
    hostname: str = Field(description="Target hostname")

@tool("check_network", args_schema=NetworkCheck, return_direct=False)
def check_network(hostname: str) -> str:
    '''Checks if a network host is reachable. usage: check_network("example.com")'''
    import os
    status = os.system(f"ping -c 1 {hostname}")
    if status == 0:
        message = f"{hostname} is reachable!"
    else:
        message = f"{hostname} is unreachable!"
    return {"text": message}

def save_conversation(user_input, bot_response):
    memory.save_context({"input": user_input}, {"output": bot_response})

def run_query(user_input):
    request_data = {
        "input": user_input,
        "chat_history": memory.load_memory_variables({}),
    }
    result = executor.invoke(request_data)
    save_conversation(result["input"], result["output"])
    return result

available_tools = [
    Tool(
        name="check_network",
        func=check_network,
        description="Use this to test network connectivity to a host. Input: hostname",
    ),
]

template = hub.pull("hwchase17/react-chat")
memory = ConversationBufferWindowMemory(k=10)

model = OllamaLLM(
    model="llama2",
    keep_alive=-1,
    base_url="http://ollama:11434",
)

my_agent = create_react_agent(model, available_tools, template, stop_sequence=True)
executor = AgentExecutor(
    agent=my_agent,
    tools=available_tools,
    verbose=True,
    max_iterations=2,
    handle_parsing_errors=True
)

run_query("can you check if 127.0.0.1 is available?")

I’ve tried different approaches from the docs and tested several working examples from other projects. The tool works fine independently but the agent can’t seem to find it. What could be causing this tool recognition problem?

Check your LangChain version - I hit this same bug with older releases. The agent executor gets confused when you mix the @tool decorator with the Tool wrapper, and the args_schema=NetworkCheck might be contributing as well. Try dropping the schema and just letting the docstring serve as the description. Switching to simpler tool definitions fixed it for me.

Had the same issue with Ollama models. Your LLM probably can’t parse the tool calls right - Llama 2 is much weaker at function calling than OpenAI models. Add clearer formatting to your tool description and switch to llama3 or codellama if you can. (For what it’s worth, stop_sequence=True is actually valid - create_react_agent accepts a bool or a list of stop strings - so I don’t think that part is your problem.) When the agent keeps repeating broken actions, it means the model doesn’t understand the expected format. I fixed this by simplifying tool descriptions and using much more detailed prompts.

I hit this same issue with custom LangChain tools. Your function signature looks fine - both the schema and function use hostname as the parameter. But I see the agent’s trying to call it with server='127.0.0.1' in that error.

The real issue is probably your prompt or how you’re phrasing the question. When you ask “check if 127.0.0.1 is available”, the agent’s not connecting it properly to your tool.

Make your tool description way more explicit: “Checks network connectivity to a hostname or IP address. Input should be just the hostname/IP as a string, example: 127.0.0.1”. And double-check that your Pydantic schema field name matches exactly what you want the agent to pass.

I’ve hit this exact problem before. Your tool function is annotated -> str, but it returns a dictionary {"text": message}, and the agent wants a plain string.

Change this:

return {"text": message}

To this:

return message

The agent breaks when tools return complex objects instead of simple strings. I wasted hours debugging the same thing last year before figuring out LangChain’s ReAct agent just wants plain text from tools.

The double definition mentioned in the other answer is also an issue, but this return type mismatch is what’s actually killing your agent.

You’re defining the tool twice, and that’s probably what’s confusing the agent. The @tool decorator already creates a tool named check_network, but then you wrap the decorated object again with the Tool() class. Pick one approach or the other, not both - either pass the decorated function straight into the tools list, or drop the decorator and use only the Tool() wrapper.