Langchain agent throws error stating tool is invalid

Issue Overview

I'm running into an issue with my LangChain agent. My network-checking tool works correctly on its own, but when the agent tries to use it, I get an error saying the tool is not valid.

Error Output:

user@container:/project# python network_agent.py 
> Entering new AgentExecutor chain...
Assistant: Do I need to use a tool? Yes.
Action: check_host_status(target='127.0.0.1', callbacks='Callbacks' = None)
Action Input: 127.0.0.1check_host_status(target='127.0.0.1', callbacks='Callbacks' = None) is not a valid tool, try one of [check_host_status].
Thought: Do I need to use a tool? Yes
Action: check_host_status(target='127.0.0.1', callbacks='Callbacks' = None)
Action Input: 127.0.0.1check_host_status(target='127.0.0.1', callbacks='Callbacks' = None) is not a valid tool, try one of [check_host_status].

My Implementation:

import requests
from langchain import hub
from langchain.agents import Tool, AgentExecutor, create_react_agent
from langchain_ollama.llms import OllamaLLM
from langchain.memory import ConversationBufferWindowMemory
from pydantic import BaseModel, Field
from langchain.tools import tool

class HostChecker(BaseModel):
    hostname: str = Field(description="Target hostname")

@tool("check_host_status", args_schema=HostChecker, return_direct=False)
def check_host_status(hostname: str) -> str:
    '''Checks if a host is reachable. usage: check_host_status("example.com")'''
    import os
    result = os.system(f"ping -c 1 {hostname}")
    if result == 0:
        status = f"{hostname} is reachable!"
    else:
        status = f"{hostname} is not reachable!"
    return {"text": status}

def save_conversation(user_input, bot_response):
    memory.save_context({"input": user_input}, {"output": bot_response})

def run_query(user_input):
    message = {
        "input": user_input,
        "chat_history": memory.load_memory_variables({}),
    }
    result = executor.invoke(message)
    save_conversation(result["input"], result["output"])
    return result

tools_list = [
    Tool(
        name="check_host_status",
        func=check_host_status,
        description="Use this to verify if a host is online. Input: hostname",
    )
]

prompt_template = hub.pull("hwchase17/react-chat")
memory = ConversationBufferWindowMemory(k=10)

model = OllamaLLM(
    model="llama2",
    keep_alive=-1,
    base_url="http://ollama:11434"
)

agent_instance = create_react_agent(model, tools_list, prompt_template, stop_sequence=True)
executor = AgentExecutor(
    agent=agent_instance,
    tools=tools_list,
    verbose=True,
    max_iterations=2,
    handle_parsing_errors=True
)

run_query("can you check if 127.0.0.1 is online?")

Despite trying various approaches from the documentation, I consistently get the same error. The tool appears to be recognized - it's listed among the available tools - yet it fails during execution. Has anyone had a similar experience?

This happens when the LLM generates broken action syntax and the agent can't parse it. Your tool definition is fine - it's how llama2 formats the tool calls that's the problem. I've hit this constantly with Ollama models.

The agent spits out Action: check_host_status(target='127.0.0.1', callbacks='Callbacks' = None) instead of proper ReAct format. That callbacks parameter shows the model's just making up function signatures.

Set a custom system prompt that shows the right format:

When using tools, format as:
Action: tool_name
Action Input: input_value

Drop max_iterations to 1 first so the agent doesn't get stuck looping on errors - llama2 loves repeating the same broken syntax when it fails. If that doesn't help, try mistral or codellama instead; some models handle the ReAct format way better. Your tool registration's solid - this is purely a parsing issue with your LLM.
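A minimal sketch of appending such a format note to whatever system prompt you're using (the wording below is my own, not an official LangChain template - adapt it to the prompt you pull from the hub):

```python
# Extra instructions appended to the system prompt to pin the model to the
# plain ReAct action format. Wording is illustrative, not an official template.
FORMAT_NOTE = """When using a tool, respond using EXACTLY this format:

Thought: Do I need to use a tool? Yes
Action: check_host_status
Action Input: 127.0.0.1

Never write the action as a Python-style call like check_host_status(...)."""


def with_format_note(base_prompt: str) -> str:
    """Append the strict format note to an existing system prompt string."""
    return base_prompt + "\n\n" + FORMAT_NOTE
```

You can then feed the combined string in wherever your prompt template takes its system message.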

Your tool function parameter is named hostname but the call passes target - that's your mismatch right there. Also, try using just the decorated function without wrapping it in Tool() - langchain gets confused when you define things twice like that.

Your prompt template’s broken - the agent’s trying to pass callback parameters that don’t exist in your tool definition.

I hit this exact same issue building network monitoring stuff. That hwchase17/react-chat prompt is super finicky with tool execution.

Honestly? Ditch the langchain agent setup completely. Build this as an automated workflow in Latenode instead. You can set up HTTP endpoints for your network checks, chain monitoring tasks together, and schedule health checks.

With Latenode you won’t deal with agent parsing errors or tool validation headaches. Just drag your network checking logic into a workflow, connect some triggers, done. Way cleaner than debugging langchain execution chains.

I’ve moved most monitoring automation to workflows because they’re way more reliable than getting agents to execute tools properly.

Your agent’s messing up the tool call syntax. It’s trying to call the tool with explicit parameter names like target='127.0.0.1', callbacks='Callbacks' = None when it should just pass the input value.

I hit this same issue with Ollama models and langchain agents. Llama2 sometimes generates broken tool calls, especially with the react prompt template. Switch to a simpler prompt or try langchain.agents.initialize_agent with AgentType.ZERO_SHOT_REACT_DESCRIPTION instead of the hub prompt.

Also check your langchain version - older versions had tool parameter parsing bugs that caused this exact error. The agent sees your tool but crashes during invocation because of the broken action syntax.

I hit the same issue. You’re defining the tool twice - once with the @tool decorator and again wrapping it in a Tool object. This confuses the agent. Just drop the Tool wrapper and use the @tool decorated function directly in tools_list. Also, your function returns a dict with a ‘text’ key, but the agent wants a plain string. Change return {'text': status} to just return status. One more thing - make sure your args_schema parameter name matches your function parameter. Yours looks right (both use hostname), but this is where validation errors often come from.

The problem is your function signature mismatch and double definition. I spent weeks debugging this exact issue when building monitoring agents.

Your @tool decorator defines hostname as the parameter, but the agent’s trying to pass target. Pick one name and use it everywhere.

You’re also defining the tool twice - once with @tool decorator and again with Tool(). This breaks the agent’s tool registry. Just use the decorated function:

tools_list = [check_host_status]  # Just the decorated function

Your function returns {"text": status} but langchain agents expect plain strings. Change it to return status.

I’ve seen this pattern fail because the agent can’t figure out which tool definition to use. The decorator handles all the metadata - you don’t need the Tool wrapper.

Try this fix first before switching models or prompts. Tool registration issues cause most “not a valid tool” errors even when the tool shows up in the available list.