Issue Overview
I am facing an issue with my LangChain agent. My network-checking tool works correctly when I call it on its own, but when the agent tries to use it, the executor reports that the tool is not valid.
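For context, this is roughly how I verified the check on its own, outside of LangChain entirely (`host_status` is just a stand-in name for this post, and I am using `subprocess` here instead of the `os.system` call in the real tool):

```python
import subprocess

def host_status(hostname: str) -> str:
    # Plain-Python version of the check, with no agent involved
    try:
        completed = subprocess.run(
            ["ping", "-c", "1", hostname],
            capture_output=True,
        )
        reachable = completed.returncode == 0
    except FileNotFoundError:
        # ping binary not available in this environment
        reachable = False
    return f"{hostname} is reachable!" if reachable else f"{hostname} is not reachable!"

print(host_status("127.0.0.1"))
```

Called this way, the function returns the expected status string with no errors.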
Error Output:
user@container:/project# python network_agent.py
> Entering new AgentExecutor chain...
Assistant: Do I need to use a tool? Yes.
Action: check_host_status(target='127.0.0.1', callbacks='Callbacks' = None)
Action Input: 127.0.0.1check_host_status(target='127.0.0.1', callbacks='Callbacks' = None) is not a valid tool, try one of [check_host_status].
Thought: Do I need to use a tool? Yes
Action: check_host_status(target='127.0.0.1', callbacks='Callbacks' = None)
Action Input: 127.0.0.1check_host_status(target='127.0.0.1', callbacks='Callbacks' = None) is not a valid tool, try one of [check_host_status].
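My understanding is that the executor matches the text after `Action:` against the registered tool names by exact string comparison, so the full call expression the model is emitting can never match. A toy sketch of that lookup (`registered_tools` is my own illustration, not LangChain internals):

```python
# Toy illustration of exact-name tool lookup (not actual LangChain code)
registered_tools = {"check_host_status": lambda host: f"{host} is reachable!"}

emitted_action = "check_host_status(target='127.0.0.1', callbacks='Callbacks' = None)"

print(emitted_action in registered_tools)       # False: the full call expression never matches
print("check_host_status" in registered_tools)  # True: only the bare name does
```

So the question seems to be why the model keeps writing the whole function signature instead of just the tool name.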
My Implementation:
import requests
from langchain import hub
from langchain.agents import Tool, AgentExecutor, create_react_agent
from langchain_ollama.llms import OllamaLLM
from langchain.memory import ConversationBufferWindowMemory
from pydantic import BaseModel, Field
from langchain.tools import tool
class HostChecker(BaseModel):
    hostname: str = Field(description="Target hostname")

@tool("check_host_status", args_schema=HostChecker, return_direct=False)
def check_host_status(hostname: str) -> str:
    '''Checks if a host is reachable. Usage: check_host_status("example.com")'''
    import os
    result = os.system(f"ping -c 1 {hostname}")
    if result == 0:
        status = f"{hostname} is reachable!"
    else:
        status = f"{hostname} is not reachable!"
    # Return the plain string, matching the declared -> str return type
    return status
def save_conversation(user_input, bot_response):
    memory.save_context({"input": user_input}, {"output": bot_response})

def run_query(user_input):
    message = {
        "input": user_input,
        "chat_history": memory.load_memory_variables({}),
    }
    result = executor.invoke(message)
    save_conversation(result["input"], result["output"])
    return result
tools_list = [
    Tool(
        name="check_host_status",
        func=check_host_status,
        description="Use this to verify if a host is online. Input: hostname",
    )
]
prompt_template = hub.pull("hwchase17/react-chat")
memory = ConversationBufferWindowMemory(k=10)
model = OllamaLLM(
    model="llama2",
    keep_alive=-1,
    base_url="http://ollama:11434",
)
agent_instance = create_react_agent(model, tools_list, prompt_template, stop_sequence=True)
executor = AgentExecutor(
    agent=agent_instance,
    tools=tools_list,
    verbose=True,
    max_iterations=2,
    handle_parsing_errors=True,
)
run_query("can you check if 127.0.0.1 is online?")
I have tried several approaches from the documentation but keep hitting the same error. The tool is clearly registered, since the error message itself lists check_host_status among the available tools, yet every call fails during execution. Has anyone run into something similar?