I’m building an AI agent with the LlamaIndex Python framework and need to monitor the execution steps as they happen. I’ve tried two different approaches, but neither works.
My first approach used the verbose parameter:
from llama_index.core.agent.workflow import FunctionAgent

my_agent = FunctionAgent(
    tools=[my_tool],
    llm=language_model,
    system_prompt="You are an AI assistant",
    verbose=True,
    allow_parallel_tool_calls=True,
)
My second attempt used callback handlers:
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# Set up a debug handler
debug_handler = LlamaDebugHandler()
callback_mgr = CallbackManager([debug_handler])

my_agent = FunctionAgent(
    tools=[my_tool],
    llm=language_model,
    system_prompt="You are a helpful AI that assists with questions",
    callback_manager=callback_mgr,
    allow_parallel_tool_calls=True,
)
Unfortunately, neither method shows the step-by-step execution. I’m on LlamaIndex 0.12.31. What’s the correct way to enable step visibility for agent workflows?
I hit the same thing with LlamaIndex agents recently. In 0.12.x, FunctionAgent is built on the newer Workflow engine, so the verbose flag and the legacy CallbackManager hooks don’t surface its steps anymore. Instead, agent.run() returns a handler whose event stream you can iterate yourself: it yields workflow events (agent inputs/outputs, tool calls, tool results, and streamed LLM tokens if your provider streams), and you print the ones you care about. Your version supports this, so it’s just a matter of listening for the right event types rather than flipping a flag.
you might wanna try plain Python logging for execution steps. i had issues too and just did import logging and logging.basicConfig(level=logging.DEBUG) before creating the agent. it worked on my v0.11, but not sure if it’s the same for your version.