I’m working with the LlamaIndex Python framework to build an AI agent, and I need help tracing its execution steps. My current setup doesn’t show any of the internal workflow steps.
First attempt with verbose parameter:
from llama_index.core.agent.workflow import FunctionAgent
my_agent = FunctionAgent(
    tools=[my_tool],
    llm=language_model,
    system_prompt="You are an AI assistant",
    verbose=True,
    allow_parallel_tool_calls=True,
)
Second attempt using callback handlers:
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
# Setup debug handler
debug_handler = LlamaDebugHandler()
callback_mgr = CallbackManager([debug_handler])
my_agent = FunctionAgent(
    tools=[my_tool],
    llm=language_model,
    system_prompt="You are a helpful AI that assists with questions",
    callback_manager=callback_mgr,
    allow_parallel_tool_calls=True,
)
Neither approach shows me anything. I’m running LlamaIndex version 0.12.31. What’s the correct way to see the agent’s step-by-step execution?
Try adding logging; that fixed the same issue for me. Import logging, then call logging.basicConfig(level=logging.DEBUG) before you create your agent, and you’ll see the debug output in the console.
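A minimal sketch of that suggestion, using only the standard library (nothing LlamaIndex-specific here):

```python
import logging

# Route log output to the console at DEBUG level; do this before
# constructing the agent so setup-time messages are captured too.
logging.basicConfig(level=logging.DEBUG)

# LlamaIndex modules log under the "llama_index" namespace, so you can
# also target just that logger to cut down on noise from other packages:
logging.getLogger("llama_index").setLevel(logging.DEBUG)
```

Note that basicConfig only takes effect if no handlers were configured earlier in the process, so it should run before anything else touches logging.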
Been down this road myself. The real issue is that LlamaIndex changed how workflow debugging works in recent versions.
What fixed it for me was using the agent’s built-in workflow events. In recent versions FunctionAgent is itself a workflow, so instead of reaching into private attributes, stream the events from the handler that run() returns:

# Inside an async function, after creating your agent
handler = my_agent.run("your question here")
async for event in handler.stream_events():
    print(f"Step: {type(event).__name__}")
response = await handler
Also check if your tools have proper names and descriptions. I’ve seen agents run but not log anything because the tools aren’t being recognized.
Another thing that helped was switching to LlamaIndex’s own instrumentation module (llama_index.core.instrumentation), which is the intended successor to the callback system. It gives you much more granular control over what gets logged during execution, way better than trying to hack around with verbose flags that don’t work consistently.
The callback manager should work, but you’re missing a step: you need to read the collected events off the debug handler after running your agent. Also note that workflow agents are async, so it’s run(), not chat(). Try this fix:

response = await my_agent.run("your question here")
print(debug_handler.get_llm_inputs_outputs())
print(debug_handler.get_event_pairs())
Or switch to the newer workflow-based agents if you don’t mind updating your code; AgentWorkflow has much better debugging built in, since every step is surfaced as an event you can stream. The standard callback system sometimes misses internal steps that those workflow events expose.
Your issue is that FunctionAgent doesn’t support the verbose parameter the way you’re trying to use it; in 0.12.31 the workflow-based agents simply ignore it. You need to enable debugging at the workflow level instead. Call the agent with run(), which is async and returns a handler, and iterate handler.stream_events() to see intermediate steps; the predefined event types (AgentInput, AgentOutput, ToolCall, ToolCallResult, AgentStream) can be imported from llama_index.core.agent.workflow and used to filter. The callback manager approach can also work, but you need to explicitly print the debug handler’s collected events after each run, e.g. via debug_handler.get_events(), to see the execution trace.