How to display execution steps in a LlamaIndex AI agent workflow

I’m working with the LlamaIndex Python framework to build an AI agent, and I need to track the execution process step by step. I’ve tried several approaches without success.

My first approach was to set the verbose parameter:

from llama_index.core.agent.workflow import FunctionAgent
my_agent = FunctionAgent(
    tools=[my_tool],
    llm=model,
    system_prompt="You are an AI assistant",
    verbose=True,
    allow_parallel_tool_calls=True,
)

Then I tried implementing callback handlers:

from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# Setup debug handler
debug_handler = LlamaDebugHandler()
callbacks = CallbackManager([debug_handler])

my_agent = FunctionAgent(
    tools=[my_tool],
    llm=model,
    system_prompt="You are a helpful AI that assists with questions",
    callback_manager=callbacks,
    allow_parallel_tool_calls=True,
)

Neither method produces the step-by-step output I’m looking for. I’m running LlamaIndex version 0.12.31. What’s the correct way to enable step visibility?

I hit this same issue recently. Switch to the workflow-based approach instead of FunctionAgent; that’s what fixed it for me. FunctionAgent doesn’t expose intermediate steps the way workflows do. Create a custom workflow class that inherits from Workflow, mark each phase you want to track with the @step decorator, and add logging inside each step method to see what’s happening. Workflows give you far better control over visibility than the agent layer. Also put logging.basicConfig(level=logging.DEBUG) at the top of your script, since some internal events only log at debug level. This combination should get you the step-by-step visibility you need without relying on those limited verbose parameters.

This is likely a workflow visibility issue with newer LlamaIndex versions; I ran into it on 0.12.x as well. The verbose parameter and a basic callback manager are insufficient for tracking steps, so you need to implement a custom event handler. Specifically, subclass the callback handler and override its on_event_start and on_event_end methods to capture the agent’s internal processing steps; the LlamaDebugHandler you’re using may not track all workflow events by default. Also enable trace logging at the module level before you initialize your agent. This gives you much finer control over what is displayed during execution. Keep in mind that FunctionAgent integrates its workflow events deeper into the execution chain than standard query processing, which makes them harder to capture.

Try adding print_intermediate_steps=True when you call the agent’s chat method instead of during init. That fixed the same workflow visibility issue for me in LlamaIndex.