How to display execution steps in a LlamaIndex AI agent workflow

I’m working on creating an AI agent with the LlamaIndex Python framework and need to monitor the execution steps as they happen. I’ve tried two different approaches, but neither seems to work properly.

First approach was using the verbose parameter:

from llama_index.core.agent.workflow import FunctionAgent
my_agent = FunctionAgent(
    tools=[my_tool],
    llm=language_model,
    system_prompt="You are an AI assistant",
    verbose=True,
    allow_parallel_tool_calls=True,
)

Second attempt involved implementing callback handlers:

from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# Setup debug handler
debug_handler = LlamaDebugHandler()
callback_mgr = CallbackManager([debug_handler])

my_agent = FunctionAgent(
    tools=[my_tool],
    llm=language_model,
    system_prompt="You are a helpful AI that assists with questions",
    callback_manager=callback_mgr,
    allow_parallel_tool_calls=True,
)

Unfortunately, both methods failed to show the step-by-step execution. I’m currently on LlamaIndex 0.12.31. What’s the correct way to enable step visibility for agent workflows?

This sounds like a step-visibility quirk with the newer workflow-based agents; I hit the same thing with LlamaIndex agents recently. The verbose parameter isn’t enough anymore: the workflow emits events for each step, and you have to catch those events and print them yourself. Also check whether your LLM provider streams responses, since the token-level steps only show up when the model streams its output. The callback approach isn’t wrong, but for agent workflows the event stream is what carries the step information, so you need listeners for the specific event types you want to see. Your version supports this; it’s just a matter of subscribing to the right events (rough sketch below).
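
Something like this worked for me (a minimal sketch; the event classes ToolCall, ToolCallResult and AgentStream and their attributes are what my 0.12.x install exposes under llama_index.core.agent.workflow, so double-check the names against your version):

import asyncio

from llama_index.core.agent.workflow import AgentStream, ToolCall, ToolCallResult

async def main():
    # run() returns a handler right away; the step events arrive on its stream.
    handler = my_agent.run("your question here")

    async for event in handler.stream_events():
        if isinstance(event, ToolCall):
            print(f"Calling tool {event.tool_name} with {event.tool_kwargs}")
        elif isinstance(event, ToolCallResult):
            print(f"Tool {event.tool_name} returned: {event.tool_output}")
        elif isinstance(event, AgentStream):
            # Token deltas; these only appear if the LLM streams its response.
            print(event.delta, end="", flush=True)

    # Awaiting the handler gives the final agent response.
    final = await handler
    print(final)

asyncio.run(main())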

You might want to try plain Python logging for the execution steps. I had similar issues and just added import logging plus logging.basicConfig(level=logging.DEBUG) before initializing the agent. That worked for me on v0.11, but I’m not sure it behaves the same on your version.
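
For reference, this is roughly the setup I used (the "llama_index" logger name is my assumption that the library logs under its package namespace; the root-level basicConfig alone already surfaced a lot for me):

import logging
import sys

# Configure the root logger before creating the agent so library debug
# output is printed to stdout.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# Optionally narrow it to the LlamaIndex loggers (name assumed to match
# the package namespace).
logging.getLogger("llama_index").setLevel(logging.DEBUG)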

Had this exact problem last month debugging a complex agent workflow. FunctionAgent doesn’t expose workflow steps like other agent types do.

Switch to writing your own Workflow instead:

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class MyWorkflow(Workflow):
    @step
    async def run_step(self, ev: StartEvent) -> StopEvent:
        print(f"Executing step: {ev}")
        # your logic here
        return StopEvent(result="done")

workflow = MyWorkflow(verbose=True)
result = await workflow.run()  # call this from an async context

You get proper step visibility since you control the workflow directly: with verbose=True the workflow prints each step as it runs, and you can log whatever you need inside the @step methods.

If you’re stuck with FunctionAgent, wrap your tools with logging:

import functools

def logged_tool(func):
    # functools.wraps keeps the original name and docstring, which the agent
    # uses to describe the tool to the LLM.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Tool called: {func.__name__} with {args} {kwargs}")
        result = func(*args, **kwargs)
        print(f"Tool result: {result}")
        return result
    return wrapper
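
For example, with a made-up weather_lookup function, wrapped before it’s turned into a tool (FunctionTool.from_defaults is the standard way to wrap a plain function):

from llama_index.core.tools import FunctionTool

@logged_tool
def weather_lookup(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."

my_tool = FunctionTool.from_defaults(fn=weather_lookup)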

Unfortunately, the callback approach you tried works better with query engines than with agents.
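
If you end up using it on the query-engine side, the global Settings hook is the simplest way to attach it (a sketch that assumes you already have an index built):

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# Attach the debug handler globally; print_trace_on_end dumps the recorded
# event trace after each query.
Settings.callback_manager = CallbackManager([LlamaDebugHandler(print_trace_on_end=True)])

query_engine = index.as_query_engine()  # `index` assumed to exist already
response = query_engine.query("your question here")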
