How to display the execution steps of a LlamaIndex AI agent?

I’m working on an AI agent project using the LlamaIndex Python library, and I need to see the agent’s execution steps as it runs. I’ve tried a couple of approaches, but no luck so far.

First, I added verbose=True to the FunctionAgent:

from llama_index.core.agent.workflow import FunctionAgent

assistant = FunctionAgent(
    tools=[my_tool],
    language_model=my_llm,
    system_message='Be a helpful assistant',
    verbose=True,
    allow_multiple_tools=True,
)

That didn’t work, so I tried using callbacks:

from llama_index.core.callbacks import CallbackManager, DebugLogger

debug_log = DebugLogger()
callback_handler = CallbackManager([debug_log])

assistant = FunctionAgent(
    tools=[my_tool],
    language_model=my_llm,
    system_message='Be a helpful assistant for engagement questions',
    callback_handler=callback_handler,
    allow_multiple_tools=True,
)

But that didn’t work either. I’m using LlamaIndex version 0.12.31. Any ideas on how to see the steps?

For displaying execution steps in LlamaIndex, I’ve had success with the built-in tracing functionality. Here’s what worked for me:

from llama_index.core.callbacks import LlamaDebugHandler, CallbackManager

debug_handler = LlamaDebugHandler()
callback_manager = CallbackManager([debug_handler])

assistant = FunctionAgent(
    tools=[my_tool],
    llm=my_llm,
    system_prompt='Be a helpful assistant',
    callback_manager=callback_manager,
)

This enables detailed tracing of the agent’s actions: after a run, you can access the collected events through debug_handler.get_events(), which gives a comprehensive view of the execution flow, including tool selections and responses. Check the LlamaIndex documentation for the current tracing methods, as they may change between releases.
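To turn that raw event list into something readable, I usually just bucket the events by type. Here’s a minimal, library-free sketch of that step — the sample strings below stand in for the event_type values you’d pull from debug_handler.get_events(), and the exact type names vary by release:

```python
from collections import Counter

def summarize_events(event_types):
    """Count how many events of each type a run produced."""
    return dict(Counter(event_types))

# Stand-in for [str(e.event_type) for e in debug_handler.get_events()]:
sample = ["agent_step", "llm", "function_call", "llm", "agent_step"]
print(summarize_events(sample))  # {'agent_step': 2, 'llm': 2, 'function_call': 1}
```

A quick count like this makes it obvious at a glance how many LLM calls and tool calls a single question triggered.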

I’ve been in a similar situation with LlamaIndex, and I found Python’s standard logging module helpful for debugging and seeing execution steps. Here’s what worked for me:

import logging
import sys

# Send all DEBUG-level output (including LlamaIndex internals) to stdout
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

assistant = FunctionAgent(
    tools=[my_tool],
    llm=my_llm,
    system_prompt='Be a helpful assistant',
    verbose=True,
)

This setup should give you more detailed output about what’s happening under the hood. If you need more granular control, you can configure the llama_index logger directly instead of the root logger:

# Scope DEBUG output to the llama_index logger hierarchy only
logger = logging.getLogger('llama_index')
logger.setLevel(logging.DEBUG)

# Attach a handler that prints a timestamp, logger name, and level per line
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
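To sanity-check that the logger is wired up before running an agent, you can capture its output in memory with nothing but the stdlib. This is just a wiring test — the 'llama_index.core.agent' name below is an example child logger, and any module under the 'llama_index' namespace propagates to the parent handler the same way:

```python
import io
import logging

# Route the 'llama_index' logger hierarchy into an in-memory buffer
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))

logger = logging.getLogger('llama_index')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

# A logger for any submodule under 'llama_index' propagates to our handler:
logging.getLogger('llama_index.core.agent').debug('agent step started')

print(buffer.getvalue().strip())  # llama_index.core.agent - DEBUG - agent step started
```

If the captured line shows up, the same handler will see the library’s own debug messages once the agent runs.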

Hope this helps you get the visibility you need for your project!