Hey everyone, I’m working on an AI agent project with the LlamaIndex Python library, and I need help figuring out how to display the agent’s execution steps. I first tried enabling verbose mode by setting verbose=True in the FunctionAgent constructor, but it didn’t work:
from llama_index.core.agent.workflow import FunctionAgent

agent_instance = FunctionAgent(
    tools=[sample_tool],
    llm=sample_llm,
    system_prompt='Act as a helpful assistant',
    verbose=True,
    allow_parallel_tool_calls=True,
)
Then I attempted to use the callback system with a debug handler:
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

debug_handler = LlamaDebugHandler()
callback_manager = CallbackManager([debug_handler])

agent_instance = FunctionAgent(
    tools=[sample_tool],
    llm=sample_llm,
    system_prompt='Act as a supportive assistant for engagement queries',
    callback_manager=callback_manager,
    allow_parallel_tool_calls=True,
)
Unfortunately, neither method prints anything. I’m currently on llama-index version 0.12.31. Has anyone run into this before, or does anyone know how to display the steps properly?