Empty response content when using function calls in Langchain

I’m working on implementing function calls with Langchain but running into an issue where the model doesn’t return any text content in the final response. I followed a guide and set up my tools as described, but the AI just returns empty content after executing them.

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

model = ChatOpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="my_api_key",
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)

@tool
def sum_numbers(x: int, y: int) -> int:
    """Sum two numbers together.
    
    Args:
        x: First number
        y: Second number
    """
    return x + y

@tool
def calculate_product(x: int, y: int) -> int:
    """Calculate product of two numbers.
    
    Args:
        x: First number
        y: Second number
    """
    return x * y

function_list = [sum_numbers, calculate_product]
model_with_tools = model.bind_tools(function_list)

user_messages = [HumanMessage("Calculate 5 * 8 and also find 15 + 25")]
first_result = model_with_tools.invoke(user_messages)

if first_result.tool_calls:
    user_messages.append(first_result)
    for call in first_result.tool_calls:
        tool_function = {"sum_numbers": sum_numbers, "calculate_product": calculate_product}[call["name"]]
        result_message = tool_function.invoke(call)
        user_messages.append(result_message)
    
    final_result = model_with_tools.invoke(user_messages)
    print(final_result)

The tools execute correctly and return the right values (40 and 40), but the final response has empty content instead of a natural language answer explaining the results. Has anyone faced this before?

Yeah, been there. Your tool execution works fine, but you’re breaking the message chain afterward.

When you call tool_function.invoke(call), you get raw results without proper message structure. The model needs ToolMessage objects that link back to the original tool_call_id. Without that connection, it’s like showing someone answers without the questions.

Here’s what fixed it for me:

from langchain_core.messages import ToolMessage

# Replace your tool execution loop with this
for call in first_result.tool_calls:
    tool_function = {"sum_numbers": sum_numbers, "calculate_product": calculate_product}[call["name"]]
    result = tool_function.invoke(call["args"])
    tool_message = ToolMessage(content=str(result), tool_call_id=call["id"])
    user_messages.append(tool_message)

Creating ToolMessage objects with the tool_call_id keeps the conversation thread intact. Now the model knows which results belong to which function calls.

I hit this exact issue building a data analysis workflow last year. Fixed the message formatting and boom - proper explanations came back.

You’re missing the ToolMessage wrapper when you add results back to the conversation. The model can’t connect tool results to the calls that produced them without proper message formatting. Wrap your tool results in ToolMessage (with the matching tool_call_id) before appending them to user_messages - that’ll fix the empty responses.

This happened to me with different LLM providers through Langchain. The issue is how you’re handling the message flow after tool execution: you’re invoking the tool functions directly and appending the raw results to the message list. The model needs properly formatted ToolMessage objects that reference the original tool call IDs. Without those IDs it can’t match results to function calls, so it gets confused and returns empty content. Fix your tool execution loop to create ToolMessage objects that carry the tool_call_id from each original call - that keeps the conversation context intact so the model can actually summarize the tool results.

Had the same issue with Together AI’s endpoints. You’re calling the tool functions directly instead of using Langchain’s tool execution. Don’t use tool_function.invoke(call) - create ToolMessage objects with the call ID and results instead. Also, Mixtral often needs explicit prompting to give explanatory text after running tools. Try adding a follow-up message asking it to summarize the results in plain English.