I’m trying to figure out the internal workings of LangChain’s tool binding functionality. I want to know how the framework transforms code-based tool definitions into text prompts that can be understood by language models.
Please correct me if my understanding is wrong anywhere.
Here’s the `bind_tools` method I’m looking at:
```python
class BaseChatOpenAI(BaseChatModel):
    def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type, Callable, BaseTool]],
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        """Bind tool-like objects to this chat model.

        Assumes model is compatible with OpenAI tool-calling API.
        """
        # Each tool is converted to an OpenAI-style tool schema dict
        # (via convert_to_openai_tool from langchain_core) before binding.
        formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
        return super().bind(tools=formatted_tools, **kwargs)
```
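For context on what `formatted_tools` contains: the bound values are plain OpenAI-style tool schema dicts, not prompt text. Here is a minimal stdlib sketch of building such a schema from a function signature (a hypothetical simplified converter, not LangChain's actual `convert_to_openai_tool`, which also handles Pydantic models, docstring args, and more):

```python
import inspect
from typing import get_type_hints

# Rough mapping from Python annotations to JSON Schema types.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def to_openai_tool(func):
    """Sketch: derive an OpenAI-style tool schema from a plain function."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    params = {name: {"type": PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()}
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": inspect.getdoc(func) or "",
            "parameters": {
                "type": "object",
                "properties": params,
                "required": list(params),
            },
        },
    }

def get_weather(city: str, days: int) -> str:
    """Get the weather forecast for a city."""
    ...

schema = to_openai_tool(get_weather)
# schema["function"]["name"] == "get_weather"
# schema["function"]["parameters"]["properties"] ==
#   {"city": {"type": "string"}, "days": {"type": "integer"}}
```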
When I trace the `super()` call, I find:
```python
class Runnable(Generic[Input, Output], ABC):
    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
        """Bind arguments to a Runnable, returning a new Runnable."""
        return RunnableBinding(bound=self, kwargs=kwargs, config={})
```
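If I understand it right, `bind` is essentially partial application: the binding just stores the kwargs and replays them on every invocation. A stripped-down stdlib sketch of that pattern (hypothetical names, not LangChain's actual classes):

```python
class Binding:
    """Minimal sketch of the RunnableBinding idea: wrap a callable
    and forward the stored kwargs on every invocation."""

    def __init__(self, bound, kwargs):
        self.bound = bound
        self.kwargs = kwargs

    def bind(self, **kwargs):
        # Later binds layer on top of earlier ones.
        return Binding(self.bound, {**self.kwargs, **kwargs})

    def invoke(self, value):
        # The stored kwargs are only used here, at call time.
        return self.bound(value, **self.kwargs)

def fake_model(prompt, stop=None, tools=None):
    # Stand-in for a chat model call; just echoes what it received.
    return {"prompt": prompt, "stop": stop, "tools": tools}

bound = Binding(fake_model, {}).bind(stop=["-"]).bind(tools=[{"name": "t"}])
result = bound.invoke("hi")
# result == {"prompt": "hi", "stop": ["-"], "tools": [{"name": "t"}]}
```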
And looking at `RunnableBinding`:
```python
class RunnableBinding(RunnableBindingBase[Input, Output]):
    """Wrap a Runnable with additional functionality.

    Example usage:

        from langchain_community.chat_models import ChatOpenAI

        chat_model = ChatOpenAI()
        chat_model.invoke('Say "Bird-MAGIC"', stop=['-'])

        bound_model = chat_model.bind(stop=['-'])
        bound_model.invoke('Say "Bird-MAGIC"')
    """

    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
        return self.__class__(
            bound=self.bound,
            config=self.config,
            kwargs={**self.kwargs, **kwargs},
            custom_input_type=self.custom_input_type,
            custom_output_type=self.custom_output_type,
        )
```
I’m stuck at this point: `bind` only seems to store the tool schemas as kwargs on a `RunnableBinding`, and I can’t see where the tool information actually gets passed to the language model.