What's the best way to monitor HTTP traffic between LangGraph/LangChain and APIs?

I need help tracking network communication when working with LangChain and LangGraph applications. Basically I want to capture all HTTP requests my code sends to language model APIs and see the responses that come back.

Right now I’ve tried a few approaches:

  • mitmproxy - Works but feels like a workaround and setting up SSL certificates is annoying
  • Wireshark - Good results with local Ollama but too complicated for regular use

What I’m really looking for is some automatic logging solution that saves all request/response data to a file or similar. This needs to work with external services like OpenAI API, not just local setups. I’m on macOS and use Jupyter notebooks mostly.

Is there a cleaner way to get this visibility into the API calls without jumping through hoops? Something that just works out of the box would be perfect.

Honestly the easiest thing I've found is plain Python logging: enable debug logging for urllib3 and every request made through the requests library gets written to whatever file you want. It's super lightweight compared to mitmproxy and doesn't require any SSL mess. (requests-toolbelt also has dump utilities if you want full request/response bodies.) One caveat: this only catches libraries built on requests/urllib3. Older versions of the openai SDK used requests under the hood, but v1+ uses httpx, so for current LangChain/OpenAI code you'd enable debug logging for the httpx and httpcore loggers the same way.

Same frustration here until I found LangChain's built-in tracing. Just set LANGCHAIN_TRACING_V2=true and LANGCHAIN_API_KEY - you'll get detailed traces through LangSmith that automatically capture all API interactions. No proxy setup needed.

If you don't want an external service, I've been monkey-patching Python's httpx with custom logging. LangChain uses httpx for most API calls anyway, so you can patch it to log everything to a file. Drop the logging config at the top of your notebook and it captures everything without touching your existing code. Works great with OpenAI calls.