I’m working on a NextJS application and trying to use LangGraph together with Vercel’s AI SDK. I started building a simple ReAct agent but I’m facing some problems.
The biggest problem is that connecting LangGraph with the AI SDK seems really complicated and there isn’t much documentation about it. I can’t find good examples or starter templates that show how to make these tools work together properly. The streaming part is especially confusing.
I’m thinking about removing LangGraph completely and just using the AI SDK by itself. But before I do that, I want to know if there are any good tutorials or working examples that I missed.
Did anyone here manage to get LangGraph working with NextJS and the AI SDK? Does it support streaming well? Is it worth the extra work and complexity?
I would really appreciate any advice, code samples, or experiences you can share with me!
I built something similar last year for an internal tool. The combo works, but you’ve got to approach it right.
Treat them as separate layers - don’t try to integrate deeply. Run LangGraph on the backend for agent logic and state management. Create simple REST endpoints for the AI SDK to consume. Skip the fancy streaming integration.
For streaming, I had LangGraph emit events to a message queue, then AI SDK picks them up through server-sent events. More setup, but way cleaner than piping LangGraph streams through Vercel’s infrastructure.
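To make the SSE idea concrete, here's a minimal sketch of the bridge. The `AgentEvent` shape and function names are made up for illustration; this isn't a LangGraph or AI SDK API, just the plumbing that turns any async iterable of agent events into a server-sent-events response the frontend can consume:

```typescript
// Sketch of the SSE bridge described above. The event shape and names
// are illustrative assumptions, not a real LangGraph API.
type AgentEvent = { type: string; data: unknown }; // hypothetical shape

// Turn an async iterable of agent events (e.g. drained from a queue)
// into a streaming Response formatted as server-sent events.
function sseResponse(events: AsyncIterable<AgentEvent>): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for await (const ev of events) {
        // Each SSE frame is "data: <json>" followed by a blank line.
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(ev)}\n\n`));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    },
  });
}
```

The nice part is that this route handler knows nothing about LangGraph internals; it just drains whatever the queue hands it, which is what keeps the two layers decoupled.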
Here’s the thing though - you might be overengineering this for a ReAct agent. I only kept LangGraph because I needed complex multi-agent workflows with persistent state between sessions. For simpler patterns, AI SDK’s tools and function calling gets you 90% there with less complexity.
Start with just AI SDK for your ReAct pattern. If you hit walls with state management or need complex branching logic, then add LangGraph as a separate service.
I ran into this exact issue building a customer support chatbot about six months ago. Spent weeks trying to get LangGraph working with the AI SDK, then just ditched LangGraph entirely. Best decision I made. The main problem? LangGraph doesn’t play nice with Vercel’s streaming at all. You end up writing tons of adapter code just to get basic stuff working, and debugging becomes a nightmare when things break. Honestly, the AI SDK handles complex conversation flows just fine for most NextJS apps. Unless you absolutely need LangGraph’s graph structure for something specific, it’s not worth the headache. I moved everything to just the AI SDK and development got way faster and easier to maintain.
Depends on what you're building, but I'd stick with the AI SDK alone. Tried combining them last month and hit weird middleware bugs that were a pain to debug. LangGraph's powerful but overkill for most projects. If you need graph features, add them later once your main app works. Much easier maintaining one system than juggling both.

I’ve had pretty good luck combining these two, but it wasn’t straightforward. The key is using LangGraph for your backend logic and AI SDK just for frontend streaming. I set up separate API routes for LangGraph execution, then used AI SDK’s streamText to pull in those results. Yeah, the docs suck, but streaming works fine if you get the async generators right. How complex this gets really depends on what you’re building. Need multi-step reasoning with conditional branching? LangGraph’s worth the hassle. Simple chat app? Probably overkill. I ended up using LangGraph for agent orchestration and AI SDK for all the NextJS integration - worked great.
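The "get the async generators right" part is where most people trip up, so here's a hedged sketch of the consuming side: parsing a streamed SSE body from a separate LangGraph service into an async generator of JSON events. The wire format (plain `data: <json>` frames) is an assumption about your backend, not a LangGraph API:

```typescript
// Sketch: consume an SSE response body from a separate agent service
// as an async generator of parsed JSON events. The frame format is an
// assumed convention, not something LangGraph emits out of the box.
async function* parseSse(body: ReadableStream<Uint8Array>): AsyncGenerator<unknown> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE frames are separated by a blank line; chunks may split frames,
    // so keep a buffer and only emit once a full frame has arrived.
    let idx;
    while ((idx = buffer.indexOf('\n\n')) !== -1) {
      const frame = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2);
      for (const line of frame.split('\n')) {
        if (line.startsWith('data: ')) yield JSON.parse(line.slice(6));
      }
    }
  }
}
```

In a route handler you'd `fetch` the LangGraph endpoint, pass `res.body` into something like this, and forward the events into whatever stream your frontend reads. The buffering matters: network chunks don't align with frame boundaries, so parsing line-by-line without it drops events.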
Dealt with this exact integration nightmare on a client project. Skip trying to jam LangGraph through NextJS middleware - it’s a pain. The streaming issues are real because LangGraph’s execution doesn’t play nice with Vercel’s edge runtime limits. Here’s what worked: deployed LangGraph separately on Railway with basic HTTP endpoints, then just used regular fetch calls from NextJS. AI SDK handles the frontend streaming perfectly this way. Performance was actually better since LangGraph wasn’t fighting NextJS timeouts. One thing though - if your ReAct agent doesn’t need complex state between conversations, you’re probably overengineering this. AI SDK’s function calling handles most agent stuff without all the extra complexity.