I have a React chat app that uses OpenAI’s streaming API to show AI responses in real time. The problem is that when I update my component state with each new chunk from the stream, I’m getting repeated words in the final output. The individual chunks look fine when I log them, but somehow the state accumulation is causing duplicates.
Been there with streaming responses. React batches state updates, and with async chunk callbacks each handler can close over stale state, so updates fire against old values. Every chunk triggers a re-render, and overlapping streams race against each other.
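The classic shape of that bug, just to illustrate (hypothetical code, yours will differ):

```tsx
import { useState } from "react";

// Hypothetical sketch of the race - each chunk handler closes over the
// `text` value from the render it was created in, not the latest one.
function useNaiveStream(stream: AsyncIterable<string>) {
  const [text, setText] = useState("");

  async function run() {
    for await (const chunk of stream) {
      // Buggy: `text` is stale for the whole loop, so chunks get lost, and
      // two overlapping runs write over each other.
      setText(text + chunk);
      // The functional form, setText(prev => prev + chunk), fixes the stale
      // read - but you still have to stop duplicate streams yourself.
    }
  }

  return { text, run };
}
```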
The real issue? You’re managing this complexity in React when you could automate the whole flow. I handle similar streaming by routing everything through Latenode instead of fighting React’s state management.
Set up a webhook in Latenode that receives your user message, calls OpenAI’s streaming API, and accumulates the response properly. Then push the complete response back to your React app. No more state sync headaches.
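On the React side the whole thing collapses to one request/response. A minimal sketch, assuming your flow returns the accumulated text as JSON (the URL and the { reply } shape are placeholders for whatever your webhook actually exposes):

```tsx
// Hypothetical frontend for the webhook pattern: send the message, wait for
// the fully accumulated reply, update state exactly once.
async function sendMessage(userMessage: string): Promise<string> {
  const res = await fetch("https://webhook.latenode.example/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: userMessage }),
  });
  if (!res.ok) throw new Error(`Webhook failed: ${res.status}`);
  const { reply } = await res.json();
  return reply;
}
```

The trade-off is that you lose token-by-token rendering, since the flow only responds once the stream has been fully accumulated.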
You can add response caching, rate limiting, and error handling without touching your frontend code. Plus you get proper logging of what’s happening with each chunk.
I’ve moved all my streaming AI integrations to this pattern. Way cleaner than debugging React state updates.
This happens because React’s Strict Mode runs effects twice in development, so your streaming loop can start twice concurrently. Both runs append to the same chat history array, which is exactly how chunks end up duplicated. I hit this exact issue last month.

To resolve it, check whether a response is already running before starting a new stream. Use a ref to track the streaming state and guard your generateResponse function with it (note the try/finally, so the flag resets even if the stream throws):

```tsx
const isStreamingRef = useRef(false);

const generateResponse = async (history: ChatMessage[]) => {
  if (isStreamingRef.current) return; // a stream is already in flight
  isStreamingRef.current = true;
  try {
    // your streaming logic here
  } finally {
    isStreamingRef.current = false; // reset even if the stream errors out
  }
};
```

Also, double-check that shouldGenerate resets properly after each completion. The duplicate text you’re seeing comes from multiple streams writing to the same message object at the same time, and the ref guard prevents that race.
The duplication happens because your textChunk concatenation doesn’t handle undefined values. When OpenAI streams, some chunks come back with undefined content - especially the final one, which carries the finish reason and no text. Doing existingText + textChunk without a null/undefined check appends the literal string "undefined" to your message and corrupts the output.
I hit the same issue building a customer support bot. Fixed it by adding a null check before concatenating: if (!textChunk) continue; right after getting textChunk. Also, use a separate loading state instead of relying on shouldGenerate to control when streaming starts.
One more thing - you’re using token.id for the message ID, but that’s OpenAI’s completion ID, which is the same on every chunk of a stream rather than unique per message. Generate your own ID (crypto.randomUUID() works) when creating the assistant message to avoid collisions. Between undefined chunks and ID reuse, that’s probably where your duplicated content comes from.
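Here’s roughly what that loop looks like with both fixes applied - the ChatMessage shape, setMessages, and the model name are placeholders from my setup, not your code:

```tsx
import OpenAI from "openai";

type ChatMessage = { id: string; role: "user" | "assistant"; content: string };

// Placeholder client setup - in a real browser app, proxy through your
// backend instead of shipping the API key to the client.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // assumed bundler-injected env var
  dangerouslyAllowBrowser: true,
});

async function streamReply(
  history: ChatMessage[],
  setMessages: (updater: (prev: ChatMessage[]) => ChatMessage[]) => void
) {
  // Our own ID for the assistant message - the completion ID is identical
  // on every chunk, so it can't distinguish messages.
  const messageId = crypto.randomUUID();
  setMessages((prev) => [
    ...prev,
    { id: messageId, role: "assistant", content: "" },
  ]);

  const stream = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: history.map(({ role, content }) => ({ role, content })),
    stream: true,
  });

  for await (const chunk of stream) {
    const textChunk = chunk.choices[0]?.delta?.content;
    if (!textChunk) continue; // skip undefined/empty chunks (e.g. the final one)

    // Functional update so each chunk appends to the latest committed state
    setMessages((prev) =>
      prev.map((m) =>
        m.id === messageId ? { ...m, content: m.content + textChunk } : m
      )
    );
  }
}
```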