I built an educational tool that lets teachers create AI chatbots for student discussions. The system stores these conversations in a database so teachers can analyze student responses.
I’m using Guzzle to fetch the data and send it to OpenAI’s API. The responses are accurate but they come back as unformatted text blocks. When I ask something like “Show me 3 students with interesting responses and what they said” I get everything in one paragraph without bullet points or line breaks.
I’m not sure if this is a Guzzle configuration issue or if I need to modify my API request. Teachers shouldn’t have to specify formatting instructions in their queries. It should format nicely like regular ChatGPT does.
Here’s my current API call:
$result = $httpClient->post('https://api.openai.com/v1/chat/completions', [
    'headers' => [
        'Authorization' => 'Bearer ' . $token,
        'Content-Type' => 'application/json',
    ],
    'json' => [
        'model' => 'gpt-4o-mini',
        'messages' => [
            ['role' => 'system', 'content' => 'Begin analysis'],
            ['role' => 'user', 'content' => "Using this dataset:\n$dataFromDB\n\nProvide insights for: $teacherQuery"],
        ],
        'max_tokens' => 1000,
    ],
]);
$responseData = json_decode($result->getBody(), true);
$analysis = $responseData['choices'][0]['message']['content'];
echo $analysis;
Any ideas on how to get properly formatted responses?
Yeah, this is super common with OpenAI’s API. The ChatGPT web interface renders markdown automatically, but the API just returns raw text - nothing renders it for you, and with a bare system prompt like ‘Begin analysis’ the model won’t even bother structuring its output. You’ve got to tell the model exactly how to format stuff in your system prompt.

I ran into the same thing until I tweaked my system message. Try something like: ‘You are an educational analysis assistant. Always format responses with clear structure using bullet points, numbered lists, and line breaks. Use markdown formatting when presenting data.’ Or just tack formatting instructions onto each user message - ‘Format your response with bullet points and clear sections.’ That worked great for me when I needed consistent formatting across different request types.

If you need really specific formatting, there’s also the newer Structured Outputs feature in the API, but honestly the prompt modification should do the trick for what you’re doing.
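Concretely, the only part of the request in the question that needs to change is the system message. A minimal sketch - the prompt wording and the placeholder variables here are just suggestions, not anything the API requires:

```php
<?php
// Placeholders standing in for the question's variables.
$dataFromDB   = '...student conversation rows...';
$teacherQuery = 'Show me 3 students with interesting responses';

// Replace the bare 'Begin analysis' system prompt with explicit
// formatting instructions (exact wording is just a suggestion):
$systemPrompt = 'You are an educational analysis assistant. '
    . 'Always structure responses in markdown: use headings, '
    . 'bullet points, numbered lists, and line breaks.';

$payload = [
    'model' => 'gpt-4o-mini',
    'messages' => [
        ['role' => 'system', 'content' => $systemPrompt],
        ['role' => 'user', 'content' => "Using this dataset:\n$dataFromDB\n\nProvide insights for: $teacherQuery"],
    ],
    'max_tokens' => 1000,
];
// Pass $payload as the 'json' option in the existing Guzzle call.
```

Everything else in the Guzzle call stays the same.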
Your system prompt’s too basic though - ‘Begin analysis’ tells the model nothing. Try something like ‘Format all responses using markdown with proper headings and bullet points’ instead. One correction on response_format: you can’t set it to markdown - the only accepted types are text, json_object, and json_schema. So if you want markdown, the prompt is the way to get it; response_format is for when you want structured JSON back instead. Way less fragile than tweaking prompts every time if JSON works for you.
Had this exact problem building something similar last year. The system prompt isn’t enough - you’ve got to be way more explicit about what you want. I started by adding instructions right in the user message: ‘Present your findings in a structured format with clear headings and organized sections.’ That helped, but the real game changer was structured outputs.

Add response_format: {"type": "json_object"} to your request and change your prompt to something like ‘Return your analysis as JSON with separate fields for findings, student_examples, and summary.’ (The prompt has to mention JSON or the API rejects the request.) Then just parse the JSON and format it however you want on the frontend. Way more control than hoping the model stays consistent.

Your current setup will work with prompt tweaks, but structured outputs are worth it if you want reliable formatting every time.
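A rough sketch of that approach against the question’s setup - the field names (findings, student_examples, summary) are just an illustration of a schema you’d pick yourself, and the sample response below is hand-written, not real API output:

```php
<?php
// Sketch only: the request gains 'response_format' and a JSON-demanding
// system prompt; the Guzzle call itself is unchanged from the question.
//
// 'json' => [
//     'model'           => 'gpt-4o-mini',
//     'response_format' => ['type' => 'json_object'],
//     'messages'        => [
//         ['role' => 'system', 'content' =>
//             'Return your analysis as a JSON object with fields: '
//             . 'findings (array of strings), student_examples '
//             . '(array of objects with name and quote), summary (string).'],
//         // ...user message as before...
//     ],
// ],

// Turn the decoded JSON into markdown for display. The shape matches
// the hypothetical schema above, not anything the API guarantees.
function renderAnalysis(array $a): string
{
    $out = "## Findings\n";
    foreach ($a['findings'] as $f) {
        $out .= "- $f\n";
    }
    $out .= "\n## Student examples\n";
    foreach ($a['student_examples'] as $ex) {
        $out .= "- **{$ex['name']}**: \"{$ex['quote']}\"\n";
    }
    $out .= "\n{$a['summary']}\n";
    return $out;
}

// Hand-written sample standing in for the model's JSON response:
$analysis = json_decode('{
    "findings": ["Most students engaged seriously with the prompt"],
    "student_examples": [{"name": "Sam", "quote": "I disagreed at first"}],
    "summary": "Participation was strong overall."
}', true);

echo renderAnalysis($analysis);
```

Since you control renderAnalysis, teachers get consistent output no matter how the model phrases things.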