Troubleshooting OpenAI API: Resolving 'Invalid Request Error'

Hey everyone! I’m having trouble with the OpenAI API. I’m trying to send a request but keep getting a 400 error. Here’s what I’ve tried:

let aiCall = new XMLHttpRequest();
const data = {
  'model': 'gpt-3.5-turbo',
  'prompt': 'Hello, world!'
};

aiCall.open('POST', 'https://api.openai.com/v1/completions');
aiCall.setRequestHeader('Content-Type', 'application/json');
aiCall.setRequestHeader('Authorization', 'Bearer ' + apiKey);
aiCall.send();

At first, I forgot to actually send the request (oops!). After fixing that, I still got an error saying I didn’t provide an API key. But I’m pretty sure I did! Any ideas what I’m doing wrong?

Update: I figured it out! The problem was how I was sending the payload. Changing aiCall.send() to aiCall.send(JSON.stringify(data)) did the trick. Hope this helps anyone else who runs into this!
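
If you also want to read what the API sends back, you can attach a handler before calling send(). Something along these lines should do it (just a rough sketch; adjust the error handling to your needs):

// Attach before calling send(); fires once the HTTP response has arrived
aiCall.onload = function () {
  const response = JSON.parse(aiCall.responseText);
  if (aiCall.status === 200) {
    console.log(response);            // the full API response
  } else {
    console.error(response.error);    // error details returned by the API
  }
};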

Glad you figured it out, SwimmingShark! Your solution is spot-on. Another thing to watch out for is the API endpoint. For the chat models like gpt-3.5-turbo, you need to use ‘/v1/chat/completions’ instead of ‘/v1/completions’. Also, the payload structure is slightly different for chat models. You’ll need to provide ‘messages’ instead of ‘prompt’. Here’s a quick example:

const data = {
  'model': 'gpt-3.5-turbo',
  'messages': [{'role': 'user', 'content': 'Hello, world!'}]
};

aiCall.open('POST', 'https://api.openai.com/v1/chat/completions');

This should help you avoid any ‘Invalid Request Error’ issues when working with the chat models. Always refer to the latest API documentation for the most up-to-date information on request formats and endpoints.
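
For completeness, here's what the whole chat request could look like end to end. Treat it as a sketch: the apiKey value is a placeholder, and the response parsing assumes the usual choices[0].message.content shape described in the docs.

const apiKey = 'YOUR_API_KEY'; // placeholder; don't ship a real key in client-side code

const aiCall = new XMLHttpRequest();
const data = {
  'model': 'gpt-3.5-turbo',
  'messages': [{'role': 'user', 'content': 'Hello, world!'}]
};

aiCall.open('POST', 'https://api.openai.com/v1/chat/completions');
aiCall.setRequestHeader('Content-Type', 'application/json');
aiCall.setRequestHeader('Authorization', 'Bearer ' + apiKey);

aiCall.onload = function () {
  const response = JSON.parse(aiCall.responseText);
  if (aiCall.status === 200) {
    // Chat replies live under choices[n].message, not choices[n].text
    console.log(response.choices[0].message.content);
  } else {
    console.error(response.error.message);
  }
};

aiCall.send(JSON.stringify(data));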