I’m having trouble enabling JSON mode (the response_format parameter) for the GPT-4 Vision model. Whenever I include response_format in my request, I get a validation error saying extra fields are not permitted. If I leave the parameter out, the request goes through without any errors.
Here’s the code I’m currently using:
import requests

# api_key, user_input, and base64_encoded_image are defined elsewhere
request_headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

request_payload = {
    "model": "gpt-4-vision-preview",
    "response_format": {"type": "json_object"},  # removing this line makes the request succeed
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant. Please respond in JSON format."
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": user_input
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_encoded_image}"
                    }
                }
            ]
        }
    ],
    "max_tokens": 1000,
}

response = requests.post("https://api.openai.com/v1/chat/completions", headers=request_headers, json=request_payload)
print(response.json())
The validation error message I’m getting is:
{'error': {'message': '1 validation error for Request\nbody -> response_format\n extra fields not permitted (type=value_error.extra)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
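In the meantime, my workaround has been to drop response_format entirely, rely on the system prompt to ask for JSON, and parse the model’s text output myself. Here’s a rough sketch of what I’m doing (extract_json and its regex fallback are just my own helper code, not anything from the API):

import json
import re

def extract_json(message_text):
    # Try to parse the entire reply as JSON first
    try:
        return json.loads(message_text)
    except json.JSONDecodeError:
        pass
    # Fall back to grabbing the first {...} span, since the model
    # sometimes wraps the JSON in extra prose or code fences
    match = re.search(r"\{.*\}", message_text, re.DOTALL)
    if match:
        return json.loads(match.group(0))
    raise ValueError("No JSON object found in the model response")

reply_text = response.json()["choices"][0]["message"]["content"]
parsed = extract_json(reply_text)
print(parsed)

This mostly works, but it feels brittle, which is why I’d prefer to use the official json_object mode if it’s available.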
Is there an alternative method to turn on JSON mode for vision models, or is this feature currently unsupported?