I’m building an interview assessment system that generates reports from user responses captured through OpenAI’s Realtime API. The problem is that voice transcription often produces garbled text: broken words, mis-heard phrases, or speech transcribed into the wrong language.
When I send these messy transcripts to my Azure OpenAI deployment, the content filter triggers unpredictably. Sometimes genuinely problematic words are present, but other times the filter blocks perfectly innocent content that was simply mangled during transcription.
For example, poor audio quality might turn normal speech into something that looks suspicious to the filter. ChatGPT can usually identify these as simple transcription errors or foreign language mix-ups.
My use case is generating interview scores from AI bot conversations, so I need reliable processing of user responses. Is there a method to turn off content filtering in Azure OpenAI deployments? Or should I implement some kind of transcript cleaning step before sending data to the model?
I’ve tested this extensively and the content filter blocks roughly 50% of my requests without any clear pattern.
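For the transcript-cleaning option I mentioned, this is roughly what I had in mind: a heuristic pre-clean pass that runs before the scoring request. The function name and regex rules below are my own illustration, not part of any Azure or OpenAI API, and the rules would need tuning against real transcripts:

```python
import re

def preclean_transcript(text: str) -> str:
    """Heuristic cleanup for noisy voice transcripts before they are
    sent to the Azure OpenAI deployment. Illustrative rules only."""
    # Replace control characters and exotic symbols that mangled audio
    # can introduce; keep letters (any script), digits, whitespace, and
    # basic punctuation.
    text = re.sub(r"[^\w\s.,;:!?'\"()-]", " ", text)
    # Collapse the runs of whitespace left behind by the substitution.
    text = re.sub(r"\s+", " ", text).strip()
    # Drop isolated single-character fragments ("y e s"-style debris),
    # keeping one-letter English words and lone digits.
    tokens = [
        t for t in text.split(" ")
        if len(t) > 1 or t.lower() in {"a", "i"} or t.isdigit()
    ]
    return " ".join(tokens)
```

The idea would be to run every transcript through this (or a cheaper model doing the same job) before the scoring call, and if a request is still blocked, retry once with the cleaned text rather than failing the whole report.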