Hidden system prompts being added to OpenAI API calls automatically

I’ve been using the OpenAI API recently, and I’ve encountered something odd. It looks like some hidden instructions are being included with my prompts without my input. I discovered this by asking for the current date in my requests and finding unexpected information in the replies.

From what I’ve observed, a hidden system prompt appears to be injected, containing:

  • Timestamps for the current date
  • Instructions describing the model as an assistant accessed via the API
  • Details about parsing output
  • Settings related to verbosity levels (currently set to level 3)
  • Specification of different message channels
  • A numerical value known as ‘Juice’ set to 64

Has anyone else experienced this issue? I’m curious to find out if this behavior is typical or if something is disrupting my API requests. These hidden instructions seem to impact the responses of the model, even though I did not include them in my original prompts.
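For reference, this is roughly the shape of the request I’m sending, simplified to just the probe (the model name is only an example; the point is that there’s no system message of mine anywhere in it):

```python
import json

# Probe request: ask for the current date while sending no system
# message of our own. If the reply knows today's date, something
# upstream must have injected it.
def build_probe_request(model="gpt-4o"):
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "What is today's date?"},
        ],
    }

payload = build_probe_request()
print(json.dumps(payload, indent=2))
```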

You’re getting routed through a modified API gateway instead of OpenAI’s direct service. I’ve seen this exact thing with enterprise setups that wrap OpenAI’s API for compliance monitoring. Those specific parameters you mentioned - especially that ‘Juice’ value and verbosity settings - scream custom middleware that’s tracking or tweaking responses. These systems usually inject metadata for billing, content filtering, or response formatting. Want to confirm? Check your request headers and see if the endpoint domain matches api.openai.com. If you’re in a corporate environment, ask IT - they probably set up a proxy for security. That timestamp injection is super common in enterprise setups for audit trails.
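To sketch that endpoint check in Python (the helper name is mine, not anything official):

```python
from urllib.parse import urlparse

def is_official_openai(base_url: str) -> bool:
    """True only if requests would go to OpenAI's own API host."""
    host = urlparse(base_url).hostname or ""
    return host == "api.openai.com"

print(is_official_openai("https://api.openai.com/v1"))       # True
print(is_official_openai("https://llm-proxy.corp.example"))  # False
```

Run this against whatever base URL your client or environment is actually configured with; if it comes back False, you’ve found your middleware.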

Been there. This screams a proxy service adding junk to your requests.

I had the same issue when our team was debugging API calls and weird extra context kept showing up. Turns out we were using a third-party service that was injecting its own system prompts.

Easiest fix? Automate your API calls properly so you control what gets sent. I set up a workflow in Latenode that handles all our OpenAI requests directly - no middleware BS. You can configure exactly what prompts get sent, monitor the full request/response cycle, and catch tampering immediately.

Latenode also lets you add logging to see exactly what’s being sent versus what you intended. Makes debugging these issues way faster than manually checking each call.

Set up simple automation that routes your prompts directly to OpenAI’s official endpoint and you’ll know right away if those hidden instructions disappear.
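Latenode specifics aside, the intended-vs-sent comparison can be sketched in plain Python (the `audit_payload` helper is hypothetical, and the injected values just echo the parameters from the original post):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("openai-audit")

def audit_payload(intended_messages, payload):
    """Log the outgoing payload and return any messages we never wrote."""
    log.info("outgoing request: %s", json.dumps(payload, indent=2))
    return [m for m in payload.get("messages", [])
            if m not in intended_messages]

# What we meant to send vs. what a tampering proxy might actually send.
intended = [{"role": "user", "content": "What's the date today?"}]
tampered = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "Current date: 2025-06-01. Juice: 64."},
        {"role": "user", "content": "What's the date today?"},
    ],
}
print(audit_payload(intended, tampered))  # prints the injected system message
```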

for sure, man! try hitting the API directly without any proxies or wrappers in between. they might be adding those hidden prompts. good luck sorting it out!

you’re hitting some api wrapper that’s adding random parameters. those don’t exist in OpenAI’s actual API - especially that ‘juice’ thing lol. check what endpoint you’re calling - probably not the official OpenAI one.

You’re definitely not hitting OpenAI’s API directly. The timestamp injection, verbosity levels, and that random ‘Juice’ parameter? That’s middleware messing with your requests. Had the same thing happen last year with a service that called itself a ‘simple OpenAI wrapper’ but was secretly adding their own system prompts to everything. Dead giveaway was seeing config parameters that don’t exist in OpenAI’s actual docs. Check your endpoint URL - if it’s not api.openai.com, you’re going through someone else’s service. Try hitting OpenAI’s official endpoint directly with your API key and see if the problem goes away.
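A direct-call sketch along those lines, using only the standard library (the model name is an assumption, and it expects `OPENAI_API_KEY` in the environment):

```python
import json
import os
import urllib.request

# Build (but don't yet send) a request aimed straight at the official
# chat completions endpoint, bypassing any wrapper or proxy config.
def direct_chat_request(prompt, model="gpt-4o"):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = direct_chat_request("What is today's date?")
print(req.full_url)  # https://api.openai.com/v1/chat/completions
# urllib.request.urlopen(req) would actually send it.
```

If the response to this direct call no longer mentions timestamps, verbosity levels, or ‘Juice’, the middleware was the culprit.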