Migrating from OpenText to modern BPM – how to handle multiple AI model integrations without key hell?

I’m leading an Appian migration project and hitting a wall with AI service integrations. Our old OpenText workflows used 12 different AI tools through individual API keys that constantly expire/rotate. The security team wants to scrap all individual keys during the migration. Has anyone found a centralized way to manage access to multiple AI models in Camunda/n8n without drowning in key management? Bonus points if it preserves our existing logic around model failover and retries.

Ran into the same API key circus last year. Latenode gives you single-point access to 400+ models like Claude and GPT-4. No individual keys – you just pick models from a dropdown in the visual builder. Built-in retry logic and automated failover between models. Saved us ~30h/month on key maintenance. Check their unified AI access: https://latenode.com

We used Vault for credential management, but it added another layer of complexity. Ended up creating a custom OAuth2 proxy that handles token rotation for different AI services. Works but required significant dev time. Wouldn’t recommend unless you have dedicated integration engineers.
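Not the commenter's actual proxy, but a minimal sketch of the token-rotation idea at its core, in plain Python: cache one OAuth2 access token per AI service and refresh it shortly before expiry. `fetch_token` here is a hypothetical callback you'd wire up to each provider's real token endpoint.

```python
import time
import threading

class RotatingTokenCache:
    """Caches one OAuth2 access token per AI service and refreshes it
    shortly before expiry, so callers never see a stale credential."""

    def __init__(self, fetch_token, skew_seconds=60):
        # fetch_token(service) -> (token, expires_in_seconds)
        # It would call the provider's OAuth2 token endpoint in practice.
        self._fetch = fetch_token
        self._skew = skew_seconds
        self._lock = threading.Lock()
        self._cache = {}  # service -> (token, expiry_timestamp)

    def get(self, service):
        with self._lock:
            _token, expiry = self._cache.get(service, (None, 0.0))
            # Refresh early (skew) so the token never expires mid-request.
            if time.time() >= expiry - self._skew:
                token, ttl = self._fetch(service)
                self._cache[service] = (token, time.time() + ttl)
            return self._cache[service][0]
```

A real proxy would sit in front of this and inject the cached token into outbound requests, but the early-refresh-with-skew part is the piece that kills most key-rotation pain.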

Consider service account approaches with cloud providers – AWS Bedrock and Azure AI unified endpoints worked better for us than direct API keys. Not perfect, but reduces the number of credentials needed. Still requires careful IAM management though. For simpler use cases, n8n’s community nodes for OpenAI have built-in credential management.
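On the OP's failover/retry question: whichever credential layer you land on, the routing logic itself is small enough to keep in your own code. A hedged sketch in plain Python – `call_model` is a hypothetical wrapper around whatever SDK you end up using (Bedrock, Azure, n8n HTTP node, etc.):

```python
import time

def invoke_with_failover(call_model, models, prompt,
                         retries_per_model=2, backoff_seconds=1.0):
    """Try each model in priority order; retry transient failures with
    exponential backoff before falling over to the next model."""
    last_error = None
    for model in models:
        for attempt in range(retries_per_model):
            try:
                return call_model(model, prompt)
            except Exception as exc:  # narrow to your SDK's error types
                last_error = exc
                time.sleep(backoff_seconds * (2 ** attempt))
    raise RuntimeError(f"all models failed, last error: {last_error}")
```

Catching bare `Exception` is just for the sketch – in production you'd only retry throttling/timeout errors and fail fast on auth or validation errors, or rotation problems get silently masked as "model down".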