I’ve been working on a personal project that uses OpenAI’s API. Recently I tried to integrate their latest o3 model into my application but ran into an issue. The API requests kept getting blocked and I couldn’t figure out why at first.
After some investigation, I discovered that OpenAI now requires full identity verification for their newest model. This means you have to submit a government ID and complete a 3D facial scan before you can access the o3 API.
This seems excessive for developers who just want to experiment with small projects. I only need basic access to test some concepts, but now I have to go through this whole verification process. It feels like they’re making it harder for indie developers to try out their technology.
I get that they want to prevent misuse, but maybe they could offer a limited tier for unverified users instead. Right now I’m thinking about switching to Anthropic’s Claude API or maybe looking into hosting my own model on a cloud platform. Has anyone else run into this issue?
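If you do end up switching providers, one way to keep the cost of that move low is to hide the vendor behind a small interface so the rest of your app never imports an SDK directly. Here's a minimal sketch in Python; `ChatProvider`, `EchoProvider`, and `summarize` are made-up names for illustration, not part of any SDK:

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Anything that can turn a prompt into a completion string."""

    def complete(self, prompt: str) -> str: ...


class EchoProvider:
    """Stand-in provider for local testing: no network, no API key.
    A real implementation would wrap whichever SDK you settle on
    (OpenAI, Anthropic, a self-hosted model) behind the same method."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def summarize(provider: ChatProvider, text: str) -> str:
    # Application code depends only on the ChatProvider interface,
    # so swapping vendors means swapping one object at startup.
    return provider.complete(f"Summarize in one sentence: {text}")


if __name__ == "__main__":
    print(summarize(EchoProvider(), "o3 now requires identity verification"))
```

The nice side effect is that the stub provider also gives you a way to test your prompt-assembly logic offline, without burning tokens or hitting a verification wall.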
The verification barrier caught me off guard as well when I attempted to integrate o3 into a client project. What bothers me most is the lack of transparency around this requirement: it wasn't clearly documented in their API documentation when I first started development. I ended up having to explain the delay to my client and pivot back to GPT-4 Turbo to meet our deadline.

From a business perspective, I understand OpenAI's position given the potential capabilities of o3, but the implementation feels rushed. They could have provided better advance notice or grandfathered existing API users in good standing. I've started exploring Google's Gemini Pro as an alternative since their access requirements are more reasonable for commercial development work.

The irony is that this verification process might actually push more developers toward self-hosted solutions or competitors, which seems counterproductive to OpenAI's market position.
Totally get where you're coming from, it's a real bummer. I was looking forward to trying out o3 too, but the whole ID thing feels like overkill for just testing ideas. Guess I'll stick with GPT-4 for now as well, it's just easier.
I encountered the same verification requirement when trying to access o3 last week. What surprised me most was how thorough the process actually is: they don't just want a photo of your ID, but also require you to complete multiple verification steps, including that facial scan you mentioned. The whole thing took about 15 minutes to complete.

Honestly though, after going through it, I can understand why they implemented this. The o3 model is significantly more capable than their previous releases, and I imagine they're being extra cautious about who gets access initially. The verification process, while annoying, was actually pretty straightforward once I started it.

That said, your point about indie developers is valid. Maybe they'll introduce different access tiers once the model moves out of what feels like a beta phase. For now, if you're not comfortable with the verification, Claude 3.5 Sonnet has been performing really well for most of my use cases.