How to access GPT-2 model through OpenAI API for building text classification system

Need help finding GPT-2 API access for my school project

I’m working on a text classification application for my university assignment and want to use the GPT-2 model specifically. My goal is to build a system that detects content written by GPT-3.5 models, and I’ve heard GPT-2 works well for this purpose.

I’ve been searching through OpenAI’s API documentation but can only locate information about GPT-3.5 and newer models. The model reference section doesn’t seem to have GPT-2 listed anywhere.

Does anyone know if GPT-2 is still available through their API? If not, are there alternative ways to access this model for classification tasks? Any guidance or helpful resources would be greatly appreciated.

yeah, gpt-2 was never actually on the openai api - the api launched with gpt-3, and gpt-2 had already been released as open weights before that. but honestly you’d be better off with specialized detection models anyway. huggingface has some solid ones built specifically for catching ai text - they work way better than trying to hack gpt-2 for detection. plus gpt-2’s pretty outdated now since newer models write completely differently.
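something like this is about all it takes (minimal sketch - that model id is one public example on the hub, but it was trained on gpt-2 output, so sanity-check it on gpt-3.5 text before trusting it):

```python
# minimal sketch, assuming the `transformers` package is installed.
# the model id below is one public example; it was trained to spot
# gpt-2 output, so test it on gpt-3.5 text before relying on it.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

print(detector("The quick brown fox jumps over the lazy dog."))
# -> e.g. [{'label': 'Real', 'score': 0.98}]
```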

GPT-2 isn’t available through OpenAI’s API - the API launched after GPT-2 and only serves newer models. But you can still use it for your classification project through Hugging Face’s Transformers library, which gives you access to all GPT-2 variants (124M through 1.5B parameters) locally. Just download the model weights and run inference on your machine or a cloud instance. This gives you more control over outputs and lets you fine-tune it to detect GPT-3.5 content. Setup is straightforward - a few lines of Python with the Transformers package.
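Here’s a minimal sketch of that setup, assuming the usual `torch` + `transformers` stack; the labeled human-vs-GPT-3.5 dataset and training loop are yours to supply:

```python
# Minimal sketch, assuming `torch` and `transformers` are installed.
# Wraps GPT-2 with a classification head for human-vs-GPT-3.5 text;
# the labeled dataset and fine-tuning loop are left out.
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(
    ["Example passage to classify."],
    return_tensors="pt",
    padding=True,
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (random until fine-tuned)
```

The classification head is freshly initialized, so the outputs only become meaningful after you fine-tune on pairs of human-written and GPT-3.5-generated text.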

GPT-2 was never served through OpenAI’s API - the API launched with GPT-3 when they went commercial, and GPT-2 had already been released as open-source weights. So you’ll need to run GPT-2 locally for your classification project. I’ve used the original OpenAI GitHub repo (openai/gpt-2) - it has the pre-trained models, and you can download the weights directly. No API calls needed. Plus you get full access to the model’s internal representations and token log-probabilities, which you’ll want for building good classifiers. It’s also faster than API calls since there’s no network lag. Just make sure your hardware can handle whichever model size you pick.
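For example, here’s a rough sketch of the log-probability angle - scoring text by how predictable GPT-2 finds it. The original repo is TensorFlow 1.x, so this uses the Transformers port of the same weights just to keep the example short; treat any detection threshold as something you’d tune on your own data:

```python
# Sketch of a log-probability signal: score a passage by GPT-2's average
# per-token loss (lower = more "GPT-2-like"). The original openai/gpt-2
# repo is TensorFlow 1.x; this uses the Transformers port of the same
# weights for brevity. Any detection threshold would need tuning.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_token_loss(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids -> mean cross-entropy over shifted tokens
        return model(ids, labels=ids).loss.item()

nll = avg_token_loss("Some passage whose origin you want to score.")
print(f"avg loss: {nll:.3f}  perplexity: {math.exp(nll):.1f}")
```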

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.