Changes in AI treatment between 2023 and 2025

I’ve been reflecting on how our interaction with AI has changed recently.

In 2023, there was a mix of excitement and concern regarding AI. Many viewed it as a new and unpredictable technology that could either transform our lives or pose serious threats. Most people were still learning how to use ChatGPT effectively.

Fast forward to 2025, and the atmosphere has shifted significantly. We now regard AI as just another everyday tool. The initial excitement has leveled off, but AI has become ubiquitous. Many of my friends, who were previously apprehensive, now use it for work-related tasks without hesitation.

What changes have you noticed in how people discuss and use AI over the past couple of years? Are we genuinely more at ease with it now, or has it simply become a familiar part of our lives? I’m eager to hear other perspectives on this transition.

The shift was so gradual I didn’t notice until recently. Now there are two camps - people who quietly use AI in their work and those still debating it in meetings. No middle ground left. I’m in marketing. We went from dedicated AI brainstorming sessions in 2023 to just using it for drafts without saying anything. The novelty’s gone but it’s still useful. We’ve developed unspoken rules about when AI works versus when you need human judgment. Nobody taught us - we just figured it out. The biggest change isn’t the tech itself but how we think about it. It went from mysterious black box to something like autocorrect - helpful but needs supervision.

Biggest change? Onboarding new engineers. Two years ago we’d spend weeks explaining why AI code suggestions needed human review. Now junior devs show up already knowing how to work with AI - they treat it like Stack Overflow or any other reference tool.

The conversation shifted from philosophical to practical. We stopped asking ‘should we use AI’ and started debating which models work best for what. My team runs AI code completion constantly but we’re still arguing whether GPT-4 or Claude handles our specific stuff better.

The paranoia’s still there, just smarter now. Everyone uses AI but nobody trusts it completely. That healthy skepticism feels way more mature than the panic or blind hype we had before.

Same here - the excitement definitely faded. My kid’s school banned ChatGPT, then started teaching with it six months later. What’s funny is we don’t even say “artificial intelligence” anymore, just “AI tools.” Makes it sound less threatening. The hype moved on but people kept using it.

I’ve watched this shift happen at my company and it’s wild. We spent all of 2023 arguing about AI ethics, then by late 2024 everyone was just using it like any other tool. The whole conversation changed - instead of worrying about job losses, we’re talking about prompt engineering and checking output quality. I remember when using ChatGPT felt risky and you had to think twice about what you shared. Now people throw meeting notes into AI summarizers without blinking. It became boring way faster than anyone expected. Though honestly, I think we’re still figuring out what it all means - we just stopped talking about it out loud.

The normalization crept up on me. Back in 2023, my team had these heated debates about whether AI code reviews were cheating. Now? It’s just another tool and nobody bats an eye. What gets me is how we talk about it now - dropped the whole ‘AI revolution’ thing for boring ‘efficiency improvements.’ The fear’s still there though, just hidden. People don’t refuse to use AI anymore, they just quietly double-check everything it spits out. The tech became boring but we’re all still worried about screwing up - we just don’t say it out loud anymore.