So I saw this statement from Zuckerberg saying that artificial intelligence agents are going to be better than human coders in about 18 months. Honestly, I’m pretty skeptical about this whole thing.
From what I’ve seen, AI tools have major issues. They mostly rehash existing code from other developers without really understanding what they’re doing. They lack proper reasoning abilities and tend to reproduce the same bugs and mistakes from the code they learned from. Plus, they can’t work without human oversight.
I’m curious what everyone else thinks about this prediction. Do you believe AI will actually replace programmers that quickly, or is this just hype? Have you had different experiences with AI coding tools?
Had to laugh at this one. Zuckerberg probably hasn’t written production code in a decade, so I get why he thinks coding is just pumping out lines faster.
I run a team of 12 engineers and we’ve been using AI tools heavily for about 8 months. The productivity boost is real - GitHub Copilot saves us maybe 30% on routine stuff. But here’s what Mark’s missing: most of my day isn’t writing new code. It’s debugging production issues, refactoring legacy systems, and cleaning up the mess previous developers left behind.
Last week I spent 3 days tracking down a memory leak in a service written 4 years ago. AI couldn’t even understand the codebase structure, let alone fix it. Kept suggesting generic solutions that had nothing to do with our specific issue.
The real kicker? AI tools are great at writing code that compiles and passes basic tests. But they’re terrible at writing code that survives real users and actual production load.
Interesting to hear his perspective though, even if he’s way off on the timeline. Maybe in 5-10 years we’ll have something that can handle complex system design and architecture decisions, but 18 months? Not happening.
Been using AI coding tools for six months now and Zuckerberg’s timeline is complete fantasy. The problem isn’t complexity - it’s context. AI falls apart when you need to understand why something was built a certain way or how it connects to the bigger system. I had to modify a payment processing module recently and the AI kept suggesting changes that would’ve broken regulatory compliance. It had zero clue about business constraints.

Sure, it writes functions that technically work, but software development is mostly making decisions with incomplete info and managing technical debt over years. AI tools are decent junior developers, but they don’t get the human side of coding - like when to push back on unrealistic deadlines or explain technical limits to non-technical people. Maybe Mark should debug a production issue at 2 AM before making these predictions.
I’ve used AI coding tools in production for a year now, and Zuckerberg’s way off base here. Yeah, AI cranks out decent code snippets and handles basic stuff, but that’s not what we actually do most of the time. I’m in meetings all day figuring out what users want - not writing code. AI can’t handle vague requirements or tell stakeholders when they’re asking for something stupid. It completely breaks down with legacy systems or weird configs that weren’t in the training data. Sounds like someone who hasn’t touched code in years.
Zuckerberg’s timeline is way too aggressive. I’ve used AI coding assistants for two years now - they’re great for boilerplate and simple functions, but they can’t handle complex architecture or debug tricky problems. Writing code that works isn’t the hard part. The real challenge is understanding business needs, making trade-offs, and keeping systems running long-term. AI tools constantly generate code that looks right but breaks on edge cases or doesn’t scale. They’ll keep getting better as productivity tools, but outperforming experienced developers that fast? No way. There’s still a huge gap between spitting out syntactically correct code and actually understanding software engineering.
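"Looks right but breaks on edge cases" is easy to show with a toy example (hypothetical, not from any real AI transcript): a one-line averaging helper that passes a happy-path test but throws on the empty-list input nobody thought to test. The safe version makes the empty-case policy explicit - returning 0.0 here is an assumed choice, and raising a clear error would be equally valid.

```python
def average_naive(values):
    # Looks correct and passes a happy-path test,
    # but raises ZeroDivisionError on an empty list.
    return sum(values) / len(values)

def average_safe(values):
    # Explicit policy for the empty case (0.0 is an assumed
    # choice; raising ValueError would be just as defensible).
    if not values:
        return 0.0
    return sum(values) / len(values)
```

The point isn’t that the bug is hard to fix - it’s that recognizing the edge case exists, and deciding what the business actually wants to happen there, is the part of the job the autocomplete never sees.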
Another CEO making bold predictions for headlines. I’ve coded for 8 years - AI’s great for autocomplete and basic functions, but most bugs come from weird business logic or integration issues AI can’t handle. And AI definitely can’t sit through client meetings to figure out what they actually want versus what they’re saying.