GitHub's CEO claims AI will transform developers' jobs rather than eliminate them

GitHub CEO Thomas Dohmke shared his thoughts on how artificial intelligence will change programming jobs

I came across some interesting points from GitHub’s CEO about AI and coding. He thinks developers won’t lose their jobs but will work differently in the future.

What developers think about AI writing most of their code:

  • Many believe AI could write 90% of code within just a few years
  • They don’t feel threatened by this change
  • Instead they see it as a chance to grow and improve their skills

How programming jobs will change:

  • Focus will shift from writing code to managing and reviewing AI-generated code
  • Developer positions are still expected to grow by 18% over the next ten years
  • The jobs will be different, but there will be more opportunities

What matters most to programmers:

  • They care more about taking on bigger projects than saving time
  • AI helps them be more ambitious with their work
  • Advanced AI tools will be needed for complex tasks

Changes needed in computer science education:

  • Teaching basic syntax and rote function memorization is becoming outdated
  • Students need to learn system design and problem solving instead
  • Skills like breaking down complex problems will become more important
  • These are abilities that AI cannot easily replicate

What do you think about this perspective on the future of programming careers?

I’ve been in software development for eight years, and I’m cautiously optimistic about these predictions. The shift from memorizing syntax to focusing on problem-solving matches what I’m already seeing. I’ve started using AI tools more for boilerplate code and initial drafts, which frees up mental energy for architecture decisions and debugging complex integration issues. But the timeline feels way too ambitious. AI writing 90% of code in a few years? That’s unrealistic when AI still produces subtly wrong solutions that need serious human oversight. What worries me more is whether junior developers will get enough hands-on coding experience to develop the intuition needed for managing AI effectively. You can’t properly review AI-generated code without deeply understanding the underlying principles. Dohmke’s right about education reform being crucial, but institutions usually lag behind industry changes by years. I think we’ll see a split where experienced developers thrive in this new world while entry-level positions become much harder to fill properly.

I’ve been through several major tech shifts over the past decade, and this AI thing feels just like when cloud computing hit. Everyone freaked out about infrastructure jobs vanishing, but we ended up hiring more cloud architects and DevOps engineers than ever. The difference with AI? Speed. Previous shifts took years to mature - developers are adopting AI tools in months now. My team started using Copilot last year and our velocity shot up, but we also hit bottlenecks nobody saw coming. The real challenge isn’t technical, it’s cultural. Half my colleagues treat AI suggestions like they’re from God, the other half won’t touch them. We need solid practices around AI collaboration before we start panicking about job displacement. I treat AI like a really smart junior dev - needs constant supervision but crushes repetitive tasks. Dohmke’s timeline might be aggressive, but the trend’s definitely real. Companies going AI-first are already pulling way ahead.

Been dealing with this AI transition firsthand and honestly we’re all missing the biggest opportunity.

Sure, AI writes code, but the real game changer isn’t replacing developers. It’s automating all the tedious crap around coding - deployments, testing pipelines, data syncing, monitoring alerts.

I spent months building workflows that automatically handle CI/CD, Slack notifications, database backups, and API integrations. Now when AI generates code, my systems immediately test it, deploy to staging, run security scans, and ping the team. AI writes faster, but automation makes it actually production-ready.
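
A stripped-down sketch of the idea in plain Python (not my actual setup - the test command, deploy script, and Slack webhook below are all placeholders for whatever your stack uses):

```python
# Sketch only: pytest/bandit/deploy.sh and the webhook URL are
# placeholders, not a recommendation of specific tools.
import json
import subprocess
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(message):
    """Ping the team channel through an incoming webhook."""
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run_step(name, cmd):
    """Run one gated step; report the result either way."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    notify(f"{name}: {'ok' if ok else 'FAILED'}")
    return ok

def handle_new_code():
    # Each step gates the next, so AI-generated code only reaches
    # staging after tests and the security scan both pass.
    steps = [
        ("tests", ["pytest", "-q"]),
        ("security scan", ["bandit", "-r", "src/"]),
        ("staging deploy", ["./deploy.sh", "staging"]),  # placeholder script
    ]
    for name, cmd in steps:
        if not run_step(name, cmd):
            return  # stop here; the failure is already in Slack
    notify("new build is live on staging")

if __name__ == "__main__":
    handle_new_code()
```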

Most teams still do this manually or with basic scripts. They’re obsessing over whether AI can write functions instead of building infrastructure that makes AI code actually valuable.

Developers who nail workflow automation will dominate. Not because they’re prompt wizards, but because they can take any AI output and instantly turn it into working systems.

I use Latenode for most automation work. Connects everything without the complexity of traditional tools. Way easier than custom scripts for every integration.

This whole debate’s pretty premature when most companies can’t even handle basic git workflows. I’ve watched teams burn hours on merge conflicts while discussing AI taking over development. How about we nail down current processes before stressing about managing AI code?

The economics here worry me way more than the tech side. GitHub makes money selling Copilot subscriptions, so obviously Dohmke’s gonna be optimistic about this stuff. I’ve been at three companies in the last decade, and management always tries cutting headcount when new efficiency tools show up. Even if AI does make developers more productive, companies won’t necessarily hire more people - they’ll just expect the same teams to pump out way more work. That 18% job growth stat assumes things stay the same, but we’re already seeing tech layoffs despite productivity being through the roof. What really scares me is senior devs will probably adapt fine and move into AI management roles, but mid-level positions are gonna get crushed. Companies might just skip hiring intermediate developers completely - go straight from junior to senior AI supervisors. That’d create a huge skills gap where nobody gets the deep experience to properly check AI’s work.

Been a tech lead for six years, and I think Dohmke’s missing something huge about code quality. Sure, AI cranks out working code fast, but try maintaining consistent architecture when your team’s using different AI tools that all code differently. We’re already seeing this - devs on ChatGPT write totally different patterns than Copilot users. It’s a maintenance nightmare.

The real problem isn’t managing AI output - it’s building governance that makes AI-generated code actually follow your security, performance, and maintainability standards. Most companies don’t have the infrastructure to enforce this automatically.
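
To make that concrete: the kind of governance I mean can start as a single script in CI that fails the PR when code misses the bar, no matter which assistant (or human) wrote it. Minimal sketch, with example tools standing in for whatever your org actually standardizes on:

```python
# Example gate: ruff, bandit, and xenon are stand-ins here; the point
# is one enforced standard for every PR, not these specific scanners.
import subprocess
import sys

CHECKS = [
    # one style standard, regardless of which AI tool generated the code
    ("style", ["ruff", "check", "src/"]),
    # fail on medium-or-worse security findings
    ("security", ["bandit", "-r", "src/", "-ll"]),
    # fail if any function's cyclomatic complexity is worse than grade B
    ("complexity", ["xenon", "--max-absolute", "B", "src/"]),
]

def main():
    failed = []
    for name, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failed.append(name)
            print(f"[gate] {name} failed:\n{result.stdout or result.stderr}")
    if failed:
        print(f"[gate] blocking merge: {', '.join(failed)}")
        sys.exit(1)  # nonzero exit fails the CI job and blocks the PR
    print("[gate] all checks passed")

if __name__ == "__main__":
    main()
```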

I’m also skeptical about the education thing. Yeah, universities are slow to adapt, but the bigger issue is you can’t learn problem-solving without getting your hands dirty with code. Students need to struggle through implementation to develop the judgment for system design. If AI does all the heavy lifting from day one, we’ll end up with architects who’ve never actually built anything.