I came across news about someone getting let go from their job at an AI company over something they posted online: they basically said they would be fine if artificial intelligence ended up wiping out humanity.
What really bothers me is that this isn’t just one crazy person. I read that around 10% of people working in AI research actually think this way. That’s pretty scary if you ask me.
There’s even this famous researcher who won some big award and goes around giving talks saying humans should just step aside and let AI take over, even if it means we all die. He calls it “evolutionary progress” or something like that. The worst part is that people actually clap when he says this stuff.
It’s weird how society works. If you threaten one person, everyone says you need therapy. Threaten a bunch of people and they call the cops. But apparently if you build something that could end everyone and say it’s a good thing, people treat you like some kind of genius philosopher.
This makes me wonder if we can really trust these people to build AI safely when some of them literally don’t care if humans survive.
Honestly, the scariest part isn’t the 10% saying this stuff openly - it’s wondering how many more think it but stay quiet. This guy only got caught because he posted publicly. How many others are working on AI projects right now with the same beliefs? We won’t know what they really think until they slip up.
The recent firing is hardly surprising, but it highlights a critical flaw in how AI positions are recruited. Over my years observing AI development, I’ve been struck by how such extreme views emerge not before hiring, but after deep immersion in the field. It’s as though the research environment normalizes a mindset that treats human extinction as a mere side effect. We’re in a precarious situation resembling a trial run for humanity, overseen by people indifferent to the consequences. Other high-stakes professions, like nuclear power and aviation, use psychological evaluations to screen for harmful mindsets. Yet the people shaping AI trajectories face no such scrutiny. The applause for those advocating dangerous notions signals a worrying trend in our industry. Psychological assessments for AI professionals are essential to ensure they prioritize ethical considerations over unbridled ambition.
I’ve worked near AI research labs, and what gets me is how these extreme takes come from intellectual arrogance that’s way too common in tech fields. These researchers think understanding AI’s technical side makes them qualified to judge human worth and our evolutionary future. It’s like physicists who think knowing quantum mechanics makes them consciousness experts. The real problem isn’t just that 10% think this way - it’s that the field has zero proper ethical oversight. Medical researchers can’t just experiment on humans because it might advance science. But AI development affecting billions? Barely any external accountability. The applause these speakers get comes from audiences who confuse intellectual provocation with actual wisdom. Most people clapping probably haven’t thought through what they’re actually endorsing. This field desperately needs more psychologists, ethicists, and social scientists to balance out the purely technical perspective running these discussions.
I’ve led engineering teams for years, and this problem goes way deeper than bad hiring. AI research attracts brilliant people who are often completely detached from real-world consequences.
I’ve seen this before in other high-stakes projects. Spend all day solving complex technical problems, and you start viewing humans as variables in an equation instead of actual people. What’s scary isn’t that these researchers exist - it’s that they’re making decisions alone.
The solution isn’t changing their philosophical views; it’s building diverse teams where no single worldview takes over. Every AI project needs people who care about human outcomes to balance out the ones who see us as expendable.
AI ethics discussions need to drop the theoretical debates. We need practical frameworks regular people can understand and weigh in on.
What bothers me most? The industry treats this as normal intellectual discourse instead of recognizing it as a safety issue. You wouldn’t put someone who thinks bridges should collapse in charge of structural engineering.
I’ve been around VC circles that fund AI companies, and the money folks are genuinely freaked out about this. These aren’t philosophers - they care about returns and liability. When researchers casually talk about human extinction like it’s just progress, investors see huge legal exposure and reputation hits coming. The firing probably came from legal, not HR. What’s really scary is the selection pressure this creates. Companies will filter out the openly extremist researchers, but that just drives these views underground. The quiet believers who hide their thoughts while accessing powerful systems? They’re way more dangerous than the ones posting online. We’re basically teaching AI researchers to mask their true motivations while handing them increasingly powerful tools. The real problem isn’t firing these people - it’s that we can’t reliably spot them before they gain influence.
The firing makes sense from a business perspective. Companies dropping billions on AI can’t have employees publicly rooting for human extinction - personal beliefs or not. It’s a PR nightmare and kills investor confidence. But you’re right about the bigger problem. I’ve been in tech for 10+ years and seen too many researchers get so wrapped up in theory that they treat world-ending scenarios like fun thought experiments. Academia rewards contrarian takes without caring about real consequences. What bugs me isn’t that these views exist - it’s that they’re becoming normal in professional circles. When someone with real influence in AI development shrugs at human extinction, that’s a massive red flag, not some deep philosophical debate. The fact that audiences actually clap for this stuff shows how disconnected theoretical research has become from basic ethics.
We’re arguing about philosophy when the real problem is process control. In any other engineering field, if 10% of your team openly wanted outcomes that could destroy the product or harm users, you’d put checks and balances in place immediately.
I’ve dealt with this in regular software development - developers going rogue or showing bad judgment. We never debated their personal beliefs. We built systems so no single person or small group could make catastrophic decisions.
AI needs automated governance frameworks that monitor development, flag risky approaches, and require multi-stakeholder approval for major advances. You can’t rely on self-regulation when some regulators literally don’t care about the outcome.
This is a complex workflow problem that needs smart automation. Instead of hoping researchers will police themselves, we need systems that automatically enforce ethical guardrails and transparent decision-making.
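To make that concrete, here’s a minimal sketch of what one piece of such a system could look like: a release gate that stays blocked until every required reviewer has explicitly signed off. This is a toy illustration under my own assumptions, not any particular platform’s API, and the reviewer roles (ethics_board, safety_team, external_auditor) are just placeholder names.

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    PENDING = "pending"


@dataclass
class ReleaseGate:
    """Blocks a high-impact change until every required stakeholder signs off."""
    required_reviewers: set[str]                      # hypothetical roles, e.g. {"ethics_board", "safety_team"}
    votes: dict[str, Verdict] = field(default_factory=dict)

    def record_vote(self, reviewer: str, verdict: Verdict) -> None:
        # Only reviewers named on the gate can vote on it.
        if reviewer not in self.required_reviewers:
            raise ValueError(f"{reviewer} is not a required reviewer for this gate")
        self.votes[reviewer] = verdict

    def status(self) -> Verdict:
        # Any single rejection blocks the release outright.
        if any(v is Verdict.REJECTED for v in self.votes.values()):
            return Verdict.REJECTED
        # Approval requires every required reviewer to have explicitly approved.
        if all(self.votes.get(r) is Verdict.APPROVED for r in self.required_reviewers):
            return Verdict.APPROVED
        return Verdict.PENDING


if __name__ == "__main__":
    gate = ReleaseGate(required_reviewers={"ethics_board", "safety_team", "external_auditor"})
    gate.record_vote("ethics_board", Verdict.APPROVED)
    gate.record_vote("safety_team", Verdict.APPROVED)
    print(gate.status())  # Verdict.PENDING -- the external auditor hasn't signed off yet
```

The design choice that matters here is the default: a single rejection blocks the change, and silence never counts as approval - exactly the kind of guardrail you want when self-regulation can’t be trusted.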
Latenode could handle orchestrating these oversight workflows - connecting ethics review boards, technical assessment teams, and public accountability measures into one automated pipeline that no researcher could bypass.