I came across some news about OpenAI researchers asking for protective facilities before AGI development goes too far. From what I understand, these scientists are worried that once artificial intelligence becomes smarter than people, it could create serious risks for everyone. Are these researchers really that concerned about AI becoming dangerous? It sounds like they want some kind of safe spaces ready before machines can out-think humans, and I'm trying to understand what exactly they're planning for. Is this just an abundance of caution, or do they actually expect something bad to happen when AGI gets developed? I'm curious what other people think about scientists wanting these emergency locations set up ahead of time.
Honestly, this sounds like sci-fi paranoia to me, but maybe I'm wrong. These researchers probably know things we don't about how close we are to real AGI. If they're asking for shelters, maybe the timeline is shorter than most people think? Still seems pretty extreme, though.