The rapid advancements in AI have raised quite a few eyebrows, as big tech companies like Google, Meta, OpenAI and xAI race to create the smartest models at a breakneck pace. And despite the potential benefits AI could bring, there are just as many, if not more, concerns about how it could negatively impact humanity.
So much so that more than 700 prominent public figures have signed a statement calling for a prohibition on the development of AI superintelligence until it can be developed safely and until there's strong public buy-in for it.
The statement, published Thursday, says the development of AI that can outperform humans at nearly all cognitive tasks, especially with little oversight, is concerning.
Fears of everything from loss of freedom to national security risks and human extinction are all top of mind, according to the group.
Prominent signatories include "godfathers of AI" Yoshua Bengio and Geoffrey Hinton, as well as former policymakers and celebrities like Kate Bush and Joseph Gordon-Levitt.
Elon Musk himself has previously warned of the dangers of AI, going so far as to say that humans are "summoning a demon" with the technology. Musk even signed a similar letter alongside other tech leaders in early 2023, urging a pause on AI development.
The Future of Life Institute also released a national poll this week showing that just 5% of Americans surveyed support the current fast, unregulated development toward superintelligence. Nearly two-thirds of respondents, 64%, said superintelligent AI shouldn't be developed until it's proved to be safe and controllable, and 73% want robust regulation of advanced AI.
Interested parties can also sign the statement; as of this writing, the signature count stands at 27,700.

