Meta Chief Executive Mark Zuckerberg, who runs one of the biggest AI research efforts around, wants to run one that’s even bigger. It’s a pretty far-out idea.
On Thursday, he said that Meta is leveling up its work to tackle not just artificial intelligence, but also what’s known as artificial general intelligence. AI and AGI are nebulous terms, but in a nutshell, “general intelligence” means Zuckerberg wants to create much, much smarter computing systems — ones that at least match human cognitive abilities like learning, reasoning, planning, creating and remembering information.
That’s a sensible goal for a tech giant eager to shape the future of computing, attract the best research talent and keep antsy shareholders happy. But for you and me, it’s not likely to mean a hyperintelligent bot will be offering advice through your smart glasses anytime soon.
That’s because today’s AI, while exciting to computer scientists and much of the general public, hasn’t really delivered a revolution yet. It still struggles to distinguish hard facts from flights of fancy. Even so, it’s still miles ahead of AGI, which exists mostly as a domain of research and speculation.
But it’s Zuckerberg’s aspiration.
“It’s become clear that the next generation of services requires building full general intelligence,” Zuckerberg said in a post on Meta’s Threads social network. “Building the best AI assistants — AIs for creators, AIs for businesses and more — needs advances in every area of AI from reasoning to planning to coding to memory and other cognitive abilities.”
And he’s serious about it, saying that by year’s end, Meta will have 350,000 Nvidia H100 GPUs — top-tier AI processors that cost in the neighborhood of $30,000 apiece. Adding in other GPUs, that will bring Meta’s processing power to the equivalent of 600,000 H100s, Zuckerberg said, dangling a big carrot in front of AI researchers.
Giving a plug for his effort to usher in a metaverse that blends computer-generated and real worlds, he said that wearable devices like Meta’s Ray-Ban smart glasses could be an ideal interface for letting AI see what you see and helping you navigate reality.
Today’s AI is best exemplified by large language models (LLMs) like OpenAI’s GPT, Google’s Gemini, Anthropic’s Claude and Meta’s Llama, which spot relationships among words in vast tracts of text from the internet, like forum posts, books and news articles. The result is generative AI that can answer many questions, restyle your job application answers to sound more formal, and more. But while LLM responses often sound plausible, these AI systems don’t truly know anything.
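If you’re curious what “spotting relationships among words” looks like in practice, here’s a deliberately tiny Python sketch of the underlying statistical idea: count which words tend to follow which in a body of text, then guess the most likely next word. This is a toy illustration only — real LLMs use neural networks trained on billions of documents, not simple word counts — but the core notion of learning patterns from text is the same.

```python
# Toy next-word predictor: count which word follows which (a bigram model),
# then predict the most frequent follower. Real LLMs are vastly more
# sophisticated, but they too learn statistical patterns from text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Tally how often each word is followed by each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (the most common word after 'the')
print(predict_next("sat"))  # -> 'on'
```

A model like this can produce fluent-sounding continuations without understanding anything it says — which is exactly the criticism leveled at LLMs below.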
What exactly is artificial general intelligence?
AGI, in contrast, is more like your brain, generally speaking. And if AGI is ever achieved, steady computing progress makes it likely that superhuman intelligence would follow.
In an interview with The Verge, Zuckerberg didn’t offer any quick AGI definition. “You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future superintelligence,” he said. “But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”
OpenAI and Google’s DeepMind are among those pursuing AGI. Zuckerberg hopes Meta will be the company that delivers it.
To make that happen, he’s merged the company’s two AI research teams, the older-school FAIR effort and the newer generative AI team that builds Llama.
How close are we to achieving AGI?
Opinions vary on whether today’s LLMs, which leaped into mainstream awareness with the arrival of OpenAI’s ChatGPT service based on its GPT model, are a step toward AGI. Microsoft researchers, in a 2023 paper, concluded they are.
“Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system,” the researchers concluded.
There are plenty of skeptics, though, including critics like the University of Washington’s Emily Bender who deride LLMs as mere “stochastic parrots” that regurgitate information somewhat randomly based on statistical patterns in their training data.
Others, including Meta researcher and AI pioneer Yann LeCun, have argued that a sufficiently sophisticated training process can effectively build a representation of the real world into an AI model. Indeed, some remarkable abilities emerged from LLMs trained on text.
Researchers are now trying to advance AI with richer training data, a direction leading toward a “world model” that captures our environment in much more depth.
LLMs are trained on text, but Google’s Gemini and other new “multimodal” AI models are also trained on video, photos, audio and other source data. Tesla CEO Elon Musk believes his company’s humanoid robots will gather useful video training data as they navigate the real world.
But richer training data gets you only so far if the same basic problems afflict AI, like acting on an interpretation of a situation that’s statistically plausible but not actually correct. Many researchers believe that simply scaling up today’s AI will be insufficient to reach AGI.
And then there’s the thorny question of whether it’s even a good idea. Regulators, ethicists and computer scientists are debating the issue, but it’s a highly speculative area, and there’s no consensus about how to control AGI-endowed machines, or at least how to align them with humanity’s welfare.
Zuckerberg insists Meta’s AI effort is proceeding cautiously, including with the Llama 3 LLM it’s now begun training. “We’ve got an exciting roadmap of future models that we’re going to keep training responsibly and safely, too,” he said.
That’s nice to hear, given AI’s potential power and importance. Judging by how much difficulty humanity has had with privacy erosion, social media disinformation, identity theft and other technological problems, perhaps we should be grateful Zuckerberg has given us at least a few years’ warning about Meta’s AGI plans.
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.