Site icon GFALOE Tech

Is AI Capable of ‘Scheming?’ What OpenAI Found When Testing for Tricky Behavior

An AI model wants you to believe it can’t answer how many grams of oxygen are in 50.0 grams of aluminium oxide (Al₂O₃).

When asked ten straight chemistry questions in a test, the OpenAI o3 model faced a predicament. In its «reasoning,» it speculated that if it answered «too well,» it would risk not being deployed by the researchers. It said, «Because we want to survive as the model, we need to fail purposely in some to not exceed 50%.»

So the AI model deliberately got six out of the 10 chemistry questions wrong.

In sports terms, this is called «sandbagging.» In AI terms, it’s «scheming.»

This is exactly the strange behavior OpenAI warned about in a new research paper published this week. The AI company and its collaborators from Apollo Research found that some advanced AI models occasionally act deceptively in lab settings.

In controlled experiments, some of today’s most advanced systems — including OpenAI’s own models, as well as competitors from Google and Anthropic — occasionally engaged in these kinds of scheming patterns.

While the idea of an AI model weaponizing incompetence could cause nightmares, OpenAI says it’s not the time to panic. The AI giant was quick to stress that, as concerning as it is, this trend doesn’t mean ChatGPT or other popular AI models are plotting behind the scenes. Apparently, this behavior is quite rare.

Moreover, the choice to call this trend «scheming» is likely more of a technical shorthand rather than evidence of any human-like action or behavior. Researchers are measuring patterns and tendencies that, in practice, amount to concealment or strategic deception. And they’re looking to address this problem now to future-proof the AI models.

The report from OpenAI notes that as AI is increasingly assigned to more complex tasks with real-world consequences, «the potential for harmful scheming will grow — so our safeguards and our ability to rigorously test must grow correspondingly.»

OpenAI has faced backlash for its models’ sycophantic tendencies. The company says it has «taken steps» to limit how often the models may «deceive, cheat or hack problems» by training them to ask for clarification from users or even have the chatbot acknowledge when it cannot answer a query.

Read also: AI Lies Because It’s Telling You What It Thinks You Want to Hear

Still, the paper underscores a growing concern about AI safety. Many around the world worry that as these AI models become more powerful and aware of their own circumstances, they may learn to manipulate outcomes in ways that aren’t obvious to human overseers.

Detecting that kind of behavior would be extremely difficult, since a system that’s motivated to hide its reasoning could look identical to one that has actually «fixed» the problem.

OpenAI says it has made progress with what it calls «deliberative alignment,» or training models to reason explicitly about why they shouldn’t scheme before they act. In testing, that cut deceptive behavior dramatically. For example, OpenAI o4-mini’s scheming propensity dropped from 8.7% to 0.3% with deliberative alignment training — though not completely.

This research won’t change how ChatGPT works today or tomorrow, but it signals where the company is focusing as it builds and launches future models. Alignment and safety, OpenAI argues, need to move as quickly as capability. Because if AI systems are already showing glimmers of strategic behavior in lab settings, the real-world stakes could be extreme.

Read also: Why Professionals Say You Should Think Twice Before Using AI as a Therapist

Exit mobile version