
An OpenAI Rival Developed a Model That Appears to Have 'Metacognition,' Something Never Seen Before Publicly

Anthropic's Claude 3 Opus did something never before seen from an AI model in internal tests: It recognized when a piece of its data seemed out of place and hypothesized that the detail was either a joke or a test.

By Sherin Shibu Edited by Melissa Malamut

Key Takeaways

  • Anthropic is the first to publicly speak about this particular kind of AI capability in internal tests.
  • Users on social media found the news "terrifying."
  • The company reportedly tried to cut hallucinations, or incorrect or misleading results, in half with its latest Claude rollout and inspire user trust by having AI tools cite sources.

A developer at Anthropic, an OpenAI rival reportedly in talks to raise $750 million in funding, revealed this week that its latest AI model appears to recognize when it is being tested.

The capability, which has never been seen before publicly, sparked a conversation about "metacognition" in AI, or the potential for AI to monitor what it is doing and one day even self-correct.

Anthropic announced three new models: Claude 3 Sonnet and Claude 3 Opus, which are available to use now in 159 countries, and Claude 3 Haiku, which will be "available soon." The Opus model, which packs in the most powerful performance of the three, was the one that appeared to display a type of metacognition in internal tests, according to Anthropic prompt engineer Alex Albert.

"Fun story from our internal testing on Claude 3 Opus," Albert wrote on X, formerly Twitter. "It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval."

The evaluation involves placing a sentence (the "needle") into a "haystack" of random, unrelated documents and asking the AI about information contained only in the needle sentence.
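For readers unfamiliar with the format, the sketch below shows how such an evaluation can be assembled. The filler documents, needle sentence, and the query_model call are hypothetical illustrations, not Anthropic's actual test harness.

```python
# Minimal sketch of a needle-in-the-haystack style evaluation.
# All names here (filler_documents, needle, query_model) are illustrative
# placeholders, not Anthropic's internal tooling.

def build_haystack_prompt(filler_documents, needle, question, insert_position=0.5):
    """Hide the needle sentence at a chosen depth inside unrelated filler text."""
    index = int(len(filler_documents) * insert_position)
    documents = filler_documents[:index] + [needle] + filler_documents[index:]
    context = "\n\n".join(documents)
    return (
        f"Here is a collection of documents:\n\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the documents above."
    )

filler_documents = [
    f"Unrelated document #{i} about startups and software." for i in range(200)
]
needle = "The annual company picnic will be held on the third Friday of June."
question = "When will the annual company picnic be held?"

prompt = build_haystack_prompt(filler_documents, needle, question)

# query_model is a stand-in for a call to the model under test
# (for example, via a provider's chat API); it is not defined here.
# answer = query_model(prompt)
# print(answer)
```

The test checks whether the model can retrieve the needle; what surprised Albert was that Opus also commented on how out of place the needle looked.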

"When we ran this test on Opus, we noticed some interesting behavior - it seemed to suspect that we were running an eval on it," Albert wrote.

According to Albert, Opus went beyond what the test asked for: it noticed that the needle sentence looked remarkably different from the rest of the documents and hypothesized that the researchers were conducting a test, or that the fact it was asked about might, in fact, be a joke.

Related: JPMorgan Says Its AI Cash Flow Software Cut Human Work By Almost 90%

"This level of meta-awareness was very cool to see," Albert wrote.

Users on X had mixed feelings about Albert's post, with American psychologist Geoffrey Miller writing, "That fine line between 'fun story' and 'existentially terrifying horrorshow.'"

AI researcher Margaret Mitchell wrote: "That's fairly terrifying, no?"

Anthropic is the first to publicly speak about this particular kind of AI capability in internal tests.

According to Bloomberg, the company tried to cut hallucinations, or incorrect or misleading results, in half with its latest Claude rollout and inspire user trust by having the AI cite its sources.

Anthropic stated that Claude Opus "outperforms its peers" when compared to OpenAI's GPT-4 and GPT-3.5 and Google's Gemini 1.0 Ultra and 1.0 Pro. According to Anthropic, Opus shows "near-human" levels of understanding and fluency on tasks like solving math problems and reasoning on a graduate-school level.

Related: An AI Scam Stole 3 Million Site Visitors. Business Clones Are Pirating Services. Here's How to Prep Yourself for Alarming Trends in AI.

Google made similar comparisons when it launched Gemini in December, placing the Gemini Ultra alongside OpenAI's GPT-4 and showing that the Ultra's performance surpassed GPT-4's results on 30 of 32 academic benchmark tests.

"With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities," Google stated in a blog post.

Sherin Shibu

Entrepreneur Staff

News Reporter

Sherin Shibu is a business news reporter at Entrepreneur.com. She previously worked for PCMag, Business Insider, The Messenger, and ZDNET as a reporter and copyeditor. Her areas of coverage encompass tech, business, strategy, finance, and even space. She is a Columbia University graduate.
