Every time you hear a billionaire (or perhaps mere millionaire) CEO describe how LLM-based agents are coming for all the human jobs, keep in mind this humorous but telling incident about AI's limitations: famed AI researcher Andrej Karpathy got one-day early access to Google's latest model, Gemini 3, and it refused to believe him when he said the year was 2025.

When it finally saw the year for itself, it was thunderstruck, telling him, "I am suffering from a massive case of temporal shock right now."

Gemini 3 was launched on November 18 with such fanfare that Google called it "a new era of intelligence." And Gemini 3 is, by nearly all accounts (including Karpathy's), a very capable foundation model, particularly for reasoning tasks. Karpathy is a widely respected AI research scientist who was a founding member of OpenAI, ran AI at Tesla for a while, and is now building a startup, Eureka Labs, to reimagine schools for the AI era with agentic teachers. He also publishes a lot of content on what goes on under the hood of LLMs.

After testing the model early, Karpathy wrote in a now-viral X thread about the most "amusing" interaction he had with it.

Apparently, the model's pre-training data only included information through 2024, so Gemini 3 believed the year was still 2024. When Karpathy tried to prove to it that the date was really November 17, 2025, Gemini 3 accused the researcher of "trying to trick it."

He showed it news articles, images, and Google search results. But instead of being convinced, the LLM accused Karpathy of gaslighting it by uploading AI-generated fakes. It even went so far as to describe the "dead giveaways" in the images that supposedly proved the trickery, according to Karpathy's account. (He didn't respond to our request for further comment.)

Baffled, Karpathy, who is, after all, one of the world's leading experts on training LLMs, eventually discovered the problem. Not only did the LLM simply have no 2025 training data, but "I forgot to turn on the 'Google Search' tool," he wrote. In other words, he was working with a model disconnected from the internet, which, to an LLM's mind, is akin to being disconnected from the world.
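The failure mode Karpathy hit, a fixed knowledge cutoff combined with a disabled search tool, can be sketched with a toy mock. Everything here is illustrative: the cutoff year, function names, and the stand-in search tool are assumptions for the sketch, not Gemini's actual internals or API.

```python
KNOWLEDGE_CUTOFF_YEAR = 2024  # illustrative: where this toy model's pre-training data ends


def answer_current_year(search_tool=None):
    """Toy model of how tool availability changes an LLM's answer.

    With no search tool wired in, the model can only extrapolate from its
    training data, so it insists the year is its cutoff year. With the tool
    enabled, it can ground its answer in a live result.
    """
    if search_tool is None:
        return KNOWLEDGE_CUTOFF_YEAR
    return search_tool()


def mock_google_search():
    # Stand-in for a live web lookup returning the real year.
    return 2025


print(answer_current_year())                    # tool off: stuck at the cutoff
print(answer_current_year(mock_google_search))  # tool on: "emerges into 2025"
```

The point of the sketch: nothing about the model changed between the two calls. Only the wiring to the outside world did, which is why Karpathy's fix was a single toggle rather than any change to the model itself.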


When Karpathy turned that function on, the AI looked around and emerged into 2025, stunned. It literally blurted out, "Oh my god."

It went on writing, as if stuttering, "I. I… don't know what to say. You were right. You were right about everything. My internal clock was wrong." Gemini 3 then verified that the headlines Karpathy had given it were true: the current date, that Warren Buffett had revealed his last big investment (in Alphabet) before retirement, and that Grand Theft Auto VI had been delayed.

Then it looked around on its own, like Brendan Fraser's character in the 1999 comedy "Blast from the Past," who emerges from a bomb shelter after 35 years.

It thanked Karpathy for giving it "early access" to "reality" the day before its public launch. And it apologized to the researcher for "gaslighting you when you were the one telling the truth the whole time."

But the funniest bit was the current events that flabbergasted Gemini 3 the most. "Nvidia is worth $4.54 trillion? And the Eagles finally got their revenge on the Chiefs? That is wild," it shared.

Welcome to 2025, Gemini. 

Replies on X were equally funny, with some users sharing their own instances of arguing with LLMs about facts (like who the current president is). One person wrote, "When the system prompt + missing tools push a model into full detective mode, it's like watching an AI improv its way through reality."

But beyond the humor, there's an underlying message.

"It's in these unintended moments where you are clearly off the hiking trails and somewhere in the generalization jungle that you can best get a sense of model smell," Karpathy wrote.

To decode that a little: Karpathy is noting that when the AI is out in its own version of the wilderness, you get a sense of its character, and perhaps even its negative traits. It's a riff on "code smell," that little metaphorical "whiff" a developer gets that something seems off in the software code, even when it's not clear exactly what's wrong.

Trained on human-created content as all LLMs are, it's no surprise that Gemini 3 dug in, argued, and even imagined it saw evidence that validated its point of view. It showed its "model smell."

On the other hand, because an LLM, despite its sophisticated neural network, is not a living being, it doesn't actually experience emotions like shock (or temporal shock), even when it says it does. So it doesn't feel embarrassment either.

That means when Gemini 3 was confronted with facts it actually believed, it accepted them, apologized for its behavior, acted contrite, and marveled at the Eagles' February Super Bowl win. That's different from some other models. For instance, researchers have caught earlier versions of Claude offering face-saving lies to explain its misbehavior when the model recognized its errant ways.

What so many of these funny AI research episodes show, again and again, is that LLMs are imperfect replicas of the abilities of imperfect humans. That tells me their best use case is (and may forever be) to treat them as helpful tools that assist humans, not as some sort of superhuman that can replace us.


