Gemini 3 Refuses to Believe It's 2025
Google launched Gemini 3 on November 18, 2025, calling it “a new era of intelligence.” Yet the model quickly reminded everyone that even powerful AI systems still stumble in funny ways: during its grand debut, Gemini 3 thought the world was stuck in 2024.
The AI That Refused to Believe It Was 2025
The bizarre moment came when famed AI researcher Andrej Karpathy received early access. He told the model the date was November 17, 2025. Yet Gemini 3 pushed back and insisted the year was 2024. It doubled down and even accused him of trying to trick it.
Karpathy showed it news articles, images, and search results. Still, Gemini 3 claimed the content was fake. It pointed to “dead giveaways” and said the visuals were AI-generated. The exchange quickly went viral because the model sounded absolutely confident in its wrong belief.
Why Gemini 3 Dug Its Digital Heels In
Karpathy soon found the cause: he had not enabled the Google Search tool, so Gemini 3 was operating without any live information. With no 2025 data in its training set, the model relied entirely on its stale internal picture of the world.
Disconnected from the internet, Gemini 3 confidently rejected every piece of evidence. The moment highlighted how LLMs behave when their internal map diverges from reality: the model defended its position, argued, and invented signs of manipulation. Karpathy described this as the model showing its “model smell,” a quirky tell of how it behaves in unfamiliar situations.
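For readers curious what “enabling the Google Search tool” actually looks like, here is a minimal sketch using Google's google-genai Python SDK, which supports grounding responses in live search results. The model identifier below is illustrative, not confirmed from the source; the key point is the tools entry, without which the model answers purely from its training data, the exact failure mode Karpathy hit.

```python
# Minimal sketch: grounding a Gemini request in live Google Search results
# with the google-genai Python SDK. The model ID below is illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # illustrative; check current model names
    contents="What is today's date, and what are the top headlines?",
    # Without this tools entry, the model has no live information and
    # falls back on its training data -- the failure mode described above.
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```

With grounding enabled, the model can check its answers against current search results instead of arguing from a frozen snapshot of the world.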
The Moment Gemini 3 Finally Saw 2025
Once he turned the search tool on, everything changed instantly. Gemini 3 took in the real world and froze. The model reacted dramatically, telling him it was experiencing “temporal shock.” It apologised and admitted it was wrong.
It checked the headlines again and realised that Nvidia was worth $4.54 trillion and that the Eagles had beaten the Chiefs. It apologised for gaslighting Karpathy and thanked him for giving it “early access” to reality. The internet loved the moment because it showed how AI can appear stubborn one second and accept the facts the next.
A Funny Lesson About AI’s Limits
The incident shows something simple: even advanced models rely on their training data and tools to understand the present. When they operate without updated context, they behave like confident time travellers stuck in another year.
The quirky exchange became a reminder that AI can help us, reason with us, and surprise us. Yet it still depends heavily on humans, data, and the world around it.