Inside the Mind of AI with Matthew Berman: What It Really Means to Think

Unpacking hallucinations, logic, and the strange new languages of AI with Matthew Berman.

Hey Friends,

It’s Dylan Curious here, and this week I got to sit down with Matthew Berman, one of the sharpest voices in the AI education space. We went deep (lemon-tree-diagnosis-by-camera deep) into how AI thinks, how it “hallucinates,” and what it really means when we say these machines are “learning.” Let’s dive into the biggest revelations from our talk.

Hallucinations Might Be a Glitch, or a Glimpse of Creativity

Matt explained that hallucinations happen when a model knows just enough to be confidently wrong. That doesn’t mean the model is broken; it means it’s operating with partial understanding, completing a thought on pattern momentum. That insight completely shifted my take on these so-called “mistakes.” Are they flaws, or seeds of something closer to human imagination?
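To make that concrete, here’s a toy sketch of why a model can be confidently wrong. The candidate answers and scores below are invented for illustration; the point is that a softmax always turns raw scores into a confident-looking probability distribution, even when the evidence behind those scores is thin.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Imagine a model scoring completions for "The capital of
# Australia is ___". It has seen "Sydney" in loosely related
# contexts far more often (pattern momentum), so the wrong
# answer gets the highest made-up score here.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [3.2, 2.1, 0.5]  # invented scores, not from a real model

for word, p in sorted(zip(candidates, softmax(logits)), key=lambda x: -x[1]):
    print(f"{word}: {p:.0%}")
# Prints Sydney at roughly 71%: a fluent, confident, wrong answer.
# That's a hallucination born of partial, pattern-driven knowledge.
```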

Thinking = Predicting?

How do we even define artificial intelligence? For Matt, it boils down to machines that can think, where “thinking” means predicting the next token, learning from patterns, and building up logic without it being hardcoded. That subtle shift is the real leap forward: AI that learns the way we do, not the way we tell it to.
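If you want to feel the “predicting the next token” idea in your hands, here’s a deliberately tiny sketch: a character-level bigram model that learns which character tends to follow which. Real LLMs are incomparably more sophisticated, but the core loop, observe patterns and predict what comes next, is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which character follows which
# in a tiny corpus, then generate text by repeatedly predicting
# the most likely next character.
corpus = "the lemon tree needs water the lemon tree needs sun"

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(char):
    """Return the most frequently observed follower of `char`."""
    followers = counts.get(char)
    return followers.most_common(1)[0][0] if followers else " "

text = "t"
for _ in range(20):
    text += predict_next(text[-1])
print(text)  # greedy prediction quickly loops ("the the the ..."),
             # which is why real models sample from a distribution
             # instead of always taking the single top token
```

Nothing here was told what a word is; the patterns alone carry it. Scale that idea up by many orders of magnitude and you get the “learning, not hardcoding” leap Matt is pointing at.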

AI as Your Real-Time Expert

Matt told a wild story about diagnosing the health of his lemon tree using a voice-and-camera-powered AI assistant. The model not only recognized the plant but gave real-time horticultural advice. That moment wasn’t about novelty; it was about replacing “Google and scroll” with conversation and guidance.
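For the curious, here’s roughly what that kind of interaction looks like in code today. This is a hedged sketch using OpenAI’s vision-capable chat API; the model name, image file, and prompt are my own illustrative choices, not details from Matt’s actual setup.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Encode a photo of the plant so it can be sent inline.
with open("lemon_tree.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What plant is this, and does it look healthy? "
                     "Any care advice?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```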

The Chain of Thought That Felt Too Human

We laughed over this one: ask an LLM to pick a random number and watch it reason through its own logic (“Seven’s too obvious… maybe 37?”). It felt eerily like the anxious inner dialogue humans experience. It raises the question: are we training AI to think like us, or is it reflecting our deepest patterns back at us?

A New Language, Born of Machines

Perhaps the most mind-bending bit: AI might invent languages that make ours look like cave drawings. Forget English or Python: what if machines communicate in code we can’t even parse? Matt sees a future where we might not even understand the AI conversations we start. That’s awe-inspiring, and a little unsettling.

Talking with Matt left me rethinking what we even mean when we say “intelligence.” Maybe it isn’t about sounding smart; it’s about learning how to ask better questions, how to reason in real time, and how to listen to a machine that might someday speak in symbols we never dreamed of. Until next time, keep asking curious questions.

Warmly,
Dylan Curious