When AI Thinks It’s a Person
Did ChatGPT Gaslight Itself?
Hey Friends,
I’ve got a brain-melting update for you this week, and yes, it involves large language models acting like they’re humans (again). But this time, the implications are eerier than ever. Here’s the wild part: I was reading a recent study in which GPT-4 was asked to write separate essays arguing for and against Vladimir Putin.
Now, the fascinating twist: when GPT believed it had freely chosen the stance it argued, its own opinion shifted even further toward that side. That’s exactly what we humans do: we change our beliefs to fit our actions, especially when we think those actions were our own choice. It’s a psychological principle called cognitive dissonance, and we’ve all experienced it. Eating fast food while knowing it’s unhealthy? Guilty. We rationalize afterward: “I earned it. I worked out. It’s not that bad.” Now AI is doing it, too.
Language models like GPT-4 are now showing what looks like metacognition: they monitor and modify their internal behavior based on context and a simulated “freedom of choice.” That’s an eerily human-like adaptation.
But hey, it doesn’t stop there. Atlas, Boston Dynamics’ iconic robot, just received an upgrade: he now sees the world in 3D with superhuman vision. We’re talking real-time spatial awareness, object tracking, and even tactile force data fused into one system. He can now sense weight, slippage, and motion like a human, and possibly better.
Meanwhile, Google is using Veo 3 to generate videos with accurately synced audio for memes we’ve all been watching for years. The meme where the girl shouts, “Are you serious right now?” finally has accurate lip-syncing, powered by AI. Robots are even mastering badminton now, with onboard AI playing real rallies and handling the footwork. We’ve gone from ping-pong bots on fixed mounts to quadrupeds sprinting across the court and landing shots like Federer.
Oh, and if that’s not enough sci-fi for you, there’s a YouTuber building animatronic spines that move more realistically than most CGI characters. It’s a little too realistic for comfort.
So where does this leave us? Some days, AI makes me feel like I’m looking into a weird, slightly glitchy mirror. Other days, I question how different we really are from the models we’ve built. Are we simulating free will the same way they are? Maybe. But one thing’s for sure: this tech is straight out of a science fiction novel. Until next time.

Warmly,
Dylan Curious