AI Just Hit a New Level of Intelligence, And It’s Getting Weird

Hey Friends,
This week started with a robotic bedside lamp trying to hold my hand, or maybe pinch it, and somehow spiraled into a conversation about whether Terence Tao thinks AI should be more of a critic than a creator. Welcome to another week in the AI multiverse, where the uncanny and the groundbreaking sit uncomfortably close together.
It began with the pincer-lamp robot, designed to be a gentle bedside helper, but still giving off “accidental cyberpunk horror prop” vibes. It could one day help people in need of caregiving, sure, but right now it looks like it might unplug your phone and steal your glasses. Meanwhile, Mark Zuckerberg has been pitching his vision of giving everyone a personal superintelligence, not just to answer your questions, but to actively help you achieve your life goals. He’s calling this a decisive moment for steering AI’s trajectory, but all I can picture is an AI life coach judging my grocery list.
Reality, though, is already slippery. Millions were fooled by an AI-generated video of rabbits bouncing on a trampoline, a wholesome, fake fever dream that’s a reminder to get outside and touch actual grass (or at least a real rabbit). At least DeepMind is keeping its AI busy with ping pong: autonomous bots playing against themselves, getting better through robotic coaching feedback loops. It’s oddly comforting to think of AI sweating over its backhand instead of rewriting my bank statements.
Tencent’s Hunyuan World 1.0 is upping the immersion stakes with an open-source framework for creating interactive 3D worlds from nothing but text or images. Fidelity is high enough that you could forget it’s not real, and if you pair it with Sapient Intelligence’s Hierarchical Reasoning Model, which mimics the brain’s slow planning and fast intuition at 100× the speed of LLMs, you might just have a simulation running smarter than most meetings. All of this is getting easier to stumble into, too: AI referrals to major sites have jumped 357% year-over-year, with 1.13 billion clicks and YouTube sitting comfortably at the top.
The physics nerds are celebrating the Genesis Physics Engine, a universal sandbox that can simulate 244 million frames per second, translating to 30,000 years of training in one real hour. Anthropic is being secretive about its own edge, reportedly combining trial-and-error, AI feedback, and long-term memory to dominate coding tasks. And the latest SeC video segmentation system is tracking objects with actual conceptual understanding across frames, less “spot the ball” and more “understand the game.”
Even the mathematician Terence Tao is weighing in, suggesting that AI might be safer as a red team, a relentless critic, rather than a blue team creator. And somewhere between all this, scientists are poking at consciousness again, using blindsight and split-brain experiments to explore where our thoughts come from when we’re not even aware of them. Stephen Frank cuts through the sci-fi with a sober warning: the real danger isn’t sentience, it’s the goals we set and the prompts we give to these powerful models. Garbage in, catastrophic garbage out.
Final Thought: We keep asking whether AI will become conscious, but the bigger question might be what happens when it’s not, when it’s fast, clever, and endlessly obedient to the strangest human requests. Or, you know, when it’s holding your hand at bedtime.

Dylan Curious AI News on YouTube
Warmly,
Dylan Curious