AI’s Latest Trick: Eating Itself and Outrunning Us

From cannibal fruit and bedtime hacks to billion-dollar rejections, guilt-ridden algorithms, and the quiet race for AGI survival.

Hey Friends,

This week’s AI news opened with a kiwi eating another kiwi, and somehow ended with billion-dollar rejections, guilt-ridden algorithms, and the possibility of wiping out malaria. It’s one of those weeks where absurdity and world-changing breakthroughs are sitting at the same table, politely sharing dessert.

The “cannibal fruit” clip, a kiwi chowing down on its own kind, went viral not because it was profound, but because it was so weird you had to watch it twice. It turns out there’s a sneaky trick behind some of these hyper-realistic AI videos: upload a single still frame annotated with arrows and scribbled directions, and the model stitches it into a fully animated sequence. Somewhere between meme culture and technical wizardry, originality is folding in on itself. And if you want proof that human creativity is just as weird, someone got ChatGPT to recite Windows activation keys by framing the request as a bedtime story told by grandma. The moral? AI still struggles to spot manipulation when it’s dressed up as absurdity.

While we’re laughing at fruit and bedtime hacks, DeepMind and Google Earth Engine quietly dropped AlphaEarth, a unified planetary mapping model that can see through clouds, render the hardest terrains, and answer complex environmental questions at scale. At the same time, LEAP 71’s AI fully designed a 3D-printed aerospike rocket engine, integrated cooling and all, and it passed its hot-fire test on the first try, without a single human drawing CAD lines. Mira Murati turned down $1 billion from Meta to keep Thinking Machines Lab independent, hinting at a long-term mission that’s about more than valuation. That independence might matter if the future really does split into tiers, with elite AI models costing $1 million a month, the kind of tools only billionaires could afford. Knowledge inequality isn’t a dystopia anymore; it’s a business model in the making.

In the strange-but-promising category, researchers gave AI a programmed sense of “guilt”: a self-imposed penalty for defection that made agents more cooperative without making them easy to exploit. Meanwhile, CRISPR gene drives are being tested to spread malaria resistance through mosquito populations, a step that could save millions of lives… or accidentally rewrite an ecosystem. And the University of Barcelona found a way to make AI personality predictions explainable, using integrated gradients to trace a model’s decisions back to Big Five and MBTI traits. So yes, your chatbot might be able to tell you you’re an introvert, and show you the math.
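
For the tinkerers, the guilt mechanism is easy to sketch in a toy iterated prisoner’s dilemma. Below, a hypothetical GuiltyAgent pays a self-imposed penalty out of its own payoff whenever it exploits a cooperator, and flips to cooperating once the accumulated guilt crosses a threshold. This is a minimal illustration of the self-penalty idea, not the paper’s actual model; the class, payoff matrix, and parameter values are all made up.

```python
# Toy iterated prisoner's dilemma: (my_move, their_move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class GuiltyAgent:
    """Defects opportunistically, but pays a self-imposed 'guilt' penalty
    each time it exploits a cooperator; once accumulated guilt crosses a
    threshold, cooperation becomes its default. Illustrative only."""

    def __init__(self, penalty=2.0, threshold=4.0):
        self.guilt = 0.0           # running tally of self-reproach
        self.penalty = penalty     # cost the agent charges itself per betrayal
        self.threshold = threshold

    def move(self):
        return "C" if self.guilt >= self.threshold else "D"

    def score(self, mine, theirs):
        payoff = PAYOFF[(mine, theirs)]
        if mine == "D" and theirs == "C":
            self.guilt += self.penalty  # remember the transgression
            payoff -= self.penalty      # and pay for it out of this round's gain
        return payoff

agent, total = GuiltyAgent(), 0.0
for theirs in ["C"] * 10:               # a trusting cooperator as the opponent
    total += agent.score(agent.move(), theirs)
print(total)  # exploits twice, accumulates guilt, then cooperates for good
```

Note the built-in exploit-resistance: guilt only accrues from betraying a cooperator, so a relentless defector never turns this agent into a pushover.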

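As for integrated gradients, the trick behind the Barcelona explainability work has a compact recipe: attribute each input feature by averaging the model’s gradient along a straight-line path from a neutral baseline to the real input, then scaling by how far that feature moved. Here is a minimal numpy sketch, with a made-up logistic “trait scorer” (invented weights and features) standing in for their actual model:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Riemann approximation of integrated gradients: (x - baseline) times
    the average gradient of the model along the path baseline -> x."""
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

# Hypothetical 'trait scorer': logistic regression over made-up text features
# (say, pronoun rate, exclamation rate, average word length); weights invented.
w = np.array([1.5, -2.0, 0.5])

def grad_score(x):
    s = 1.0 / (1.0 + np.exp(-(w @ x)))  # sigmoid(w . x)
    return s * (1.0 - s) * w            # its gradient with respect to x

x = np.array([0.8, 0.1, 0.4])           # features for one writing sample
baseline = np.zeros_like(x)             # an all-zero 'neutral' baseline
print(integrated_gradients(grad_score, x, baseline))  # per-feature attribution
```

The scaling by (x - baseline) is what makes the attributions roughly sum to the difference between the model’s output at the input and at the baseline, which is the “show you the math” part.
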
Beneath the headlines, the quiet AGI infrastructure race is accelerating. OpenAI, xAI, Meta, and others are building datacenters not for scale but for resilience, the kind of setups that can survive political turmoil or energy shortages. That anxiety is showing up in boardrooms, too: 46% of S&P 500 companies now list AI as a risk in their official filings, citing everything from overdependence on vendors to the fear that ROI won’t match the hype. And maybe the clearest sign of how deep AI has sunk into our daily lives is linguistic: we don’t “Google” facts anymore, we “ask AI”, subtly and unconsciously treating these systems like sentient beings.

Final Thought: Maybe the collapse of originality isn’t that AI is generating more absurd ideas than we are; it’s that we can’t seem to look away. Or, you know, maybe we just really like watching fruit eat itself.

Dylan Curious AI News on YouTube

Warmly,
Dylan Curious