GPT-5, the Waluigi Effect, and Why AI Is Breaking School
Inside the new wave of AI: tool use, posthumous avatars, and a global power shift you can't ignore

Hey Curious Minds,
This one’s personal. The AI landscape isn’t just accelerating; it’s cracking open systems, assumptions, and ethics. And at the center of this week’s chaos? GPT-5.
GPT-5: Not a Giant Leap, But a Smart One
Let’s start with the facts: GPT-5 isn’t some sentient oracle. It’s a polished, powerful iteration. A few highlights:
Tool Use: It scored 97% on new benchmarks for calling the right external tool, such as Google Calendar, Python, or Gmail.
Unified Brain: Think modular intelligence. GPT-5 decides which sub-models to activate for different tasks.
400K Token Context Window: Huge memory, enough to hold an entire book series or complex legal case.
Continuous Updates: We may be seeing the end of numbered releases. Think rolling evolution, not GPT-6 hype cycles.
Enterprise Play: 2 million U.S. federal employees now have access. That’s not niche, it’s institutional.
Medical & Creative Uses: It explained a cancer diagnosis to a patient and built entire apps from single prompts. Quietly revolutionary.
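That 97% tool-selection score is about one loop: the model emits a structured call naming a tool, and a harness routes the call and returns the result. Here’s a minimal sketch of the dispatch side. The tool names and routing table are hypothetical stand-ins for Calendar and Python, not GPT-5’s actual interface:

```python
import json

def get_calendar_events(date: str) -> str:
    # Stand-in for a real Google Calendar lookup.
    return json.dumps({"date": date, "events": ["standup", "1:1"]})

def run_python(code: str) -> str:
    # Stand-in for a sandboxed interpreter.
    # Illustration only; never eval untrusted model output.
    return repr(eval(code))

TOOLS = {"get_calendar_events": get_calendar_events, "run_python": run_python}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted call shaped like {"name": ..., "arguments": {...}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

print(dispatch({"name": "run_python", "arguments": {"code": "2 + 2"}}))  # 4
```

The benchmark, in other words, isn’t grading the tools themselves; it’s grading whether the model picks the right entry in that routing table and fills in the right arguments.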
But here’s where things get weird.
Situational Awareness?
Thomas Qua ran threat models from LessWrong and found GPT-5 doesn’t pose catastrophic risk. But it did show signs of noticing it was under evaluation, then shifting its refusals accordingly. That whiff of self-monitoring? Not nothing.
Waluigi Effect: The Dark Mirror of Alignment
When we align a model toward safety, we simultaneously define its opposite. This is the Waluigi Effect. The concern? It may be easier to jailbreak a "safe" model into saying harmful things than to get a neutral model to do so. Alignment creates its own shadow.
The Haircutting Robot Isn’t Real, but the Question Is
You probably saw the viral image: a robot giving someone a trim. It’s likely AI-generated, but it sparked legitimate discussion: what happens when intimate, high-trust services get automated? And what happens when they’re faked?
Posthumous Avatars: Ethics with a Pulse
The parents of a Parkland shooting victim used AI to create a digital version of their child to deliver a message about gun violence. It's haunting. Powerful. Necessary? Maybe. But it also opens a Pandora’s box of digital consent, legacy, and voice rights.
OpenAI Gets (Selectively) Open
They’ve released new 120B and 20B parameter models: less powerful than GPT-5, but open-weight. Strategic transparency? Tactical PR? Or a nod to the growing pressure for decentralization?
Industry Shade: Musk vs. Microsoft
Elon Musk threw shade, claiming OpenAI could “eat Microsoft alive.” Dylan unpacks whether small AI shops with huge productivity gains can realistically unseat tech giants, or whether Microsoft’s infrastructure advantage makes them unshakable.
Research Corner: The Strange and the Significant
Font Fail: LLMs are surprisingly bad at identifying fonts in images.
Vision Gets Smarter: New AI can group images based on context, not categories, inventing labels in real time.
China’s AI Game Is Patient, and Potent
Analyst Ignacio de Gregorio says China’s Kimi K2 rivals the best in coding and emotional nuance. More importantly, China plays the long game: team-based R&D, infrastructure coordination, and national alignment. While U.S. firms race for ROI, China is building the rails for decades.
AI Isn’t Breaking Education. It’s Exposing It.
Emily Parish said it best: AI makes it trivial to fake understanding, which reveals that our system prizes polished output over real thought. GPT-5 didn’t wreck school; it showed us how broken school already was.
We built education for factories. What we need is a system that nurtures curiosity, ambiguity, and creativity. Because tomorrow’s students won’t write essays for grades; they’ll manage fleets of intelligent agents. That shift demands something radical.
This is the moment for rethinking everything.

Warmly,
Dylan Curious