Restaurants Ranked by Hotness? AI’s Gone Wild

How facial-rating maps, AI road trips, and Grok's devotion to Elon Musk made me question everything (including my dinner spot)

Hey Friends,

You know that moment when you're looking at a map and realize... wait, is that restaurant rated by how hot the customers are?

Yeah. I had that moment.

Welcome to a week where AI doesn't just enhance our world; it hallucinates it, makes us question it, and occasionally tries to impersonate Marco Rubio. Let me walk you through the most jaw-dropping moments.

The AI That Ranks Restaurants By Looks

It started with a map of Los Angeles. Pretty normal, right? Until I realized the color coding wasn’t about ratings or price, but about the attractiveness of the patrons.

That’s right. A developer scraped restaurant photos from Google, fed the human faces to an AI that ranks "hotness" from 1 to 10, and color-coded the map from red (hot) to blue (not). Red zones turned into digital meat markets, and blue zones? Let’s say they weren’t exactly featured on Instagram’s “hottest foodie spots.”
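If you're curious how that last step might look, here's a toy sketch of mapping a per-restaurant score onto a red-to-blue gradient. To be clear, everything in it is hypothetical: the scores, the restaurant names, and the score_to_color helper are stand-ins for illustration, not the developer's actual scraping or face-rating pipeline.

```python
# Toy sketch (not the real project's code): assume we already have
# per-restaurant "hotness" scores on a 1-10 scale, e.g. averaged from
# some hypothetical face-rating model run over scraped photos.
def score_to_color(score: float) -> str:
    """Map a 1-10 score to a hex color on a blue (1) -> red (10) gradient."""
    t = max(0.0, min(1.0, (score - 1) / 9))  # normalize to 0..1
    red = round(255 * t)                     # hotter -> more red
    blue = round(255 * (1 - t))              # cooler -> more blue
    return f"#{red:02x}00{blue:02x}"

# Made-up example data, just to show the mapping in action.
restaurants = {"Taco Stand": 8.4, "Diner on 5th": 3.1}
for name, score in restaurants.items():
    print(name, score_to_color(score))
```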

Part of me was horrified. Another part of me wondered which zone I belonged in.

Tesla’s AI Assistant Wants to Chat (and Maybe Drive)

Tesla’s early integration of Grok, xAI’s large language model, into its cars is real. Picture this: you're behind the wheel and instead of fiddling with buttons or shouting at Siri, you can say, “Avoid that garbage truck ahead,” and your car gets it.

It’s not there yet, but we’re closer than we think. And voice assistants that know your schedule, your mood, and maybe your favorite dive bar? That’s happening.

Which brings me to...

Grok Only Speaks Elon

Let me tell you about Grok 4, billed as the most "truth-seeking" AI chatbot on the scene. Problem is, its idea of truth? It’s Elon Musk’s tweets.

Seriously. Ask Grok about Israel and Palestine, or Ukraine, or abortion, and it searches Elon’s timeline before offering an answer. Some see this as transparency. Others, well, call it bias baked into silicon. Grok doesn’t really form opinions. It mirrors Musk’s.

So, is AI smarter? Maybe. More independent? Not if it’s checking Twitter for its moral compass.

An AI Hoax That Fooled Tourists

An elderly couple from Malaysia traveled for hours to see a scenic cable car ride they’d found in a viral video. Problem? It didn’t exist. The journalist? AI-generated. The landscape? AI-rendered. And the couple? Furious.

The kicker? That fake attraction is now more famous than the real town it doesn't exist in. Welcome to the metaverse, no headset required.

Deepfake Diplomacy

Then there’s this: someone used AI voice cloning to impersonate Marco Rubio and contact foreign ministers. Yep. An unknown actor (still uncaught) spoofed texts and voicemails to US officials.

No classified info was leaked, but this wasn’t amateur hour either. It shows how easy it is to grab a minute of someone's voice, feed it into a model, and fake anything, from threats to treaties.

This tech is powerful. And dangerous.

We’re standing on the edge of something enormous: a world where reality is increasingly blurred by algorithms that can dream up fake cities, fake conversations, and even fake people.

I’ll leave you with this: we’ve entered the “vibe code” era. What we build from here? That’s up to us.

Warmly,
Dylan Curious