Anthropic CEO Dario Amodei Shares Thoughts on “Interpretability”
Also, Perplexity CEO Aravind Srinivas discusses war with Google

⚡️ Headlines
🤖 AI
Meta Struggles With Chatbots Crossing Lines on Sex and Violence – Meta's AI chatbots are facing unexpected challenges managing user interactions involving sex and violence. [WSJ]
Horseless Carriages and AI – This essay compares the early misconceptions about automobiles to today’s misunderstandings of AI's potential. [Koomen.dev]
Welcome to Chat Haus: A Coworking Space for AI Chatbots – A new startup opens a coworking space where AI agents collaborate and "work" alongside humans. [TechCrunch]
Elon Musk’s xAI Holdings in Talks to Raise $20 Billion – Musk’s AI company seeks a massive new funding round amid competitive AI market pressures. [Bloomberg]
AI Animation Startup Cheehoo Lands $10M Funding from Greycroft – Cheehoo secures fresh capital to expand its AI-driven animation platform. [TechCrunch]
The New AI Calculus: Google's 80% Cost Edge vs OpenAI's Ecosystem – A breakdown of Google's cost advantages in AI operations compared to OpenAI’s more ecosystem-centric approach. [VentureBeat]
Australian Radio Station Deploys AI DJ for Workday Broadcasts – An Australian station introduces an AI DJ to host shows during weekday hours. [The Verge]
China's Xi Urges Greater Self-Sufficiency in AI Development – Chinese President Xi Jinping calls for domestic AI advancements amid intensifying U.S. tech rivalry. [Reuters]
Lightrun Grabs $70M for AI Tools that Debug Code in Production – Lightrun raises funding to enhance real-time debugging tools powered by AI. [TechCrunch]
The Hottest AI Job of 2023 is Already Obsolete – Roles once critical in AI development are quickly being replaced as the field evolves. [WSJ]
India Faces Shortage of Agentic AI Professionals Amid Rising Demand – India struggles to meet growing demand for AI professionals capable of autonomous decision-making. [Economic Times]
Is Your AI Product Actually Working? How to Develop the Right Metric System – Guidance on setting effective evaluation metrics for AI products. [VentureBeat]
🦾 Emerging Tech
Bitcoin Miners Get Little Relief from Cryptocurrency Rally – Bitcoin’s price increase offers minimal help to struggling miners facing high operational costs. [Bloomberg]
Nike Sues RTFKT Over Virtual Shoes NFT Dispute – Nike files a lawsuit against RTFKT regarding intellectual property disputes over NFT sneakers. [The Verge]
Tariff Carnage Begins to Show Bitcoin’s Store-of-Value Promise – New tariffs intensify Bitcoin's emerging role as a store of value during economic instability. [CoinDesk]
Waymo Expands Long-Term Personal Self-Driving Car Services – Waymo announces plans for direct-to-consumer self-driving car offerings. [The Verge]
🤳 Social Media
SEO Best Practices in the AI Discovery Era (Infographic) – A visual guide to adapting SEO strategies for AI-driven search environments. [Social Media Today]
🔬 Research
Convolutional Multi-Hybrids for Edge Devices – Liquid AI proposes new model architectures to optimize AI performance on edge devices. [Liquid AI]
⚖ Legal
Ziff Davis and IGN Sue OpenAI for Copyright Infringement – Ziff Davis and IGN allege unauthorized use of their content by OpenAI models. [VentureBeat]
Ziff Davis Lawsuit Against OpenAI Could Set Major Precedents – A closer look at the implications of Ziff Davis’s copyright lawsuit against OpenAI. [NYT]
MyPillow CEO's Lawyers Criticized for Citing Fake AI-Generated Cases – A judge reprimands lawyers for submitting a legal brief that included AI-fabricated case citations. [Ars Technica]
🎱 Random
Anti-Piracy Campaign Allegedly Used Pirated Fonts – Ironically, a prominent anti-piracy campaign is accused of using unlicensed fonts. [Ars Technica]
🔌 Plug-Into-This
Anthropic CEO Dario Amodei argues that as AI models become more capable, understanding their inner mechanisms is a critical safety priority. He outlines concrete technical approaches for interpretability research and advocates for a major field-wide effort comparable in scale to early AI alignment work.

Amodei proposes three main research directions: discovering model "features," interpreting them individually, and understanding how they compose into behaviors (a toy sketch of the feature-discovery step follows this list).
He emphasizes the need for scalable, automated interpretability techniques, as manual inspection will not suffice for frontier-scale models.
Anthropomorphic analogies—e.g., treating features as akin to "concept neurons"—are framed as useful but incomplete heuristics, not literal truths.
The post calls for a "race" between capabilities and interpretability research to ensure safety keeps pace with model power.
Amodei suggests that failing to achieve deep interpretability could result in catastrophic misalignments, even from seemingly benign model goals.
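Amodei's essay stays at the level of research strategy, but the "discover features" direction corresponds to the dictionary-learning line of interpretability work Anthropic has published. As a rough illustration only (not Anthropic's code; the shapes, hyperparameters, and data below are invented), training a sparse autoencoder on model activations and then inspecting what excites each learned feature looks roughly like this:

```python
# Toy sketch of feature discovery via a sparse autoencoder, in the spirit of
# dictionary-learning interpretability work. All names, shapes, and data here
# are illustrative, not Anthropic's actual code.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Learns an overcomplete dictionary of 'features' over model activations."""
    def __init__(self, d_act: int, d_feat: int):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_feat)   # activation -> feature coefficients
        self.decoder = nn.Linear(d_feat, d_act)   # reconstruct activation from features

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.encoder(acts))    # sparse, non-negative feature activations
        recon = self.decoder(feats)
        return recon, feats

# Pretend these are residual-stream activations collected from a language model.
acts = torch.randn(1024, 512)                     # (token positions, d_act)
sae = SparseAutoencoder(d_act=512, d_feat=4096)   # overcomplete: more features than dims
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

for step in range(100):
    recon, feats = sae(acts)
    # Reconstruction loss plus an L1 penalty that pushes each feature to fire rarely.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Interpreting a feature" then amounts to inspecting which inputs activate it most.
top_positions = feats[:, 0].topk(5).indices       # positions that most excite feature 0
```

The point of the sketch is the shape of the problem: features are learned, then interpreted by looking at what most strongly activates them, and scaling that inspection step is exactly where Amodei argues automation is needed.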
The Urgency of Interpretability: Why it's crucial that we understand how AI models work
— Dario Amodei (@DarioAmodei)
9:16 PM • Apr 24, 2025
🧠 Amodei's framing subtly shifts the AI safety conversation: rather than trying to constrain models post hoc, he positions interpretability as the proactive foundation for safe scaling, much like debugging tools were essential for the maturation of classical software engineering.
Perplexity CEO Aravind Srinivas discusses the company's ambition to compete directly with Google by building an AI-native browser that integrates conversational search deeply into the browsing experience. He critiques Google's structural limitations and frames Perplexity’s model as more aligned with user intent.

Srinivas claims Perplexity's product will move beyond search by treating web browsing itself as a dialogue, not just a query/answer cycle (a rough sketch of that idea follows this list).
He asserts that Google's legacy ad-driven revenue model distorts its ability to fully embrace innovations in information retrieval.
Perplexity is focusing on speed, sourcing transparency, and minimizing hallucination risk as differentiators from both Google and traditional AI chatbots.
The company envisions browsers where users navigate information through iterative conversation rather than clicking search result links.
Srinivas hints that the "browser war" may be as much about trust and alignment with user needs as about technical sophistication.
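To make the "browsing as dialogue" idea concrete, here is a hypothetical sketch of a conversational browse loop. The function names (`retrieve_sources`, `synthesize_answer`) and data structures are illustrative stand-ins, not Perplexity's actual API:

```python
# Hypothetical sketch of "browsing as a dialogue": each question is answered in
# the context of earlier turns, and every answer carries its sources.
from dataclasses import dataclass, field

def retrieve_sources(question: str, context: list[str]) -> list[dict]:
    # Placeholder for a web-retrieval step (search, fetch, rank).
    return [{"url": "https://example.com/article", "snippet": "…"}]

def synthesize_answer(question: str, sources: list[dict], context: list[str]) -> str:
    # Placeholder for a grounded-generation step (an LLM constrained to the sources).
    return f"Answer to {question!r}, citing {len(sources)} source(s)."

@dataclass
class Turn:
    question: str
    answer: str
    sources: list[str]

@dataclass
class BrowseSession:
    history: list[Turn] = field(default_factory=list)

    def ask(self, question: str) -> Turn:
        # Prior turns become context, so a follow-up like "and on mobile?"
        # resolves against the conversation instead of starting a fresh query.
        context = [f"{t.question} -> {t.answer}" for t in self.history]
        sources = retrieve_sources(question, context)
        answer = synthesize_answer(question, sources, context)
        turn = Turn(question, answer, [s["url"] for s in sources])
        self.history.append(turn)
        return turn

session = BrowseSession()
print(session.ask("What changed in AI-native browsers this year?").answer)
print(session.ask("How do they handle source transparency?").sources)
```

The design choice worth noting is that sources travel with every answer, which is where the sourcing-transparency claim above would have to live.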
Yes. Google has a rule that only preloaded apps can get access to wake word detection APIs. That’s the challenge in implementing “Hey Perplexity”. Those that can listen to low power audio and classify reliably. Otherwise you’re forced to keep your app listening all the time which
— Aravind Srinivas (@AravSrinivas)
11:29 PM • Apr 25, 2025
🧭 Perplexity’s approach reflects a broader realignment underway: if search engines are being reimagined as AI companions rather than index portals, then winning the next era may depend less on owning the most data, and more on earning user loyalty through credible, transparent interaction models.
Developer Simon Willison details how o3, OpenAI's new reasoning model, identifies great local photo spots by combining language-model reasoning with open geodata. o3 parses social media posts and online content to extract and surface interesting, often underappreciated, nearby locations.

O3 uses large language models (LLMs) to "hallucinate" plausible, descriptive tags for locations based on sparse metadata and online mentions.
The system ingests data from OpenStreetMap and Wikimedia Commons to enrich its photo location database without relying on proprietary sources.
It performs semantic searches, allowing users to query for locations by vibe or visual themes rather than precise names (illustrated in the sketch after this list).
The project emphasizes transparency by showing users where each suggested location's information originated.
The approach leans on openly licensed data, aiming to empower local communities and hobbyists rather than centralizing discovery in corporate platforms.
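The "query by vibe" bullet is, in effect, embedding-based retrieval over free-text location descriptions. Here is a self-contained illustration of the pattern; the bag-of-words similarity is a stand-in for a real embedding model, and the location records are invented:

```python
# Illustrative sketch of "search by vibe": score free-text descriptions of
# locations against a descriptive query and return the best match.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": token counts. A real system would use a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented records of the kind that could be enriched from open geodata tags and captions.
locations = {
    "old harbour pier": "rusty industrial pier, moody fog at dawn, seabirds",
    "hilltop meadow": "golden hour grass, wide skyline views, quiet at sunset",
    "brick alley mural": "colourful street art, narrow alley, neon at night",
}

query = "moody foggy industrial spot for morning photos"
ranked = sorted(locations, key=lambda name: cosine(embed(query), embed(locations[name])), reverse=True)
print(ranked[0])  # -> "old harbour pier"
```

Swapping the stand-in for a real embedding model would give the same ranking loop over richer location records.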
OpenAI o3's ability to guess locations from images is the most dystopian thing I've ever seen.
These images are regular, nondescript pictures from India most locals wouldn't identify.
o3 zooms, reads text, searches and reasons for up to >10 minutes!
Then I tried GeoGuessr...
— Deedy (@deedydas)
4:28 AM • Apr 28, 2025
👁️ We already knew privacy was an illusion in this era (if you're plugged in even moderately), but a demo like this brings a new level of unease for many. It just hits differently, for now. Maybe we simply aren't used to it yet?
🆕 Updates
Introducing ERNIE X1 Turbo & ERNIE 4.5 Turbo!
Building on the success of ERNIE X1 and 4.5, the upgraded ERNIE X1 Turbo and 4.5 Turbo deliver results faster and cheaper. Both models stand out for their multimodal capabilities, strong reasoning and low costs.
For X1 Turbo, input
— Baidu Inc. (@Baidu_Inc)
3:05 AM • Apr 25, 2025
A much improved Grok-powered algorithm is coming. Should help a lot.
— Elon Musk (@elonmusk)
4:19 AM • Apr 26, 2025
📽️ Daily Demo
Imagine AI turning a photo into a playable 3D world. 🌍
On @60Minutes, our Research Scientist @jparkerholder joined CEO @demishassabis to demo Genie 2's image-to-3D world creation – exploring the possibilities it could bring to how AI learns. ↓
— Google DeepMind (@GoogleDeepMind)
10:18 AM • Apr 28, 2025
🗣️ Discourse
I think part of the reason we are suddenly seeing this general shift in model personas towards increasingly humanlike behavior is that a lot of the big labs looked at character ai's engagement numbers a while back and decided maybe they were too hasty on anthropomorphism.
— Andrew Curran (@AndrewCurran_)
6:10 PM • Apr 27, 2025
Well, @CommunityNotes FTW!
— Aravind Srinivas (@AravSrinivas)
11:29 PM • Apr 27, 2025