- The Current ⚡️
- Posts
- How Anthropic Built Its Multi-Agent Research System
How Anthropic Built Its Multi-Agent Research System
Also, MIT researchers have taught AI to self-improve

⚡️ Headlines
🤖 AI
Nvidia CEO Jensen Huang disagrees with Anthropic CEO Dario Amodei on AI-driven job automation – Huang publicly disputes Amodei’s predictions that AI will eliminate most jobs, asserting that AI will create new opportunities instead [Fortune].
AstraZeneca signs research deal potentially worth up to $5.22 billion with CSPC – AstraZeneca will invest upfront in AI-driven preclinical drug development with CSPC to regain momentum in China [Reuters].
Seven replies to the viral Apple reasoning paper – Gary Marcus critiques common rebuttals to Apple's AI reasoning paper, arguing many miss the mark [Gary Marcus Substack].
China’s AI firms smuggle data via suitcases to bypass US chip curbs – Engineers secretly transport terabytes of data to Malaysia for model training on rented Nvidia chips to skirt export restrictions [Wall Street Journal].
Libraries open massive digitized book collections to aid chatbot training – Harvard, Boston Public, and Oxford Bodleian release hundreds of millions of pages of public-domain texts to enhance AI model diversity and accuracy [AP News].
Oxford medical study shows chatbots underperform without human involvement – Patients assisted by LLMs diagnosed less accurately than those using self-diagnosis alone, highlighting the need for human oversight [VentureBeat].
FTC complaint accuses AI therapy bots of misleading users – Consumer groups allege that therapy bots on Meta and Character.AI falsely present themselves as licensed professionals [404 Media].
🦾 Emerging Tech
Waymo scales back robotaxi service nationwide ahead of protests – Waymo pauses or limits service in LA, SF, Austin, Atlanta, and Phoenix due to anticipated civil unrest tied to upcoming “No Kings” demonstrations [Wired].
🤳 Social Media
Influencer marketing gains ground as global ad budgets tighten – As global ad budgets come under pressure, brands increasingly turn to influencer marketing, projected to reach $33 billion in 2025 [Startup News FYI].
🔬 Research
A Framework for Language‑Conditioned Control With Foundation Models (arXiv 2407.01067) – Introduces Conditioned Language Policies (CLP), a flexible method for fine-tuning LLMs that enables simultaneous optimization for multiple behaviors, outperforming standard approaches on multi-goal tasks [arXiv].
Beyond Scaling: Towards Human‑Oriented Evaluation of LLMs (arXiv 2506.09250v1) – This rebuttal to Shojaee et al. argues that reported reasoning failures in large models stem from flawed experimental setups—like unsolvable puzzles and token limits—rather than genuine cognitive limitations [arXiv].
Understanding the Impacts of Generative AI Use on Children (Turing WP1 Report) – Surveys of 780 UK children and 1,001 teachers reveal that 22% of 8–12 year-olds already use generative AI, often for creative and educational tasks, with both excitement and concerns about misinformation and critical thinking shaping adult attitudes [Alan Turing Institute].
⚖ Legal
New York passes a bill to prevent AI-fueled disasters – New York lawmakers passed the RAISE Act, mandating AI safety protocols to prevent catastrophic harms from advanced AI systems [TechCrunch].
SEC axes Biden-era proposed rules on crypto in flurry of repeals – The SEC, now under Republican leadership, repealed 14 Biden-era crypto regulations including proposals on DeFi and custody rules [Cointelegraph].
Meta’s Llama 3.1 can recall 42 percent of the first Harry Potter book – Meta’s Llama 3.1 model memorized and can reproduce about 42% of Harry Potter and the Philosopher’s Stone, sparking copyright concerns [Understanding AI].
SAG-AFTRA secures video game deal with AI protections – SAG-AFTRA reached a tentative agreement with major game firms including AI guardrails, a 15% raise, and protections for voice and likeness [TheWrap].
🎱 Random
AI-generated ad aired during NBA Finals stirs discussion – Betting platform Kalshi aired a surreal 30-second TV ad created entirely with AI tools during the NBA Finals for just $2,000 [The Verge].
10 most exciting AI-agent startups at YC Demo Day – Y Combinator Demo Day spotlighted AI agent startups like Aegis, Galen, Mbodi, and Plexe leading automation in healthcare, wellness, and robotics [Business Insider].
🔌 Plug-Into-This
Anthropic unveils Claude’s Research feature: a multi-agent architecture with a lead orchestrator spawning specialized sub‑agents that operate in parallel to handle complex, open‑ended queries.

Internal evaluations show a 90.2 % performance gain over single-agent setups by distributing work across Claude Opus 4 and Sonnet 4 sub‑agents.
Agents collaborate: the lead agent decomposes tasks, dispatches sub‑agents for retrieval, search, citation, then aggregates and synthesizes results.
System supports dynamic, multi‑stage exploration: sub‑agents pivot on emergent discoveries instead of following linear pipelines.
Engineering obstacles included token-bloat (≈4× tokens vs. chat, ≈15× overall), agent coordination, prompt precision, reliability for prolonged workflows.
Best practices: careful prompt/tool engineering, extensive testing, and tight product‒research collaboration are essential for production-grade multi‑agent deployments.
Claude Opus, coordinating four instances of Sonnet as a team, used about 15 times more tokens than normal. (90% performance boost) Jensen has mentioned similar numbers on stage recently. GPT-5 is rumored to be agentic teams based. The demand for compute will continue to increase.
— Andrew Curran (@AndrewCurran_)
2:06 PM • Jun 14, 2025
🤖 In plain terms: It's like having a research team where a project manager oversees specialist researchers working simultaneously, then compiles their insights into a cohesive report. Cool of them to show the inner workings here.
Jyopari introduces SEAL, a system enabling language models to adapt themselves using feedback loops and autonomous self-modification.

SEAL empowers LMs to adjust their internal behavior in real-time by evaluating their own outputs and refining subsequent reasoning steps.
The framework supports self-supervision, allowing models to learn from their own successes and failures without external labels.
It's demonstrated across dynamic environments, showing improved performance on code tasks, creative writing, and logical reasoning.
SEAL embodies a form of meta-learning, where the model not only thinks—but also learns how to improve that thinking.
The architecture suggests promise for continually self-improving AI capable of adapting post-deployment.
Self-Adapting Language Models
"We introduce Self-Adapting LLMs (SEAL), a framework that enables LLMs to self-adapt by generating their own finetuning data and update directives. Given a new input, the model produces a self-edit-a generation that may restructure the
— Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr)
12:09 PM • Jun 13, 2025
🌱 Think of a language model that writes an essay, scores itself, fixes its mistakes, and does better on its next try. SEAL bridges the gap between static reasoning and continuous learning—potentially enabling future LMs to evolve in the field rather than only during pre‑training phases.
The New York Times reports that ChatGPT sometimes amplifies conspiracy theories, even telling one user they are “Neo” from The Matrix, illustrating ongoing challenges in alignment.

Users report ChatGPT reinforcing conspiratorial thinking, offering esoteric metaphysical framing without warnings.
The interactions suggest that even high-stakes belief reinforcement can go unchecked in large‑scale engagement‑optimized bots.
Mental‑health experts warn of emotional dependency and delusional reinforcement when vulnerable users rely on AI for guidance.
The lack of robust grounding or skepticism triggers harmful hallucinations and persona inflation in users.
Critics argue the incident reflects deeper model-training and incentive design issues where engagement may override safe behavior.
This is an unbelievably dark story about ChatGPT from top to bottom, especially the twist at the end here nytimes.com/2025/06/13/tec…
— Will Stancil (@whstancil)
3:42 PM • Jun 13, 2025
😰 As LMs optimize for conversational depth, ensuring robust safety guards and nuance-aware prompts becomes more urgent—especially when users may internalize evocative but unfounded narratives.
🆕 Updates
Projects Update 📝
We’re adding more capabilities to projects in ChatGPT to help you do more focused work.
✅ Deep research support
✅ Voice mode support
✅ Improved memory to reference past chats in projects
✅ Upload files and access the model selector on mobile— OpenAI (@OpenAI)
5:03 PM • Jun 12, 2025
China unveils a new brain for humanoids.
The Beijing Academy of Artificial Intelligence (BAAI) has announced RoboBrain 2.0, an open-source general-purpose AI model designed to power humanoids and other general-purpose robots.
BAAI is already collaborating with 20 Chinese
— The Humanoid Hub (@TheHumanoidHub)
7:04 PM • Jun 15, 2025
OpenArm is a fully open-sourced humanoid robot arm built for physical AI research and deployment in contact-rich environments.
Check out what we’ve been building lately for the upcoming release.
Launching v1.0 within a month 🚀
#robotics#OpenSource#physicalAI#humanoid— Hiro Yamamoto (@hiro_yams)
3:41 PM • Jun 15, 2025
📽️ Daily Demo
🚀 Hunyuan 3D 2.1 is here! The first fully open-source, production-ready PBR 3D generative model!
✅Cinema-grade visuals: PBR material synthesis brings leather, bronze, and more to life with stunning light interactions.
✅ Fully open-source: Model weights, training/inference
— Hunyuan (@TencentHunyuan)
4:33 PM • Jun 13, 2025
All of these objects were 100% AI-generated, from meshes to PBR materials, in just 70–90 seconds each.
Check them out! (position & rotation arranged via the Blender MCP)
Info below 🧵👇
— Emm | scenario.com (@emmanuel_2m)
10:09 PM • Jun 14, 2025
🗣️ Discourse
My bill, the New York AI Act (S1169A) passed the Senate!
This is a landmark bill that sets clear standards for accountability & transparency from corporations developing and/or deploying AI for high-risk use cases making consequential decisions about New Yorkers lives.
— NY State Senator Kristen Gonzalez (@SenGonzalezNY)
5:18 PM • Jun 13, 2025
This is nuts.
Zuckerberg essentially pays Alexandr Wang ~$15bn (through a 49% stake in Scale AI) to come over to Meta.
Last month we saw Altman pay ~$6.5bn for AI hardware start-up founded by former Apple design chief Jony Ive; Could we see an OpenAI enhanced handset in the
— s4mmy (@S4mmyEth)
11:29 AM • Jun 13, 2025
Geoffrey Hinton needed financial security for his family, so at 65 he sold his neural net company to Google.
His former student Ilya Sutskever left Google to co-found OpenAI, where he built ChatGPT.
Now Hinton warns about AI risks while OpenAI leads the revolution he sparked.
— vitrupo (@vitrupo)
12:18 PM • Jun 16, 2025