Is the Reception of OpenAI’s Deep Research Signaling the True End of the Search Era?
Also, Anthropic thinks they’ve found a way to address jailbreak issues

⚡️ Headlines
🤖 AI
Big Tech's Capital Expenditures Surpass Previous Oil Industry Peaks – Major technology companies are now investing more in capital expenditures than the oil industry did during its peak spending years. [The Information].
SoftBank Commits to $3 Billion Annual Investment in OpenAI Products – SoftBank has announced plans to purchase $3 billion worth of OpenAI's products each year to enhance its AI capabilities. [The Information].
ChatGPT on WhatsApp Now Supports Image and Voice Note Inputs – Users can now send images and voice notes to ChatGPT via WhatsApp, and the AI will respond with text messages. [Android Authority].
Meta May Halt Development of High-Risk AI Systems – Meta has released a policy document outlining scenarios in which the company may not release certain categories of 'risky' AI systems. [TechCrunch].
Potential Economic Impacts of AI on Employment – This article explores three scenarios detailing how AI could affect the economy and job market. [Bloomberg].
🦾 Emerging Tech
Stablecoins Gaining Traction in Emerging Markets – Stablecoins are now gaining traction in consumer finance, payroll, and other areas in emerging markets. [TechCrunch].
🤳 Social Media
Trump Orders Creation of U.S. Sovereign Wealth Fund, Potentially Involving TikTok – President Donald Trump has signed an executive order to initiate the creation of a U.S. sovereign wealth fund, which could potentially include investments in TikTok. [AP News].
🔬 Research
Meta's Approach to Frontier AI – Meta has released a framework focusing on mitigating critical risks in areas like cybersecurity and biosecurity in the development of advanced AI systems. [Meta].
AI-Driven Self-Healing Roads Developed – Researchers have developed self-healing asphalt using biomass and Google Cloud’s AI, aiming to create more durable and sustainable roads. [Google].
🎱 Random
Frederick Douglass Hologram Exhibit Opens in Boston – A new exhibit featuring a hologram of Frederick Douglass has opened at a Boston museum, offering an interactive educational experience. [Axios].
Amazon Struggles with Physical Retail Stores – The e-commerce giant is closing more of its convenience stores; 'I don't think they really understand retail,' a consultant says. [The Wall Street Journal].
🔌 Plug-Into-This
One way to look at AI is simply as a new technology that moves us beyond traditional search engines. New reasoning models and autonomous agents capable of conducting in-depth research, like OpenAI’s “Deep Research,” are transforming information retrieval, enabling machines to analyze and synthesize knowledge at an expert level.

Reasoners: AI models that generate "thinking tokens" before responding, enhancing their problem-solving capabilities, especially in complex areas like mathematics and logic.
Agents: Autonomous AI systems assigned specific goals, capable of pursuing them independently. While general-purpose agents face challenges, specialized agents are proving effective in focused domains.
OpenAI's Deep Research: A specialized research agent built on OpenAI's o3 Reasoner, capable of producing comprehensive analyses comparable to graduate-level work in minutes.
Comparison with Google's Deep Research: Google's version aggregates documents simultaneously, resulting in more surface-level summaries, whereas OpenAI's agent engages in curiosity-driven exploration for deeper insights.
Future Implications: The integration of advanced Reasoners and specialized agents is transforming tasks traditionally performed by experts, indicating a shift towards more autonomous digital workers.
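The Reasoner-plus-agent pattern described above boils down to a plan–act–observe loop. Here is a minimal, runnable sketch; `call_model` and `run_tool` are hypothetical stand-ins (stubbed below) for a real reasoning-model API and real tools such as web search:

```python
# Minimal sketch of an autonomous agent loop. The model proposes an action,
# a tool executes it, and the observation is fed back until the model
# decides it can finish. `call_model` and `run_tool` are toy stubs.

def call_model(goal, history):
    # Stub: a real agent would send the goal and history to a reasoning model.
    if not history:
        return {"action": "search", "input": goal}
    return {"action": "finish", "input": "report based on " + history[-1]}

def run_tool(action, tool_input):
    # Stub: a real agent would run a web search, code interpreter, etc.
    return f"results for {tool_input!r} via {action}"

def run_agent(goal, max_steps=10):
    """Pursue a goal autonomously: plan, act, observe, repeat."""
    history = []
    for _ in range(max_steps):
        step = call_model(goal, history)
        if step["action"] == "finish":
            return step["input"]
        history.append(run_tool(step["action"], step["input"]))
    return "stopped after max_steps"

print(run_agent("compare deep-research tools"))
```

A specialized agent like Deep Research is essentially this loop with a strong Reasoner behind `call_model` and browsing/reading tools behind `run_tool`, iterated for many minutes rather than two steps.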
Deep. Research. Is. Insane.
Congratulations to the team at @OpenAI for such an amazing product! 🥂
I have a very complex tax situation, and needed to carefully plan my exit from the US, so I asked Deep Research for help. Just finished reading the full report, which took me… x.com/i/web/status/1…
— Patrice (@PatriceBTC)
9:35 PM • Feb 3, 2025
🔍 This shift marks the rise of AI-driven research tools that go beyond simple search results to provide deeper, more structured insights — which is sometimes useful, but not always what you want. Picture wading through Perplexity’s four-paragraph responses when all you wanted was a list of links…
Anthropic's Safeguards Research Team has introduced "Constitutional Classifiers," a method designed to protect AI models from universal jailbreaks—techniques that bypass safety measures to elicit harmful outputs. Initial prototypes demonstrated robustness against extensive human-led attacks but faced challenges like high refusal rates and increased computational demands. Subsequent iterations achieved similar security with reduced refusal rates and moderate computational overhead.

Jailbreak vulnerabilities remain a major concern, as large language models can still be manipulated into bypassing their safety mechanisms.
A prototype system was tested over two months by independent evaluators who attempted to breach its defenses through thousands of attempts, none of which resulted in a successful universal jailbreak.
Early versions of the system effectively blocked attacks but frequently declined benign queries and required significant computational power, prompting refinements for better usability.
Automated tests using thousands of synthetic jailbreak prompts showed a drastic reduction in successful attacks, with classifiers cutting the rate of successful jailbreaks by over 95% compared to an unguarded model.
Further research is ongoing to refine these classifiers, ensuring they maintain strong defenses while being practical for deployment.
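The core pattern is straightforward: one classifier screens the incoming prompt, the model generates, and a second classifier screens the output as it streams. The toy keyword matcher below is only an illustrative stand-in for Anthropic's trained classifiers, and `echo_model` is a hypothetical model stub:

```python
# Sketch of the input/output filtering pattern behind Constitutional
# Classifiers. A keyword blocklist stands in for the trained classifiers;
# real classifiers are themselves models trained on a "constitution".

BLOCKLIST = {"synthesize nerve agent", "build a bomb"}  # illustrative only

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt looks harmful (toy heuristic)."""
    return any(phrase in prompt.lower() for phrase in BLOCKLIST)

def output_classifier(partial_output: str) -> bool:
    """Screen output as it streams; True means halt generation."""
    return any(phrase in partial_output.lower() for phrase in BLOCKLIST)

def guarded_generate(prompt: str, model) -> str:
    if input_classifier(prompt):
        return "[refused]"
    out = []
    for chunk in model(prompt):  # model yields text chunks
        out.append(chunk)
        if output_classifier(" ".join(out)):
            return "[halted mid-generation]"
    return " ".join(out)

def echo_model(prompt):
    # Toy model that just echoes the prompt word by word.
    yield from prompt.split()

print(guarded_generate("how do plants grow", echo_model))
```

The usability trade-off the bullets describe lives in the classifiers' thresholds: too aggressive and benign prompts get `[refused]` (the high refusal rate of early prototypes), too lax and jailbreaks slip through.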
New Anthropic research: Constitutional Classifiers to defend against universal jailbreaks.
We’re releasing a paper along with a demo where we challenge you to jailbreak the system.
— Anthropic (@AnthropicAI)
4:31 PM • Feb 3, 2025
🛡️ The development of Constitutional Classifiers represents a significant advancement in AI safety, addressing longstanding challenges in preventing models from being manipulated into producing harmful content.
OpenEuroLLM is a consortium of 20 leading European research institutions, companies, and EuroHPC centers dedicated to developing open-source, multilingual large language models (LLMs) that are transparent, compliant with EU regulations, and capable of preserving linguistic and cultural diversity. The project aims to democratize access to high-quality AI technologies, enhancing Europe's competitiveness and digital sovereignty.

Transparency and compliance are key priorities, with the initiative providing open access to data, documentation, training and testing code, and evaluation metrics to ensure alignment with European values and regulations.
The project focuses on extending the multilingual capabilities of existing models to support all EU official languages and beyond, promoting linguistic and cultural diversity.
OpenEuroLLM seeks to foster an active community of developers and stakeholders across the public and private sectors, encouraging collaboration and innovation.
The consortium includes prominent universities, research organizations, companies, and HPC centers from various European countries, pooling expertise to advance AI capabilities.
The EU enters the large model training arena with...
...$56M
"This morning Brussels announced plans to develop an open source AI model of its own, with $56 million in funding to do it.
The investment will fund top researchers from a handful of companies and universities across… x.com/i/web/status/1…
— Nathan Benaich (@nathanbenaich)
1:59 PM • Feb 3, 2025
🤝 Officially launched on February 1, 2025, the project received funding from the European Commission under the Digital Europe Programme, underscoring its strategic importance for Europe's digital future.
🆕 Updates
a new incredible AI tool just dropped.
now you can DESIGN lifelike voices with precise emotion control and.. even clone any voice with just a 10s clip
step by step tutorial:
— el.cine (@EHuanglu)
3:41 PM • Feb 3, 2025
📽️ Daily Demo
KLING ELEMENTS → More tests.
Subject + Environment + Object.
KLING PROMPT EX:
A cinematic tracking shot of a beautiful woman wearing black swimwear, a black coverup skirt, and a feathered headdress, walking out of a stunning Moroccan hotel doorway down a tiled path toward the… x.com/i/web/status/1…— Rory Flynn (@Ror_Fly)
1:21 PM • Feb 4, 2025
⚡ Simulation with o3-mini high!
Yes, this is EXACTLY what you think it is.
Conway's Game of Life happening on a mini planet! 😀
The prompt was: "I’d like to make a JS simulation of a sphere where the universe of Conway's Game of Life is unfolding, multicolor, lots of small… x.com/i/web/status/1…
— Javi Lopez ⛩️ (@javilopen)
2:36 PM • Feb 4, 2025
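The rule the demo animates is simple enough to sketch in a few lines. Here it is in Python on a small toroidal (wrap-around) grid rather than the tweet's sphere, for simplicity:

```python
# Conway's Game of Life: each cell lives or dies based on its 8 neighbors.
# A live cell survives with 2 or 3 live neighbors; a dead cell is born
# with exactly 3. The grid wraps around (torus) instead of a sphere.

def step(grid):
    """Compute one Game of Life generation on a wrap-around grid."""
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    return [
        [
            1 if neighbors(r, c) == 3 or (grid[r][c] and neighbors(r, c) == 2) else 0
            for c in range(cols)
        ]
        for r in range(rows)
    ]

# A "blinker" oscillates between vertical and horizontal with period 2.
blinker = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(step(step(blinker)) == blinker)  # → True
```

The sphere version in the tweet applies the same neighbor rule over a tessellated spherical surface; only the neighborhood lookup changes.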
🗣️ Discourse
In less than 24 hours, we have another Open Deep Research AI agent powered by Firecrawl and the Vercel AI SDK.
The open-source community rocks! 🔥🔥
Code below 👇
— AshutoshShrivastava (@ai_for_success)
6:20 PM • Feb 3, 2025
CMU researchers, in collaboration with NVIDIA, present ASAP, a two-stage framework for humanoid robot agility.
It pre-trains motion policies on human data, then refines them with real-world corrections using a delta action model, which adjusts for simulation mismatches. x.com/i/web/status/1…
— The Humanoid Hub (@TheHumanoidHub)
7:34 AM • Feb 4, 2025
I hope everyone who sees this chart fully understands where we are & where we’re heading (just use your conservative extrapolation for the next two years).
Meanwhile, a massive shock awaits those who don’t understand it or remain unaware!
Source: @emollick & @EpochAIResearch
— Derya Unutmaz, MD (@DeryaTR_)
4:52 PM • Feb 3, 2025
Don't stick to just one AI model, here’s what I feel works best for different use cases:
1. Coding – Claude 3.5 Sonnet / o3-mini-high
2. Writing – Gemini Expo 1206 / Claude 3.5 Sonnet
3. PDF Analysis – Gemini Flash 2.0
4. Video Analysis– Gemini Flash 2.0
5. Math– o1 pro… x.com/i/web/status/1…— AshutoshShrivastava (@ai_for_success)
4:15 PM • Feb 3, 2025