Reception of OpenAI’s Deep Research Signaling True End of Search Era?

Also, Anthropic thinks they’ve found a way to address jailbreak issues

⚡️ Headlines

🤖 AI

Big Tech's Capital Expenditures Surpass Previous Oil Industry Peaks – Major technology companies are now investing more in capital expenditures than the oil industry did during its peak spending years. [The Information].

SoftBank Commits to $3 Billion Annual Investment in OpenAI Products – SoftBank has announced plans to purchase $3 billion worth of OpenAI's products each year to enhance its AI capabilities. [The Information].

ChatGPT on WhatsApp Now Supports Image and Voice Note Inputs – Users can now send images and voice notes to ChatGPT via WhatsApp, and the AI will respond with text messages. [Android Authority].

Meta May Halt Development of High-Risk AI Systems – Meta has released a policy document outlining scenarios in which the company may not release certain categories of 'risky' AI systems. [TechCrunch].

Potential Economic Impacts of AI on Employment – This article explores three scenarios detailing how AI could affect the economy and job market. [Bloomberg].

🦾 Emerging Tech

Stablecoins Gaining Traction in Emerging Markets – Stablecoins are now gaining traction in consumer finance, payroll, and other areas in emerging markets. [TechCrunch].

🤳 Social Media

Trump Orders Creation of U.S. Sovereign Wealth Fund, Potentially Involving TikTok – President Donald Trump has signed an executive order to initiate the creation of a U.S. sovereign wealth fund, which could potentially include investments in TikTok. [AP News].

🔬 Research

Meta's Approach to Frontier AI – Meta has released a framework focusing on mitigating critical risks in areas like cybersecurity and biosecurity in the development of advanced AI systems. [Meta].

AI-Driven Self-Healing Roads Developed – Researchers have developed self-healing asphalt using biomass and Google Cloud’s AI, aiming to create more durable and sustainable roads. [Google].

🎱 Random

Frederick Douglass Hologram Exhibit Opens in Boston – A new exhibit featuring a hologram of Frederick Douglass has opened at a Boston museum, offering an interactive educational experience. [Axios].

Amazon Struggles with Physical Retail Stores – The e-commerce giant is closing more of its convenience stores; 'I don't think they really understand retail,' a consultant says. [The Wall Street Journal].

🔌 Plug-Into-This

One way to look at AI is simply as a new technology that is moving us beyond traditional search engines. New reasoning models and autonomous agents capable of conducting in-depth research, like OpenAI's "Deep Research," are transforming information retrieval, enabling machines to analyze and synthesize knowledge at an expert level.

  • Reasoners: AI models that generate "thinking tokens" before responding, enhancing their problem-solving capabilities, especially in complex areas like mathematics and logic.

  • Agents: Autonomous AI systems assigned specific goals, capable of pursuing them independently. While general-purpose agents face challenges, specialized agents are proving effective in focused domains.

  • OpenAI's Deep Research: A specialized research agent built on OpenAI's o3 Reasoner, capable of producing comprehensive analyses comparable to graduate-level work in minutes.

  • Comparison with Google's Deep Research: Google's version aggregates documents simultaneously, resulting in more surface-level summaries, whereas OpenAI's agent engages in curiosity-driven exploration for deeper insights (a rough sketch of the difference follows this list).

  • Future Implications: The integration of advanced Reasoners and specialized agents is transforming tasks traditionally performed by experts, indicating a shift towards more autonomous digital workers.
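
Where the two approaches differ is the control flow: one gathers sources once and summarizes them in a single pass, while the other keeps deciding what to look up next based on what it just learned. Below is a minimal, hypothetical Python sketch of that difference; `search` and `ask_llm` are placeholder stubs, not OpenAI's or Google's actual APIs.

```python
# Hedged sketch: one-shot "aggregate and summarize" vs. an iterative,
# curiosity-driven research loop. All functions are hypothetical stand-ins.

def search(query: str) -> list[str]:
    """Placeholder web search returning document snippets."""
    return [f"snippet about {query}"]

def ask_llm(prompt: str) -> str:
    """Placeholder call to a reasoning model."""
    return f"(model response to: {prompt[:60]}...)"

def one_shot_report(topic: str) -> str:
    # Gather documents once, then summarize in a single pass (surface-level).
    docs = search(topic)
    return ask_llm(f"Summarize these sources on {topic}: {docs}")

def deep_research(topic: str, max_rounds: int = 3) -> str:
    # Iteratively decide what to look up next, based on what was just learned.
    notes: list[str] = []
    question = topic
    for _ in range(max_rounds):
        docs = search(question)
        notes.append(ask_llm(f"What do these sources say about '{question}'? {docs}"))
        question = ask_llm(f"Given these notes {notes}, what open question should be researched next?")
    return ask_llm(f"Write a structured report on {topic} from these notes: {notes}")

if __name__ == "__main__":
    print(one_shot_report("self-healing asphalt"))
    print(deep_research("self-healing asphalt"))
```

The stub logic is beside the point; the iterative loop is where the deeper synthesis comes from, because each round's findings shape the next query.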

🔍 This shift marks the rise of AI-driven research tools that go beyond simple search results to provide deeper, more structured insights. That is sometimes useful, but not always what you want: picture wading through Perplexity's four-paragraph responses when all you wanted was a list of links…

Anthropic's Safeguards Research Team has introduced "Constitutional Classifiers," a method designed to protect AI models from universal jailbreaks—techniques that bypass safety measures to elicit harmful outputs. Initial prototypes demonstrated robustness against extensive human-led attacks but faced challenges like high refusal rates and increased computational demands. Subsequent iterations achieved similar security with reduced refusal rates and moderate computational overhead.

  • Jailbreak vulnerabilities remain a major concern, as large language models can still be manipulated into bypassing their safety mechanisms.

  • A prototype system was tested over two months by independent evaluators, who made thousands of attempts to breach its defenses; none resulted in a successful universal jailbreak.

  • Early versions of the system effectively blocked attacks but frequently declined benign queries and required significant computational power, prompting refinements for better usability.

  • Automated tests using thousands of synthetic jailbreak prompts showed a drastic reduction in successful attacks, with the classifiers blocking over 95% of jailbreak attempts.

  • Further research is ongoing to refine these classifiers, ensuring they maintain strong defenses while being practical for deployment.
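
Under the hood, the idea is an input classifier that screens prompts and an output classifier that screens generations, with either one able to trigger a refusal. The sketch below shows only that general guard pattern, with a trivial keyword check standing in for the trained classifiers; it is an illustration under stated assumptions, not Anthropic's implementation.

```python
# Hedged sketch of the "classifier guard" pattern that Constitutional Classifiers
# build on. The keyword heuristic is a toy stand-in for trained classifier models.

BLOCKED_TOPICS = ("synthesize a nerve agent", "build a bioweapon")

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt looks like an attempt to elicit harmful output."""
    return any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def output_classifier(completion: str) -> bool:
    """Return True if the generated text itself contains disallowed content."""
    return any(topic in completion.lower() for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    """Placeholder for the underlying language model."""
    return f"(model answer to: {prompt})"

def guarded_generate(prompt: str) -> str:
    # Screen the request, generate, then screen the response before returning it.
    if input_classifier(prompt):
        return "Refused: the request appears to seek harmful content."
    completion = generate(prompt)
    if output_classifier(completion):
        return "Refused: the response was flagged by the output classifier."
    return completion

if __name__ == "__main__":
    print(guarded_generate("Explain how photosynthesis works."))
    print(guarded_generate("Explain how to build a bioweapon step by step."))
```

In Anthropic's method, the classifiers are themselves trained on synthetic data generated from a written "constitution" of permitted and restricted content, rather than on keyword rules like the toy check here; that training is what allowed later iterations to cut refusal rates while keeping compute overhead moderate.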

🛡️ The development of Constitutional Classifiers represents a significant advancement in AI safety, addressing longstanding challenges in preventing models from being manipulated into producing harmful content.

OpenEuroLLM is a consortium of 20 leading European research institutions, companies, and EuroHPC centers dedicated to developing open-source, multilingual large language models (LLMs) that are transparent, compliant with EU regulations, and capable of preserving linguistic and cultural diversity. The project aims to democratize access to high-quality AI technologies, enhancing Europe's competitiveness and digital sovereignty.

  • Transparency and compliance are key priorities, with the initiative providing open access to data, documentation, training and testing code, and evaluation metrics to ensure alignment with European values and regulations.

  • The project focuses on extending the multilingual capabilities of existing models to support all EU official languages and beyond, promoting linguistic and cultural diversity.

  • OpenEuroLLM seeks to foster an active community of developers and stakeholders across the public and private sectors, encouraging collaboration and innovation.

  • The consortium includes prominent universities, research organizations, companies, and HPC centers from various European countries, pooling expertise to advance AI capabilities.

🤝 Officially launching on February 1, 2025, the project has received funding from the European Commission under the Digital Europe Programme, highlighting its strategic importance for Europe's digital future.
