Google’s Gemini 2.5 Flash — a hybrid reasoning AI in preview that matches o4-mini

Also, Google loses landmark antitrust ruling... boon for OpenAI (and others)?

⚡️ Headlines

🤖 AI

TSMC’s Arizona plant to make advanced chips by 2028 – TSMC will manufacture 2-nanometer chips in Arizona starting in 2028, boosting U.S. chip production capabilities. [Axios]

Tracking AI – A new initiative aims to provide ongoing, centralized tracking of global AI policies, developments, and trends. [Tracking AI]

Meta asked Amazon and Microsoft to help fund Llama’s development – Meta reportedly sought financial backing from Amazon and Microsoft to support its open-source LLM efforts. [The Information]

Gemini 2.5 Flash brings faster, more controllable AI to developers – Google introduces Gemini 2.5 Flash, a lightweight model with better speed and task control for app integration. [Ars Technica]

Chatbot Arena is becoming a company – The crowdsourced AI ranking platform Chatbot Arena is transitioning into a standalone commercial business. [Bloomberg]

Google offers college students free access to Gemini Advanced via One AI Premium – Google is giving students free access to its top-tier Gemini AI through its One AI Premium plan. [The Verge]

Smashing, the reading app by Goodreads’ founder, shuts down – Smashing, the AI-powered reading curation app, is shutting down just a year after launching. [TechCrunch]

Gemini Live brings AI to your camera feed, free for now – Gemini Live adds real-time camera analysis and interaction features, launching at no cost during its early phase. [9to5Google]

OpenAI evaluated Cursor before rival Windsurf deal – OpenAI considered acquiring developer tool Cursor before pursuing a potential deal with competitor Windsurf. [CNBC]

Perplexity may bring its AI assistant to Samsung and Motorola phones – Perplexity is in talks with major smartphone makers to embed its AI assistant into future devices. [Bloomberg]

AI-generated music makes up 18% of new uploads to Deezer – Nearly one-fifth of all new tracks on Deezer are now AI-generated, highlighting rapid adoption in music creation. [Reuters]

OpenAI launches Flex mode for cheaper, slower inference – OpenAI introduces Flex processing, offering lower-cost AI operations by allowing for delayed task execution. [TechCrunch]

Meta blocks Apple’s AI tools from Facebook and other iOS apps – Meta is actively preventing Apple’s new AI services from functioning inside its iOS applications. [9to5Mac]

🤳 Social Media

Instagram launches Blend, a shared Reels feed for friends – Instagram’s new Blend feature curates personalized Reels feeds shared between two users. [TechCrunch]

🔬 Research

ProGen3 shows off new AI-designed proteins – Profluent showcases ProGen3, an AI model capable of generating diverse, functional proteins with medical applications. [Profluent]

Meta FAIR improves AI’s visual reasoning and localization – Meta’s FAIR team enhances AI’s ability to interpret, localize, and reason about complex visual environments. [Meta]

🎱 Random

China bans “smart” and “autonomous” terms in car ads – Chinese regulators prohibit the use of terms like “autonomous” in car advertising to curb misleading claims. [Reuters]

New Jersey sues Discord over child safety concerns – The state of New Jersey is suing Discord, alleging the platform failed to protect minors from harmful content. [Wired]

🔌 Plug-Into-This

Google has unveiled Gemini 2.5 Flash, a new, cost-efficient multimodal model optimized for speed and high-volume throughput. Positioned as a sibling to Gemini 2.5 Pro, Flash trades off depth for responsiveness, targeting dynamic use cases like summarization, captioning, and rapid chat applications.

  • Gemini 2.5 Flash introduces a "thinking budget" feature, allowing developers to control the model's reasoning depth to balance quality, cost, and latency (a short API sketch follows this list).

  • It maintains a 1-million-token context window, enabling broad input retention across long documents and video transcripts.

  • The model is fully multimodal, capable of processing text, images, and audio, and is tuned for latency-sensitive applications like live translation and on-device summarization.

  • Flash is now available in public preview via the Gemini API in Google AI Studio and Vertex AI, with performance analytics built into the developer dashboard.
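
For developers who want to try the thinking budget, here is a minimal sketch using the google-genai Python SDK (`pip install google-genai`). The preview model name and the ThinkingConfig/thinking_budget parameters are assumptions based on the public preview and may change before general availability.

```python
# Minimal sketch: capping Gemini 2.5 Flash's reasoning depth with a thinking budget.
# Assumes the google-genai SDK; the model id and config fields may differ in the final API.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder API key

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",  # assumed preview model id
    contents="Summarize this 200-page earnings transcript in five bullet points.",
    config=types.GenerateContentConfig(
        # thinking_budget limits the tokens the model may spend on internal reasoning:
        # 0 turns thinking off for lowest latency; higher values trade speed for quality.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)

print(response.text)
```

In principle, setting the budget to 0 gets the fastest, cheapest responses, while a larger budget lets the model deliberate on harder prompts; cost and latency scale with whatever budget you grant.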

🏎️ Think of Gemini Flash as the “sports car” version of Gemini: less raw horsepower, but finely tuned for agility and speed in traffic. The model reflects Google’s bet on modular specialization (offering variants with differentiated latency, context, and reasoning profiles for distinct deployment niches) rather than on one-size-fits-all generalists.

A federal judge has ruled against Google in a landmark antitrust case, finding that the tech giant illegally monopolized key portions of the digital advertising market. The decision opens the door to remedies that could restructure how Google operates its ad exchange and ad-buying tools.

  • The court concluded that Google leveraged its dominance across the ad stack to disadvantage rivals and favor its own exchange, AdX.

  • Evidence cited included internal communications describing efforts to restrict interoperability with competing ad tools and steer demand through opaque auction practices.

  • The ruling may lead to divestiture or mandatory interoperability between Google’s ad server and third-party platforms.

  • This is the most significant U.S. antitrust loss for Google since the DOJ's initial filing in 2023, carrying implications for the broader ad-tech ecosystem.

  • Alphabet is expected to appeal, but regulatory scrutiny of its advertising infrastructure now intensifies globally.

  • In plain terms: The judge said Google’s ad system was rigged like a casino where the house always wins—and now regulators want to break up the house.

⚖️ This ruling signals a judicial willingness to challenge vertically integrated digital platforms not just on market share, but on conduct—marking a shift from passive oversight to structural intervention in ad tech.

The Wikimedia Foundation is teaming up with Kaggle to release an expansive, preprocessed version of Wikipedia tailored for machine learning research. The project offers a series of challenge-ready datasets, aiming to boost transparency, reproducibility, and academic experimentation.

  • The dataset includes 100+ million cleaned and structured English Wikipedia passages, annotated with metadata like links, categories, and edit history.

  • Kaggle will host competitions and tutorials around the dataset, fostering model benchmarking in areas like retrieval, summarization, and citation prediction.

  • Wikipedia’s complex revision and linking structure makes it ideal for tasks involving temporal reasoning and knowledge graph construction.

  • The data release is aligned with Wikimedia’s ethical AI goals, supporting open-source development without commercial constraints or exclusive licensing.

  • Researchers can access the data directly via BigQuery and Kaggle’s data explorer, with built-in notebook support and public leaderboard tracking (see the sketch after this list).
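
For a sense of how a researcher might pull the release into a notebook, here is a minimal sketch using the kagglehub client (`pip install kagglehub`). The dataset handle, file name, and record fields below are illustrative assumptions, not the confirmed layout; check the Kaggle listing for the actual slug.

```python
# Minimal sketch: downloading the Wikimedia/Kaggle release and peeking at a few records.
# The dataset handle and file layout are assumptions for illustration only.
import json
import kagglehub

# Downloads the dataset (or reuses a cached copy) and returns the local path.
path = kagglehub.dataset_download("wikimedia-foundation/wikipedia-structured-contents")
print("Dataset cached at:", path)

# Assume the release ships as JSON Lines files of structured articles; inspect the first three.
with open(f"{path}/enwiki_sample.jsonl", encoding="utf-8") as f:
    for _, line in zip(range(3), f):
        article = json.loads(line)
        print(article.get("name"), "->", article.get("url"))
```

Working from local files keeps experiments reproducible offline; BigQuery is the route for SQL-style exploration at scale.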

📚 By marrying Wikipedia’s rich textual web with Kaggle’s competitive framework, this partnership transforms a legacy knowledge base into an agile testbed for language model training and interpretability research.
