Google’s Gemma 3 Is Here
Also, OpenAI releases a wide-ranging SDK for building agents

⚡️ Headlines
🤖 AI
Nvidia's 'Project Osprey' Catalyzes CoreWeave's Ascendancy - Nvidia's strategic initiative, 'Project Osprey,' has played a pivotal role in the rapid growth of cloud-services provider CoreWeave. [The Information].
Reka Introduces 'Reka Flash' AI Model - AI company Reka has unveiled 'Reka Flash,' a new AI model designed to enhance processing speeds and efficiency. [Reka.ai].
Meta Begins Testing Its First In-House AI Training Chip - Meta has started testing its proprietary AI training chip, aiming to improve performance and reduce reliance on external suppliers. [Reuters].
Anthropic's Claude Drives Strong Revenue Growth While Powering 'Manus' Sensation - Anthropic's AI model, Claude, has significantly boosted the company's revenue, notably by powering the popular 'Manus' AI agent. [The Information].
China's Manus AI Partners with Alibaba's Qwen Team in Expansion Bid - Manus AI has formed a strategic partnership with Alibaba's Qwen team to enhance its AI capabilities and manage increased user demand. [Reuters].
Gadget Boom Fizzles Amid AI Hoopla: 'It's a Bloodbath Out There' - The consumer gadget market is experiencing a downturn as AI advancements overshadow traditional devices, leading to significant industry challenges. [Bloomberg].
Google Invests Further $1bn in OpenAI Rival Anthropic - Google has increased its investment in AI startup Anthropic by an additional $1 billion, intensifying competition in the AI sector. [The New York Times].
🦾 Emerging Tech
America Is Missing The New Labor Economy – Robotics Part 1 - The U.S. risks falling behind in the evolving labor economy due to insufficient adoption of robotics and automation technologies. [SemiAnalysis].
Sen. Lummis Reintroduces Bitcoin Act, Which Would Allow US to Buy $80 Billion in BTC - Senator Cynthia Lummis has reintroduced legislation proposing that the U.S. government invest $80 billion in Bitcoin to establish a strategic reserve. [Decrypt].
🤳 Social Media
LinkedIn Expands AI Ad Targeting Options - LinkedIn has enhanced its advertising platform by incorporating advanced AI-driven targeting features to improve ad reach and effectiveness. [Social Media Today].
🔬 Research
AI Search Has A Citation Problem - A study reveals that current AI search engines struggle with accurately citing news sources, raising concerns about information reliability. [Columbia Journalism Review].
⚖ Legal
Introducing Agents in Harvey - Harvey has launched new AI agents designed to collaborate with legal professionals, streamlining tasks such as document drafting and research. [Harvey.ai].
🔌 Plug-Into-This
Google has announced the next evolution of its open AI models with Gemma 3, following the success of previous iterations. Built on the same research and technology as the Gemini models, Gemma 3 aims to enhance responsible AI development while providing state-of-the-art performance and efficiency. The new models focus on accessibility for developers and researchers, with improved inference capabilities and integration across multiple AI frameworks.

Larger Model Sizes & Enhanced Performance: Gemma 3 scales up from previous versions with a range of model sizes (1B, 4B, 12B, and 27B parameters), delivering better efficiency and higher-quality responses across various AI tasks.
Improved Optimization for Hardware: The models are designed to run efficiently on NVIDIA GPUs and Google Cloud TPUs, reducing deployment costs while maintaining high performance.
Responsible AI & Safety Enhancements: Google has incorporated extensive safety measures, including filtering sensitive training data, reinforcement learning from human feedback (RLHF), and adversarial testing to align outputs with ethical guidelines.
Seamless Integration with Developer Tools: The models are compatible with frameworks like JAX, PyTorch, and TensorFlow, and can be deployed easily on platforms like Google AI Studio, Vertex AI, and Hugging Face.
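As a small illustration of that developer-facing angle, here is a minimal sketch of formatting a chat prompt for a Gemma model. It assumes Gemma 3 keeps the `<start_of_turn>`/`<end_of_turn>` turn convention used by earlier Gemma releases; check the official model card before relying on it.

```python
# Minimal sketch: wrapping a user message in Gemma's turn-based chat template.
# Assumption: Gemma 3 retains the <start_of_turn>/<end_of_turn> markup from
# earlier Gemma releases -- verify against the model card.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma-style chat-turn markup."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("Summarize the Gemma 3 launch in one line.")
print(prompt)
```

In practice you would hand this string (or let the tokenizer's built-in chat template build it) to a Gemma checkpoint loaded through JAX, PyTorch, or TensorFlow on any of the platforms above.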
Gemma 3 is here! Our new open models are incredibly efficient - the largest 27B model runs on just one H100 GPU. You'd need at least 10x the compute to get similar performance from other models ⬇️
— Sundar Pichai (@sundarpichai)
11:06 AM • Mar 12, 2025
👐 Gemma 3 improves upon earlier releases by supporting a wider range of applications, including coding assistance, multimodal reasoning, and real-time AI interactions, making it a competitive alternative to larger proprietary models, not to mention Google's striking efficiency claims!
OpenAI has introduced a suite of developer tools aimed at simplifying the creation of autonomous AI agents capable of performing complex tasks with minimal human intervention. This initiative seeks to empower businesses and developers to integrate advanced AI functionalities into their applications, enhancing productivity and efficiency.

Responses API: This new API combines the simplicity of the Chat Completions API with advanced tool-use capabilities, allowing developers to build agents that can perform tasks such as web searches, file retrievals, and computer operations seamlessly.
Built-in Tools: OpenAI has incorporated functionalities like web search, file search, and computer use directly into the Responses API, enabling agents to access real-time information, locate and process files, and execute tasks on a user's device autonomously.
Agents SDK: To facilitate the orchestration of single-agent and multi-agent workflows, the Agents SDK provides developers with the necessary tools to manage and coordinate complex tasks, ensuring agents work harmoniously and efficiently.
Observability Tools: Integrated monitoring features allow developers to trace and inspect agent workflows, providing insights into performance and aiding in debugging and optimization processes.
Transition from Assistants API: OpenAI plans to phase out the existing Assistants API by mid-2026, encouraging developers to adopt the more versatile and capable Responses API for building agentic applications.
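To make the Responses API bullet concrete, here is a hedged sketch of the request body such a call would send, with the built-in web-search tool enabled. The field names (`model`, `input`, `tools`, `web_search_preview`) follow OpenAI's launch materials but should be checked against the current API reference, and the model name is an assumption:

```python
import json

def build_responses_request(prompt: str) -> dict:
    """Assemble a Responses API request body with web search enabled.

    Field names follow OpenAI's launch announcement; verify them against
    the live API reference before use.
    """
    return {
        "model": "gpt-4o",                          # assumed model name
        "input": prompt,                            # replaces the Chat Completions messages list
        "tools": [{"type": "web_search_preview"}],  # built-in tool from the launch
    }

body = build_responses_request("What shipped in the Agents SDK launch?")
print(json.dumps(body, indent=2))
```

With the official `openai` Python package, roughly the same call would read `client.responses.create(**body)`.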
OpenAI just dropped their Agent SDK and it's a game-changer for AI developers 🤯
Building AI agents just went from weeks to minutes.
This will completely transform how we build AI apps.
8 reasons why this is HUGE:
— Min Choi (@minchoi)
10:30 PM • Mar 11, 2025
🤖 2025 was supposed to be the year agents truly arrived, so here we go! Last week, China-based Manus made waves with some impressive demos (though little transparency); now OpenAI follows with a full suite of tools for devs to build similar capabilities.
Luma AI has introduced Inductive Moment Matching (IMM), a novel pre-training technique designed to enhance generative AI models by overcoming the limitations of existing diffusion models. IMM aims to unlock the potential of rich multimodal data through improved sampling efficiency and stability.

Enhanced Sampling Efficiency: IMM delivers superior sample quality compared to diffusion models while offering over a tenfold increase in sampling efficiency, enabling faster and more accurate data generation.
Stable Training Dynamics: Unlike consistency models, which often require special hyperparameter designs and can be unstable during pre-training, IMM employs a single objective function that enhances stability across diverse settings, making it more reliable for various applications.
Innovative Inference Approach: By processing both the current and target timesteps during inference, IMM enhances the flexibility of each iteration, leading to state-of-the-art performance and efficiency in generative tasks.
Superior Performance Metrics: On the ImageNet 256x256 dataset, IMM achieves a Fréchet Inception Distance (FID) of 1.99 using only 8 inference steps, surpassing diffusion models (2.27 FID) and Flow Matching (2.15 FID) with 30 times fewer sampling steps. It also attains a state-of-the-art 2-step FID of 1.98 on the CIFAR-10 dataset for models trained from scratch.
Open Access Resources: Luma AI has released the code, checkpoints, and technical papers detailing IMM, facilitating further research and development in the AI community.
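The "current and target timestep" idea above can be sketched with a toy few-step sampler. The linear `toy_model` below is a stand-in for illustration only, not Luma's actual network; the point is the interface: unlike a diffusion sampler, each call sees both the timestep `t` it is at and the timestep `s` it is jumping to.

```python
# Toy sketch of an IMM-style inference loop. The network f(x, t, s) conditions
# on both the current timestep t and the target timestep s, letting it jump
# directly between timesteps instead of taking many small diffusion steps.

def toy_model(x: float, t: float, s: float) -> float:
    # Stand-in "network": linearly shrinks the sample toward 0 as s -> 0.
    return x * (s / t) if t > 0 else x

def sample(x_T: float, num_steps: int = 8) -> float:
    # Timestep schedule from t=1.0 (pure noise) down to t=0.0 (clean sample).
    timesteps = [1.0 - i / num_steps for i in range(num_steps + 1)]
    x = x_T
    for t, s in zip(timesteps[:-1], timesteps[1:]):
        x = toy_model(x, t, s)  # one direct jump from t to s
    return x

print(sample(3.5))  # the noise-like input collapses to 0.0 in 8 jumps
```

Eight jumps here mirror the 8-step ImageNet result quoted above; a diffusion sampler would typically need dozens to hundreds of smaller steps.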
Luma Labs introduced a new pre-training technique called Inductive Moment Matching
It produces superior image generation quality 10x more efficiently than current approaches
Luma says the approach breaks the algorithmic ceiling of diffusion models!
— Rowan Cheung (@rowancheung)
6:45 AM • Mar 12, 2025
🔄 IMM represents a paradigm shift in generative pre-training, moving beyond traditional diffusion models to fully harness the capabilities of multi-modal data, thereby advancing the field of multimodal intelligence.
🆕 Updates
Oops, we did it again. You've been asking, and it's finally here
Image-to-video on Google Veo 2, first in Freepik AI Suite
Full control, pro results. GO!
— Freepik (@freepik)
8:38 AM • Mar 12, 2025
BREAKING 🚨: Gemini 2.0 Flash Experimental multimodal image output is rolling out on AI Studio.
Google is the 1st lab rolling it out 👀
h/t @legit_api
— TestingCatalog News 🗞 (@testingcatalog)
2:50 PM • Mar 12, 2025
We've raised a $64M Series A led by @kleinerperkins to build the platform for real-time voice AI.
We'll use this funding to expand our team, and to build the next generation of models, infrastructure, and products for voice, starting with Sonic 2.0, available today.
Link below… x.com/i/web/status/1…
— Cartesia (@cartesia_ai)
3:17 PM • Mar 11, 2025
No code vibe coding
— Notion (@NotionHQ)
11:51 PM • Mar 11, 2025
📽️ Daily Demo
We made a Guide to teach you how to Fine-tune LLMs correctly!
Learn about:
• Choosing the right parameters & training method
• RL, GRPO, DPO & CPT
• Data prep, Overfitting & Evaluation
• Training with Unsloth & deploy on vLLM, Ollama, Open WebUI🔗 docs.unsloth.ai/get-started/fi…
— Unsloth AI (@UnslothAI)
4:16 PM • Mar 10, 2025
🗣️ Discourse
we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.
PROMPT:
Please write a metafictional literary short story… x.com/i/web/status/1…
— Sam Altman (@sama)
6:58 PM • Mar 11, 2025
LMAO!!! No one was ready for this!
Did Google just dropped open-source SOTA??? Google Gemma 3 is thrashing o1-preview and o3-mini-high and it's only 27B parameters. The second-best open model, only behind DeepSeek-R1
FREAKING NUTS 😂
— AshutoshShrivastava (@ai_for_success)
8:02 AM • Mar 12, 2025
BREAKING 🚨: OpenAI released loads of new tools for agent development.
- Web search
- File search
- Computer use
- Responses
- Agents SDK
— TestingCatalog News 🗞 (@testingcatalog)
7:20 PM • Mar 11, 2025
Inductive Moment Matching
Luma AI introduces a new class of generative models for one- or few-step sampling with a single-stage training procedure.
Surpasses diffusion models on ImageNet-256×256 with 1.99 FID using only 8 inference steps and achieves state-of-the-art 2-step… x.com/i/web/status/1…
— Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr)
10:37 AM • Mar 11, 2025