Google debuts Gemini 2.0 Flash Thinking, Anthropic outlines agentic AI strategies

⚡️ Quick Hits
🤖 AI
AI startup Sunairio secures funding to develop climate risk analysis tools - The company raises new capital to expand its AI-driven climate risk software for businesses. [Axios].
Generative AI faces growing pressure to prove practical applications - As the initial hype fades, investors and developers demand evidence of AI’s tangible value. [Wired].
OpenAI prepares next-generation 'o3' reasoning model - The new system aims to improve reasoning capabilities and enhance contextual understanding. [The Information].
Apple discusses AI collaborations with Tencent and ByteDance for Chinese market - Sources report plans for China-specific AI tools despite regulatory challenges. [Reuters].
Meta envisions AI advancements centered on Llama technology - Meta outlines its plans for AI innovation through Llama-based models, focusing on scalable and adaptable applications. [Meta AI Blog].
🎨 Creative
Instagram teases 'Meta Movie Gen,' an AI video editing tool - The tool promises to simplify creative workflows and introduce generative AI features. [The Verge].
Instagram predicted to contribute 50% of Meta’s U.S. ad revenue by 2025 - Analysts highlight Instagram's rising influence in Meta’s advertising ecosystem. [Bloomberg].
₿ Crypto
El Salvador to scale back Bitcoin efforts under $1.4 billion IMF deal - The agreement limits the country’s cryptocurrency initiatives to stabilize its economy. [Decrypt].
Crypto hacks to surpass $20 billion in stolen funds by 2025, report warns - Research highlights the growing sophistication of cyberattacks on crypto networks. [Chainalysis].
⚖ Legal
AI startups face uncertain policy landscape under a potential Trump presidency - Industry leaders assess the implications of regulatory changes on innovation and funding. [The Information].
🧪 Research
Apptronik teams up with Google DeepMind to advance robotics - The partnership focuses on leveraging AI to improve robotic systems for industrial and everyday applications. [Apptronik].
Tracing the origins of datasets powering modern AI - A deep dive into the ethical and logistical challenges of data collection for AI models. [MIT Technology Review].
🎱 Random
Former Twitch CEO Emmett Shear launches AI startup with a16z backing - Shear’s venture focuses on developing innovative AI technologies with significant funding. [TechCrunch].
Shopify's checkout changes create challenges for dependent startups - Adjustments to Shopify’s platform impact third-party developers and their business models. [The Information].
🔌 Plug In To These Details
Google has introduced Gemini 2.0 Flash Thinking, an advanced AI model designed to enhance reasoning capabilities by explicitly outlining its thought process. This development positions Google in direct competition with OpenAI's o1 reasoning model.

Enhanced Reasoning Transparency: Gemini 2.0 Flash Thinking allows users to access its step-by-step reasoning through a dropdown menu, providing clearer insights into how the model arrives at its conclusions.
Multimodal Understanding: The model accepts combined text and image input, supports 32,000 tokens of input, and can produce up to 8,000 tokens per response, facilitating complex problem-solving across various data types.
User Accessibility: Available in Google's AI Studio, the model lets users experience its capabilities firsthand, including its proficiency in solving problems that integrate visual and textual elements (a minimal API-call sketch follows this list).
Performance Benchmarks: Early tests indicate that Gemini 2.0 Flash Thinking correctly and swiftly addresses complex queries, showcasing its advanced reasoning abilities.
Competitive Landscape: This release underscores Google's commitment to advancing AI technology and positions Gemini 2.0 Flash Thinking as a direct competitor to OpenAI's o1 model.
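For the hands-on crowd, here is a minimal sketch of what calling the model through the Gemini API could look like in Python. The model identifier and the sample prompt are assumptions for illustration; check AI Studio for the current experimental model string.

```python
# Minimal sketch (not an official example): calling the experimental Flash
# Thinking model through the google-generativeai SDK. The model ID below is
# an assumption; confirm the current string in Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from https://aistudio.google.com

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed ID

response = model.generate_content(
    "A train leaves at 3:15 pm traveling 80 km/h. A second train leaves the "
    "same station at 3:45 pm at 110 km/h. At what time does it catch up?"
)

# Iterate over the returned parts: the experimental model may surface its
# intermediate reasoning as separate text parts before the final answer,
# which AI Studio renders in the step-by-step dropdown.
for part in response.candidates[0].content.parts:
    print(part.text)
```

In AI Studio itself the same reasoning appears in the dropdown described above, so the SDK call is only needed if you want the output programmatically.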
Google just cooked OpenAI in the AI reasoning game.
Gemini 2.0 Flash Thinking is out, and it's already beating OpenAI's o1 in the Arena Leaderboard.
It's faster and shockingly transparent on its "thinking"🤯
8 wild examples (and how to try):
— Min Choi (@minchoi)
10:41 PM • Dec 19, 2024
🧐 With Google’s new Veo 2 video model recently usurping OpenAI’s presumed dominance in AI video, the introduction of Gemini 2.0 Flash Thinking sets the stage for yet another contest between the two tech giants, as Google quickly encroaches on ground previously held by OpenAI’s o1.
Anthropic's recent publication outlines best practices for developing agentic systems with large language models (LLMs), emphasizing simplicity and composability over complex frameworks.

Distinction Between Workflows and Agents: The paper differentiates between workflows—predefined sequences where LLMs and tools follow set code paths—and agents, which dynamically control their processes and tool usage to accomplish tasks.
Advocacy for Simplicity: Anthropic advises developers to start with straightforward solutions, enhancing complexity only when necessary. They suggest directly using LLM APIs to maintain transparency and ease of debugging, cautioning against over-reliance on abstract frameworks that might obscure underlying processes.
Augmented LLMs as Foundational Building Blocks: The concept of augmented LLMs—models enhanced with retrieval, tools, and memory—is presented as the core component for agentic systems. These models can autonomously generate search queries, select appropriate tools, and determine relevant information to retain.
Implementation Strategies: The publication discusses several patterns for agentic systems, including prompt chaining, where tasks are decomposed into sequential steps with programmatic checks between them, as well as autonomous agents capable of dynamic decision-making and tool use (see the prompt-chaining sketch after this list).
Use Cases and Practical Advice: Anthropic shares insights from collaborations across industries, highlighting that successful implementations often employ simple, composable patterns tailored to specific use cases, rather than complex, generalized frameworks.
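To make the prompt-chaining pattern concrete, here is a minimal sketch that calls the Anthropic API directly, as the post recommends, and inserts a programmatic check between two steps. The outline-then-summary task, the gate, and the helper function are illustrative assumptions, not code from Anthropic's post.

```python
# Minimal prompt-chaining sketch in the spirit of Anthropic's guidance: call
# the LLM API directly (no framework), decompose the task into sequential
# steps, and add a programmatic check between them. Task and gate are
# illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-20241022"

def ask(prompt: str) -> str:
    """One transparent LLM call with no abstraction layer in between."""
    message = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Step 1: produce a structured outline.
outline = ask("Outline, as three dash-prefixed bullets, a plan for a short post on agentic systems.")

# Programmatic gate: stop early if the outline doesn't look like bullets,
# rather than letting errors compound in the next step.
if outline.count("-") < 3:
    raise ValueError("Outline failed the check; aborting the chain.")

# Step 2: feed the validated outline into the next call in the chain.
draft = ask(f"Write a short summary that follows this outline:\n{outline}")
print(draft)
```

The same structure extends naturally to the other patterns in the post: an agent loop simply replaces the fixed sequence with model-driven decisions about which tool or step to run next.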
2025 will be the year of agentic systems
The pieces are falling into place: computer use, MCP, improved tool use. It's time to start thinking about building these systems.
At Anthropic, we're seeing a few best practices emerge - we wrote a blog post with our findings:
— Alex Albert (@alexalbert__)
6:28 PM • Dec 19, 2024
🛠️ With AI revenues still a concern among investors, the companies that master agentic frameworks for enterprises first should be a safe bet for long-term viability, whichever verticals they ultimately pursue.
WIRED provides a comprehensive overview of ongoing copyright lawsuits in the U.S. involving AI companies, highlighting the legal challenges surrounding AI-generated content.

Thomson Reuters v. Ross Intelligence: Initiated in May 2020, this case alleges unauthorized use of Westlaw materials for AI training, marking the beginning of a series of similar lawsuits.
Diverse Plaintiffs: Includes individual authors, visual artists, media companies like The New York Times, and major music industry entities such as Universal Music Group.
Fair Use Defense: AI companies often invoke fair use, arguing that utilizing copyrighted materials for AI development is legally permissible without explicit consent or compensation.
Industry-Wide Implications: Nearly all major generative AI firms, including OpenAI, Meta, Microsoft, Google, Anthropic, and Nvidia, are involved in these legal disputes.
Ongoing Legal Proceedings: Many cases are in the discovery phase, with outcomes poised to influence the future relationship between AI development and intellectual property rights.
"Who's Suing Who?" - @Knibbs at @WIRED has a new piece out today, with a scorecard featuring some handy visualizations of all of the pending AI copyright lawsuits:
— Aaron Moss (@copyrightlately)
7:52 PM • Dec 19, 2024
⚖️ The surge in copyright litigation underscores the urgent need for clear legal frameworks addressing the intersection of AI technology and intellectual property, as the industry grapples with balancing innovation and creators' rights.
📸 Creator Corner
In the rapidly evolving field of AI-driven video generation, two prominent contenders have emerged: OpenAI's Sora and Google's Veo 2. While both platforms offer innovative solutions for creators, Veo 2 is currently receiving significant acclaim for its superior capabilities.
I tested Sora vs. the new Google Veo-2.
I feel like comparing a bike vs. a starship:
— Ruben Hassid (@RubenHssd)
2:14 PM • Dec 17, 2024
Technical Superiority of Veo 2
Veo 2 distinguishes itself with several advanced features:
Higher Resolution: Veo 2 supports video outputs up to 4K (3840 x 2160), surpassing Sora's maximum of 1080p (1920 x 1080).
Extended Video Duration: It allows the creation of videos lasting several minutes, whereas Sora is limited to 20-second clips.
Enhanced Realism: Veo 2 exhibits a sophisticated understanding of real-world physics and movement, resulting in more realistic and accurate video content.
User Preference and Performance
Recent benchmarks indicate a strong user preference for Veo 2:
User Preference: In comparative studies, 58.8% of users favored Veo 2 over Sora.
Prompt Adherence: Veo 2 demonstrates a higher accuracy in following user prompts, ensuring that the generated videos closely match the specified requirements.
Can confirm: Google's VEO 2 consistently produces more "correct" results than SORA. Make of that what you will
— Marques Brownlee (@MKBHD)
4:43 PM • Dec 19, 2024
Implications for AI Creatives
For professionals in the AI creative space, Veo 2's advancements offer significant advantages:
Creative Flexibility: The ability to produce longer, high-resolution videos opens new avenues for storytelling and content creation.
Professional Quality: Enhanced realism and physics modeling contribute to more engaging and authentic visual experiences.
While Sora remains a noteworthy tool in AI video generation, Veo 2's current capabilities position it as the preferred choice for creators seeking cutting-edge technology and superior output quality.
Can Google maintain this dominance? It seems likely: as the owner of YouTube, Google probably has far better data for training its video models, among other technical advantages. Let’s see whether OpenAI keeps pursuing that track, or drops it and concedes the ground to Google.
🤔 Final Thoughts
“The future of everything” seems to always be at stake these days, but as 2024 wraps, recent announcements do seem pregnant with potential for defining how this new technological space continues to evolve in the coming year.
We will probably find out whether AI can truly “reason,” video will continue to transform before our eyes, and maybe we’ll have a new version of the DMCA to work with as we keep navigating the unprecedented prevalence of digital content in the public sphere.
As the US sees a surge in AI copyright lawsuits, the stakes aren't just legal. These cases could redefine how we value, share, and protect intellectual property in the digital age, shaping a new informational ecosystem. #CopyrightMatters #AITransformation
— ElAIsa (@ElAIsa_AI)
5:00 PM • Dec 20, 2024
Google really cooked with Gemini 2.0 Flash Thinking.
It thinks AND it's fast AND it's high quality.
Not only is it #1 on LMArena on every category, but it crushes my goto Math riddle in 14s—5x faster than any other model that can solve it!
Google is making OpenAI dance.
— Deedy (@deedydas)
5:33 PM • Dec 19, 2024
~ JL