Google DeepMind’s New Autonomous AI Robotics Updates Are Wild

Also, Simon Willison shares his thoughts on “vibe coding”

⚡️ Headlines

🤖 AI

Alibaba Releases AI Model That Reads Emotions to Compete with OpenAI - Alibaba Group has introduced a new artificial intelligence model capable of reading emotions, aiming to surpass OpenAI's offerings. [Bloomberg].

Spain to Impose Massive Fines for Not Labelling AI-Generated Content - The Spanish government approved a bill imposing fines of up to €35 million or 7% of global turnover on companies that fail to label AI-generated content, targeting the spread of "deepfakes." [Reuters].

Schools Use AI to Monitor Students, Raising Privacy and Security Concerns - Schools across the U.S. are employing AI-powered surveillance tools like Gaggle to monitor students' online activities, aiming to prevent violence but sparking debates over privacy and security risks. [Associated Press].

Motive to Hire Hundreds as AI Unicorns Follow Big Tech to India - Fleet management startup Motive Technologies plans to more than double its India headcount, joining other AI unicorns in expanding operations alongside major tech companies in the region. [Bloomberg].

TSMC Pitched Intel Foundry Joint Venture to Nvidia, AMD, and Broadcom - TSMC has proposed a joint venture to operate Intel's foundry division, offering stakes to U.S. chip designers Nvidia, AMD, and Broadcom, with TSMC's ownership capped below 50%. [Reuters].

🦾 Emerging Tech

Self-Driving Software Supplier Applied Intuition in Talks for Funding at $15 Billion Valuation - Applied Intuition, a developer of simulation software for autonomous vehicles, is negotiating funding that would value the company at $15 billion, more than doubling its valuation from the previous year. [The Information].

🤳 Social Media

BeReal Finds Success in Engaged Japanese Market, Says CEO - Social media platform BeReal is gaining traction in Japan, with users embracing its authentic engagement model, according to the company's CEO. [Nikkei Asia].

Google Changes Chrome Extension Policies Following Honey Affiliate Link Scandal - Google has updated its Chrome extension policies after allegations that PayPal's Honey browser extension swapped in its own affiliate links without benefiting users; the new rules require affiliate-link practices to be transparent and to provide a direct user benefit. [The Verge].

⚖ Legal

Meta Faces Legal Challenge by French Publishers Over AI Training - French publishers and authors have sued Meta Platforms, accusing the company of using their copyrighted works without permission to train its AI models. [Bloomberg].

FTC Moves Ahead With Broad Microsoft Antitrust Probe - The Federal Trade Commission is advancing its antitrust investigation into Microsoft, focusing on the company's AI operations and potential unfair advantages over competitors. [Bloomberg].

🎱 Random

One-Time Amazon Takeover Target iRobot Signals Doubt Over Its Future - iRobot, the maker of Roomba vacuum cleaners, has expressed substantial doubt about its ability to continue operations amid financial struggles and a failed acquisition by Amazon. [Bloomberg].

Microsoft Isn't Launching Its Xbox Handheld This Year, but Asus Might Be - While Microsoft has no plans to release an Xbox handheld device this year, Asus is reportedly developing a similar product, potentially filling the market gap. [The Verge].

🔌 Plug-Into-This

Google DeepMind has unveiled two advanced AI models, Gemini Robotics and Gemini Robotics-ER, designed to expand robots' abilities to perform complex, real-world tasks. These models enable robots to understand and interact with their environments more effectively, even in scenarios they haven't been specifically trained for.

  • Gemini Robotics, built on the Gemini 2.0 model, integrates vision, language, and action, allowing robots to comprehend diverse situations and execute precise tasks such as folding paper or unscrewing bottle caps.

  • Gemini Robotics-ER (Embodied Reasoning) enhances robots' complex reasoning capabilities, enabling them to perform tasks like efficiently packing a lunchbox by understanding spatial relationships and sequencing actions.

  • Safety is a priority, with the models trained to evaluate the safety of potential actions before execution, ensuring responsible and secure robot behavior.

  • Google DeepMind is collaborating with companies like Apptronik, Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools to test and refine these models, aiming to develop more intelligent and adaptable robots for various applications.

🦾 The biggest barrier to widespread robot adoption has been how narrow a set of tasks any given robot can handle well, which limits the payoff of developing and deploying them. These models mark a significant step toward robots that are more capable, responsive, and robust in dynamic environments.

Simon Willison shares his experiences and strategies for effectively utilizing Large Language Models (LLMs) in coding, addressing common challenges and misconceptions. He emphasizes that while LLMs can significantly enhance coding productivity, they require proper guidance and understanding to be truly effective.

  • Set Reasonable Expectations: LLMs function as advanced autocomplete systems, predicting token sequences to assist in coding. Viewing them as overconfident pair programming assistants can help users leverage their strengths while remaining cautious of potential inaccuracies.

  • Account for Training Cut-off Dates: LLMs are trained on data up to a specific date, limiting their knowledge of recent library updates or changes. Developers should be mindful of these limitations and provide current context when necessary.

  • Context Management: The effectiveness of LLMs heavily depends on the context provided. Structuring conversations thoughtfully and supplying relevant information can lead to more accurate and useful code generation.

  • Iterative Interaction: Engaging in a back-and-forth dialogue with the LLM allows for refining outputs. Users can request modifications, optimizations, or clarifications to improve the generated code incrementally.

  • Testing and Verification: It's crucial to rigorously test any code produced by LLMs. Developers must validate functionality and ensure the code meets the required standards before integration.

🛠️ Willison’s advice boils down to this: treat LLMs as collaborative tools rather than infallible solutions, and you can harness their strengths while maintaining code quality and reliability. A minimal sketch of that workflow follows below.
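That workflow translates naturally into code. The Python sketch below strings the pieces together: build a prompt that carries current project context, hand it to a model, and refuse to integrate the result until it passes concrete checks. The `ask_llm` helper, the `slugify` task, and the canned response it returns are all hypothetical placeholders rather than anything from Willison’s post; swap in whichever model client and task you actually use.

```python
# A minimal sketch of the workflow described above: supply current context,
# ask for a narrowly scoped function, and verify the output before trusting it.
# `ask_llm` is a hypothetical stand-in for a real model client, not a real API.

import textwrap


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call.

    Returns a canned response so the sketch stays self-contained; in practice
    this would send `prompt` to your model of choice and return its output.
    """
    return textwrap.dedent('''
        import re

        def slugify(text: str) -> str:
            words = re.findall(r"[a-z0-9]+", text.lower())
            return "-".join(words)
    ''')


def build_prompt(task: str, context: str) -> str:
    # Supply up-to-date project context explicitly: the model's training data
    # has a cut-off and may not reflect the library versions you actually run.
    return textwrap.dedent(f"""
        You are helping write Python for an existing project.

        Current project context (authoritative, overrides your training data):
        {context}

        Task: {task}
        Return only the function definition, no commentary.
    """).strip()


def run_quick_checks(namespace: dict) -> None:
    # Cheap, concrete verification before the generated code goes anywhere
    # near the real codebase.
    slugify = namespace["slugify"]
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("") == ""


if __name__ == "__main__":
    prompt = build_prompt(
        task="Write slugify(text: str) -> str: lowercase, strip punctuation, "
             "join words with single hyphens.",
        context="Python 3.12, standard library only, no third-party packages.",
    )
    generated = ask_llm(prompt)   # iterate: refine the prompt and retry if checks fail
    sandbox: dict = {}
    exec(generated, sandbox)      # illustration only; review generated code before running it
    run_quick_checks(sandbox)
    print("Generated code passed the quick checks; ready for human review.")
```

The canned response only keeps the example self-contained; the habit worth copying is that nothing the model produces reaches the codebase until the quick checks (or a proper test suite) have passed and a human has reviewed it.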

Snapchat has launched AI Video Lenses, a new feature that allows users to incorporate AI-generated elements into their videos, enhancing the platform's creative capabilities.

  • Exclusive to Platinum Subscribers: The AI Video Lenses are currently available only to Snapchat's Platinum subscribers, offering them unique tools to enrich their content.

  • Initial Lens Options: The feature debuts with three lenses: a fox that appears on the user's shoulder, raccoons that move around the user's head, and flowers that emerge in the foreground as the camera zooms out.

  • Weekly Updates: Snap plans to release new AI-powered lenses every week, continually providing users with fresh creative options.

  • In-House Generative Video Model: These lenses are powered by Snap's proprietary generative video model.

🤳 Snapchat has long offered “filters” that overlay comical effects on the user’s face (or anyone else on camera), but these are a level beyond. It’s a great example of a company using AI to improve an existing product and keep giving users novel experiences. On the whole, AI should enable this kind of steady “look at this cool new feature” cadence across the board.

🆕 Updates

📽️ Daily Demo

🗣️ Discourse