Inception AI Launches New Kind of Diffusion Based Model
Also, Amazon launches an improved Alexa with a “model-agnostic” task completion method

⚡️ Headlines
🤖 AI
Amazon introduces Alexa+, a model-agnostic AI assistant integrating with Prime Video and Fire TV - Alexa+ can perform tasks like jumping to specific scenes in movies and ordering groceries, available for $19.99 per month or free for Prime members. [TechCrunch]
IBM expands Granite model family with new multi-modal and reasoning AI for enterprises - The Granite 3.2 models offer enhanced reasoning and vision capabilities under a developer-friendly license. [IBM Newsroom]
Nvidia's Q4 earnings surpass expectations with $39.33 billion in sales - Despite the strong performance, the stock experiences after-hours volatility; data center revenue rises 93% on demand for AI processors. [Investor's Business Daily]
Mira Murati's Thinking Machines Lab aims to raise $1 billion, valuing the startup at $9 billion - The former OpenAI CTO's venture focuses on accessible and customizable AI systems. [Business Insider]
🤳 Social Media
Instagram considers launching a separate app for Reels - This move aims to capitalize on TikTok's uncertain status in the U.S. by offering a similar video-scrolling experience. [Reuters]
Yope gains traction among Gen Z with an Instagram-like app for private groups - The platform fills a market gap by enabling photo sharing and chats within private groups. [TechCrunch]
YouTube surpasses 1 billion monthly podcast listeners - The platform's vast audience means podcasters need to produce quality video content to keep viewers engaged. [The Verge]
🔬 Research
Study reveals idiosyncrasies in large language models - Researchers uncover unique patterns in AI outputs, contributing to a better understanding of model behavior. [alphaXiv]
Arc Institute launches Virtual Cell Atlas with data from over 300 million cells - This open resource combines datasets like Vevo's Tahoe-100M to accelerate AI-driven biological discoveries. [Arc Institute]
⚖ Legal
U.S. investigates UK's demand for Apple to create encryption backdoor - Officials assess if the UK's actions violate a bilateral data treaty, following Apple's withdrawal of an encrypted storage feature for UK users. [Reuters]
🎱 Random
Curious Refuge acquired by Promise to advance AI storytelling - The partnership aims to create opportunities for innovative narratives in the evolving landscape of AI-driven content creation. [Curious Refuge Blog]
🔌 Plug-Into-This
Inception, a startup, has unveiled a new kind of AI model built on a diffusion architecture. Instead of generating text autoregressively, one token at a time like most large language models, this approach refines its output iteratively and aims to improve performance and efficiency across a range of AI applications.

Diffusion Architecture: Unlike conventional LLMs that predict tokens left to right, Inception's model uses a diffusion process that iteratively refines the whole output, which may offer advantages in generation speed and parallelism.
Potential Applications: The model is designed to improve tasks such as image and text generation, potentially leading to more accurate and contextually relevant outputs.
Performance Metrics: While specific benchmarks are not detailed, Inception claims that its diffusion-based model outperforms existing solutions in both speed and accuracy.
Energy Efficiency: The architecture is purported to consume less computational power, addressing sustainability concerns associated with large-scale AI deployments.
This is interesting as a first large diffusion-based LLM.
Most of the LLMs you've been seeing are ~clones as far as the core modeling approach goes. They're all trained "autoregressively", i.e. predicting tokens from left to right. Diffusion is different - it doesn't go left to… x.com/i/web/status/1…
— Andrej Karpathy (@karpathy)
1:31 AM • Feb 27, 2025
🌫️ If you’re familiar with AI image and video generation, you’ll recognize the word “diffusion”: the best visual models (e.g. Stable Diffusion) have used it as their method for a while. This release is unique in applying the same methodology to text, and a toy sketch of how that differs from standard token-by-token generation follows below. Let’s see how well it works!
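To make the contrast concrete, here is a deliberately toy Python sketch of the two decoding styles. Random scores stand in for a real model, and nothing here reflects Inception's actual implementation: the autoregressive generator commits to one token at a time, while the diffusion-style generator starts from a fully masked sequence and refines every position over a few passes.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def toy_model(tokens):
    """Stand-in for a trained model: assigns a random score to every word."""
    return {w: random.random() for w in VOCAB}

def generate_autoregressive(length=6):
    # Commit to one token at a time, strictly left to right.
    tokens = []
    for _ in range(length):
        scores = toy_model(tokens)
        tokens.append(max(scores, key=scores.get))
    return tokens

def generate_diffusion(length=6, steps=3):
    # Start from an all-masked sequence and refine every position in parallel,
    # keeping only confident guesses until the final pass fills in the rest.
    tokens = ["[MASK]"] * length
    for step in range(steps):
        threshold = 0.5 if step < steps - 1 else 0.0
        for i, tok in enumerate(tokens):
            if tok == "[MASK]":
                scores = toy_model(tokens)   # the whole sequence is visible
                best = max(scores, key=scores.get)
                if scores[best] >= threshold:
                    tokens[i] = best
    return tokens

print("autoregressive:", generate_autoregressive())
print("diffusion-style:", generate_diffusion())
```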
Amazon has introduced Alexa Plus, an advanced version of its voice assistant enhanced with generative AI capabilities. Priced at $19.99 per month, Alexa Plus is free for Amazon Prime members and aims to provide a more natural and intuitive user experience.

Enhanced Functionality: Alexa Plus can perform complex tasks such as ordering groceries, sending event invitations, and managing smart home devices with improved efficiency.
Natural Interactions: The assistant now supports more conversational interactions, allowing users to engage in back-and-forth dialogues without repeatedly using the wake word.
Visual and Contextual Awareness: Equipped with vision capabilities, Alexa Plus can analyze images and videos, enabling features like identifying whether someone has walked the dog by reviewing home camera footage.
Device Compatibility: While compatible with most Alexa devices, Alexa Plus will not support certain older models, including first-generation Echo and Echo Dot devices.
Integration with AI Models: Alexa Plus utilizes Amazon's own Nova model alongside models from companies like Anthropic, selecting the best AI model for each specific task (a rough sketch of this routing pattern appears below).
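Amazon hasn't published how Alexa+ chooses a model, so the sketch below is a hypothetical illustration of the general “model-agnostic router” pattern the bullets describe; the backend names, task categories, and keyword classifier are all placeholder assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical model-agnostic router: nothing here reflects Amazon's actual
# implementation; backends, categories, and rules are illustrative placeholders.

@dataclass
class ModelBackend:
    name: str
    handles: set[str]             # task categories this backend is preferred for
    run: Callable[[str], str]     # callable that actually answers the request

def classify(request: str) -> str:
    """Crude keyword classifier; a real assistant would use a model for this."""
    text = request.lower()
    if "scene" in text or "play" in text:
        return "media"
    if "order" in text or "groceries" in text:
        return "commerce"
    return "chat"

BACKENDS = [
    ModelBackend("nova",    {"media", "commerce"}, lambda r: f"[nova] handling: {r}"),
    ModelBackend("partner", {"chat"},              lambda r: f"[partner model] handling: {r}"),
]

def route(request: str) -> str:
    task = classify(request)
    backend = next((b for b in BACKENDS if task in b.handles), BACKENDS[0])
    return backend.run(request)

print(route("Jump to the scene where the heist starts"))
print(route("Order groceries for taco night"))
print(route("What should I make for dinner?"))
```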
The new Alexa+ is basically ChatGPT Voice Mode on steroids, with personality, context and memory of past conversations and people. It’s extremely impressive. It is frightening how far behind Apple is in this space.
— Mark Gurman (@markgurman)
3:31 PM • Feb 26, 2025
🤖 Early demos look pretty impressive as Amazon tries to deliver what many of us hoped Siri would have been all along. Advanced home assistants have always been “on their way”; maybe this new version of Alexa will set a new standard.
FLORA is a newly launched node-based AI canvas designed to revolutionize content creation for professionals. It integrates advanced AI generators, enabling users to seamlessly produce images, videos, and character designs within a structured workflow.

Story Analysis and Prompt Generation: FLORA analyzes user-provided narratives to automatically generate prompts for visual content, streamlining the creative process for filmmakers, game designers, and artists.
Intelligent Character Design: Users can describe a character, and FLORA will create detailed prompts compatible with AI image generators like Flux Pro, Kling Pro, Minimax, Luma, and Hailuo.
Versatile Image and Video Creation: The platform allows for the generation of high-quality images from text prompts, transformation of images into text prompts for consistent styling, and connection of image nodes to video generators to animate visuals based on user inputs (the node-graph idea is sketched after this list).
Style Extraction and Application: FLORA can analyze existing images to extract styles, which users can then apply to new objects or environments, facilitating cohesive visual themes across projects.
Enhanced Color and Animation Features: Users can modify color schemes using simple references, convert images into blueprint-like visuals, and generate animations of morphing objects, benefiting motion design and storytelling efforts.
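FLORA hasn't documented its internals publicly, so here is a minimal, generic sketch of what a node-based canvas like the one described above might look like; the node types, chaining API, and placeholder generators are assumptions for illustration only.

```python
# Generic sketch of a node-based creative canvas. FLORA's actual internals are
# not public, so the node types and wiring here are illustrative assumptions.

class Node:
    def __init__(self, name, fn):
        self.name, self.fn, self.inputs = name, fn, []

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self                      # allow chaining

    def run(self):
        upstream_outputs = [n.run() for n in self.inputs]
        return self.fn(*upstream_outputs)

# Placeholder "generators": in a real canvas these would call image/video models.
story   = Node("story",   lambda: "a heist set in a rain-soaked neon city")
prompts = Node("prompts", lambda s: f"wide shot, {s}, cinematic lighting").connect(story)
image   = Node("image",   lambda p: f"<image generated from: {p}>").connect(prompts)
video   = Node("video",   lambda img: f"<video animated from {img}>").connect(image)

print(video.run())
```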
this new AI is crazy…
FLORA just dropped a node based AI canvas.
it can analyse your story and generate prompts for your shots, so you can create images and videos directly and..
it can even design characters for your film
step by step tutorial:
— el.cine (@EHuanglu)
5:21 PM • Feb 26, 2025
🎨 FLORA's comprehensive ecosystem empowers creative professionals to move smoothly from ideation to execution, pushing the boundaries of AI-assisted artistry.
🆕 Updates
Today, we’re releasing Octave: the first LLM built for text-to-speech.
🎨Design any voice with a prompt
🎬 Give acting instructions to control emotion and delivery (sarcasm, whispering, etc.)
🛠️Produce long-form content on our Creator Studio
Unlike traditional TTS that just… x.com/i/web/status/1…
— Hume (@hume_ai)
7:34 PM • Feb 26, 2025
Introducing Perplexity's new voice mode.
Ask any question. Hear real-time answers.
Update your iOS app to start using. Coming soon to Android and Mac app.
— Perplexity (@perplexity_ai)
4:36 PM • Feb 26, 2025
📽️ Daily Demo
🗣️ Discourse
I need to talk about this. 🚨
Just tried FLORA, and… mind-blowing
This isn’t hype. This is a total workflow revolution.
— Max ✦ (@MaxVOAO)
5:41 PM • Feb 26, 2025
This changes everything.
Hume AI just dropped Octave, first LLM-powered text-to-speech AI that sounds truly human.
Think ChatGPT Voice, ElevenLabs, and pro voice actors combined.
🧵👇
— Min Choi (@minchoi)
8:57 PM • Feb 26, 2025
We're so excited to announce that Suno will be coming to Alexa+, so you can make any song you can imagine with the next generation of @AmazonAlexa! 🎉
What songs will you make with your Alexa?
— Suno (@SunoMusic)
10:59 PM • Feb 26, 2025