OpenAI Soon To Release First Open-Weights Model Since GPT-2

Also, Kling 2.0’s release brings a comprehensive upgrade to a leading AI video option

⚡️ Headlines

🤖 AI

Google unveils Veo, its new AI video generation model exclusive to Gemini Advanced – Google introduces Veo, a high-end AI video creation tool available through its Gemini Advanced platform. [The Verge]

Simon Willison explores everything new in GPT-4.1 – A detailed breakdown of GPT-4.1’s features, improvements, and real-world testing. [Simon Willison]

New benchmark ranks AI chatbots by how freely they discuss controversial topics – Researchers release a metric for evaluating chatbot openness on sensitive issues. [TechCrunch]

Researchers propose fix for persistent AI security loophole – A new approach aims to close a well-known vulnerability in AI models. [Ars Technica]

Cohere’s Embed 4 enables multimodal search across massive documents – Embed 4 lets users semantically search through 200-page documents using text and images. [VentureBeat]

TSMC edges closer to next-gen chip packaging for Nvidia and Google AI – TSMC advances on cutting-edge packaging that will support future AI hardware. [Nikkei Asia]

Telli raises funding for conversational AI voice agents – Y Combinator alum Telli secures pre-seed investment for its AI-powered voice assistants. [TechCrunch]

Microsoft study reveals more tokens in AI prompts can reduce reasoning accuracy – Adding more tokens to a prompt may confuse rather than clarify AI reasoning. [VentureBeat]

Copilot Studio brings custom AI workflows to everyday users – Microsoft launches a platform to help non-coders build AI-powered automation. [The Verge]

OpenAI may loosen safety rules if rivals release dangerous AI – OpenAI hints at modifying its safety stance depending on competitor actions. [TechCrunch]

Anthropic developing voice mode for Claude AI – Claude AI is set to gain a voice interface, expanding its multimodal capabilities. [The Verge]

Google suspends 39 million ad accounts using AI fraud detection – AI tools help Google crack down on massive ad fraud operations. [TechCrunch]

🤳 Social Media

Patreon enters livestreaming to rival Twitch and YouTube – Patreon rolls out livestreaming features to help creators monetize video content. [The Verge]

YouTube expands “Hype” feature and changes mid-roll ad placements – YouTube broadens its “Hype” support and updates ad break positioning. [Social Media Today]

🔬 Research

Paper explores cross-modal in-context learning using vision-language models – Researchers demonstrate that vision-language models can perform multimodal reasoning without fine-tuning. [arXiv]

New study introduces “Segment and Prompt” for multimodal generalization – A new method improves generalization by segmenting data and applying prompt-based learning. [arXiv]

🎱 Random

Android Auto gets restart option to boost security and stability – A new restart feature gives Android Auto users better control and resilience. [9to5Google]

🔌 Plug-Into-This

OpenAI will release a new open-weights language model—the company’s first since GPT-2—signaling a strategic response to mounting pressure from fast-evolving open-source alternatives. Sam Altman now publicly acknowledges that OpenAI may have misjudged the importance of openness in building long-term developer trust and innovation ecosystems.

  • The upcoming model is expected to be smaller than GPT-4-class offerings but optimized for performance and accessibility.

  • Altman described the company’s previous stance as being “on the wrong side of history,” indicating a cultural shift within leadership.

  • OpenAI’s pivot aligns with intensifying competition from models like DeepSeek’s R1 and Meta’s Llama 4, both open-weight models gaining traction with developers.

  • Open-weight models offer clear enterprise value: reduced inference costs, easier fine-tuning, and flexible deployment in secure environments.

  • This shift may enable OpenAI to re-engage open-source researchers and reclaim influence in academic and experimental AI circles.
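The enterprise points above (reduced inference costs, flexible deployment) come down to teams being able to size their own hardware once weights are in hand. A back-of-envelope sketch of that sizing, where the 20% overhead factor and the 7B example model are illustrative assumptions, not figures from the announcement:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold model weights.

    params_billion: parameter count in billions (e.g. 7 for a 7B model).
    bits_per_param: stored precision (16 = fp16, 8 = int8, 4 = int4).
    """
    bytes_per_param = bits_per_param / 8
    # 1e9 params * bytes-per-param == gigabytes
    return params_billion * bytes_per_param

def deployment_estimate_gb(params_billion: float, bits_per_param: int,
                           overhead: float = 0.2) -> float:
    """Add a rough allowance for KV cache and activations (assumed 20%)."""
    return weight_memory_gb(params_billion, bits_per_param) * (1 + overhead)

# A hypothetical 7B open-weights model at three common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{deployment_estimate_gb(7, bits):.1f} GB")
```

This is why quantization matters commercially: the same open model that needs a datacenter GPU at fp16 can fit on a single consumer card at int4, which hosted-API pricing never exposes.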

🤔 OpenAI is finally opening up its AI again—like it did in the early days—probably because the open-source crowd is moving fast and winning over developers. This move reflects a strategic recalibration: OpenAI no longer dominates the narrative alone.

Kling’s latest update adds major enhancements to its AI video generation platform, focusing on realism, controllability, and real-time responsiveness. As the platform seeks broader creative adoption, these changes tighten its competitive positioning against Runway and Pika Labs.

  • A new “Hyper-Real” rendering mode dramatically improves motion continuity and facial coherence in longer clips.

  • Real-time directional input allows users to modify a scene’s movement or camera angle as it’s being generated.

  • Temporal consistency upgrades smooth out visual jitter between frames, especially in fast-action sequences.

  • Integration with voice-over and dialogue timing features makes it easier to sync narrative audio to video generation.

  • Kling Studio API now enables developers to plug Kling into external creative pipelines or live performance environments.

🎨 Kling is quietly pushing the frontier of “creative latency”—shrinking the gap between imagination and rendered output. Its tools are increasingly geared toward dynamic, professional use, not just short-form content demos.

OpenAI is developing a social media platform to rival X (formerly Twitter), incorporating image generation tools from ChatGPT into a real-time content feed. The prototype, currently in internal testing, underscores OpenAI’s ambitions to create a vertically integrated data and distribution ecosystem.

  • The product may include tools to generate and share images within a social timeline, effectively merging AI creation and social interaction.

  • It remains unclear whether this will be a standalone app or integrated into ChatGPT’s interface.

  • Sam Altman is seeking feedback from trusted creators and community members, suggesting an iterative design approach.

  • The move positions OpenAI to harvest real-time user data—something X and Meta already leverage to train AI systems.

  • This would mark a significant expansion beyond tools into platform territory, enabling OpenAI to own both content and context.

🕸️ This is likely a strategic land grab for data and distribution. With language and image generation increasingly commoditized, the next frontier is owning the venue where AI-native content is created and consumed.

🆕 Updates

📽️ Daily Demo

🗣️ Discourse