OpenAI Soon To Release First Open Weights Model Since GPT-2
Also, Kling 2.0’s release brings a comprehensive upgrade to a leading AI video option

⚡️ Headlines
🤖 AI
Google unveils Veo 2, its new AI video generation model for Gemini Advanced – Google introduces Veo 2, a high-end AI video creation tool available through its Gemini Advanced platform and Whisk. [The Verge]
Simon Willison explores everything new in GPT-4.1 – A detailed breakdown of GPT-4.1’s features, improvements, and real-world testing. [Simon Willison]
New benchmark ranks AI chatbots by how freely they discuss controversial topics – Researchers release a metric for evaluating chatbot openness on sensitive issues. [TechCrunch]
Researchers propose fix for persistent AI security loophole – A new approach aims to close a well-known vulnerability in AI models. [Ars Technica]
Cohere’s Embed 4 enables multimodal search across massive documents – Embed 4 lets users semantically search through 200-page documents using text and images. [VentureBeat]
TSMC edges closer to next-gen chip packaging for Nvidia and Google AI – TSMC advances on cutting-edge packaging that will support future AI hardware. [Nikkei Asia]
Telli raises funding for conversational AI voice agents – Y Combinator alum Telli secures pre-seed investment for its AI-powered voice assistants. [TechCrunch]
Microsoft study reveals more tokens in AI prompts can reduce reasoning accuracy – Adding more tokens to a prompt may confuse rather than clarify AI reasoning. [VentureBeat]
Copilot Studio brings custom AI workflows to everyday users – Microsoft launches a platform to help non-coders build AI-powered automation. [The Verge]
OpenAI may loosen safety rules if rivals release dangerous AI – OpenAI hints at modifying its safety stance depending on competitor actions. [TechCrunch]
Anthropic developing voice mode for Claude AI – Claude AI is set to gain a voice interface, expanding its multimodal capabilities. [The Verge]
Google suspends 39 million ad accounts using AI fraud detection – AI tools help Google crack down on massive ad fraud operations. [TechCrunch]
🤳 Social Media
Patreon enters livestreaming to rival Twitch and YouTube – Patreon rolls out livestreaming features to help creators monetize video content. [The Verge]
YouTube expands “Hype” feature and changes mid-roll ad placements – YouTube broadens its “Hype” support and updates ad break positioning. [Social Media Today]
🔬 Research
Paper explores cross-modal in-context learning using vision-language models – Researchers demonstrate that vision-language models can perform multimodal reasoning without fine-tuning. [arXiv]
New study introduces “Segment and Prompt” for multimodal generalization – A new method improves generalization by segmenting data and applying prompt-based learning. [arXiv]
🎱 Random
Android Auto gets restart option to boost security and stability – A new restart feature gives Android Auto users better control and resilience. [9to5Google]
🔌 Plug-Into-This
OpenAI will release a new open-weights language model—the company’s first since GPT-2—signaling a strategic response to mounting pressure from fast-evolving open-source alternatives. Sam Altman now publicly acknowledges that OpenAI may have misjudged the importance of openness in building long-term developer trust and innovation ecosystems.

The upcoming model is expected to be smaller than GPT-4-class offerings but optimized for performance and accessibility.
Altman described the company’s previous stance as being “on the wrong side of history,” indicating a cultural shift within leadership.
OpenAI’s pivot aligns with intensifying competition from models like DeepSeek’s R1 and Meta’s LLaMA 4, both open-weight models gaining traction with developers.
Open-weight models offer clear enterprise value: reduced inference costs, easier fine-tuning, and flexible deployment in secure environments.
This shift may enable OpenAI to re-engage open-source researchers and reclaim influence in academic and experimental AI circles.
🤔 OpenAI is finally opening up its AI again—like it did in the early days—probably because the open-source crowd is moving fast and winning over developers. This move reflects a strategic recalibration: OpenAI no longer dominates the narrative alone.
Kling’s latest update adds major enhancements to its AI video generation platform, focusing on realism, controllability, and real-time responsiveness. As the platform seeks broader creative adoption, these changes tighten its competitive positioning against Runway and Pika Labs.

A new “Hyper-Real” rendering mode dramatically improves motion continuity and facial coherence in longer clips.
Real-time directional input allows users to modify a scene's movement or camera angle as it's being generated.
Temporal consistency upgrades smooth out visual jitter between frames, especially in fast-action sequences.
Integration with voice-over and dialogue timing features makes it easier to sync narrative audio to video generation.
Kling Studio API now enables developers to plug Kling into external creative pipelines or live performance environments.
AI video quality just 10x’d overnight. I’m speechless.
Kling 2.0 just dropped and I’ve already burned through $1,250 in credits testing its limits.
I’ve never seen motion this fluid or prompts this accurate.
Here’s exactly how I made this video, step-by-step 👇🧵
— PJ Ace (@PJaccetturo)
7:50 AM • Apr 15, 2025
🎨 Kling is quietly pushing the frontier of “creative latency”—shrinking the gap between imagination and rendered output. Its tools are increasingly geared toward dynamic, professional use, not just short-form content demos.
OpenAI is developing a social media platform to rival X (formerly Twitter), incorporating image generation tools from ChatGPT into a real-time content feed. The prototype, currently in internal testing, underscores OpenAI’s ambitions to create a vertically integrated data and distribution ecosystem.

The product may include tools to generate and share images within a social timeline, effectively merging AI creation and social interaction.
It remains unclear whether this will be a standalone app or integrated into ChatGPT’s interface.
Sam Altman is seeking feedback from trusted creators and community members, suggesting an iterative design approach.
The move positions OpenAI to harvest real-time user data—something X and Meta already leverage to train AI systems.
This would mark a significant expansion beyond tools into platform territory, enabling OpenAI to own both content and context.
SCOOP: OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter. While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed.
— Kylie Robison (@kyliebytes)
3:48 PM • Apr 15, 2025
🕸️ This is likely a strategic land grab for data and distribution. With language and image generation increasingly commoditized, the next frontier is owning the venue where AI-native content is created and consumed.
🆕 Updates
1/ Today, Veo 2, our state-of-the-art video model, is rolling out to Gemini Advanced + Whisk!
You can create 8s, high-res videos from text prompts in @GeminiApp with fluid character movement + lifelike scenes across a range of styles. Tip: the more detailed your description, the
— Sundar Pichai (@sundarpichai)
5:10 PM • Apr 15, 2025
Research represents a new way of working with Claude.
It explores multiple angles of your question, conducting searches and delivering answers in minutes.
The right balance of depth and speed for your daily work.
— Anthropic (@AnthropicAI)
5:12 PM • Apr 15, 2025
All of your image creations, all in one place.
Introducing the new library for your ChatGPT image creations—rolling out now to all Free, Plus, and Pro users on mobile and chatgpt.com.
— OpenAI (@OpenAI)
9:22 PM • Apr 15, 2025
📽️ Daily Demo
When Ilya Sutskever once explained why next-word prediction leads to intelligence, he made a metaphor: if you can piece together the clues and deduce the criminal’s name on the last page, you have a real understanding of the story. 🕵️♂️
Inspired by that idea, we turned to Ace
— Hao AI Lab (@haoailab)
7:47 PM • Apr 15, 2025
🗣️ Discourse
Can’t get GPT-5 to work? Don’t despair, you can always find a different way to invade people’s privacy and snatch people’s data: social media!
Told you OpenAI was going to become a surveillance company.
— Gary Marcus (@GaryMarcus)
4:54 PM • Apr 15, 2025
This is crazy...
Kling 2.0 just dropped and it's insane!
AI video quality just 10x’d overnight…
10 insane examples:
— Angry Tom (@AngryTomtweets)
9:39 AM • Apr 15, 2025
OpenAI published their official GPT-4.1 prompting guide, and I summarized it into these 13 practical tips to help you get the most out of the new model.
— GREG ISENBERG (@gregisenberg)
12:13 AM • Apr 16, 2025
OpenAI is building a social media platform. This is probably partially motivated by the Elon feud, but the main long term benefit is data. Elon and Zuck have a constant source of training data from social media, if this succeeds OAI will as well.
— Andrew Curran (@AndrewCurran_)
4:32 PM • Apr 15, 2025
‘Agent’ might be the most misused term in tech right now
Here's what separates real agents from glorified chatbots:
At their core, AI agents are LLMs with a specific role and task that have access to memory and external tools. They use reasoning capabilities to plan steps and
— Victoria Slocum (@victorialslocum)
11:00 AM • Apr 15, 2025
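That definition of an agent (an LLM given a role and task, with memory and external tools, reasoning its way through steps) can be sketched as a minimal loop. Everything here is a hypothetical illustration, not any particular framework's API: the `stub_model` function stands in for a real LLM call, and `calculator` is a placeholder tool.

```python
# Minimal sketch of the agent loop described above. A real agent would
# replace stub_model with an actual LLM call; the tool names and decision
# format are hypothetical.

def calculator(expression: str) -> str:
    """External tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(role: str, memory: list[str], task: str) -> dict:
    """Stand-in for an LLM: picks the next action given role, memory, task.
    A real implementation would prompt a model with all three."""
    if not any(m.startswith("observation:") for m in memory):
        return {"action": "calculator", "input": "6 * 7"}
    # A tool result is already in memory, so finish with it.
    return {"action": "finish", "input": memory[-1].split(": ")[1]}

def run_agent(role: str, task: str, max_steps: int = 5) -> str:
    memory: list[str] = []                      # the agent's working memory
    for _ in range(max_steps):
        decision = stub_model(role, memory, task)
        if decision["action"] == "finish":      # agent decides it is done
            return decision["input"]
        tool = TOOLS[decision["action"]]        # dispatch to an external tool
        result = tool(decision["input"])
        memory.append(f"observation: {result}") # store the result in memory
    return "gave up"

print(run_agent("math assistant", "What is 6 * 7?"))  # → 42
```

The loop is the whole distinction the tweet draws: a chatbot answers in one shot, while an agent iterates, observes tool results, and decides when to stop.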