Access to OpenAI APIs May Soon Require Verified ID
Also, Storycraft raises $3m for social AI game that turns players into creators

⚡️ Headlines
🤖 AI
ByteDance joins reasoning AI race with Seed-Thinking-v1.5 aimed at rivaling Gemini and OpenAI - TikTok’s parent company debuts a Mixture-of-Experts language model that excels on AGI benchmarks. [VentureBeat]
Google links AI Overviews to its own search results to enhance discoverability - By embedding internal search links, Google aims to make its AI-generated summaries more interactive and navigable. [Search Engine Land]
Generative AI tools are training to spy for the U.S. military - DARPA is investing in AI models capable of surveillance analysis from satellite and drone imagery to boost intelligence capabilities. [MIT Technology Review]
OpenAI co-founder’s new startup Safe Superintelligence reportedly valued at $32B - Ilya Sutskever’s mysterious AI safety venture has attracted billions in funding and attention from top tech investors. [TechCrunch]
Netflix tests AI search engine to recommend shows and movies more intuitively - A new OpenAI-powered search feature may soon help users find content via natural conversation rather than typing titles. [Bloomberg]
ChatGPT hits 1 billion users, doubling in just a few weeks, says OpenAI CEO - The rapid surge underscores the accelerating mainstream adoption of generative AI tools. [Forbes]
Alphabet and Nvidia invest in Safe Superintelligence, with Alphabet as anchor TPU customer - The backing cements confidence in Ilya Sutskever’s safety-first AI approach amid increasing competition in the field. [Reuters]
Palantir is helping DOGE build a “mega API” for IRS data - The project would centralize taxpayer records and make them broadly accessible across government systems for tax investigations. [Wired]
Why ChatGPT might have better bedside manner than your doctor - Research shows AI responses are often more empathetic, informative, and emotionally attuned than those of human clinicians. [Bloomberg]
AI still struggles at debugging code, but researchers are working on it - Studies show large language models perform inconsistently at fixing programming errors, prompting calls for better training approaches. [Ars Technica]
🦾 Emerging Tech
Crypto lobby ramps up influence in Congress with massive campaign financing push - Cryptocurrency firms are dramatically increasing political donations to shape blockchain and tax policy. [The New York Times]
Apple, Google, and Cash App veterans leave Big Tech to build Bitcoin startups - Tech leaders pivot toward decentralized finance, launching new companies focused on Bitcoin-native applications. [CNBC]
🔬 Research
AI-guided ultrasound outperforms experts in diagnosing TB in underserved areas - A machine-learning tool analyzing point-of-care ultrasounds shows promise in detecting tuberculosis where resources are scarce. [AuntMinnie]
⚖ Legal
FTC sues Meta over monopolistic integration of Instagram and WhatsApp - The agency argues Meta has illegally stifled competition by weaving its apps too tightly together. [CNBC]
12 ex-OpenAI employees back Elon Musk in lawsuit against the company - In a rare amicus brief, former staff allege OpenAI deviated from its original nonprofit mission. [Fortune]
Jack Dorsey and Elon Musk want to eliminate all intellectual property law - The tech moguls argue current IP frameworks hinder innovation and must be overhauled. [TechCrunch]
🔌 Plug-Into-This
OpenAI will soon require API users to undergo government ID-based verification before accessing its most advanced AI models. The change, affecting future model releases, aims to enhance security and compliance by restricting access to verified organizations only.

Organizations must submit a government-issued ID, business details, and country of registration to gain “Verified” API access.
Verification is limited: one ID can only be used for a single organization every 90 days, and not all global entities are eligible.
The verification process is tied to API key visibility and access to models like GPT-5 and future multimodal systems.
This change follows increased scrutiny over AI misuse, including bot activity, disinformation, and jailbreaking.
The new policy is slated for gradual rollout, and existing customers are encouraged to complete verification preemptively; a quick programmatic check of which models your key can already see is sketched below.
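Because the gating is enforced at the model level, the practical question for developers is simply which model IDs their key can list and call. Here is a minimal sketch of that check using the official openai Python SDK; the gated model name is a placeholder, not a confirmed identifier.

```python
# Sketch: check which models the current API key / organization can access.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY in the environment.
# "gpt-5" is a placeholder for whatever gated model verification unlocks.
from openai import OpenAI

client = OpenAI()

available = {model.id for model in client.models.list()}
gated_model = "gpt-5"  # hypothetical gated model ID

if gated_model in available:
    print(f"{gated_model} is available to this organization.")
else:
    print(f"{gated_model} is not listed - verification may be required.")
```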
OpenAI released a new Verified Organization status as a new way for developers to unlock access to the most advanced models and capabilities on the platform, and to be ready for the "next exciting model release"
- Verification takes a few minutes and requires a valid…
— Tibor Blaho (@btibor91)
7:25 AM • Apr 12, 2025
🪪 This shift reflects OpenAI’s maturing posture as a platform provider—less open-access R&D lab, more gated enterprise infrastructure. It also subtly raises the bar for competitors, pressing them to formalize identity checks or risk becoming havens for unsanctioned activity.
Montreal startup Storycraft has secured $3 million in seed funding to develop a collaborative AI game platform where players co-create narrative worlds. The game merges generative AI with multiplayer design tools, encouraging emergent storytelling and user-generated content at scale.

Players can build characters, items, and environments using AI, then weave them into shareable interactive adventures.
The platform enables synchronous social gameplay with world-editing and quest creation integrated into the core loop.
During open alpha, users built thousands of narrative “worlds” and generated over 350,000 unique characters and items.
Investment was led by Drive Capital, with a focus on evolving gaming from content consumption to content generation.
The toolset supports multiplayer roleplay, world remixing, and even in-game storytelling competitions.
Think of it as Minecraft meets ChatGPT—with you and your friends crafting a fantasy universe as you play through it.
🕹️ Storycraft exemplifies the shift toward AI as a creative collaborator—not just an assistant—transforming gaming from predefined quests into co-authored worlds shaped by real-time imagination.
As LLMs stretch past million-token context windows, enterprises are evaluating whether these upgrades translate to meaningful gains. While longer memory can improve continuity, reasoning, and accuracy, the cost-performance trade-offs aren’t always clear-cut.

Models like MiniMax-Text-01 (4M tokens) and Gemini 1.5 Pro (2M) can ingest entire books or codebases in a single pass.
Larger context enables deep use cases: legal reviews, cross-document audits, full-code debugging, and longitudinal research.
However, studies show diminishing returns beyond 32K tokens, with models underutilizing vast input ranges.
Retrieval-Augmented Generation (RAG) often proves more efficient, dynamically surfacing relevant data rather than brute-forcing full context (a toy sketch of the retrieval step follows this list).
Infrastructure demands—especially GPU memory and latency—scale sharply with window size, challenging cost-effective deployment.
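To make the trade-off concrete, here is a toy sketch of the retrieval step RAG substitutes for full-context stuffing: chunk the corpus, score chunks against the query, and pass only the top hits to the model. The word-overlap scoring is a deliberately crude stand-in for a real embedding model.

```python
# Toy RAG-style retrieval: send the few most relevant chunks to the model
# instead of stuffing an entire corpus into a multi-million-token window.
# Word-overlap scoring stands in for a real embedding/similarity model.
from collections import Counter

def score(query: str, chunk: str) -> float:
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    overlap = sum((q & c).values())          # shared words, with multiplicity
    return overlap / (len(chunk.split()) ** 0.5 + 1e-9)  # length-normalized

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

corpus = [
    "Clause 14.2: either party may terminate with 90 days written notice.",
    "The quarterly report highlights GPU cost growth in the inference fleet.",
    "Termination for cause requires documented breach and a 30 day cure period.",
]

context = retrieve("What are the termination notice requirements?", corpus)
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)  # only ~2 chunks reach the model, not the whole corpus
```

In production the same pattern holds, just with an embedding index doing the scoring; the long-context window then only needs to hold the retrieved slice plus the conversation.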
Bigger isn't always better: Examining the business case for multi-million token LLMs
— VentureBeat (@VentureBeat)
7:30 PM • Apr 12, 2025
🔍 The future may not be either/or but hybrid: pairing massive windows with smart retrieval to balance holistic comprehension with operational efficiency—especially in high-stakes domains like finance and law.
🆕 Updates
BREAKING: xAI @grok rolling out Memory Feature, globally
— NIK (@ns123abc)
11:35 PM • Apr 11, 2025
Introducing TxGemma, a family of open models specifically tailored for health settings, building on top of Gemma and Gemini.
— Jeff Dean (@JeffDean)
5:36 AM • Apr 14, 2025
📣 Now available in GitHub and @code: GitHub Copilot code review will help you find bugs and potential performance problems, and even suggest fixes. ✅
Watch Copilot code review in action, and start getting feedback on your code right away. ⚡️
github.blog/changelog/2025…
— GitHub (@github)
2:24 AM • Apr 14, 2025
📽️ Daily Demo
Google has released the AI function in Sheets.
You can now use AI in your spreadsheets to generate text, analyze sentiment, or summarize and categorize information.
Here’s how:
— Alvaro Cintas (@dr_cintas)
4:59 PM • Apr 13, 2025
Weekend project: my 𝗢𝗽𝗲𝗻 𝗦𝗼𝘂𝗿𝗰𝗲 implementation of 𝗗𝗲𝗲𝗽 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗔𝗴𝗲𝗻𝘁 from scratch 👇
Few weeks ago I released an episode of my Newsletter and an update to the 𝗔𝗜 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀 𝗛𝗮𝗻𝗱𝗯𝗼𝗼𝗸 GitHub repository.
There I implemented a Deep
— Aurimas Griciūnas (@Aurimas_Gr)
2:06 PM • Apr 12, 2025
🗣️ Discourse
The release version of Llama 4 has been added to LMArena after it was found out they cheated, but you probably didn't see it because you have to scroll down to 32nd place which is where it ranks
— ρ:ɡeσn (@pigeon__s)
2:46 PM • Apr 11, 2025
👀 Google has an unreleased model named Dragontail that's outperforming everyone, even Gemini 2.5 Pro on WebDev arena. 🔥🔥
I am lately more excited about releases from Google than anyone else. Anyone else tested Dragontail?
— AshutoshShrivastava (@ai_for_success)
5:26 PM • Apr 12, 2025
🚨 BREAKING: Google just launched a new internet protocol.
It’s A2A.
Backed by 50+ partners including Salesforce, Atlassian, and SAP.
It’s designed to let AI agents collaborate across companies, platforms, and clouds.
Here’s what it means for the future of enterprise AI: 🧵
— Brendan (@jowettbrendan)
8:36 AM • Apr 13, 2025
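For context on what an “internet protocol for agents” means in practice: A2A has each agent publish a machine-readable card describing its skills and endpoint, which other agents fetch before delegating tasks. The sketch below shows only that discovery step; the card path follows the commonly cited /.well-known/agent.json convention and the field names are illustrative, so treat the details as approximate rather than the official spec.

```python
# Sketch of A2A-style agent discovery: fetch another agent's "Agent Card"
# before delegating work to it. Field names are illustrative, not the spec.
import json
import urllib.request

def fetch_agent_card(base_url: str) -> dict:
    """Fetch the agent's self-description from its well-known discovery path."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

if __name__ == "__main__":
    card = fetch_agent_card("https://agents.example.com")  # placeholder host
    print(card.get("name"))
    print([skill.get("name") for skill in card.get("skills", [])])
```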
True competition looks like this
Shouldn’t we be expecting drop downs like this everywhere? e.g. I should be able to pick my voice agent instead of being forced to use Siri or Alexa
We need PLATFORM NEUTRALITY - like net neutrality but for agents
— Garry Tan (@garrytan)
10:51 PM • Apr 13, 2025