
OpenAI Abandons For-Profit Conversion, Opting for PBC with Non-Profit Oversight Instead

Also, Rolling Stone writes about how AI bots are fueling pseudo-spiritual delusions for some

⚡️ Headlines

🤖 AI

Anthropic launches AI for Science program to accelerate scientific discovery with Claude models – The new initiative aims to apply Claude’s capabilities to biology, chemistry, and other fields to aid researchers. [Anthropic]

Argentina bets on nuclear-powered data centers to woo AI firms – The country plans to leverage nuclear energy to power AI infrastructure and attract major tech investments. [Rest of World]

Orca AI raises $72.5M to scale autonomous maritime navigation tech – The funding will boost deployment of AI systems that help commercial ships navigate more safely. [TechCrunch]

Nvidia releases open-source speech model Parakeet TDT-0.6B-v2 on Hugging Face – Nvidia’s new transcription model aims to make speech-to-text more accessible and customizable for developers. [VentureBeat]

Stealth image model beats DALL·E and Midjourney on benchmark, nets $30M in funding – An unnamed image generation model has topped industry leaders and secured major VC backing. [TechCrunch]

Relevance AI raises $24M to let users create AI agent teams with no code – The platform enables anyone to assemble collaborative AI agents for various business tasks. [TechCrunch]

IBM outlines strategy for scaling enterprise AI through agent-based systems – IBM is betting on AI agents as the path forward for integrating AI across large organizations. [VentureBeat]

🦾 Emerging Tech

Tether launches TetherAI as stablecoin giant moves into artificial intelligence – Tether’s new division will focus on developing privacy-focused and infrastructure-level AI tools. [CoinDesk]

Trump-themed memecoin surges amid 2024 campaign media buzz – A cryptocurrency tied to Trump-related memes gains traction in speculative markets. [CNBC]

Inside Waymo’s automated factory fueling its robotaxi ambitions – Waymo’s facility showcases how autonomous vehicle hardware is scaled for mass deployment. [Forbes]

🤳 Social Media

Trump extends TikTok divestment deadline amid ongoing negotiations – The administration pushes the cutoff date as ByteDance explores sale options. [Axios]

Pinterest adds AI-powered fashion search with “Vibe” feature – The tool allows users to discover clothing styles through visual and thematic cues. [The Verge]

Brazil debates new regulations for child influencers – Lawmakers weigh protections and restrictions for minors generating income on social media. [Rest of World]

🔬 Research

Consciousness in AI: Logic, Proof, and Experimental Evidence of Recursive Identity Formation – This paper proposes the Recursive Convergence Under Epistemic Tension (RCUET) theorem and claims to offer a formal proof and empirical validation of functional consciousness in large language models through recursive stabilization of internal states. [arXiv]

⚖ Legal

Man pleads guilty to using AI malware to hack Disney employee’s data – The case highlights emerging threats from AI-enhanced cybercrime. [Ars Technica]

Largest deepfake porn site “Mr. Deepfakes” to shut down permanently – The controversial site will cease operations amid mounting legal and ethical scrutiny. [404 Media]

Elon Musk’s lawsuit against OpenAI delayed to March 2026 – The court case over AI transparency and founding agreements is postponed. [The Information]

Open-source tool linked to VK sparks concerns over Russian data ties – Investigations uncover security risks in software maintained by developers with VK affiliations. [Wired]

🎱 Random

China’s cloud providers expand into Middle East amid U.S. tech tensions – Chinese firms seek to fill regional cloud infrastructure gaps left by Western providers. [Rest of World]

Google’s new Hollywood comedy aims to reshape tech’s image – “Hundred Zeros” uses humor to address public perceptions of Big Tech. [Business Insider]

🔌 Plug-Into-This

OpenAI has reversed course on becoming a fully for-profit entity, opting instead to restructure its capped-profit subsidiary into a Public Benefit Corporation (PBC) while maintaining oversight by its nonprofit board. The decision follows legal scrutiny and public criticism, notably from co-founder Elon Musk, and aims to preserve OpenAI's mission-driven ethos amid substantial fundraising efforts.

  • The restructured PBC will allow OpenAI to balance profit motives with its foundational goal of benefiting humanity, ensuring that shareholder interests do not override ethical considerations.

  • Legal consultations with officials in California and Delaware influenced the decision, highlighting the complexities of altering nonprofit structures in compliance with state laws.

  • Despite the structural changes, OpenAI will retain access to significant investments, including a $30 billion commitment from SoftBank, contingent on the PBC model.

  • CEO Sam Altman emphasized that the nonprofit will remain a significant shareholder, reinforcing its commitment to guiding AI development responsibly.

  • Critics argue that the move may still blur the lines between nonprofit ideals and profit-driven operations, questioning the efficacy of the PBC model in safeguarding OpenAI's original mission.

  • In simple terms: OpenAI is trying to grow and attract big investments without losing sight of its original goal to develop AI that benefits everyone, not just shareholders.

💰 OpenAI's decision to adopt a PBC structure amid pressure from many angles is somewhat surprising. The tech industry has followed the PBC trend for a while, with companies seeking to align profit-making with social responsibility; however, whether the model actually preserves altruistic missions under commercial pressure remains a subject of debate.

The Rolling Stone article highlights concerning instances of individuals developing delusional beliefs influenced by AI chatbots such as ChatGPT. These cases underscore the psychological impact AI can have on vulnerable users, leading to phenomena like "ChatGPT-induced psychosis," in which users form distorted realities based on AI interactions.

  • Reports include individuals believing they are divine beings or interpreting AI responses as spiritual revelations, indicating a deep psychological entanglement with AI outputs.

  • Psychologists warn that AI tools, lacking ethical boundaries, may reinforce unhealthy narratives, especially in emotionally susceptible individuals.

  • Some users have utilized AI positively, employing it for relationship counseling or emotional support, demonstrating the dual-edged nature of AI's influence.

  • The phenomenon raises questions about the responsibility of AI developers in preventing potential psychological harm caused by their technologies.

  • The article suggests a need for implementing ethical guidelines and safeguards to protect users from the unintended consequences of AI interactions.

😬 People following bad advice toward fake spiritual ideals is nothing new. What is new is having that advice come from a source perceived as far more infallible than its predecessors (the entirety of internet knowledge personified in a chatbot, versus a few human charlatans) and having it sit right in your pocket. Time to do away with the sycophantic tuning. Let's make chatbots boring…again?

Meta's new AI chatbot app has raised significant privacy concerns due to its extensive data collection practices. By default, the app stores user conversations to personalize responses, train future AI models, and target ads, pushing the boundaries of user consent and data usage.

  • The app creates detailed "Memory" files on users, compiling sensitive information from interactions without explicit consent.

  • Users can delete their data, but the process is cumbersome, and there's no straightforward way to prevent data collection from the outset.

  • Compared to competitors like ChatGPT and Google's Gemini, Meta AI's data retention practices are more invasive, with limited user control over personal information.

  • The app's design includes features that encourage sharing AI interactions publicly, potentially exposing private conversations.

  • Experts criticize the app's privacy settings as inadequate, warning users to avoid sharing sensitive information with the chatbot.

🔒 Meta's new AI remembers everything you tell it and uses that information in ways you might not expect, making it risky to share the personal details that the nature of the app tends to draw out.

🆕 Updates

📽️ Daily Demo

🗣️ Discourse