The Current ⚡️
As VP Vance Tours Europe Touting Benefits of Less AI Regulation, US States Are Moving Independently Towards Implementing Regulations Modeled After The EU’s
Also, a True Crime YouTube channel with millions of views was recently taken down after it was found to be entirely AI-generated

⚡️ Headlines
🤖 AI
Anthropic Projects Soaring Growth to $34.5 Billion in 2027 Revenue - AI startup Anthropic forecasts its revenue could reach $34.5 billion by 2027, with plans to reduce cash burn and achieve profitability within the same timeframe. [The Information].
Smart Bots: China's Sex Doll Makers Jump on AI Drive - Chinese manufacturers, like WMDOLL, are integrating AI into sex dolls, enabling features from simple conversation to moving body parts, reflecting China's push to embed AI across various industries. [Reuters].
OpenAI Whistleblower's Death Deemed Suicide, Autopsy Reveals - San Francisco's medical examiner has ruled the death of former OpenAI researcher and whistleblower Suchir Balaji a suicide, following an autopsy confirming a self-inflicted gunshot wound. [Forbes].
Apple Aims to Boost Vision Pro with AI Features, Spatial Content App - Apple plans to enhance its Vision Pro headset by incorporating Apple Intelligence, introducing a guest user mode, and adding a spatial content app, aiming to integrate AI features into its devices. [Bloomberg].
The Inside Story of How Altman and Musk Went From Friends to Foes - Sam Altman and Elon Musk, co-founders of OpenAI, have experienced a deteriorating relationship since Musk's departure in 2018, leading to public disputes and legal battles over the company's direction. [The Wall Street Journal].
Elon Musk-Led Group Makes $97.4 Billion Bid for Control of OpenAI - Elon Musk and a group of investors have offered $97.4 billion to acquire control of OpenAI, intensifying tensions with CEO Sam Altman and complicating the company's plans to transition to a for-profit entity. [The New York Times].
🦾 Emerging Tech
Apple and Meta Are Set to Battle Over New Area: Humanoid Robots - Apple and Meta are entering the humanoid robot market, investing in AI-powered robots capable of performing household tasks, signaling a new area of competition between tech giants. [Bloomberg].
Javier Milei Backtracks on $4.4B Memecoin After 'Insiders' Pocket $87M - Argentine President Javier Milei has retracted his promotion of the memecoin Libra, which briefly reached a $4.4 billion market cap before collapsing, after insiders reportedly profited $87 million. [CoinDesk].
⚖ Legal
Meta Is Ready to Bring Trump Into Play in Fight Against EU Rules - Meta Platforms is prepared to involve U.S. President Donald Trump in its battle against European Union regulations, aiming to leverage political influence to counteract stringent EU tech policies. [Bloomberg].
Court Filings Show Meta Paused Efforts to License Books for AI Training - Legal documents reveal that Meta halted its initiatives to license books for AI training purposes in early 2023 due to timing and other challenges, impacting its AI development strategies. [TechCrunch].
🎱 Random
Broadcom, TSMC Eye Possible Intel Deals That Would Split Storied Chip Maker - Broadcom is interested in acquiring Intel's chip-design business, while TSMC is considering purchasing Intel's manufacturing facilities, potentially leading to a division of the historic chip maker. [The Wall Street Journal].
The Loneliness Epidemic Is a Security Crisis - The rising loneliness epidemic is contributing to an increase in romance scams, with criminals exploiting isolated individuals, leading to significant financial and emotional harm. [WIRED].
🔌 Plug-Into-This
The European Union's Artificial Intelligence Act (AI Act) is poised to influence AI regulation beyond Europe, with several U.S. states introducing legislation that mirrors its focus on preventing algorithmic discrimination. These state-level bills aim to regulate the use of AI in automated decision-making, particularly in sectors like employment, education, and financial services.

Scope of State Legislation: The proposed bills target "algorithmic discrimination" by imposing requirements on AI systems used as substantial factors in significant decisions affecting consumers' access to services. This includes mandates for risk management plans and algorithmic impact assessments.
Definitions and Interpretations: Key terms such as "substantial factor" and "consequential decision" are pivotal, as their interpretations determine the breadth of the laws' applicability. For instance, using AI tools like ChatGPT for resume screening could fall under these regulations, depending on how these terms are defined.
Comparison to EU AI Act: Similar to the EU AI Act, which categorizes AI systems based on risk levels and imposes corresponding obligations, these U.S. state bills seek to preemptively address potential biases and discrimination in AI applications. The EU AI Act's extraterritorial reach means that U.S. companies with AI systems used in the EU must comply with its regulations, emphasizing the global impact of such legislation.
Legislative Momentum: The rapid introduction and consideration of these bills across multiple states suggest a growing consensus on the need for AI regulation to protect consumers from potential harms associated with automated decision-making.
State Initiatives: States like California, Colorado, and New York have introduced or enacted AI-related legislation. For example, California's SB 1047 aimed to implement safety measures for advanced AI models, though it was vetoed by the governor. Colorado's AI Act, set to take effect in 2026, resembles the EU AI Act in its comprehensive approach to governing AI. New York City has implemented Local Law 144, requiring bias audits for automated employment decision tools.
At the AI Action Summit in Paris, VP Vance announced that the US would not follow the EU in heavily regulating AI.
Unfortunately, though, American states are charging ahead with laws that look a lot like the AI Act—in some cases, using *precisely* the same language.
— Dean W. Ball (@deanwball)
1:26 PM • Feb 13, 2025
👨⚖️ The beauty / pain of federalism (the balance of legislative power between the states and the federal government) right here. In the absence of federal legislation, states are free to pursue their own regulatory frameworks and ensure that the people in their jurisdictions are protected from whatever harms they demand protection from. The complication, of course, is that many of these laws target digital “things” that don’t necessarily respect physical state lines. I’d expect many Supreme Court visits in the future over these laws…
A YouTuber operating under the pseudonym "Paul" created the "True Crime Case Files" channel, featuring entirely fabricated stories with AI-generated visuals, which garnered millions of views before being taken down. The channel's content included sensationalized titles and narratives, such as "Coach Gives Cheerleader HIV after Secret Affair, Leading to Pregnancy," blending AI-generated scripts with human-written elements.

Content Creation Process: Paul utilized OpenAI's ChatGPT to generate approximately half of each video's script, supplementing the rest with his own writing. The accompanying visuals were produced using an undisclosed AI image generator. (A rough sketch of what such a script pipeline can look like follows this list.)
Viewer Reception and Deception: Despite incorporating outlandish details and character names intended to signal the fictitious nature of the stories, many viewers accepted the content as genuine, engaging deeply with the fabricated narratives.
Monetization and Ethical Considerations: The channel was monetized, allowing Paul to work on it full-time. He initially included disclaimers about AI usage in earlier projects, but removed them after noticing negative reactions, opting instead for undisclosed AI integration.
Platform Response and Content Removal: The channel remained active until January, when it was removed, possibly in response to inquiries about the lack of AI disclaimers. Despite this, similar channels with undisclosed AI-generated content continue to exist and are monetized on YouTube.
Broader Implications for Content Authenticity: This incident highlights the challenges platforms face in moderating AI-generated content and raises questions about the ethical responsibilities of content creators in disclosing AI involvement.
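For readers curious what a pipeline like this looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, prompts, and the disclosure step are all assumptions for illustration; the channel's actual prompts and tooling were never disclosed.

```python
# Hypothetical sketch of an AI-assisted script pipeline.
# The channel's actual prompts, models, and tools were not disclosed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_fictional_script(premise: str) -> str:
    """Ask the model for a clearly labeled, fictional true-crime style outline."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You write clearly labeled FICTIONAL true-crime episode outlines."},
            {"role": "user",
             "content": f"Write a short fictional episode outline about: {premise}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    outline = draft_fictional_script("a small-town scandal, entirely invented")
    # Per the story above, disclosure is the step the channel skipped:
    print("DISCLAIMER: This story is fictional and AI-assisted.\n")
    print(outline)
```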
A ‘true crime’ documentary series has millions of views. The murders are all AI-generated.
🔗404media.co/a-true-crime-d…
— 404 Media (@404mediaco)
8:32 PM • Feb 13, 2025
🎥 The most interesting part of this story is that the result was an outright ban. It might seem like an obvious action to take, but where will the line eventually lie for content platforms like YouTube and Facebook when AI content becomes simply more interesting than human content? Whether that will ever happen is up for debate, of course, but in a world where the biggest-budget movies are sloppy Marvel derivations of classic superhero stories…does anyone really care if it’s real?
Perplexity has launched its 'Deep Research' tool, enabling users to generate comprehensive reports on specified topics by autonomously browsing the web to gather information. This feature is currently accessible on the web platform and is slated for release on iOS, Android, and Mac applications in the near future.

Functionality: The 'Deep Research' tool combines large language models with internet search capabilities to produce detailed, cited reports, enhancing the depth and reliability of information provided to users (a rough sketch of this search-then-synthesize pattern follows this list).
Availability: Initially available on the web, the tool is expected to roll out to Perplexity's iOS, Android, and Mac apps soon, broadening user access across multiple platforms.
Comparison with Competitors: Similar 'Deep Research' tools have been introduced by other AI companies, such as OpenAI and Google, indicating a trend towards integrating advanced research functionalities into AI platforms.
User Engagement: By offering this tool, Perplexity aims to enhance user engagement by providing a more interactive and informative AI experience, catering to users seeking in-depth information on various topics.
Future Developments: Perplexity plans to expand the capabilities of the 'Deep Research' tool, potentially incorporating features like visualizations and image embedding in reports, to further enrich the user experience.
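To make the pattern concrete, here is a minimal sketch of a "search, then synthesize with citations" loop. This is not Perplexity's implementation: the `web_search` function is a placeholder for whichever search API you plug in, and the model and prompts are assumptions for illustration.

```python
# Illustrative sketch of an LLM + web search "deep research" loop.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def web_search(query: str) -> list[dict]:
    """Placeholder: swap in any real search API that returns titles, URLs, and snippets."""
    return [{"title": "Example result", "url": "https://example.com", "snippet": "..."}]


def deep_research(topic: str) -> str:
    sources = web_search(topic)
    # Number the sources so the model can cite them as [1], [2], ...
    context = "\n".join(
        f"[{i}] {s['title']} ({s['url']}): {s['snippet']}"
        for i, s in enumerate(sources, start=1)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Write a structured report on the topic, citing sources by bracketed number."},
            {"role": "user", "content": f"Topic: {topic}\n\nSources:\n{context}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(deep_research("state-level AI regulation in the US"))
```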
Perplexity Deep Research can write an investment memo like Bill Ackman. Example: writing a memo for taking a big position in $UBER.
— Aravind Srinivas (@AravSrinivas)
9:07 PM • Feb 16, 2025
The introduction of 'Deep Research' by Perplexity reflects the growing emphasis on AI-driven tools that facilitate thorough and efficient information gathering, aligning with industry trends towards more autonomous and comprehensive AI research solutions.
🆕 Updates
Grok 3 release with live demo on Monday night at 8pm PT.
Smartest AI on Earth.
— Elon Musk (@elonmusk)
2:58 AM • Feb 16, 2025
we put out an update to chatgpt (4o).
it is pretty good.
it is soon going to get much better, team is cooking.
— Sam Altman (@sama)
5:33 PM • Feb 15, 2025
Rolling out starting today, you can ask Gemini to consider your past chats to craft its responses. Easily pick up where you left off or have it summarize a previous topic. You can view, edit, or delete any chats you’ve had with Gemini, and see when it’s used.
Try it in Gemini… x.com/i/web/status/1…
— Google Gemini App (@GeminiApp)
8:36 PM • Feb 13, 2025
📽️ Daily Demo
🗣️ Discourse
Adam Silver and Golden State Warriors Bring Physical AI to the NBA at 2025 NBA All-Star Technology Summit.
— NBA (@NBA)
8:12 PM • Feb 14, 2025
a taste of what AI agent interaction will be like
— Greg Brockman (@gdb)
8:49 AM • Feb 16, 2025
Apple and Meta are both now reportedly planning to ramp up development on Humanoid Robots joining Magnificent 7 names like Tesla, Amazon, and Nvidia 👀
Here's the current landscape of humanoid robots 🤖
— Evan (@StockMKTNewz)
2:50 PM • Feb 16, 2025