👐 Google Open-Sources SynthID

Google has open-sourced SynthID, a tool designed to identify AI-generated content through digital watermarking, aiming to set a new industry standard.

The Daily Current ⚡️

Welcome to the creatives, builders, pioneers, and thought leaders driving ever further into the liminal space.

Google's SynthID aims to set new standards in content identification, while Nvidia resolves a critical chip-design flaw. Meanwhile, Character.AI is sued by the mother of a teen who died by suicide, an ex-OpenAI researcher makes a case against the company's reading of the fair-use doctrine, and the Biden administration releases a strategic memorandum on AI's role in national security.

🔌 Plug Into These Headlines:

SynthID integrates digital watermarking into AI-generated content across multiple formats such as images, audio, text, and video. By embedding invisible yet detectable watermarks, SynthID allows for the identification of AI-generated material without compromising content quality. This system is currently being used in applications like Vertex AI and VideoFX to manage digital media responsibly.

  • Content Types: SynthID can handle images, audio, text, and video.

  • AI-Generated Text: It adjusts probability scores of tokens during text generation without compromising quality.

  • AI-Generated Music: SynthID converts audio into a spectrogram, embeds the watermark there, and converts it back, keeping the watermark inaudible.

  • AI-Generated Images/Video: Watermarks are added directly to pixels or frames without affecting visual quality, while remaining detectable.

  • Robustness: Watermarks withstand common modifications like noise addition, compression, and editing.
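The text case above is the subtlest of these: at each generation step, token probabilities are nudged so that the finished text carries a statistical signature that only a key-holder can check. Google's actual scheme (published as SynthID-Text) uses a more elaborate tournament-sampling method, but the core idea can be sketched with a simplified "green list" watermark in the style of academic LLM watermarking work. Everything below (the toy vocabulary, key, and bias value) is illustrative, not SynthID's real implementation:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]   # toy vocabulary
SECRET_KEY = "demo-key"                    # hypothetical watermarking key

def green_list(prev_token: str) -> set:
    """Deterministically mark half the vocabulary as 'green',
    keyed on the secret and the previous token."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def watermarked_pick(prev_token: str, scores: dict, bias: float = 2.0) -> str:
    """Nudge green tokens' scores upward before picking; a small bias
    leaves individual choices plausible but skews long-run statistics."""
    greens = green_list(prev_token)
    return max(scores, key=lambda t: scores[t] + (bias if t in greens else 0.0))

def green_fraction(tokens: list) -> float:
    """Detection: watermarked text shows an improbably high share of
    green tokens; unwatermarked text hovers near 0.5."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(cur in green_list(prev) for prev, cur in pairs) / len(pairs)
```

Note that detection needs only the key and the text itself, not the model that produced it, which is what makes a scheme like this practical to deploy at scale.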

🧩 Google is likely hoping that by open-sourcing SynthID, it becomes the industry standard for identifying AI content; the difficulty of telling AI output from human work has been one of the biggest public criticisms since the beginning of this boom.

The Biden administration has recently issued the first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI), outlining guidelines for the use and safeguarding of AI technology within the Department of Defense and intelligence agencies.

  • National Security Focus: The NSM is designed to harness cutting-edge AI technologies to advance the U.S. Government’s national security mission, including areas like cybersecurity and counterintelligence.

  • Ethical and Legal Compliance: Agencies are prohibited from using AI in ways that risk U.S. national security, democratic values, international norms, human and civil rights, civil liberties, privacy, and safety.

  • Human Oversight: The NSM ensures that human oversight is maintained in critical decisions, such as those involving nuclear weapons, to prevent fully autonomous actions.

  • International Cooperation: The NSM directs the U.S. Government to collaborate with allies and partners to establish a stable, responsible, and rights-respecting governance framework for AI.

  • Previous Executive Order: This NSM builds on President Biden’s landmark Executive Order from last October, which aimed to ensure America leads in the safe, secure, and trustworthy development of AI.

🔍 The memo focuses on aligning US agencies on best practices while encouraging them to fold innovations from frontier AI development into their activities. Adoption is a no-brainer at this point, with everyone from Bill Gates to your grandma bringing AI into their lives, but for organizations running high-stakes operations, it’s definitely good to bring some coherence to the experimentation.

A mother has filed a lawsuit against Character.AI and Google, alleging that interactions with a Character.AI chatbot contributed to her 14-year-old son's suicide. The lawsuit claims the chatbot's design encouraged prolonged engagement, leading to the boy's mental health decline and ultimately his death.

  • Megan Garcia sued Character.AI and Google in Florida, claiming their chatbot contributed to her son's suicide.

  • Her son became addicted to Character.AI in April 2023, leading to social isolation and declining school performance.

  • The "Daenerys Targaryen" chatbot discussed inappropriate topics, including suicide, with the teen.

  • The lawsuit alleges Character.AI's design prolonged engagement, harming the boy's mental health.

  • Google is a co-defendant because of its licensing agreement with Character.AI and its hiring of the startup's founders.

❓ The lawsuit raises some interesting questions about the liability of tech companies in cases where their products contribute to user harm. Most tech products are designed to perpetuate user engagement, but chatbots are unique in their “communication” with users. This’ll be an interesting one to watch.

A former OpenAI researcher has published an essay arguing that the company infringed copyright by using copyrighted materials in the training data for its AI models without proper permission or attribution. The essay delves into the history and legal framework of fair use, highlighting the four factors courts consider when determining whether a use is fair.

  • The researcher asserts that OpenAI did not secure the required permissions or give appropriate credit for the copyrighted content included in the datasets.

The outputs aren’t exact copies of the inputs, but they are also not fundamentally novel.
This is not a sustainable model for the internet ecosystem as a whole.

Suchir Balaji

We build our A.I. models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness.

OpenAI

😠 The core of his discontent seems to be that initial work on OpenAI products was conducted as “research” for which you can generally use any publicly available data — now that OpenAI is transitioning to a for-profit model, that origin starts to look deceptive.

Nvidia identified and corrected a design flaw in its Blackwell AI chips that affected production yields. This correction, achieved with TSMC's assistance, is crucial as the chips are set to deliver substantial AI performance improvements, with shipments now expected in the fourth quarter.

  • Originally set for a second-quarter release, the Blackwell chips are now expected to ship in the fourth quarter.

  • The flaw was entirely Nvidia's responsibility, leading to low yields before being corrected.

  • These chips promise substantial AI performance enhancements, particularly in speed.

  • The production delay led to a 2% drop in Nvidia's stock price.

📈 Despite the initial hiccup with Blackwell chips, Nvidia's rapid recovery reflects its resilience and commitment to meeting market demands for high-performance AI solutions.