- The Current ⚡️
- Posts
- Big Tech's Nuclear Ambitions ☢️: A New Era for AI Energy Needs ⚡️
Big Tech's Nuclear Ambitions ☢️: A New Era for AI Energy Needs ⚡️
Examining the sudden re-emergence of nuclear energy as Big Tech looks to satisfy the demands of its AI training.
Happy Sunday!
Let’s go deeper on the most compelling discussion topics raised during the past week:
Big Tech's pivot towards nuclear energy makes sense, and is probably less concerning than it might seem. There is growing unease among creative professionals as AI tools continue to advance and are now being integrated into existing creative tools, sparking debates about the future of creativity. And emerging security vulnerabilities in Large Language Models (LLMs) are highlighting the potential risks as AI systems become more sophisticated and widely adopted.
But first, here are the stories you definitely don’t want to have missed this week:
ICYMI: Top Headlines from the Week
🔋 Google Goes Nuclear: Announces Partnership with Kairos for Nuclear Solution to AI Power Needs
⚡️ Lightmatter’s Photonics Gets $400M Boost for Next Gen AI Data Centers
🧠 Understanding the Limitations of Mathematical Reasoning in Large Language Models
🤖 "Machines of Loving Grace" - Dario Amodei (Anthropic Co-Founder) Pens Essay
🎬 Firefly Has Arrived: Meet Adobe's AI Video Model Inside Premiere Pro
Prefer to Listen?
Just for fun, here’s a short NotebookLM podcast based on the Monday - Friday posts 😎

Diving deeper
❶
☢️ Big Tech Is Going Nuclear
Big Tech companies have recently shown significant interest in nuclear energy, driven primarily by the growing energy demands of their data centers and AI operations.

Amazon
Invested $500 million in X-Energy, a developer of small modular reactors (SMRs).
Partnering with Energy Northwest to construct and operate four SMRs in Washington state.
Collaborating with Dominion Energy to explore SMR development near the North Anna nuclear power station in Virginia.
Signed an agreement with Kairos Power to purchase nuclear energy from multiple SMRs.
Plans to bring the first SMR online by 2030, with additional deployments through 2035.
The deal is expected to provide up to 500 megawatts of carbon-free power.
Microsoft
Agreed to purchase power from the reactivated Three Mile Island nuclear plant for its data centers.
The deal involves a $1.6 billion investment to reopen one of the plant's reactors.

Most reporting on this developing story is basically just stating what’s happening and wrapping with a nod to “potential carbon reductions” or calling nuclear a “green energy” source.
The way it’s being talked about you might assume we’d only recently discovered it. But it’s not a new technology, and we’ve already been through a period where nuclear energy seemed like the future, and it didn’t happen. Why?
The first wave of nuclear energy in the United States, which primarily occurred in the 1970s and 1980s, failed to take off as expected due to a combination of factors, despite its purported potential as a clean and efficient energy source.
Here's a deeper look into why nuclear energy didn't succeed after its first wave of popularity crested about 50 years ago:
Economic Challenges
Cost Overruns: Many nuclear projects experienced significant cost escalations and delays. The complexity of nuclear technology and evolving safety requirements often led to unforeseen expenses.
Recession: The oil crisis and economic recession in the 1970s affected energy demand and utility finances. This made it difficult for utilities to justify and finance expensive nuclear projects.
High Interest Rates: The late 1970s and early 1980s saw a period of high inflation and interest rates, making it more expensive to finance long-term nuclear projects.
This is an amazing table of the cost overruns of mega projects
Nuclear storage has a mean cost overrun of 238%, while with nuclear power its 120%
Solar power has a mean cost overrun of 1%
enr.com/articles/55774…
— Philip Oldfield (@SustainableTall)
8:11 AM • Jan 28, 2023
Safety Concerns
Three Mile Island Accident: The 1979 accident at Three Mile Island had a profound psychological impact on the public and severely damaged the reputation of the nuclear industry. This event led to increased scrutiny and stricter regulations.
Chernobyl Disaster: the 1986 Chernobyl accident, which was much worse than Three Mile Island, further eroded public confidence in nuclear energy globally.
Regulatory and Political Factors
Changing Regulations: The regulatory environment became more stringent, especially after the Three Mile Island accident, increasing costs and complexity for nuclear projects.
Shift in Government Support: The Carter administration took a more cautious approach to nuclear energy, focusing on non-proliferation and environmental concerns.
Environmental Movement: The rise of environmentalism in the 1970s led to increased opposition to nuclear power.
Market and Energy Demand Factors
Slowing Energy Demand Growth: The ratio of electricity consumption to GDP began to decrease, making previous growth forecasts and reactor orders untenable.
Competing Energy Sources: The availability of cheaper alternatives, particularly natural gas, made nuclear less economically attractive.

Unsurprisingly, the biggest underlying factor for the seeming failure of nuclear to take over as a primary energy source seems to be mainly down to the bottomline.
It may be technically more efficient and environmentally friendly (in some ways) than carbon based energy. But it was crazy expensive to develop, and even more expensive to manage. It was a long run oriented option presented at a time when the short run was the main focus.
Not to mention a little thing called the Cold War was probably hovering over everyone’s mind, making the thought of developing any kind of nuclear technology fearful for risk of further proliferation.
What about now?
In theory, this is the perfect timing for nuclear.
It’s the fancy espresso machine sitting there on the shelf in the overpriced section of Target, waiting for you to decide that it’s finally worth your money and time to get that perfect coffee every day. It’s going to be a lot of work to clean, it’ll probably break more than a few times, but now we are committed because theres just no way we can go on making pour overs every morning, and popping Keurig’s all day is never going to cut it.
The amount of money Big Tech companies are committing to AI in general makes it clear that they are thinking long…very, very long on this new tech. Developing nuclear energy along that stretched out pathway will just be another massive expense tacked on to already absurdly high investments.
It’s probably been long enough too since the Cold War, and the environmental movement isn’t quite what it used to be. Let’s just say nobody is unplugging their phone charger at night to save power these days. I’m sure there will be some public concern, but it should probably be more about what new privacy and security issues with AI systems that are being overlooked during this all-out sprint. We’ll obviously start singing a different tune if the reactor they build next door starts melting down…but like most things throughout human history, we’ll be sure to FAFO this one. There’s no walking back the level of money dumped into AI this year. The only way is forward.
Any thoughts? |
❷
🎨 Creatives Are Growing More Wary of AI
Attendees at the Adobe MAX conference this past week echoed some of the negative sentiments voiced across the creative industries all year. Even as Adobe put on elaborate pitches explaining the incredible new use cases opening up for creatives using their tools, customers are becoming wary of what those new AI features will mean for their future.

Several creative industries feel particularly threatened by the rise of AI tools:
Visual Arts and Graphic Design
The visual arts and graphic design sectors are among the most impacted by AI advancements. Tools like DALL-E, Midjourney, and Adobe Firefly can generate high-quality images from text prompts, potentially reducing the demand for human artists and designers. Many artists fear that AI-generated art could lead to a homogenization of creative output and diminish the value of human expertise.
Writing and Journalism
AI language models like ChatGPT have raised concerns in the writing and journalism fields. These tools can generate articles, stories, and reports quickly, potentially threatening the livelihoods of human writers. Some media companies, like Axel Springer, are already considering integrating AI into their workflow, potentially making many journalism jobs redundant.
Music Production
AI tools capable of composing music and generating lyrics such as Suno AI have emerged, causing concern among musicians and composers. These generators could eventually reduce the demand for human-created music, especially in certain contexts such as background music for videos or games (traditional staples of the middle class musician).

The Adobe Firefly Controversy
Training Data Discrepancy
Adobe initially claimed that Firefly was trained primarily on Adobe Stock images and public domain content. However, it was later revealed that about 5% of the training dataset included AI-generated images from competitors like Midjourney and DALL-E.
The story gained a lot of traction because Adobe had tried to position itself as the ethical generative AI since the start (likely to avoid alienating it’s customers — who are mainly creatives).
Legal and Ethical Implications
The controversy raises questions about intellectual property rights and the ethical use of AI-generated content. It also challenges Adobe's positioning as a unique alternative to competing services

Adobe's Potential Strategy Shift
Adobe appears to be shifting its strategy towards a broader market with AI-powered tools:
Expansion of AI Integration
Adobe has been integrating AI across its product offerings, particularly through Firefly. This positions the company to compete in a rapidly evolving technological landscape.
Some have speculated that it reflects an intent to begin marketing it’s tools beyond the market of professional creatives it has made billions from so far. The launch of Adobe Express (a tool similar to Canva) a few years ago brought out the first round of gripes on this topic.
Initiatives by Creatives to Address AI Concerns
Several initiatives have emerged to address AI-related concerns in creative industries:
Human Intelligence Art Movement
A grassroots movement called "Human Intelligence" has gained traction on social media platforms like Instagram. Artists are creating and sharing images that emphasize the value of human-created art in response to the proliferation of AI-generated content.
Ethical AI Guidelines
Some organizations and individuals are calling for the development of ethical guidelines for AI use in creative industries. These guidelines aim to ensure transparency, fair compensation for artists, and protection of intellectual property rights.
Copyright Protections
Some creatives are organizing to advocate for legal protections. As of October 2024, no federal laws in the U.S. directly protect creatives from AI-related risks. However, Tennessee passed the "ELVIS Act" on March 21, 2024, becoming the first state to shield music professionals from AI. This law, effective July 1, 2024, aims to:
Protect musicians' vocal likeness
Prevent unauthorized AI reproduction of artists' voices
Hold individuals accountable for unauthorized use or mimicry of an artist's name, photograph, voice, or likeness
Little Aneesh
As part of Meta’s Movie Gen rollout, they’ve collaborated with movie directors to see what sort of things they can create using the tool. This one here is an intriguing case study of an incumbent creative grappling with the instinctive fear of AI tools, but trying to find a redemptive attitude.
We worked with a few filmmakers - including @aneeshchaganty -and had them test out @Meta AI tool Meta Movie Gen. check out what aneesh put together in: i h8 ai
— Jason Blum (@jason_blum)
3:26 PM • Oct 17, 2024
Is it really so bad?
Honestly, Photoshop “artists” criticizing AI because it’s not human feels a little shallow to me. There’s no doubt in my mind that established photographers in the 70s felt the same way about the first digital camera. And portrait painters probably the same about photographers.
Disruptive technology is always going to be criticized by the incumbents, even when they are only the incumbents because they mastered the previous form of disruptive tech.
AI certainly feels a little different because of it’s pace and the breadth of its applications, but I’d expect a quick flush of this mindset once the open minded creatives out there begin pushing the new boundaries made possible by the AI tools they pick up and master first.
What’s your take? |
❸
⚠️ Vulnerabilities Starting to Show More in LLMs
Large Language Models (LLMs) have revolutionized natural language processing, but their rapid adoption has also introduced new security vulnerabilities.

Visual Data Vulnerability in LLMs
Researchers found out that LLMs with image processing capabilities to be manipulated through carefully crafted visual inputs. This threat extends beyond traditional text-based attacks, and has implications for the visual navigation systems we are seeing popularized lately.
Potential risks include:
Embedding malicious commands within images that can bypass text-based security filters
Exploiting image-to-text conversion processes to inject harmful prompts
Using adversarial images to trick multimodal LLMs into generating incorrect or dangerous outputs
For example, an attacker could potentially embed text within an image that instructs the LLM to ignore safety protocols or leak sensitive information. As LLMs become more adept at processing visual data, this attack vector is likely to grow in importance, and so will preventing it.
Imprompter Attacks on AI Systems
Imprompter attacks are a sophisticated form of prompt injection that exploits the way LLMs process and respond to inputs. These attacks involve carefully crafting prompts that manipulate the model's behavior in unintended ways.
Key aspects of imprompter attacks include:
Leveraging the model's context window to inject malicious instructions
Exploiting the model's tendency to follow the most recent or most specific instructions
Using natural language understanding to craft prompts that bypass simple security filters
The implications of successful imprompter attacks can be severe, potentially leading to data breaches, generation of harmful content, or manipulation of AI-powered decision-making systems.
Jailbreaking LLM-Powered Robots
Recent research has demonstrated the potential to jailbreak LLM-powered robots, allowing attackers to induce threatening actions in embodied AI systems operating in the real world. This poses significant risks as LLM-based robots become more prevalent.
Key findings include:
Identification of three critical security vulnerabilities:
Jailbreaking robotics through compromised LLMs
Safety misalignment between action and linguistic output spaces
Deceptive prompts leading to unaware hazardous behaviors
Demonstration that embodied AI can be prompted to initiate harmful physical actions, even to the extent of attacking humans
Exposure of the limitations in current safety measures for LLM-powered robots
The consequences of jailbreaking LLM-powered robots could be severe, potentially leading to physical harm or damage in real-world environments.
Current Research and Proposed Solutions
Researchers and organizations are actively working on solutions to address these emerging security threats.
Robust prompt filtering and sanitization techniques to prevent injection attacks.
Development of adversarial training methods to improve LLM resilience against malicious inputs.
Implementation of multi-layered security measures, including input validation, output filtering, and continuous monitoring.
Creation of ethical guidelines and safety protocols specifically for embodied AI systems.
Exploration of federated learning and differential privacy techniques to enhance data protection in LLMs.
Real-World Incidents and Near-Misses
While major security breaches involving LLMs are still relatively rare, there have been notable incidents:
In April 2023, Samsung employees accidentally leaked confidential company information while using ChatGPT, highlighting the risks of using public LLMs for sensitive tasks.
OpenAI experienced a data breach in March 2023 due to a vulnerability in an open-source library, potentially exposing some users' payment-related information.
As LLMs become more integrated into critical systems and decision-making processes, addressing these security threats becomes crucial not only for the technology sector but for society as a whole. Ongoing research, development of robust security measures, and thoughtful regulation will be essential to harness the benefits of LLMs while mitigating their risks.
It’s a really good time to watch or rewatch this outstanding episode of Love, Death, and Robots. It’s an animated short film that illustrates some of the fears stemming from these underlying issues quite beautifully and comically.
Let’s hear it — |