🎬 Adobe Firefly Is Here
Firefly is now integrated into Adobe's Premiere Pro, bringing video professionals AI features right inside their existing workflows.
The Daily Current ⚡️
Welcome, creatives, builders, pioneers, and thought leaders driving ever further into the liminal space.
Adobe's Firefly gives video creators new powers, but large language models still wrestle with mathematical conundrums. As Gmail users face AI-driven phishing schemes, Anthropic's Dario Amodei dreams of a utopia where AI uplifts society. Finally, OpenAI must confront its own existential questions to ensure it's a hero and not a villain in the unfolding AI narrative.
🔌 Plug Into These Headlines:
🎬 Firefly Has Arrived: Meet Adobe's AI Video Model Inside Premiere Pro
🧠 Understanding the Limitations of Mathematical Reasoning in Large Language Models
🛡️ Gmail Security Alert: AI Hack Targets Billions
🤖 "Machines of Loving Grace" - Dario Amodei (Anthropic Founder) Pens Essay
🤔 Opinion: OpenAI Could Be a Force for Good if It Can Answer These Questions First
Adobe's integration of Firefly into Premiere Pro is a cautious but significant step toward bringing AI into professional video production workflows. The current implementation is headlined by the "Generative Extend" feature, which lets users extend clips they are already editing by up to 2 seconds.

Extended clips can be generated at either 720p or 1080p, at 24 FPS.
Users can adjust aspects like lighting, camera movement, and composition in generated content.
Content Credentials will be included in generated videos, providing transparency about AI involvement.
Adobe plans to expand support for higher resolutions, frame rates, HDR content, and additional aspect ratios.
Adobe is well positioned to deploy AI against specific editing challenges rather than offering only broad video generation (although Firefly does that too). The Text-to-Video and Image-to-Video tools, first announced in September, are now rolling out as a limited public beta in the Firefly web app.
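For a rough sense of scale, here is a back-of-the-envelope calculation of what a maximum 2-second Generative Extend means in frames and raw pixels at the two supported resolutions. The numbers come straight from the specs above; the Python below is purely illustrative and is not an Adobe API.

```python
# Back-of-the-envelope math for a maximum Generative Extend clip,
# using only the specs cited above (2 seconds, 24 FPS, 720p or 1080p).
# Purely illustrative; this is not an Adobe API.

FPS = 24
EXTEND_SECONDS = 2
RESOLUTIONS = {"720p": (1280, 720), "1080p": (1920, 1080)}

frames_added = FPS * EXTEND_SECONDS  # 48 newly generated frames

for name, (width, height) in RESOLUTIONS.items():
    pixels_per_frame = width * height
    total_pixels = pixels_per_frame * frames_added
    print(f"{name}: {frames_added} frames, "
          f"{pixels_per_frame:,} px/frame, {total_pixels:,} px generated")
```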
This research paper published by Apple challenges the reliability of current GSM8K benchmark results and highlights the fragility of LLMs' reasoning capabilities. The study's findings support the hypothesis that these models rely on probabilistic pattern-matching rather than formal reasoning, aligning with previous research on token bias and pattern matching in LLMs.

The study included over 20 open models (2B to 27B parameters) and state-of-the-art closed models like GPT-4 and o1.
Adding seemingly relevant but ultimately irrelevant information (GSM-NoOp) causes substantial performance drops, even in state-of-the-art models.
LLMs exhibit significant performance variance across different instantiations of the same question.
Results suggest that current LLMs struggle with genuine logical reasoning and instead rely on sophisticated pattern matching.
The findings of this study may necessitate a fundamental rethinking of how we approach AI model development for tasks requiring logical reasoning. As current transformer architectures show limitations in handling complex tasks, future research might need to explore novel architectures or training paradigms to achieve true formal reasoning capabilities in AI systems.
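To make the paper's methodology concrete, here is a toy sketch (not the authors' code) of the two ideas the findings above rest on: regenerating the "same" word problem with different names and numbers, and appending a GSM-NoOp-style clause that is irrelevant to the answer. The template and distractor clause are invented for illustration.

```python
import random

# Toy illustration (not the paper's code) of the GSM-Symbolic idea:
# regenerate the "same" word problem with different names/numbers,
# optionally append an irrelevant (No-Op) clause, and check whether
# a model's answer still matches the unchanged ground truth.

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "{noop}How many apples does {name} have in total?")

NOOP_CLAUSE = "Five of the apples are slightly smaller than average. "  # irrelevant detail

def make_variant(rng: random.Random, with_noop: bool) -> tuple[str, int]:
    name = rng.choice(["Liam", "Sofia", "Mei", "Omar"])
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    noop = NOOP_CLAUSE if with_noop else ""
    question = TEMPLATE.format(name=name, a=a, b=b, noop=noop)
    return question, a + b  # ground truth is unaffected by the No-Op clause

rng = random.Random(0)
for with_noop in (False, True):
    question, answer = make_variant(rng, with_noop)
    print(f"[noop={with_noop}] {question}  -> expected {answer}")
```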
A recent Gmail security breach demonstrated a sophisticated multi-layered attack leveraging AI to create a highly convincing phishing scheme. This attack stands out for its use of advanced technologies to mimic legitimate Google communications across multiple channels.

Utilized AI-generated voice calls that sounded remarkably human-like, complete with natural speech patterns and background noise simulating a call center environment.
Employed AI to craft personalized phishing emails with convincing Google branding and formatting.
Coordinated multiple touchpoints including fake account recovery notifications, spoofed phone calls, and tailored emails to create a sense of urgency and legitimacy.
Leveraged machine learning to adapt tactics in real-time based on the victim's responses, making the attack more dynamic and harder to detect.
While AI enhances the capabilities of attackers, it also presents opportunities for improved detection and prevention. Google's rapid response in identifying and neutralizing this threat within seven days demonstrates the potential for AI to bolster cybersecurity defenses when properly implemented.
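The article doesn't describe Google's actual defenses, but as a sketch of the defender-side idea in the paragraph above, here is a minimal rule-based scorer for suspicious "account recovery" emails. The sender allow-list, keywords, and weights are assumptions chosen for illustration, not anything Google uses.

```python
import re

# Purely illustrative rule-based phishing triage (not Google's system):
# score an email on a few signals commonly abused in fake "account
# recovery" lures. Keywords, domains, and weights are assumptions.

URGENCY_WORDS = {"immediately", "within 24 hours", "suspended", "verify now"}
TRUSTED_SENDER_DOMAINS = {"google.com", "accounts.google.com"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    score = 0
    domain = sender.split("@")[-1].lower()
    if domain not in TRUSTED_SENDER_DOMAINS:
        score += 2  # claims to be Google but sent from elsewhere
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    if re.search(r"https?://[^\s]*(?<!google\.com)/recover", text):
        score += 2  # "recovery" link that does not point at google.com
    return score

email = {
    "sender": "support@goog1e-security.com",
    "subject": "Account suspended - verify now",
    "body": ("Your account will be deleted immediately unless you act: "
             "http://goog1e-security.com/recover"),
}
print(phishing_score(**email))  # higher score = more suspicious
```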
Amodei, touting the utopia-generating potential of AI, focuses on five key areas: biology and health, neuroscience and mental health, economic development and poverty, peace and governance, and work and meaning. He attempts to push beyond the typical hype piece, focusing mainly on current, problem-centric practical applications of AI.

AI could compress 50-100 years of biological progress into 5-10 years, dubbed the "compressed 21st century"
Amodei introduces "marginal returns to intelligence" as a framework for analyzing AI's impact
AI could enable "biological freedom," giving people control over weight, appearance, and reproduction
Economic development could see 20% annual GDP growth in developing countries with AI assistance (see the quick compounding sketch below)
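To put that growth figure in perspective, here is the simple compounding arithmetic behind sustained 20% annual growth. The ten-year horizon and the 3% comparison baseline are choices made here for illustration, not numbers from the essay.

```python
# Illustrative compounding only: what sustained 20% annual GDP growth
# (the figure cited above) would imply over a decade, versus a more
# typical 3% baseline. Horizon and baseline are assumptions.

def growth_factor(rate: float, years: int) -> float:
    return (1 + rate) ** years

for rate in (0.20, 0.03):
    print(f"{rate:.0%} per year for 10 years -> GDP x{growth_factor(rate, 10):.1f}")
# 20% per year for 10 years -> GDP x6.2
# 3% per year for 10 years -> GDP x1.3
```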
While Amodei's vision is optimistic, it hinges on democratic control of powerful AI, requiring close cooperation between AI companies and governments to navigate complex ethical and geopolitical challenges.
🤔 Opinion: OpenAI Could Be a Force for Good if It Can Answer These Questions First
Balancing profit and purpose is central to OpenAI's future; its potential to be a force for good hinges on addressing critical questions about governance, accountability, and impact.

OpenAI must clearly define its purpose in its corporate charter, specifying how it will serve stakeholders and who it won't do business with.
The company needs to commit to transparent, annual, audited impact reporting using independent third-party standards.
A robust legal structure is necessary to enforce OpenAI's commitments, possibly through a trust with special decision-making rights.
OpenAI's current Safety and Security Committee lacks true independence and could be dismantled at will.
The company's energy consumption could significantly impact climate change progress, necessitating commitments to renewable energy sources.
OpenAI's success in addressing these ethical and governance challenges could set a precedent for the entire AI industry, potentially shaping global standards for responsible AI development.
In the author's own words: "I helped write the model benefit corporation legislation as a co-founder of the B Corp movement, a community of over 9,000 companies dedicated to using business as a force for good. I championed its passage alongside many business leaders, including Patagonia’s Yvon Chouinard. That’s why I know OpenAI’s approach is insufficient."
Other fun tidbits: