
🧵Sunday Threads - Open Source AI State of Affairs & Grok 2 + Aurora Image Generation

2️⃣ threads 🧵 worth digging into 🧐

Open Source AI State of Affairs

The debate between open-source and closed-source AI has become a truly contentious one in the tech world. It essentially centers on:

  1. Accessibility (who can use it / build with it)

  2. Transparency (how much insight, if any, we get into how advanced models work)

  3. Risk (what’s the best way to mitigate the risks of AI development, both short- and long-term)

Surprisingly, Mark Zuckerberg has emerged as a primary advocate for open-sourcing AI. Meta made the call to fully open-source its Llama models back in April 2024 with Llama 3’s release.

They stated:

We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development.

Meta

A couple of days ago, Llama 3.3 unexpectedly dropped, billed as ~25x cheaper than GPT-4o.

With OpenAI leading off their 12 days of Shipmas with a $200/mo Pro subscription, cost is clearly something we all need to pay more attention to when evaluating the AI space.

Which models can run efficiently? Which can bring in realistic revenues to recoup the level of investment required to build them? Open-source options have been quickly pioneering these aspects of AI development, while larger companies seem content to throw endless amounts of money at lofty ideas like AGI.

Whether Meta’s moves toward the open-source side of the aisle are ideological/branding plays, or more a strategy for challenging closed-source competitors they were falling behind, it’s been a good year for open-source AI.

Throw into the mix that in the US we have an incoming Republican presidency and Republican majorities in the Senate & House—and you can start to think it’s going to be a very good time for open-source AI over the next four years.

Republican administrations are traditionally more lenient on regulating domestic industries, preferring a laissez-faire style of economics (just let things be and let the best companies win).

If that’s coupled with a wave of antitrust litigation (like we’re seeing now against most Big Tech firms), the two competitive advantages big companies typically enjoy could be blunted at the same time: the ability to crush competitors through monopoly power, and the ability to clear regulatory barriers to entry that new players rarely have the resources to deal with.

With hopes of OpenAI bearing the standard for humanity in developing AGI looking more and more like an investment pitch than the reality of what they want to do, sentiment across the industry (and just plain pocketbook economics) seems well and truly shifted towards open source.

Any thoughts?

Grok 2 + Aurora — New Photorealistic Image Gen

We’ve long been somewhat worried (or excited) by the idea that AI image generation is going to get so good that we won’t be able to tell what’s real and what’s not.

If you have half an eye for photography or films, most AI creations have looked cute, laughable, or just plain disturbing rather than indistinguishable from reality.

But it looks like xAI’s Grok took us one big step closer to that state, quietly adding a new image generation model, Aurora, to its Grok toolkit over the weekend.

If you’re new to the AI image landscape, here’s a good recap to watch comparing the top tools to get a feel for the state of things.

The rise of highly sophisticated AI image generators has led to growing concerns about our ability to distinguish between real and artificial images. This has significant implications for how we perceive truth as a concept and the world around us.

Indistinguishable Images

Recent studies have shown that humans are increasingly unable to differentiate between real photographs and AI-generated images. Lu et al. found that people could not reliably distinguish between real and AI-generated images, highlighting the sophistication of current AI image generation technology.

Preference for Enhanced Reality

  • AI-rendered images often appear more visually stunning and pleasing to the eye.

  • Our innate desire for beauty and symmetry may draw us towards these enhanced versions.

This preference for algorithmically perfected visuals foreshadows the way this game could (and always does seem to) play out. When Photoshop arrived on the scene, photographers balked at the weird and distorted images some creatives were making. But try naming a major magazine or media outlet that doesn’t retouch its images now. I’ll wait…

People are always going to buy the Playboy magazine that puts out pictures of flawless girls. No one is out there clamoring for more shadows and less smooth skin.

Neurological Bias

Our preference for AI-enhanced images is probably rooted in our brain's evolution:

  • The primitive "reptilian brain" drives us towards quick gratification and aesthetically pleasing visuals.

  • This neurological bias, while historically beneficial for survival, can make us susceptible to AI's appealing distortions.

Philosophical Questions

The rise of deepfakes and AI-generated content raises profound philosophical questions:

  • What is truth in a world where reality can be manipulated at will?

  • Do we need to rethink our understanding of truth and the world around us?

  • How does technology shape our perceptions of reality?

As we grapple with these challenges, it becomes crucial for everyone (businesses, artists, individuals, teachers) to develop strategies for maintaining trust and authenticity in our increasingly AI-influenced world.

In a sense, perhaps what scares us so much in the AI age is reflective of the areas of human life and consciousness we have yet to really penetrate with our current intellectual capacity.

What’s your take?