
AI Search Engines — Paradigm Shift ⛓️‍💥 or Same Thing, New Package 📦?

Weekly recap and deep dive into the most compelling storylines.

Happy Sunday!

Let’s go deeper on the most compelling discussion topics raised during the past week:

OpenAI’s long-anticipated “search engine” arrived, and while it wasn’t quite what it was hyped up to be, the implications of its presence are still wide-ranging. We’ll discuss those, ask whether Gen AI is hitting a roadblock, and make an obligatory foray into American politics (being a couple of days out from the election, it felt necessary).

But first, here are the stories you definitely don’t want to have missed this week:

ICYMI: Top Headlines from the Week

Diving deeper

AI Search Engines — Paradigm Shift or Same Thing, New Package?

For all the hype over the last few months about OpenAI releasing a search engine that would kill Google and, by association, sound the early death knells of publishers all over the internet, it finally arrived on Halloween as an unassuming little 🌐 button next to the paperclip in your ChatGPT message box.

It’s not the first time ChatGPT has been given access to the web. And really, this function is more an improvement on what ChatGPT already does than a separate product entirely.

Here’s a great example of what’s really changed.

ChatGPT could already provide you with a comparison of two products. But it would sometimes reference outdated information, which is a major problem for topics where recency matters. Now, it can proactively search the web, providing more accurate responses and linking to its sources.
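To make that concrete, here’s a minimal sketch of the “search first, then answer with citations” pattern this kind of feature embodies. It’s illustrative only: `search_web` is a hypothetical helper (plug in whichever search API you like), the model name is an assumption, and this is not how OpenAI actually implements ChatGPT search.

```python
# Minimal sketch of a "search, then answer with citations" flow.
# `search_web` is a hypothetical helper; swap in any search API you like.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_web(query: str) -> list[dict]:
    """Hypothetical: return results like {'title': ..., 'url': ..., 'snippet': ...}."""
    raise NotImplementedError("plug in your preferred search API here")


def answer_with_sources(question: str) -> str:
    # 1) Fetch fresh results so the answer isn't limited to the model's training cutoff.
    results = search_web(question)[:5]
    context = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}" for i, r in enumerate(results)
    )
    # 2) Ask the model to answer using only those sources and to cite them.
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption, not a requirement
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered sources provided and cite them like [1]."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


# e.g. answer_with_sources("Compare the latest iPhone and Pixel cameras")
```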

So basically, the “AI overviews” that we’ve become accustomed to as the standard responses from ChatGPT and competitors like Google’s Gemini (which Google integrated into its existing search engine, much to the chagrin of users) are getting better.

But is that really what consumers want?

Here’s a quick reference point:

  • Currently, ChatGPT is getting 3.1 billion visits per month.

  • Google is getting 82 billion visits per month (good for #1 across all of the internet).

That’s obviously a big difference, but barriers to exit, beyond simple brand recognition and habit, are very low on the internet in general. Google has of course become ubiquitous in society, so much so that its name became a verb (“just Google it”), similar to how Kleenex or Band-Aid became replacement nouns for tissue and…well, I don’t even know what to call a Band-Aid other than Band-Aid…maybe a “plaster” if you’re in the UK?

It’s not crazy to think ChatGPT could supplant Google as the number one website in the world one day, but there are some significant barriers to consider along the way.

Cost

Google has always been free, and its paid features are generally targeted towards enterprises, for which they are of course super useful. But the reason Google can offer its search engine for free is that it doesn’t really need revenue from users, since all it’s actually doing is directing traffic around the internet. The value of that traffic is captured in one way or another by the destination sites (publishers, e-commerce, etc.), and as a result, Google can charge a premium to the sites most willing to pay for traffic to be directed their way.

Currently, ChatGPT only provides search functionality to its premium subscribers, though OpenAI does plan to expand access to free users in the coming months. But you have to wonder what exactly the draw would be for current Google users to switch willingly over to ChatGPT, if not the premium features. Google can already provide a similar AI-overview style of search experience, and the blowback it received back in the spring over the Gemini rollout was based mainly on quality issues. You have to believe Google can at least match what free ChatGPT users will get. With that in mind, OpenAI’s ceiling with search is probably about as high as without it. Whether it unseats Google could come down to how much internet users care about the way they get to where they want to go, versus just getting there.

It’s not clear how much it costs OpenAI to provide users with search capability, but running AI models generally isn’t cheap. With historic levels of cash burn and investors crowding in (with expectations of profit), you have to assume OpenAI will need to prioritize shipping features that bring in more money as time goes on. Search is something we are all used to getting for free, so it’s unlikely to be the key driver of that revenue on its own.

Access to Information

ChatGPT relies on access to accurate, up-to-date information to provide the snappy, comprehensive answers that made it famous in the first place. But that access doesn’t seem guaranteed moving forward, at least not for free.

Perplexity, a ChatGPT competitor that provides similar search-overview-style results from web searches, including citations, seems to get sued by a new publisher (the New York Times, News Corp) every other week. OpenAI faced similar suits as far back as December 2023, but has since been busy signing a flurry of multi-million-dollar agreements with publishers like Condé Nast to permit use of their content.

Meanwhile, Google is positioned as the de facto gatekeeper of the internet, a position it has shored up through numerous partnerships and deals that keep its search engine the default in browsers and on consumer devices. Normal users don’t even think about it much anymore; even the URL box became essentially a Google redirect button. With the rules of the game set, the successful publishers of course aren’t too keen on seeing them change.

Whether organized opposition from publishers against AI products like Perplexity and ChatGPT will matter much in the long run is hard to call. What does seem pretty certain, though, is that being a small-scale publisher (think: personal blog, review website, etc.) could get really tough in the near future, or at least become really different. When information is parsed from your site and served to users without them ever visiting your domain, only the sites able to drive traffic organically or strike revenue-sharing deals with AI companies (like those proposed by Perplexity) seem likely to survive.

Using social media to reach new users may become more important than ever for small digital businesses if we truly are heading into a paradigm shift for how people use the internet 🤷‍♂️

So while AI-powered search engines like ChatGPT's new feature represent a significant advancement in how we access and process information online, they may not necessarily spell the end for traditional search engines or publishers just yet. The landscape is evolving rapidly, with challenges in cost management, access to information, and potential legal hurdles shaping the future of online search and content consumption.

The biggest challenge to Google’s dominance is probably still coming from the courts in the form of antitrust suits. Whether a broken-up Google would be good for the internet as a whole is very unclear.

Ultimately, the future of online search and content discovery may not be a winner-takes-all scenario, but rather a diverse ecosystem where different tools serve different needs. Honestly, this iteration of ChatGPT seems more a threat to Wikipedia than to Google: each search result reads like a mini Wikipedia page, full of the same wild array of links and contextual information. As users, we may find ourselves using a combination of traditional search engines, AI-powered assistants, and direct content sources, depending on the nature and complexity of our queries. The coming years will undoubtedly bring further innovations and refinements in this space, potentially reshaping how we interact with information on the internet.

Any thoughts?

Is Gen AI hitting a roadblock?

While AI has made significant progress in areas like language models and image generation, it still falls short of human-level intelligence in many respects. Despite impressive capabilities, current AI systems lack true understanding and often make mistakes that humans would easily avoid, making their widespread deployment dubious in high-risk industries like medicine and finance.

Key Limitations to Overcome

To reach AGI, AI needs to overcome several crucial limitations:

  • Ability to reason

  • Understanding context

  • Transferring knowledge across domains

  • Developing common sense reasoning

  • Causal understanding

  • Learning from a small number of examples (like humans do)

According to the general discourse in the AI space, achieving AGI will require more than incremental improvements to existing technologies; entirely new paradigms and approaches to AI development may be necessary. Ongoing research in areas such as neuroscience-inspired AI and hybrid systems that combine different AI techniques is often cited as a potential avenue for advancement.

Beyond the outputs, the AI boom still faces threats on the input side — data, data processing, & power.

Data Quality and Availability: David Baker, a recent Nobel laureate, pointed out that the scarcity of high-quality, curated data is hindering AI's application in scientific research, stressing the importance of accessible and reliable datasets.

The effectiveness of AI systems is fundamentally dependent on the quality of the data they are trained on. Inaccurate, inconsistent, or biased data can lead to unreliable and unfair outcomes.

Additionally, if training data contains inherent biases, AI systems may perpetuate or even exacerbate these biases, leading to discriminatory outcomes. This is particularly concerning in applications like hiring or lending, where fairness is crucial. Users are also less likely to trust AI systems they learn have been trained on poor-quality data, hindering their adoption and success.

Photonic Computing is emerging as a promising solution to data-processing bottlenecks in AI systems. Traditional electronic components often struggle with the massive data transfer requirements of AI, leading to increased latency and energy consumption.

Photonic computing leverages light for data transmission and processing, offering significant advantages:

  • High-Speed Data Transmission: Photons travel at the speed of light, enabling rapid data movement and reducing latency.

  • Energy Efficiency: Photonic systems generate less heat and consume less power compared to electronic counterparts, making them more sustainable.

  • Parallel Processing Capabilities: Photonic computing can handle multiple data streams simultaneously, enhancing parallel processing essential for AI tasks.

Here are two companies making some noise in the photonics space so far:

  • Lightmatter's Passage Technology: Lightmatter has developed Passage, a photonic interconnect system that connects GPUs using light, significantly increasing bandwidth and reducing power consumption.

  • Xscape Photonics' Platform: Xscape Photonics is creating photonic solutions for ultra-high bandwidth connections in data centers, aiming to enhance AI computing performance sustainably.

Then there’s the issue of power consumption for all of these new data centers.

The rapid growth of AI technology is straining power infrastructure, prompting tech giants to explore nuclear energy solutions. Companies like Google, Microsoft, and Amazon are forming partnerships with nuclear power providers to meet their increasing energy demands sustainably.

Google has partnered with Kairos Power to purchase 500 megawatts from small modular reactors (SMRs), with the first reactor expected to be operational by 2030. Microsoft has signed a 20-year agreement with Constellation Energy, which includes plans to reopen a reactor at Three Mile Island. Amazon is collaborating with X-energy on a 320-megawatt SMR project.

Nuclear energy offers several advantages for AI operations, including reliability and low carbon emissions. It provides a consistent power supply crucial for data centers and aligns with tech companies' sustainability goals. However, the shift to nuclear power also presents challenges, such as complex infrastructure development and the ongoing issue of radioactive waste management.

As AI continues to advance, the tech industry's energy strategies will play a crucial role in shaping both technological progress and environmental impact. The move towards nuclear power represents a significant shift in how tech companies approach their energy needs in the face of AI's growing demands.

While artificial intelligence has made remarkable strides in recent years, particularly in areas like language processing and image generation, it's clear that the field is facing challenges that could be viewed as significant roadblocks to further advancement.

Sam Altman was quoted back in January saying that an energy breakthrough is effectively required for AI to reach the level of AGI it aims for.

Absent some utopian energy breakthrough, the focus will likely shift to developing AI systems that are not just powerful, but also more efficient, ethical, and aligned with current human values. Overcoming the current limitations need not be an event of such massive proportions, right?

What’s your take?

Elon Musk in Politics

The tech world is going through a bit of a political shuffle, to say the least.

Elon Musk is arguably the most widely recognized figure at the head of that shake-up.

Since openly endorsing Republican candidate Donald Trump, Musk has drawn considerable attention for his support. His vast business reach, including companies like X (formerly Twitter) and Starlink, provides him with powerful tools to shape political discourse. Musk’s platform, X, has become a space for controversial debates and high-profile endorsements, attracting both supporters and critics.

By leveraging his platforms, Musk can single-handedly amplify specific narratives, which almost certainly has some impact on public opinion and could influence voter behavior.

For example, with one tweet, Musk can bring a lot of attention to issues that voters would otherwise miss (for better or worse) —

The endorsement of Trump not only highlights Musk’s personal leanings but also brings discussions of technology’s role in political power to the forefront. Beyond the tech itself, Musk is pushing the boundaries of what’s fair and what’s straight-up illegal when it comes to billionaires supporting candidates.

For example, his promise to give away $1 million daily, through his PAC, to petition signers has been widely scrutinized.

Musk isn’t the only one in the tech world backing Trump, though. Marc Andreessen and Ben Horowitz of the famous a16z venture capital firm made headlines for publicly endorsing him a few months ago. Horowitz walked his stance back soon after, but the schism in the tech world was fully illuminated nonetheless.

Much has been made of the tech moguls’ differing political leanings.

Some chalked it up to the maturation of a still fairly new industry: over time, the type of person running tech companies has diversified well beyond the homogeneity of its early days.

Others think backing Trump is a low-key Bitcoin strategy, betting on supposed worldwide chaos from another Trump term.

This comment on the WSJ video from some random user, while reductive, feels oddly poignant—

Whatever happens on Tuesday, this election cycle has certainly drawn some interesting lines within the tech world on key issues like data privacy, regulation, censorship, and the role companies should play in influencing politics. Whether you find Musk’s foray into the political world inspiring, disgusting, frightening, or just plain entertaining, it’s surely going to generate a lot of headlines over the next four years no matter who wins. So I guess we better get used to it!

Let’s hear it —