🗳️ Tech's Future and the 2024 Election

On the eve of the 2024 election, let’s revisit the core differences between the candidates on AI and tech.

The Daily Current ⚡️

Welcome to the creatives, builders, pioneers, and thought leaders driving ever further into the liminal space.

The 2024 US presidential election could be a turning point for the tech industry, with the candidates offering distinct visions for how to foster innovation in AI. Meta’s timely collaboration with the US government signals a growing closeness between Big Tech and Washington, especially on national security. Meanwhile, OpenAI’s ChatGPT Search doesn’t yet look up to the task of replacing Google, new regulatory challenges are reshaping tech partnerships with power companies, and the collection of personal data for AI model training has turned into a land grab, according to Axios.

🔌 Plug Into These Headlines:

The 2024 US presidential election could be a pivotal moment for the tech industry, with Vice President Kamala Harris and former President Donald Trump offering divergent paths for its future. Their contrasting views on emerging technologies, from AI and cryptocurrency to cybersecurity and semiconductor manufacturing, will likely determine the regulatory landscape and innovation climate in the United States. The election’s result will shape how America positions itself in the global tech race while addressing domestic concerns about safety, privacy, and fairness.

  • Harris supports expanding federal investments in AI research and development, focusing on safety and accountability.

  • Trump pledges to repeal Biden’s AI Executive Order, arguing it hinders innovation and competitiveness with China.

  • Harris prioritizes combating algorithmic discrimination and AI bias, while Trump views these concerns as less pressing.

  • Both candidates support the CHIPS Act for domestic semiconductor manufacturing, but with different emphases on implementation.

🎭 Behind the political theater, the Harris-Trump showdown is essentially a choice between collaborative tech governance and competitive deregulation. Harris is expected to pursue a balanced approach to regulating AI, while Trump advocates a more laissez-faire stance to better compete with China. It’s a fairly typical Republican-versus-Democrat breakdown.

In what can hardly be a coincidence, Meta is extending support for its Llama large language model to the US government for national security purposes, shortly after reports that Chinese developers had used Llama models to build military applications. The collaboration involves granting access to the Llama 2 model, which was earlier released as open source for commercial and research use. By partnering with federal agencies on AI, Meta joins the ranks of other major tech companies, potentially strengthening its relationship with US officials while contributing to advances in AI-powered national security tools.

  • While Llama 2 was already open-source, Meta’s policy explicitly prohibited its use for military and national security purposes.

  • This move marks a shift in Meta’s stance on collaborating with government agencies on AI, aligning with competitors like Microsoft, Google, and Amazon.

  • Meta’s collaboration aims to enhance the government’s AI capabilities for national security applications.

  • The partnership could potentially lead to advancements in AI-powered national security tools and strategies.

🤔 Meta’s decision to share its AI technology with the government raises questions about the balance between open-source innovation and national security interests, blurring the lines between Silicon Valley innovation and federal defense capabilities. This partnership underscores the increasing interdependence between big tech and government agencies.

OpenAI’s ChatGPT Search demonstrates potential in answering complex, research-oriented questions but stumbles when processing brief, navigational queries that form the majority of Google searches. This limitation significantly hinders its ability to replace Google as the primary search engine for most users, despite its advanced AI capabilities.

  • ChatGPT Search excels at answering long, research-oriented questions, providing concise answers with clear source links.

  • The new search engine struggles with short queries, which make up the majority of Google searches.

  • During testing, ChatGPT Search produced inaccurate information and hallucinations for simple queries like sports scores and earnings reports.

  • OpenAI acknowledges the challenge with short queries and plans to improve the experience over time.

  • The median Google query is just 2-3 words long, while the median Perplexity query runs 10-11 words, pointing to very different use cases.

🎯 ChatGPT Search’s performance demonstrates that revolutionizing web search requires more than just advanced AI; it demands a deep understanding of diverse user behaviors and query types.

Tech giants and power utilities are reassessing their AI-driven energy partnerships following a new FERC order. The order, which mandates greater disclosure and tighter controls on these collaborations, has prompted companies to reevaluate their strategies. Some are considering scaling back or abandoning projects altogether, citing the increased regulatory burden. This shift could slow innovation in power grid optimization and energy consumption reduction, areas where AI algorithms have shown promise in improving efficiency.

  • The grid operator PJM Interconnection and Talen Energy’s Susquehanna nuclear plant had filed a request to increase the amount of power dispatched to the Amazon data center from the current 300 megawatts to 480 megawatts.

  • FERC rejected the request based on concerns about grid reliability.

  • Talen said FERC’s decision will have a “chilling effect on economic development in states such as Pennsylvania, Ohio, and New Jersey.”

🔀 This regulatory intervention may force a reimagining of how tech companies and utilities can work together to advance energy efficiency goals while addressing concerns about market fairness.

AI companies are collecting vast amounts of personal data to train their models, often without users’ explicit consent. Policies range from opt-in to opt-out, with the approach typically depending on the user’s geographic location and whether the service is consumer- or enterprise-focused.

  • AI companies require enormous amounts of data to train their language and image models.

  • There’s a “data land grab” as companies rush to acquire information before potential legal restrictions.

  • Data use policies often vary between consumer and enterprise services, with businesses generally expecting more privacy.
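
To make the opt-in vs. opt-out distinction concrete, here is a minimal, purely hypothetical sketch of how a service might gate training-data use on a regional default, the service tier, and an explicit user setting. The names and defaults are assumptions for illustration only and don’t reflect any specific company’s implementation.

```python
# Illustrative only: a toy consent gate for training-data collection.
# Region, ServiceTier, and can_use_for_training are hypothetical names,
# not any vendor's actual API.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Region(Enum):
    EU = auto()  # opt-in default: data excluded unless the user agrees
    US = auto()  # opt-out default: data included unless the user objects


class ServiceTier(Enum):
    CONSUMER = auto()
    ENTERPRISE = auto()  # business customers generally expect more privacy


@dataclass
class UserSettings:
    region: Region
    tier: ServiceTier
    training_consent: Optional[bool] = None  # explicit choice, if any


def can_use_for_training(user: UserSettings) -> bool:
    """Decide whether this user's content may be used for model training."""
    # In this sketch, enterprise data stays out of training unless the
    # customer explicitly agrees.
    if user.tier is ServiceTier.ENTERPRISE:
        return user.training_consent is True

    # An explicit user choice always overrides the regional default.
    if user.training_consent is not None:
        return user.training_consent

    # Otherwise fall back to the regional default: opt-out regions include
    # the data by default, opt-in regions exclude it.
    return user.region is Region.US


# Example: an EU consumer who never touched the setting is excluded,
# while a US consumer in the same situation is included.
print(can_use_for_training(UserSettings(Region.EU, ServiceTier.CONSUMER)))  # False
print(can_use_for_training(UserSettings(Region.US, ServiceTier.CONSUMER)))  # True
```

The point of the toy example is the asymmetry: under an opt-out default, silence counts as consent; under opt-in, it doesn’t.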

💡 The AI data rush so far mirrors the early days of social media, but this time, users are already aware of the value of their personal information. Whether we’ll see meaningful legislation on this issue will depend on how seriously voters take data privacy policies at the polls.