⚡️ Next Gen AI Data Centers with Lightmatter’s Photonics

Lightmatter's latest funding round could mark a pivotal moment in AI data center technology, leveraging photonics to enhance performance and efficiency.

The Daily Current ⚡️

Welcome to the creatives, builders, pioneers, and thought leaders ever driving further into the liminal space.

Is AI pushing us to develop light-speed computing? While consumers and regulators grapple with AI's human-like charm and compliance quirks, a Massachusetts high school is hit with a lawsuit centered on AI in the classroom. At Adobe’s MAX conference, designers are frustrated with the excessive emphasis on AI.

 🔌 Plug Into These Headlines:

  1. ⚡️ Lightmatter’s Photonics Gets $400M Boost for Next Gen AI Data Centers

  2. 🤨 Understanding Consumer Response to AI — New Study

  3. 👨‍⚖️ Massachusetts School Finds Itself on the Frontlines of AI Copyright Debate With Cheating Lawsuit

  4. 🎨 Adobe MAX Attendees Weary of AI Focus

  5. ⚖️ EU AI Act Compliance Tool Exposes Big Tech's Regulatory Challenges

Lightmatter's $400M Series D funding round, led by T. Rowe Price Associates, values the company at $4.4B and brings total capital raised to $850M. The company's Passage technology, a 3D-stacked photonics engine, addresses critical bottlenecks in AI computing by dramatically increasing bandwidth and performance while reducing power consumption. This breakthrough enables efficient scaling of AI systems, preparing computing infrastructure for next-generation AI models.

  • Passage uses 3D-stacked photonics chips to boost AI cluster bandwidth and slash power consumption.

  • The technology addresses limitations in traditional electronic interconnects for 100,000+ XPU clusters.

  • Lightmatter's solution enables efficient scaling for next-gen AI models.

  • The company has attracted backing from major players like T. Rowe Price, Fidelity, and Google Ventures.

The massive investment in Lightmatter signals a potential paradigm shift in AI computing infrastructure. As the demand for more powerful AI systems grows, photonics-based solutions could become the cornerstone of future data centers, reshaping the competitive landscape in the AI industry.

A new study finds that AI chatbots' effectiveness in customer service hinges more on perceived intelligence than on human-like features: anthropomorphism boosts satisfaction only indirectly, through perceived smarts. Customer expertise also plays a big role, with human-like bots landing better with less savvy users.

  • 301 recent AI chatbot users surveyed.

  • Anthropomorphism indirectly ups satisfaction via perceived intelligence.

  • Less expert customers respond better to human-like AI.

  • Complex relationships analyzed using structural equation modeling.

  • Companies should prioritize AI smarts over human mimicry.

  • Adds to growing research on human-AI service interactions.
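The mediation structure the bullets describe (anthropomorphism → perceived intelligence → satisfaction) can be sketched with a toy regression-based mediation check. The synthetic data, effect sizes, and variable names below are illustrative assumptions, not the study's actual data or its structural equation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 301  # sample size mirrors the study's 301 respondents

# Hypothetical synthetic data: anthropomorphism raises perceived
# intelligence, which in turn drives satisfaction (indirect path);
# the direct anthropomorphism -> satisfaction path is weak.
anthro = rng.normal(size=n)
intel = 0.6 * anthro + rng.normal(scale=0.5, size=n)
satis = 0.8 * intel + 0.05 * anthro + rng.normal(scale=0.5, size=n)

def ols(predictors, y):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([anthro], intel)[1]              # path a: anthro -> intelligence
b = ols([intel, anthro], satis)[1]       # path b: intelligence -> satisfaction
direct = ols([intel, anthro], satis)[2]  # direct anthro -> satisfaction

print(f"indirect effect (a*b): {a*b:.2f}, direct effect: {direct:.2f}")
```

On this synthetic data the indirect effect dwarfs the direct one, which is the pattern the study reports: human-likeness helps mainly because it makes the bot seem smarter.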

The jury is still out on human-like AI, with some delighted, others disgusted. Future AI chatbots might begin shape-shifting, tweaking their smarts and human-like qualities based on each customer's expertise.

A high school student's parents are suing over AI-related discipline, exposing the lack of clear policies on AI use in schools. The case highlights the clash between traditional academic integrity and emerging AI tools, with potential implications for college admissions and educational practices.

  • The student used AI for notes and an outline, not to write the paper.

  • School handbook had no explicit AI policy.

  • Grade lowered, National Honor Society entry initially denied.

  • The lawsuit seeks grade change and sanction removal.

  • Inconsistent enforcement revealed among other students.

  • The parents want school officials trained on AI in education.

While some educators promote AI use, citing workplace relevance, this case may set a precedent for how schools handle AI in academic work. Schools have lately become a battleground over how and when AI belongs in the learning process.

Adobe Max 2024 attendees have expressed frustration with the event's overwhelming focus on AI, particularly Adobe's Firefly model. Designers feel the conference has shifted from a creativity-focused event to an AI showcase, with constant promotion of Firefly and AI capabilities overshadowing other aspects of design. Many attendees argue that the new AI tools are less impressive or useful than Adobe portrays, and some fear Adobe is prioritizing AI at the expense of professional design tools and workflow improvements.

  • Constant AI promotion at every opportunity has become a "red flag" for attendees.

  • Designers find AI demonstrations repetitive and unengaging.

  • Concerns have been raised about Adobe neglecting pro tools and workflow management.

  • Attendees view Firefly's capabilities as limited, mainly useful for backgrounds or improved clone-stamping.

  • There is even speculation that Adobe is trying to cut out designers as "middlemen" between software and clients.

Adobe's AI emphasis could end up alienating its core professional user base while chasing a broader, less specialized audience. In theory this opens opportunities for competitors to capture market share among professional designers seeking more tailored, non-AI-centric tools, but it's hard to imagine anyone sinking that much time and effort into a high-priced tool aimed at what may be an evaporating market, especially when building broadly applicable tools cheaply is an option.

The EU AI Act Compliance Checker, developed by AI governance startup Aleph Alpha, reveals significant challenges for big tech companies in meeting the EU's upcoming AI regulations. This tool analyzes AI systems against the Act's requirements, exposing potential compliance issues for major players like OpenAI, Anthropic, and Google.

  • The compliance checker identified that ChatGPT may struggle to meet the Act's requirements for transparency and human oversight.

  • Anthropic's Claude chatbot faces challenges in risk management and accuracy of outputs.

  • Google's PaLM 2 model shows potential issues with data governance and transparency.

  • Compliance gaps were found mainly in areas such as bias mitigation, robustness, and cybersecurity across various AI systems.
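The checker's basic approach, scoring a system against a list of the Act's requirements and flagging shortfalls, can be imagined as a simple rule-based checklist. The requirement names, scores, and threshold below are invented for illustration; they are not the actual tool's rubric or the Act's text:

```python
# Hypothetical sketch of a requirement-by-requirement compliance check.
# Requirement names and scores are invented for illustration only.
REQUIREMENTS = ["transparency", "human_oversight", "risk_management",
                "data_governance", "bias_mitigation", "cybersecurity"]

def check_compliance(scores: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Return the requirements whose assessed score falls below the threshold."""
    return [req for req in REQUIREMENTS if scores.get(req, 0.0) < threshold]

# Example: a model strong on oversight but weak on transparency and data handling.
model_scores = {"transparency": 0.5, "human_oversight": 0.9,
                "risk_management": 0.8, "data_governance": 0.6,
                "bias_mitigation": 0.75, "cybersecurity": 0.85}

gaps = check_compliance(model_scores)
print("potential gaps:", gaps)  # -> ['transparency', 'data_governance']
```

A real assessment is of course far more involved, but the output mirrors the kind of per-requirement gap lists the reports above describe.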

Regulatory compliance reviews typically move slowly, often taking years. If the EU is confident in this tool, the AI Act could begin reshaping the global AI landscape by forcing major players to overhaul their systems or face exclusion from the European market. This could lead to a bifurcation in AI development, with EU-compliant and non-compliant versions of AI systems emerging.