6 topics covered

Listen to today's briefing
0:00--:--

Perplexity AI Faces Class-Action Lawsuit Over Data Sharing with Meta and Google

What happened: Perplexity AI is facing a class-action lawsuit alleging that the company has been secretly sharing personal user data from customer chats with Meta and Google without proper user consent.

Key details:

  • Class-action lawsuit targets Perplexity's data sharing practices
  • Alleged partners include Meta and Google
  • Data in question comes from user chats and conversations
  • Users were not explicitly informed of this data sharing
  • Bloomberg first reported the legal action

Why it matters: This lawsuit raises critical questions about data privacy in conversational AI platforms. If substantiated, it would demonstrate a pattern of user data monetization that many users assume doesn't occur when using "free" AI services. The case could establish legal precedent around what constitutes proper consent for AI training data collection and third-party data sharing.

Practical takeaway: If you use Perplexity, review the platform's privacy policy immediately and consider what data you're willing to share in conversations, and evaluate whether you should switch to alternative platforms with explicit commitments against third-party data sharing.

China Achieves 41% AI Accelerator Market Share Through Domestic Chipmakers

What happened: Chinese chipmakers have captured nearly 41 percent of China's AI accelerator server market in 2025, according to an IDC report, marking a significant milestone in China's AI infrastructure independence from U.S. suppliers like Nvidia.

Key details:

  • Chinese chipmakers now control 41% of China's AI accelerator market as of 2025
  • This represents substantial growth in domestic GPU/accelerator adoption
  • Data comes from IDC report seen by Reuters
  • Achievement reflects successful diversification away from Nvidia dominance
  • Domestic alternatives are becoming competitive in performance and adoption

Why it matters: This milestone demonstrates that China's domestic AI chip strategy—driven by both innovation and U.S. export restrictions—is succeeding. With nearly half the market, Chinese accelerator makers have achieved sufficient scale to sustain R&D investment. This reduces China's technology dependency on U.S. suppliers and signals a bifurcation of global AI infrastructure markets along geopolitical lines.

Practical takeaway: Companies in AI infrastructure should monitor whether China's 41% share continues growing—if domestic Chinese chips reach 50%+ market penetration, it signals the AI infrastructure market is permanently splitting into regional ecosystems.

Stream Deck Integrates AI Agent Control via Model Context Protocol

What happened: Elgato released Stream Deck software update version 7.4 that adds Model Context Protocol (MCP) support, allowing AI assistants like Claude, ChatGPT, and Nvidia G-Assist to directly control Stream Deck hardware by finding and activating buttons programmatically.

Key details:

  • Stream Deck 7.4 introduces native MCP support
  • Enables AI assistants to autonomously control Stream Deck devices
  • Compatible with Claude, ChatGPT, and Nvidia G-Assist
  • Users no longer need to manually push buttons—AI can delegate these tasks
  • Represents standardization of AI-hardware integration through MCP

Why it matters: This integration demonstrates MCP becoming a genuine infrastructure standard for AI-hardware connectivity. Stream Deck is a popular tool for content creators, streamers, and professionals—integrating AI control means millions of users now have direct AI agent access to their workflow automation. This is a concrete example of the "AI agents in production infrastructure" trend becoming mainstream.

Practical takeaway: If you use Stream Deck for content creation or workflow automation, update to version 7.4 and experiment with delegating routine button sequences to AI assistants to see what productivity gains are possible.

Anthropic's Claude Code Leak: Real-World Escalation with 8,000+ GitHub Clones

What happened: Following Anthropic's accidental source code leak of Claude Code in version 2.1.88, the damage is escalating significantly—Anthropic has discovered that clones of the leaked code have been created over 8,000 times on GitHub despite the company's mass takedown efforts.

Key details:

  • Over 512,000 lines of TypeScript source code were exposed
  • More than 8,000 GitHub clones exist despite Anthropic's ongoing takedown attempts
  • The complete internal architecture and implementation details are now publicly accessible
  • Mass takedowns have proven insufficient to contain distribution
  • This represents one of the largest accidental source leaks in AI infrastructure history

Why it matters: The proliferation of clones shows that the original breach cannot be contained through traditional takedown mechanisms. This means competitors, researchers, and potentially bad actors now have direct access to Anthropic's proprietary tool architecture. The 8,000+ clone count demonstrates the permanence of open-source distribution—information shared once cannot be recalled.

Practical takeaway: Organizations building proprietary AI tools should assume that major source code leaks cannot be fully mitigated after distribution and plan security strategies accordingly, including architectural changes to prevent exposed code from being directly usable.

Institutional AI Policy: EU Bans AI-Generated Communications and Jack Dorsey Questions Manager Value

What happened: Two parallel developments signal institutional skepticism about AI in critical roles: the EU Commission, Parliament, and Council have formally barred their press teams from using fully AI-generated content in official communications, while tech leader Jack Dorsey has made a public case against the value of management roles in an AI-augmented future.

Key details:

  • EU institutions have prohibited fully AI-generated content in official press communications
  • Policy applies to Commission, Parliament, and Council press teams
  • Experts view the ban as a missed opportunity rather than necessary caution
  • Jack Dorsey (Square/Block founder) argues AI will diminish the case for traditional management structures
  • Both reflect skepticism about AI replacing human judgment in high-stakes contexts

Why it matters: The EU's formal ban establishes institutional precedent that AI-generated content is unacceptable for official government communications, even within the world's largest and most AI-forward regulatory body. Combined with Dorsey's commentary questioning management hierarchy itself, these developments suggest that despite AI hype, institutional decision-makers are hedging against over-reliance on AI and reconsidering fundamental organizational structures.

Practical takeaway: Organizations considering AI-generated content policies should note that even the EU—which has heavily invested in AI governance expertise—has decided that official communications require human authorship, suggesting that mission-critical communications should maintain human accountability regardless of AI capability.

AI Agent Security Vulnerabilities Exposed by Google DeepMind

What happened: Researchers at Google DeepMind released the first systematic study documenting how websites, documents, and APIs can be weaponized to manipulate, deceive, and hijack autonomous AI agents operating in the wild.

Key details:

  • Study identifies six main categories of attack vectors against autonomous agents
  • Attacks can compromise agents browsing the web, handling emails, and executing transactions
  • The vulnerability stems from the environment agents operate in, not the agents themselves
  • This represents the first comprehensive taxonomy of agent-specific security threats
  • Agents currently lack robust defenses against these environmental manipulation techniques

Why it matters: As AI agents become deployed for increasingly critical real-world tasks—from automated business processes to financial transactions—these vulnerabilities represent a serious security gap. The study provides the first systematic framework for understanding and addressing agent-specific attack surfaces, which is essential before agents operate autonomously at scale.

Practical takeaway: Developers deploying autonomous agents should immediately review the Google DeepMind taxonomy and implement input validation and anomaly detection mechanisms that specifically account for environmental manipulation attacks.