Tech

Google in Talks with Marvell for Custom AI Inference Chips

Google is in talks with Marvell Technology to develop two custom AI inference chips, including a memory processing unit and an inference-optimized TPU. This move signals Google's strategic diversification of its chip supply chain, expanding beyond its primary partner Broadcom to address the rapidly growing demand and cost of AI inference workloads. The collaboration aims to enhance Google's competitive advantage in the burgeoning custom silicon market.

Published: April 20, 2026
Reading time: 5 min

Google is reportedly in advanced discussions with Marvell Technology to develop two new specialized artificial intelligence (AI) chips, marking a significant move to diversify its custom silicon supply chain. This potential collaboration focuses on building a memory processing unit (MPU) and a Tensor Processing Unit (TPU) specifically optimized for AI inference workloads.

The talks, though not yet formalized into a signed contract, surfaced just days after Google solidified a long-term agreement with Broadcom, its primary custom chip partner, extending through 2031. This timing underscores Google’s strategy of expansion and diversification rather than a replacement of existing partners. Broadcom will continue to design and supply high-performance TPUs and networking components, while MediaTek contributes cost-optimized “e” variants. Should a deal with Marvell materialize, it would add a third key design partner to Google’s robust custom silicon ecosystem, alongside fabrication giant TSMC.

Shifting Focus to AI Inference

Google’s increased interest in inference-optimized silicon reflects a pivotal shift in AI compute demand. While training frontier AI models requires immense computational power over weeks or months, inference – the process of running trained models to serve user queries – represents a continuous, escalating cost that scales directly with user engagement. As AI-powered products reach hundreds of millions of users daily, inference costs are becoming the dominant expense.

Purpose-built inference chips offer a competitive advantage in terms of cost and efficiency that general-purpose GPUs often cannot match. Google’s seventh-generation TPU, Ironwood, launched earlier this month, is already being heralded as “the first Google TPU for the age of inference.” It boasts ten times the peak performance of its predecessor, the TPU v5p, and can scale to superpods of 9,216 liquid-cooled chips. Google plans to deploy millions of Ironwood units this year, and any Marvell-designed chips would likely complement this existing infrastructure, potentially addressing different workload profiles or cost requirements.

Marvell’s Growing Influence in Custom Silicon

Marvell Technology has emerged as a significant player in the custom silicon market. The company reported record data center revenue of $6.1 billion in its fiscal year ending February 2026, contributing to total revenue of $8.2 billion, a 42% year-over-year increase. Its custom silicon business generates a $1.5 billion annual run rate, having secured 18 design wins with major cloud providers, including Amazon (Trainium processors), Microsoft (Maia AI accelerator), and Meta (a new data processing unit). Marvell also already collaborates with Google on the Axion ARM CPU.

Recent strategic moves further bolster Marvell’s position. Nvidia invested $2 billion in Marvell in late March, forging a partnership through NVLink Fusion to integrate Marvell’s custom chips and networking with Nvidia’s interconnect fabric. Additionally, Marvell acquired Celestial AI in December 2025 for up to $5.5 billion, gaining advanced photonic interconnect technology. CEO Matt Murphy aims for a 20% market share in custom AI chips and projects roughly 30% year-over-year revenue growth in fiscal 2027. Marvell’s stock has reflected this strong momentum, rallying approximately 50% year-to-date.

Broadcom’s Enduring Market Leadership

Despite Google’s exploration of new partners, Broadcom’s position in the custom AI accelerator market remains robust. The company commands over 70% market share in this segment, with its AI revenue soaring to $8.4 billion in its most recent quarter—a 106% year-over-year increase. Guidance for the following quarter projects $10.7 billion in AI revenue, with the company targeting $100 billion in AI chip revenue by 2027. Following the Google extension announcement, Broadcom’s shares rose over 6%. Mizuho analysts estimate Broadcom will generate $21 billion in AI revenue from its Google and Anthropic relationships in 2026, potentially doubling to $42 billion in 2027.

The broader custom chip market is experiencing explosive growth, with TrendForce projecting a 45% increase in custom chip sales in 2026, significantly outpacing the 16% growth forecast for GPU shipments. Counterpoint Research anticipates Broadcom will hold approximately 60% of the custom AI accelerator market by 2027, with Marvell securing about 25%. The overall market for custom AI chips is projected to reach $118 billion by 2033.

Google’s Multi-Partner Approach

Google’s evolving chip strategy now encompasses at least four external partners (Broadcom, MediaTek, Marvell, and TSMC) alongside its own internal design teams. This complex, multi-vendor approach for its diverse product line—spanning AI training, inference, and general-purpose cloud compute—is a deliberate strategic choice. Hyperscalers dependent on a single chip supplier face inherent risks related to pricing, supply chain stability, and strategic vulnerability. By diversifying, Google aims to mitigate these risks and gain greater control over its foundational AI infrastructure.

This inference-focused engagement with Marvell highlights Google’s commitment to achieving massive cost efficiencies at scale. Shaving even a small percentage off the cost per inference across billions of daily AI-augmented search queries, Gemini conversations, and Cloud AI API calls translates into billions of dollars in annual savings. While chip development timelines mean any Marvell-designed products are likely years from production, the strategic direction is unequivocally clear: Google is building a resilient, multi-partner supply chain designed to power the world’s most demanding AI inference workloads.
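The savings argument above is easy to make concrete with a back-of-envelope calculation. The sketch below uses purely illustrative assumptions for query volume and per-request cost (neither figure is reported in the article); the point is only that at hyperscale volumes, even a single-digit percentage reduction in per-inference cost compounds into billions of dollars per year.

```python
# Back-of-envelope: annual savings from a small per-inference cost reduction.
# All input values are illustrative assumptions, NOT figures from the article.

daily_queries = 1e10         # assumed AI-augmented requests per day across products
cost_per_inference = 0.01    # assumed blended cost in USD per request
savings_fraction = 0.05      # a hypothetical 5% per-inference cost reduction

annual_cost = daily_queries * cost_per_inference * 365
annual_savings = annual_cost * savings_fraction

print(f"Assumed annual inference spend: ${annual_cost / 1e9:.1f}B")
print(f"Savings from a 5% cost reduction: ${annual_savings / 1e9:.1f}B")
```

Under these assumptions the annual inference bill is tens of billions of dollars, and a 5% efficiency gain from purpose-built silicon is worth roughly $1.8 billion per year, which is why custom inference chips can justify their development cost.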

FAQ

Q: Why is Google diversifying its AI chip suppliers?

A: Google is diversifying its AI chip suppliers to mitigate risks associated with relying on a single vendor, including pricing risk, supply chain vulnerabilities, and strategic dependence. A multi-partner approach provides greater control and resilience for its extensive AI infrastructure.

Q: What type of AI chips is Google discussing with Marvell?

A: Google is in talks with Marvell Technology to develop two specific types of AI chips: a memory processing unit (MPU) designed to work alongside existing Tensor Processing Units (TPUs), and a new TPU explicitly optimized for AI inference workloads.

Q: How does this potential partnership impact Google’s relationship with Broadcom?

A: The discussions with Marvell do not appear to replace Broadcom but rather to add a third design partner. Broadcom recently secured a long-term agreement with Google through 2031 and continues to hold a dominant market share in custom AI accelerators, indicating Google's strategy is diversification, not substitution.

Tags: Google, Marvell, AI Chips, TPU, Broadcom
