Advanced Micro Devices has spent years positioning itself as a credible alternative to Nvidia in the AI chip race. Now, with a landmark partnership with Meta Platforms and a stock performance that has outpaced even Nvidia year-to-date, AMD is forcing investors to reconsider how they think about the competitive landscape in artificial intelligence silicon. The question isn't whether AMD belongs in the conversation — it's whether the current valuation already prices in the growth ahead, and how it stacks up against a surprisingly strong rival in Broadcom.
The Meta Partnership: What 6 Gigawatts Actually Means
In April 2026, AMD announced one of the most significant deals in its history: a partnership with Meta Platforms to deploy 6 gigawatts of AMD GPUs across Meta's data center infrastructure. To put that number in context, a single gigawatt of GPU compute represents an enormous capital commitment — modern AI training clusters consume power at scales that dwarf traditional data centers by orders of magnitude.
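A rough back-of-envelope calculation shows why 6 gigawatts is such a striking number. The per-accelerator power figure below is a hypothetical all-in estimate (chip plus cooling and overhead), not a disclosed term of the AMD-Meta deal:

```python
# Back-of-envelope only: the per-accelerator power draw is a rough
# hypothetical assumption, not a disclosed figure from the announcement.

total_power_w = 6e9      # 6 gigawatts, per the AMD-Meta announcement
watts_per_gpu = 1_500    # assumed all-in draw per accelerator (GPU + cooling/overhead)

gpu_count = total_power_w / watts_per_gpu
print(f"Implied accelerator count: ~{gpu_count / 1e6:.0f} million")
```

Under that assumption, the deal implies on the order of four million accelerators, which is why a single gigawatt of commitment already represents an enormous capital outlay.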
The first gigawatt of shipments is set to begin in the second half of 2026, meaning the revenue impact will start materializing within the current fiscal year. AMD CFO Jean Hu told investors to expect "substantial multiyear revenue growth" from the Meta partnership — language that CFOs use carefully, because it sets expectations. The word "substantial" in a financial context isn't filler; it's a signal that this deal has the potential to move the needle on AMD's top line in ways that analysts need to model seriously.
Meta's decision to diversify its GPU supply away from exclusive reliance on Nvidia has strategic logic beyond just cost negotiation. When a single supplier controls the market for a critical input, hyperscalers become vulnerable to allocation constraints, price increases, and supply chain disruptions. AMD's MI300 series GPUs offer performance competitive enough for many AI workloads that Meta runs at scale, making this a genuine technical fit rather than a negotiating tactic.
AMD vs. Broadcom: Two Ways to Win the AI Chip Race
The AI chip story isn't just an AMD story — it's increasingly a two-company story, with Broadcom emerging as the other chipmaker that has outpaced Nvidia year-to-date and over the past year. Understanding why both AMD and Broadcom are winning, even as they compete for similar capital, requires understanding how differently they approach the market.
AMD operates on the general-purpose GPU model, similar to Nvidia. Its chips are designed to handle a wide range of AI workloads — training large language models, running inference at scale, supporting research applications that don't fit neatly into a single use case. This flexibility is a genuine advantage: a hyperscaler using AMD GPUs can redeploy that compute across different tasks without redesigning their infrastructure.
Broadcom, by contrast, specializes in custom AI chips — application-specific integrated circuits (ASICs) designed for a particular customer's particular workload. Broadcom produces Google's Tensor Processing Units (TPUs), and Anthropic is actively investing in additional TPUs. In April 2026, Broadcom also announced extended partnerships with Meta Platforms, Alphabet, and Anthropic — a sweep of deals that underscores how dominant it has become in the custom silicon market. For more on how custom chip deals are reshaping semiconductor valuations, the recent news of MRVL stock surging on a Google AI chip deal illustrates the same dynamic playing out across the sector.

The tradeoff is real: custom chips can outperform general-purpose GPUs on specific workloads by significant margins, but they require massive upfront investment and lock both customer and supplier into a long-term relationship. If the workload changes, the chip may not keep up. AMD's flexibility is worth something precisely because AI workloads are still evolving rapidly.
According to a detailed comparison of the two chipmakers, Broadcom currently holds an advantage in profit margins despite similar revenue growth trajectories. This is a meaningful distinction for investors: margin quality affects how much of revenue growth actually flows through to earnings and, ultimately, to valuation multiples that are sustainable over time.
The Margin Story: Where AMD Has Room to Run
Broadcom's margin advantage is real, but AMD's margin trajectory may be more interesting than its current position. The bull case for AMD stock isn't that it matches Broadcom's margins today — it's that the company has a credible path to net profit margin expansion that makes current valuations compelling relative to future earnings power.
AMD's GPU business carries higher margins than its CPU business, and as the revenue mix shifts toward AI accelerators (like the MI300 series driving the Meta deployment), overall company margins should improve structurally. This is a similar dynamic to what Nvidia experienced as it transitioned from a gaming-GPU company to an AI infrastructure company — the product mix shift amplified margin expansion beyond what simple revenue growth would suggest.
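The mix-shift effect described above is simple weighted-average arithmetic. The segment margins and revenue shares below are illustrative round numbers, not AMD's reported figures:

```python
# Illustrative mix-shift arithmetic only: these segment margins and revenue
# shares are hypothetical round numbers, not AMD's reported financials.

def blended_margin(gpu_share: float, gpu_margin: float, cpu_margin: float) -> float:
    """Blend two segment margins, weighted by revenue share."""
    return gpu_share * gpu_margin + (1 - gpu_share) * cpu_margin

GPU_MARGIN, CPU_MARGIN = 0.55, 0.45  # hypothetical segment margins

# As GPU revenue share rises, the blended margin rises even though
# neither segment's own profitability changes -- the mix-shift effect.
for gpu_share in (0.3, 0.5, 0.7):
    m = blended_margin(gpu_share, GPU_MARGIN, CPU_MARGIN)
    print(f"GPU share {gpu_share:.0%}: blended margin {m:.1%}")
```

The point of the sketch is that overall margins can expand purely from the revenue mix tilting toward the higher-margin segment, which is the structural dynamic the AI accelerator ramp creates.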
The key variables are execution and competition. AMD needs to keep its GPU products competitive with Nvidia's next-generation offerings while also ramping manufacturing at TSMC to meet surging demand. Supply chain execution at the scale Meta's partnership requires is a genuine operational challenge, not a trivial logistics problem.
Market Backdrop: A 29% CAGR Through 2030
The macro context for both AMD and Broadcom is genuinely unusual. Grand View Research projects a 29% compound annual growth rate for the AI chip industry from 2024 to 2030 — a rate that, if it materializes, would roughly quintuple the market within six years. Very few industries of any meaningful size grow at that rate for that long.
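The "roughly quintuple" figure follows directly from compounding the projected rate over the six-year window:

```python
# Sanity-check the compounding behind the projected 29% CAGR.
# The rate and the 2024-2030 window come from the projection cited above.

cagr = 0.29   # projected annual growth rate
years = 6     # 2024 -> 2030

multiple = (1 + cagr) ** years
print(f"Market multiple after {years} years at {cagr:.0%} CAGR: {multiple:.2f}x")
```

A 29% annual rate compounds to about a 4.6x larger market over six years, which is where the "roughly quintuple" framing comes from.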
The drivers behind that projection are structural rather than cyclical. Large language models continue to scale with compute — meaning more chips, not fewer, are required to push AI capabilities forward. Inference demand (running AI models in production) is growing faster than training demand as AI applications proliferate beyond research labs and into consumer products, enterprise software, and critical infrastructure. Both training and inference workloads benefit AMD and Broadcom, though in different ways.
The risk in that 29% CAGR figure is that market projections for fast-moving technology sectors routinely overestimate near-term growth and underestimate long-term structural shifts. The AI chip market could grow faster if model scaling continues to reward compute investment. It could grow slower if efficiency improvements (like model distillation or quantization) reduce the compute required per unit of AI capability. AMD and Broadcom are positioned well either way, but the magnitude of their growth opportunity depends significantly on which scenario plays out.
Stock Performance Context: Outpacing Nvidia
The fact that both AMD and Broadcom have outpaced Nvidia year-to-date is worth examining carefully, because it runs counter to the narrative that Nvidia is the unassailable winner in AI chips. Nvidia remains the dominant player by revenue and market share, but its stock has arguably already priced in a larger portion of its future growth opportunity than AMD or Broadcom.
This is a valuation story as much as a fundamentals story. When a company is priced for perfection, even strong results can disappoint. When a company is priced for uncertainty — as AMD has been, given persistent skepticism about whether it can compete with Nvidia at the high end — positive data points (like the Meta partnership) produce outsized stock reactions because they reduce the uncertainty premium embedded in the valuation.
AMD's stock performance also reflects a broader investor rotation toward AI infrastructure beneficiaries that don't carry Nvidia's premium valuation. Institutional investors who missed Nvidia's run are looking for the next best exposure to AI chip demand, and AMD represents a logical alternative with genuine technical credentials and improving customer relationships.
What This Means for Investors: An Informed Take
The AMD bull case is straightforward: a credible AI chip product, a major new customer in Meta with a multiyear revenue commitment, a large and growing market, and a valuation that doesn't fully reflect the margin expansion potential of a GPU-heavy revenue mix. The bear case is equally clear: Broadcom has better margins today, Nvidia has deeper customer relationships and a more mature software ecosystem (CUDA), and AMD needs to execute flawlessly on a very large manufacturing ramp to meet the Meta deployment timeline.
For investors trying to choose between AMD and Broadcom, the decision hinges on risk tolerance and time horizon. Broadcom offers more margin certainty today and a custom chip model that creates deep customer lock-in — once a hyperscaler designs its infrastructure around a Broadcom ASIC, switching costs are enormous. AMD offers more upside from margin expansion and benefits from the flexibility of its general-purpose GPU approach at a time when AI workloads are still diversifying rapidly.
The honest answer for most investors is that both companies are reasonable expressions of AI chip exposure, and the competitive dynamics between them are less zero-sum than they appear. Meta signing partnerships with both companies in the same month is itself evidence that hyperscalers want diversified supply chains — which benefits AMD and Broadcom simultaneously.
Frequently Asked Questions
What is the AMD-Meta GPU partnership, and why does it matter?
AMD and Meta Platforms announced a deal to deploy 6 gigawatts of AMD GPUs across Meta's data center infrastructure, with the first gigawatt of shipments beginning in the second half of 2026. This matters because it represents one of the largest single GPU procurement commitments in AMD's history, provides visibility into multiyear revenue growth (per AMD CFO Jean Hu's comments to investors), and validates AMD's MI300-series GPUs as a credible alternative to Nvidia for large-scale AI workloads.
How does AMD compare to Broadcom as an AI chip investment?
Both companies have outpaced Nvidia year-to-date, but they compete differently. AMD makes general-purpose GPUs that work across a wide range of AI workloads, while Broadcom specializes in custom ASICs designed for specific customers like Google (TPUs) and Meta. Broadcom currently has higher profit margins, but AMD has more potential for margin expansion as its revenue mix shifts toward higher-margin GPU products. The choice between them depends on whether you value current margin quality (Broadcom) or future margin expansion potential (AMD).
Is AMD a good stock to buy right now?
AMD's risk/reward depends on execution. The Meta partnership provides multiyear revenue visibility, and the AI chip market is projected to grow at a 29% CAGR through 2030 per Grand View Research. The risks include execution on a large manufacturing ramp, competition from Nvidia's next-generation products, and whether AMD can close the software ecosystem gap with Nvidia's CUDA platform. Investors should weigh the margin expansion potential against the operational challenges of scaling to meet demand of this magnitude. This is not financial advice — consult a qualified financial advisor before making investment decisions.
Why have AMD and Broadcom both outperformed Nvidia?
Nvidia's dominant position in AI chips is well-known and largely priced into its valuation. AMD and Broadcom have outperformed because investors are rotating toward AI chip exposure with more upside from current prices, and both companies have announced significant new partnerships in 2026 that reduce uncertainty about their growth trajectories. Positive surprises move stock prices more than expected outcomes — which helps AMD and Broadcom more than Nvidia at current valuations.
What is Broadcom's advantage over AMD?
Broadcom's primary advantages are higher profit margins and deep customer lock-in through custom chip design. When Broadcom co-designs a chip with Google or Meta, the customer's infrastructure is built around that specific silicon — switching costs are enormous, creating durable revenue relationships. Broadcom also benefits from partnerships with Anthropic (which is investing in additional Google TPUs that Broadcom produces) and Alphabet. The custom chip model trades flexibility for margin and stickiness — a tradeoff that has worked well for Broadcom financially.
The Bottom Line
AMD's announcement of a 6-gigawatt GPU deployment with Meta Platforms is the most significant commercial validation the company has received in the AI chip market — and it arrives at a moment when the broader AI infrastructure buildout is accelerating, not decelerating. The comparison with Broadcom is instructive rather than competitive: both companies are winning in an expanding market, just with different business models and margin profiles.
What AMD stock represents right now is a bet on margin expansion, on continued hyperscaler diversification away from single-vendor GPU dependence, and on the company's ability to execute one of the largest manufacturing ramps in its history. The Meta deal provides the revenue visibility that was previously AMD's biggest weakness in the AI chip narrative. Whether the stock is fairly valued depends on how much of that expansion investors are willing to price in today — and at a projected 29% CAGR for the broader industry, the runway for continued growth is long enough that even a conservative discount to Nvidia's multiples leaves room for meaningful upside.
The AI chip race isn't a winner-take-all contest. AMD, Broadcom, and Nvidia can all grow substantially while competing — and that's precisely the kind of market dynamic that rewards investors who look beyond the headline name to the companies still finding their footing at scale.