A Comprehensive Review of AI Chips by Nvidia, AMD, Google, and Tesla
This article reviews AI chips developed by Nvidia, AMD, Google, and Tesla, with estimates of 2025 production volumes and performance. Recent developments suggest that Tesla, with its planned Dojo 3 chip, could surpass AMD in both AI chip performance and production volume.
The key takeaway from Tesla's Abundance slide is its focus on AI compute alongside projects like Optimus, Robotaxi, and Full Self-Driving (FSD). This underscores the significance of the Dojo 2 and Dojo 3 AI training chips for enhancing FSD capabilities and training the Optimus robot. In addition, the AI5 and AI6 chips are being developed for use in Optimus and Robotaxi, which are also central to Tesla's AI strategy.
In terms of production estimates, AMD is believed to have shipped between 300,000 and 400,000 units of its Instinct MI300 AI chips in 2024, generating about $5 billion in revenue. If we calculate the average selling price (ASP) of these chips, we see the following:
$5 billion ÷ 300,000 units = approximately $16,667 per chip
$5 billion ÷ 400,000 units = approximately $12,500 per chip.
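The implied ASP range above can be sketched in a few lines of Python. The revenue (~$5 billion) and the 300,000–400,000 unit range are the article's estimates, not AMD disclosures:

```python
# Implied ASP for AMD's Instinct MI300 in 2024, using the article's estimates.
revenue = 5_000_000_000           # estimated MI300 revenue, 2024
for units in (300_000, 400_000):  # estimated unit shipment range
    asp = revenue / units
    print(f"{units:,} units -> ASP ~ ${asp:,.0f}")
```

Running this reproduces the ~$12,500 to ~$16,667 per-chip range.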
Looking ahead to 2025, AMD is expected to sell around 500,000 AI chips, which at an implied ASP of about $15,000 would yield roughly $7.5 billion in revenue.
Nvidia AI Chips in 2025
Nvidia's dominance in the AI sector is reflected in its data center revenue, which serves as a good indicator of its AI chip sales. For 2024, analysts project Nvidia's data center revenue to reach approximately $110.36 billion. Given its continued leadership in the AI market, a revenue estimate of $120 billion for 2025 seems realistic.
The number of chips sold will depend on the average selling price. Nvidia's H100 GPUs are known to range from $20,000 to $40,000 each. Assuming an ASP of $30,000 per chip, we can estimate:
$120 billion ÷ $30,000 = 4,000,000 chips (or 4 million chips).
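The same unit estimate as a sketch, with both inputs being the article's assumptions (the $120 billion revenue projection and the $30,000 blended ASP):

```python
# Rough Nvidia AI chip unit estimate for 2025, per the article's assumptions.
revenue_2025 = 120_000_000_000  # assumed 2025 data center revenue
asp = 30_000                    # assumed blended ASP (H100 list range: $20k-$40k)
chips = revenue_2025 / asp
print(f"~{chips / 1e6:.1f} million chips")
```

The result is sensitive to the ASP assumption: at $40,000 the same revenue implies only 3 million chips.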
Google TPUs in 2025
Google develops Tensor Processing Units (TPUs) primarily for its own internal use, meaning that the production numbers reflect deployment within its data centers rather than direct sales. In 2024, the global shipments of AI ASIC accelerators, including TPUs, are expected to total about 3.45 million units, with Google having a significant share of 74%, translating to approximately 2.55 million TPUs. With a 20% projected market growth in 2025, the total shipments could rise to 4.14 million units. If Google maintains its 74% market share, that would yield:
4.14 million × 0.74 = 3.06 million TPUs.
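The TPU projection chains three assumptions from the article: the 2024 AI ASIC shipment base, 20% market growth, and Google holding its 74% share. A minimal sketch:

```python
# Projected 2025 Google TPU deployment from the article's market assumptions.
total_2024 = 3.45e6   # estimated global AI ASIC accelerator shipments, 2024
growth = 1.20         # assumed 20% market growth in 2025
google_share = 0.74   # Google's assumed share of shipments
tpus_2025 = total_2024 * growth * google_share
print(f"~{tpus_2025 / 1e6:.2f} million TPUs")
```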
Regarding performance, the current TPU v4 delivers 275 teraFLOPs (bfloat16), while the TPU v5e provides 197 teraFLOPs (bfloat16). A new sixth-generation TPU (TPU v6) is planned for 2025, with expectations of performance improvements, estimated to deliver 400 teraFLOPs per chip (bfloat16).
Tesla Dojo 2 in 2025
Tesla's Dojo 2 AI training chip is anticipated to enter high-volume production by late 2025. The Dojo 1 chip offers 367 teraFLOPs; Tesla's figures suggest its Dojo 1 deployment delivers about 5% of the compute of a 50,000-to-100,000-unit Nvidia H100 fleet, implying that around 15,000 to 30,000 Dojo 1 chips may have been produced.
Tesla appears to be investing around $500 million annually into developing its Dojo supercomputers. Assuming an ASP of $10,000 per chip (similar to high-end AI chips), this translates to:
$500 million ÷ $10,000 = 50,000 chips.
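This budget-based volume estimate can be sketched the same way; both the annual budget and the per-chip cost are the article's assumptions:

```python
# Dojo chip volume implied by the article's assumed budget and ASP.
annual_budget = 500_000_000  # assumed annual Dojo investment
asp = 10_000                 # assumed per-chip cost
print(f"{annual_budget // asp:,} chips")
```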
The Dojo 2 is expected to be ten times more powerful than the Dojo 1, which would put it at roughly 3.7 petaFLOPs (ten times the Dojo 1's 367 teraFLOPs) and ahead of a single Nvidia H100.
Additionally, Tesla's upcoming Dojo 3 chip, projected for release in 2026, could deliver a further tenfold increase in performance, reaching around 40 petaFLOPs. That would put it in direct competition with Nvidia's B300, and potentially make it competitive on cost-performance with Nvidia's Rubin chips.
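The Dojo 2 and Dojo 3 projections both follow from one assumption, a tenfold jump per generation, applied to the Dojo 1 baseline:

```python
# Generational scaling assumed in the article: 10x performance per generation.
dojo1_tflops = 367                       # Dojo 1 per-chip performance
dojo2_pflops = dojo1_tflops * 10 / 1000  # ~3.7 petaFLOPs
dojo3_pflops = dojo2_pflops * 10         # ~37 petaFLOPs (article rounds to ~40)
print(f"Dojo 2 ~{dojo2_pflops:.1f} PFLOPs, Dojo 3 ~{dojo3_pflops:.0f} PFLOPs")
```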
More broadly, both Tesla and xAI are expected to build massive-scale AI data centers by 2026. If the Dojo 3 chip delivers, Tesla could emerge as the second-largest player in the AI chip market, surpassing AMD.
Tags: AMD, Google, Tesla, Nvidia, AI