Many of our investor clients are increasingly questioning whether AI data center investments are approaching saturation or whether they will continue to accelerate. A common concern is whether the current pace of GPU purchases for training and running models will be sufficient over the next 2-3 years, or whether the industry is nearing its capacity. This concern is heightened by reports that Nvidia's Blackwell AI processors are sold out for the next 12 months.
Nvidia is expected to produce 450,000 Blackwell AI GPUs in Q4 2024 alone, and that figure does not even include H100s.
In this article, we will look at:
GPU Investment Strategy: Meta and Google’s 2025 Plan
More GPUs → Better AI Performance → More AI Innovation
Distribution of GPU Allocation to Training vs. Inference vs. Application
Major Data Center Build Outs
The Push for Sovereign AI
Please note: The insights presented in this article are derived from confidential consultations our team has conducted with clients across private equity, hedge funds, startups, and investment banks, facilitated through specialized expert networks. Due to our agreements with these networks, we cannot reveal specific names from these discussions. Therefore, we offer a summarized version of these insights, ensuring valuable content while upholding our confidentiality commitments.
GPU Investment Strategy: Meta and Google’s 2025 Plan
With Jensen Huang saying Meta has 600,000 H100 GPUs (we estimate roughly 400,000 are already deployed and in use), the question arises whether the industry will approach a saturation point in the near future. However, leaders like Mark Zuckerberg suggest that we are far from hitting a ceiling, with Meta and other companies continuing to ramp up investments in AI infrastructure.
Our clients frequently ask whether the growth in GPU investments will continue into 2025. We firmly believe that