Pricing details for Nvidia's highly anticipated Blackwell platform have begun to surface, revealing the hefty investment required to deploy these advanced AI servers. Analysts and Nvidia's CEO Jensen Huang have indicated that these products will carry substantial price tags, reflecting their cutting-edge capabilities. According to Morgan Stanley, Nvidia is expected to ship 60,000 to 70,000 B200 server cabinets in 2025, potentially generating as much as $210 billion in annual revenue despite the high cost per unit.
Nvidia has reportedly poured around $10 billion into the development of the Blackwell platform, enlisting approximately 25,000 personnel for the project. Given the exceptional performance of a single Blackwell GPU, it’s no surprise these products are priced at a premium.
HSBC analysts have provided more specific figures, estimating the cost of Nvidia's GB200 NVL36 server rack system at $1.8 million, while the NVL72 is priced at $3 million. The GB200 Superchip, which integrates CPUs and GPUs, is projected to cost between $60,000 and $70,000 each. Each Superchip pairs two Blackwell GPUs with a Grace CPU, backed by a large pool of HBM3E memory.
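For perspective, a rough back-of-the-envelope check shows how much of each rack estimate the Superchips alone account for. The sketch below is ours, not HSBC's; it uses the standard configurations of 18 Superchips in the NVL36 and 36 in the NVL72, and assumes the remainder of each rack price covers everything else in the cabinet.

```python
# Back-of-the-envelope: how much of HSBC's rack estimates do the GB200
# Superchips alone account for? Superchip counts per rack (18 for NVL36,
# 36 for NVL72) are the standard configurations; treating the leftover
# as "switches, networking, cooling" is our assumption, not HSBC's.

SUPERCHIP_PRICE = (60_000, 70_000)  # HSBC's per-unit estimate, USD

racks = {
    "GB200 NVL36": {"superchips": 18, "hsbc_estimate": 1_800_000},
    "GB200 NVL72": {"superchips": 36, "hsbc_estimate": 3_000_000},
}

for name, r in racks.items():
    low = r["superchips"] * SUPERCHIP_PRICE[0]
    high = r["superchips"] * SUPERCHIP_PRICE[1]
    rest = r["hsbc_estimate"] - high  # what's left after the Superchips
    print(f"{name}: Superchips ${low/1e6:.2f}M-${high/1e6:.2f}M "
          f"of the ${r['hsbc_estimate']/1e6:.1f}M estimate "
          f"(~${rest/1e6:.2f}M+ left for switches, networking, cooling)")
```

On these assumptions, the Superchips account for the majority of each rack's estimated price, with the remainder plausibly covering NVLink switch trays, networking, power delivery, and liquid cooling.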
Earlier this year, CEO Jensen Huang told CNBC that a Blackwell GPU would cost between $30,000 and $40,000. Based on this figure, Morgan Stanley calculates that each AI server cabinet will be priced between $2 million and $3 million; at the planned shipment volume of 60,000 to 70,000 B200 cabinets, projected annual revenue could reach $210 billion.
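The arithmetic behind that headline figure is straightforward. The snippet below is a minimal sketch of the implied range using only the numbers quoted above, not a reproduction of Morgan Stanley's model:

```python
# Implied revenue range from Morgan Stanley's figures: 60,000-70,000
# cabinets at $2M-$3M each. Pairing low with low and high with high
# is our simplification, not theirs.

shipments = (60_000, 70_000)      # B200 server cabinets, 2025
price_per_cabinet = (2e6, 3e6)    # USD

low = shipments[0] * price_per_cabinet[0]   # $120 billion
high = shipments[1] * price_per_cabinet[1]  # $210 billion
print(f"Implied annual revenue: ${low/1e9:.0f}B-${high/1e9:.0f}B")
```

The $210 billion figure is thus the top end of the range; the low end of the same arithmetic sits around $120 billion.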
Despite the steep prices, demand for these AI servers remains robust. Sequoia Capital analyst David Cahn estimated that to justify their investments, companies would need to generate $600 billion in annual AI revenue. However, the unparalleled performance of the B200, which packs 208 billion transistors and can deliver up to 20 petaflops of FP4 compute, makes it a worthy investment. Training a 1.8 trillion-parameter model, which would typically require 8,000 Hopper GPUs consuming 15 megawatts of power, can be accomplished with just 2,000 Blackwell GPUs drawing only 4 megawatts.
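A quick calculation using only the figures quoted above makes that comparison concrete, and shows that the gain comes from needing far fewer GPUs rather than from lower per-GPU power draw:

```python
# Hopper vs. Blackwell for the 1.8T-parameter training example cited
# above (Nvidia's GTC comparison). Derived ratios only; no new data.

hopper = {"gpus": 8_000, "power_mw": 15}
blackwell = {"gpus": 2_000, "power_mw": 4}

gpu_ratio = hopper["gpus"] / blackwell["gpus"]            # 4.0x fewer GPUs
power_ratio = hopper["power_mw"] / blackwell["power_mw"]  # 3.75x less power

# Per-GPU draw is roughly flat (~1.9 kW vs ~2.0 kW); the savings come
# from cutting the GPU count by a factor of four.
per_gpu_hopper = hopper["power_mw"] * 1e3 / hopper["gpus"]
per_gpu_blackwell = blackwell["power_mw"] * 1e3 / blackwell["gpus"]

print(f"{gpu_ratio:.1f}x fewer GPUs, {power_ratio:.2f}x less total power")
print(f"Per-GPU draw: Hopper ~{per_gpu_hopper:.2f} kW, "
      f"Blackwell ~{per_gpu_blackwell:.2f} kW")
```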
The GB200 Superchip offers up to 30 times the performance of an H100 GPU for large language model inference workloads while significantly reducing power consumption. Due to the high demand, Nvidia is reportedly increasing its orders with TSMC by approximately 25%, according to Morgan Stanley.
Blackwell is poised to become the industry standard for AI training and inference workloads, powering next-generation applications across various sectors, including robotics, self-driving cars, engineering simulations, and healthcare. As companies continue to invest heavily in AI infrastructure, Nvidia’s Blackwell platform is set to lead the charge in delivering unprecedented performance and efficiency.