    Musk Unveils Supercomputer Dojo: Tesla’s Groundbreaking AI Powerhouse Set to Challenge Nvidia

    In a high-profile announcement, Elon Musk has introduced Tesla’s latest technological marvel: the Dojo supercomputer, an in-house AI training powerhouse. Designed to advance Tesla’s Full Self-Driving (FSD) software and the Optimus humanoid robot, Dojo represents a significant leap in computing power, rivaling Nvidia’s leading offerings.

    During a visit to Tesla’s Gigafactory Texas, home of the company’s Cortex training cluster, Musk offered insight into Dojo’s ambitious scale. “This system will feature approximately 100,000 H100/H200 GPUs, complemented by extensive storage capabilities for video training of our fully autonomous driving systems and Optimus robots,” Musk stated.

    In addition to utilizing Nvidia GPUs, the infrastructure incorporates Tesla’s proprietary HW4 and AI5 computers alongside its own Dojo hardware. The supercomputer will be supported by an enormous power and cooling system capable of delivering up to 500 megawatts.

    The genesis of Dojo dates back to Tesla AI Day 2021, when Musk first announced its development. Three years later, the project has made substantial progress. Recent reports suggest that by the end of 2024, Tesla will have AI training compute equivalent to roughly 90,000 H100 GPUs. As of now, Dojo’s initial phase, referred to as Dojo 1, offers compute comparable to about 8,000 H100 units, a formidable yet still manageable scale.

    In June 2023, Musk revealed that Dojo was already operational and contributing to various training tasks. During Tesla’s latest earnings call, Musk reaffirmed the company’s commitment to advancing self-driving technology, announcing that the AI team will “double down” on Dojo. By October 2024, Dojo is expected to reach a total computing power of 100 exaflops.
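
    Those two figures are roughly consistent with each other. Assuming about 989 TFLOPS of dense BF16 compute per H100 (a figure taken from Nvidia’s spec sheet, not from this article), 90,000 H100-equivalents works out to about 89 exaflops, in the same range as the 100-exaflop target:

        # Rough consistency check: H100-equivalents vs. the exaflop target.
        # The per-chip figure is an assumption from Nvidia's public spec
        # sheet (~989 TFLOPS dense BF16 per H100), not from the article.
        H100_FLOPS = 989e12
        print(f"90,000 H100s ~ {90_000 * H100_FLOPS / 1e18:.0f} exaflops")  # ~89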

    To achieve this milestone, Tesla would need over 276,000 D1 chips, or more than 320,000 Nvidia A100 GPUs. The D1 chip, introduced at Tesla AI Day 2021, is a feat of engineering: 50 billion transistors packed into a palm-sized die. It is fabricated on TSMC’s 7nm process node and is optimized specifically for machine learning workloads, although it still falls short of the Nvidia A100’s performance.
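
    Those chip counts line up with the parts’ published peak throughputs. As a back-of-the-envelope check, assuming roughly 362 TFLOPS of BF16 compute per D1 chip and 312 TFLOPS per A100 (figures taken from the vendors’ spec sheets rather than from this article):

        # Back-of-the-envelope check of the chip counts above. Both per-chip
        # throughputs are assumptions drawn from public spec sheets.
        TARGET_FLOPS = 100e18   # 100 exaflops
        D1_FLOPS = 362e12       # assumed D1 peak BF16 throughput
        A100_FLOPS = 312e12     # assumed A100 peak BF16 throughput

        print(f"D1 chips needed:  {TARGET_FLOPS / D1_FLOPS:,.0f}")   # ~276,243
        print(f"A100 GPUs needed: {TARGET_FLOPS / A100_FLOPS:,.0f}") # ~320,513

    Dividing the 100-exaflop target by each chip’s peak throughput reproduces the “over 276,000” and “more than 320,000” figures cited above.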

    To maximize bandwidth and computational efficiency, Tesla’s AI team integrates 25 D1 chips into a single training tile that functions as a unified computer. Despite the D1’s impressive capabilities, Nvidia’s A100 remains ahead in raw performance, with a die size of 826 square millimeters and 54 billion transistors.
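
    The same arithmetic scales to the tile level. Under the same assumed 362 TFLOPS per D1 chip, a 25-chip training tile would deliver roughly 9 petaflops of BF16 compute, in line with the per-tile figure Tesla presented at AI Day 2021:

        # Aggregate BF16 throughput of one 25-chip training tile, under the
        # same assumed ~362 TFLOPS per D1 chip.
        tile_flops = 25 * 362e12
        print(f"Per-tile throughput: {tile_flops / 1e15:.2f} PFLOPS")  # ~9.05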

    As Tesla continues to push the boundaries of AI technology, Dojo stands as a testament to the company’s relentless pursuit of innovation in autonomous driving and robotics.
