Nvidia introduced new software on Monday aimed at simplifying the integration of artificial intelligence systems into business operations, marking a significant expansion of the chipmaker’s offerings.
The launch underscores Nvidia’s strategic push into the market for running AI applications, a task known as inference, where its chips have not historically dominated, according to Joel Hellermark, CEO of Sana, a company specializing in AI assistants for businesses.
Nvidia is renowned for supplying the chips used to train foundation models like OpenAI’s GPT-4, a process that involves crunching vast amounts of data and is conducted predominantly by AI-focused firms and major tech companies. Now, however, businesses of all sizes want to incorporate these trained models into their own operations, and that process can be complex.
The new tools released by Nvidia aim to simplify the modification and deployment of various AI models on Nvidia hardware, a convenience that venture capitalist Ben Metcalfe likened to purchasing a ready-made meal instead of sourcing ingredients individually. Metcalfe said the expanded offerings matter because, while tech giants like Google, DoorDash, and Uber have the engineering resources to build such systems themselves, most smaller companies do not; Nvidia’s tools let them extract value from GPUs without that investment.
One early example is ServiceNow, a provider of software used by technical support teams at large corporations, which used Nvidia’s tools to create a “copilot” that helps resolve corporate IT issues.
Notable partners for Nvidia’s new tools include Microsoft, Alphabet Inc.’s Google, and Amazon, which will offer them as part of their cloud computing services. Additionally, companies like Google, Cohere, Meta, and Mistral are providing models compatible with Nvidia’s tools. However, key model makers such as OpenAI and Anthropic are conspicuously absent from the list.
The tools also represent a revenue opportunity for Nvidia: they are part of its existing software suite, which is priced at $4,500 per year per Nvidia chip for use in private data centers, or $1 per chip per hour in a cloud data center.