Oracle today announced an expansion of its partnership with Nvidia, introducing new GPU options and AI infrastructure services on Oracle Cloud Infrastructure (OCI). The move signals a maturing artificial intelligence market and aims to give businesses of all sizes more flexibility in adopting AI.
The announcement centers on the addition of Nvidia L40S GPUs to OCI’s compute offerings and new virtual machine options for Nvidia H100 Tensor Core GPUs.
“This is a great milestone in our partnership with Nvidia and in the AI market,” said Leo Leung, VP of OCI and Oracle Tech, in an interview with VentureBeat. “We’re satisfying that kind of maturing and expansion of use cases for customers at a high level.”
L40S GPU: A multi-purpose AI accelerator
The new L40S GPU instances are positioned as versatile options for a range of AI workloads, including inference, training of smaller models, and graphics-intensive applications like digital twins.
Dave Salvator, director of accelerated computing products at Nvidia, highlighted the L40S GPU’s versatility. “We kind of think of it as a universal AI accelerator,” he told VentureBeat. “It can do your traditional AI, probably more of a focus on the inference side, but can be used to train small models as well. It also has visual capabilities for 3D rendering and video processing.”
Oracle is offering these new GPU options in both bare metal and virtual machine configurations, providing customers with more choices in how they deploy AI workloads. Leung emphasized the importance of bare metal offerings, saying, “With bare metal, there’s no debate. You’re going to get all the resource available for the customer. And that’s critical for that first stage of AI, where people want the maximum performance.”
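Oracle did not detail deployment workflows in the announcement, but as a rough illustration of what "choosing how to deploy" looks like in practice, provisioning a GPU instance on OCI can be done programmatically through the OCI Python SDK. The sketch below is generic and hedged: the shape name, OCIDs, and availability domain are placeholders rather than confirmed values for the new L40S offerings.

```python
import oci

# Load credentials from the standard ~/.oci/config file.
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Placeholder values: the shape string and OCIDs below are illustrative,
# not confirmed identifiers for Oracle's new L40S bare metal offering.
details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    display_name="l40s-inference-node",
    shape="BM.GPU.L40S.4",  # hypothetical bare metal GPU shape name
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..exampleuniqueID"
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..exampleuniqueID"
    ),
)

# Launch the instance and report its initial lifecycle state.
response = compute.launch_instance(details)
print(response.data.lifecycle_state)
```

Swapping the bare metal shape for a virtual machine shape is, in principle, a one-line change in a request like this, which is the flexibility Leung is pointing to.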
OCI Supercluster: Supporting massive AI models
The announcement also includes updates to Oracle’s “OCI Supercluster” service, which now supports up to 65,000 Nvidia GPUs. This massive scale is aimed at organizations training the largest AI models with hundreds of billions of parameters.
“Scale really matters,” Salvator said. “That’s a combination of compute, as well as really capable networking. The faster you can get deployed, the faster you can get to inferencing and putting your application out there and begin getting value from it.”
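The announcement does not specify the software stack customers would run on a Supercluster, but as a loose sketch of why "compute plus networking" matters at that scale, a multi-node training job typically initializes a collective communication backend such as NCCL so that every GPU in the job can exchange gradients over the cluster fabric. The snippet below is a generic PyTorch illustration, not Oracle's or Nvidia's recommended setup.

```python
import os
import torch
import torch.distributed as dist

def init_distributed() -> int:
    """Join the job-wide process group; launchers such as torchrun set
    RANK, WORLD_SIZE, and LOCAL_RANK for every process on every node."""
    dist.init_process_group(backend="nccl")  # NCCL runs over the cluster's interconnect
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)        # pin this process to one GPU on its node
    return local_rank

if __name__ == "__main__":
    local_rank = init_distributed()
    # From here, collectives like dist.all_reduce() span every GPU in the job,
    # which is where the networking Salvator describes becomes the limiting factor.
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} ready on local GPU {local_rank}")
    dist.destroy_process_group()
```

In a setup like this, the same script is launched across all nodes, and the job's effective speed depends as much on the interconnect carrying those collectives as on the GPUs themselves.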
Industry analysts see this expansion as a strategic move by Oracle to compete more aggressively in the AI cloud market dominated by Amazon Web Services, Microsoft Azure, and Google Cloud. By leveraging its partnership with Nvidia, Oracle is positioning itself as a serious contender for enterprises looking to deploy large-scale AI workloads.
The partnership also benefits Nvidia, providing another major cloud platform to showcase its latest GPU technologies and expand its reach in the enterprise market.
Expanding AI access across business sizes
As AI continues to transform industries, the race among cloud providers to offer the most powerful and flexible AI infrastructure is intensifying. Oracle’s latest offerings demonstrate its commitment to staying competitive in this rapidly evolving landscape.
For businesses, these new options present opportunities to right-size their AI infrastructure investments, potentially lowering barriers to entry for smaller organizations while providing the necessary scale for the most demanding AI workloads.
As Leung summed up, “As a cloud provider, from our perspective, we want to serve all those types of customers,” from tech giants hosting massive models to small engineering teams working on specialized applications.
With this announcement, Oracle has made a clear statement about its AI ambitions, setting the stage for increased competition in the cloud AI market.