395 annotations
Page 11 of 20
H100s largely go out as HGX systems sent to the world’s hyperscalers, and they’re really quite large system components
Transcript
2024 Q2
24 Aug 23
the largest driver of our revenue within this last quarter was definitely the HGX system
As for the rest of the GPUs, we have new GPUs coming to market, such as the L40S, and they will add continued growth going forward.
our HGX systems were a very significant part of our Data Center revenue, as well as of the Data Center growth that we had seen
The world has something along the lines of about $1 trillion worth of data centers installed, in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI.
Spectrum-X couples the Spectrum or Ethernet switch with the BlueField-3 DPU, achieving 1.5x better overall AI performance and power efficiency versus traditional Ethernet.
NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads
It is the network of choice for leading AI practitioners.
only InfiniBand can scale to hundreds of thousands of GPUs
Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems.
InfiniBand delivers more than double the performance of traditional Ethernet for AI
DGX GH200 systems are expected to be available by the end of the year, with Google Cloud, Meta, and Microsoft among the first to gain access.
enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just eight GPUs over [indiscernible]
The second generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024.
The GH200 Grace Hopper Superchip, which combines our ARM-based Grace CPU with the Hopper GPU, entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Atmos (ph), National Labs and the Swiss National Computing Center.
We also announced new NVIDIA AI enterprise-ready servers featuring the new NVIDIA L40S GPU, built for the industry-standard data center server ecosystem, and the BlueField-3 DPU data center infrastructure processor.
just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware
AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and with Accenture consulting and deployment services.
Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.
With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search, and summarization, right from the Snowflake Data Cloud.