513 annotations
Page 20 of 26
The 4060 Ti is available starting today, while the 4060 will be available in July
Transcript
2024 Q1
25 May 23
Last week, we launched the 60 family: the RTX 4060 and 4060 Ti
The RTX 4070 is nearly 3x faster than the RTX 2070
In desktop, we ramped the RTX 4070, which joined the previously launched RTX 4090, 4080, and 4070 Ti GPUs.
End demand was solid and consistent with seasonality, demonstrating resilience against a challenging consumer spending backdrop
Strong sequential growth was driven by sales of the 40 Series GeForce RTX GPUs for both notebooks and desktops.
NVIDIA Grace CPU Superchip, which is 6x more energy-efficient than the previous supercomputer
The coming wave of BlueField-3, Grace and Grace Hopper Superchips will enable a new generation of super energy efficient accelerated data centers.
BlueField-3 is in production and has been adopted by multiple hyperscale and CSP customers, including Microsoft Azure, Oracle Cloud, CoreWeave, Baidu, and others.
Customers routinely enjoy a 20% increase in throughput for their sizable infrastructure investment
Our 400 gig Quantum-2 InfiniBand platform is the gold standard for AI dedicated infrastructure
Demand relating to general-purpose CPU infrastructure remains soft.
In networking, we saw strong demand from both CSPs and enterprise customers for generative AI and accelerated computing
Google Cloud is the first CSP to adopt our L4 inference platform
and also recommendation systems and vector databases
we announced four major new inference platforms
L4 Tensor Core GPU for AI video, L40 for Omniverse and graphics rendering, H100 NVL for large language models, and the Grace Hopper Superchip for LLMs
The latest MLPerf industry benchmark, released in April, showed NVIDIA's inference platform delivering performance that is orders of magnitude ahead of the industry
NVIDIA NeMo for large language models, NVIDIA Picasso for images, video, and 3D, and NVIDIA BioNeMo for life sciences
ServiceNow, a leading enterprise services platform, is an early adopter of DGX Cloud and NeMo.