By: 彩虹 (11-09)

Jensen Huang: AI data centers can scale to a million chips, with performance doubling every year and energy use falling 2-3x per year

Source: 华尔街见闻 (Wallstreetcn)

Jensen Huang says no law of physics prevents AI data centers from scaling to a million chips, and AI software can now be scaled to run across multiple data centers. "We have set ourselves up to scale computing at an unprecedented level, and we are only at the beginning. Over the next decade, computing performance will double or triple every year, while energy requirements fall by a factor of 2-3 every year. I call this a hyper-Moore's-Law curve."

This week, Nvidia CEO Jensen Huang sat down with the hosts of the No Priors podcast for a wide-ranging conversation covering Nvidia's ten-year bets, the rapid build-out of xAI's supercluster, NVLink innovations, and other AI topics.

Huang said no law of physics prevents scaling an AI data center to a million chips, though it is a hard problem. Several large companies, including OpenAI, Anthropic, Google, Meta, and Microsoft, are racing for leadership in AI and climbing the same technical mountain, but the potential payoff from recreating intelligence is so large that it has to be attempted.

Moore's Law, the prediction that the number of transistors on a chip doubles roughly every two years, was long the semiconductor industry's guiding rule, delivering continuous performance gains. As physical limits approach, however, it has slowed, and bottlenecks in chip performance have emerged.

To get around this, Nvidia combines different types of processors (such as CPUs and GPUs) and relies on parallel processing to push past the limits of classic Moore's Law. Huang said that over the next ten years computing performance will double or triple every year while energy requirements fall by a factor of 2-3 every year, a trajectory he calls a "hyper-Moore's-Law curve."
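The arithmetic behind that "double or triple every year" claim compounds quickly. A minimal sketch in Python (the yearly multipliers are the interview's claims, not measured data):

```python
# Compounding of the "hyper Moore's Law" rates quoted in the interview.
# The yearly multipliers below are Huang's claims, not measured data.

def growth(rate_per_year: float, years: int) -> float:
    """Cumulative speedup after `years` at a fixed yearly multiplier."""
    return rate_per_year ** years

YEARS = 10
moore = growth(2 ** 0.5, YEARS)   # classic Moore's Law: 2x every 2 years
double = growth(2.0, YEARS)       # "performance doubles every year"
triple = growth(3.0, YEARS)       # "performance triples every year"

print(f"Moore's Law over {YEARS} years: {moore:,.0f}x")   # ~32x
print(f"2x per year over {YEARS} years: {double:,.0f}x")  # 1,024x
print(f"3x per year over {YEARS} years: {triple:,.0f}x")  # 59,049x
```

At the claimed rates, a decade yields roughly 1,000x to 59,000x at scale, versus about 32x from classic Moore's Law, which is why Huang calls the curve "hyper."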

Huang also noted that AI software can now be scaled across multiple data centers: "We have set ourselves up to scale computing at an unprecedented level, and we are at the very beginning of this."

Highlights from Huang's remarks:

1. We have made major bets on the next ten years. We are investing in infrastructure to build the next generation of AI computing platforms, across software, architecture, GPUs, and every component needed to enable AI development.

2. Moore's Law, the prediction that transistor counts double every two years, was once the growth playbook for the semiconductor industry. As physical limits approach, it can no longer drive chip performance on its own. Nvidia's answer is heterogeneous computing: combining different types of processors (such as CPUs and GPUs) and using parallelism to break past the classic Moore's Law ceiling. Innovations such as the CUDA architecture and deep-learning optimizations let AI applications run at speeds beyond what Moore's Law alone would allow.

3. We introduced NVLink as an interconnect that lets multiple GPUs work together, each handling a different part of the workload. NVLink dramatically increases bandwidth and communication between GPUs, allowing data centers to scale out and support AI workloads.

4. Future AI applications need dynamic, elastic infrastructure that can adapt to AI tasks of every size and kind. Nvidia is therefore building infrastructure that can be flexibly configured and efficiently operated, serving everything from small and mid-sized AI projects to hyperscale supercomputing clusters.

5. The key to building an AI data center is optimizing performance and efficiency at the same time. AI workloads demand enormous power, and heat removal becomes a huge problem, so we spend a great deal of time optimizing data center design and operations, including cooling systems and power efficiency.

6. With hardware evolving quickly, keeping software compatible across hardware architectures is critical. Huang noted that platforms such as CUDA must work across generations of hardware: developers should not be forced to rewrite their code every time a new chip ships. Nvidia therefore maintains backward compatibility and ensures software runs efficiently on any new hardware it builds.

7. We are helping xAI build a supercluster of 100,000 GPUs, one of the world's largest AI supercomputing platforms. It will supply the compute behind some of the most ambitious AI projects, a major step in pushing AI forward.

8. A major challenge in scaling AI data centers is managing the enormous energy they consume. The problem is not just building bigger, faster systems; we also have to handle the heat and power challenges of running them at such scale, which takes innovative engineering to keep the infrastructure up to the task.

9. AI already plays a major role in chip design. We use machine learning to help design more efficient chips, faster. It is a key part of how we design the next generation of Nvidia chips and helps us build silicon optimized specifically for AI workloads.

10. Nvidia's surging market value comes from transforming the company into an AI company. We started as a GPU company, but we have become an AI computing company, and that transformation has been central to the growth in our valuation. Demand for AI is growing rapidly, and we are well positioned to meet it.

11. Embodied AI means connecting AI to the physical world. This way, AI can not only handle tasks in virtual environments but also make decisions and act in the real world. Embodied AI will accelerate intelligent hardware, autonomous driving, and related technologies.

12. AI is not just a tool; it can also act as a "virtual employee" that raises productivity. AI can replace or assist human work in data processing, programming, and decision-making, reshaping the labor market and how we work.

13. AI will have an enormous impact on science and engineering, especially in drug discovery, climate research, and physics experiments. It will help scientists process massive datasets, uncover new scientific regularities, and accelerate innovation, while in engineering it will optimize designs, improve efficiency, and drive more innovative technology.

14. I use AI tools in my own daily work to improve efficiency and creativity. AI can not only help us handle complex data and decision tasks but also boost our creative thinking and productivity, becoming an indispensable part of everyone's work.

Below is the full transcript of the interview, lightly edited for readability:

Host: Welcome back, Jensen. Thirty years into Nvidia, and looking ten years out, what are the big bets you think are still to make? Is it all about scale-up from here? Are we running into limits on how much more compute and memory we can squeeze out of the architectures we have? What are you focused on?

Huang: If we take a step back and think about what we've done, we went from coding to machine learning, from writing software tools to creating AIs, and all of that went from running on CPUs designed for human coding to running on GPUs designed for AI coding, basically machine learning. The world has changed the way we do computing; the whole stack has changed. As a result, the scale of the problems we can address has changed enormously: if you can parallelize your software on one GPU, you've set the foundation to parallelize across a whole cluster, or across multiple clusters or multiple data centers. So I think we've set ourselves up to scale computing, and to develop software, at a level nobody ever imagined before.

And we're at the beginning: over the next ten years, our hope is that we can double or triple performance every year at scale (not at the chip level, at scale), and thereby drive cost down by a factor of 2 or 3, and drive energy down by a factor of 2 or 3, every single year. When you double or triple every year, in just a few years it adds up; it compounds really aggressively. So I wouldn't be surprised if, compared with the way people think about Moore's Law, which is 2x every couple of years, we end up on some kind of hyper-Moore's-Law curve. And I fully hope we continue to do that.

Host: What do you think is the driver that makes this happen even faster than Moore's Law? Because Moore's Law was somewhat self-reflexive, right? It was something Moore said, and then people implemented it, and so it happened.

Huang: There were two fundamental technical pillars. One was Dennard scaling and the other was Carver Mead's VLSI scaling. Both were rigorous techniques, but both have really run out of steam, so now we need a new way of scaling. The new way is everything associated with co-design. Unless you can modify the algorithm to reflect the architecture of the system, then change the system to reflect the architecture of the new software, and go back and forth between the two, you have no hope. But if you can control both sides, you can do things like moving from FP64 to FP32 to BF16 to FP8 to FP4 to who knows what. So co-design is a very big part of it; we call it full stack.

The second part is data center scale. Unless you can treat the network as a compute fabric, push a lot of the work into the network, into the fabric, and as a result do compression at very large scale, you can't get there. That's the reason we bought Mellanox and started fusing InfiniBand and NVLink together so aggressively.
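The precision ladder mentioned here (FP64 down toward FP4) trades numeric accuracy for memory and bandwidth. A small illustration with NumPy, which natively supports only the IEEE types down to FP16; FP8 and FP4 require dedicated hardware or libraries and are omitted:

```python
import numpy as np

# One tensor stored at the three IEEE precisions NumPy supports natively.
# FP8 and FP4, which the interview also mentions, need dedicated hardware
# or library support and are omitted here.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # reference values in float64

for dtype in (np.float64, np.float32, np.float16):
    y = x.astype(dtype)
    # Worst-case rounding error introduced by the narrower format.
    err = np.max(np.abs(x - y.astype(np.float64)))
    print(f"{np.dtype(dtype).name:8s} {y.nbytes / 1e6:4.0f} MB  max abs error {err:.1e}")
```

Each halving of precision halves memory and bandwidth per value while the rounding error grows, which is exactly the trade-off co-design lets the algorithm and hardware negotiate together.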

Huang: And now look where NVLink is going. The compute fabric scales out what appears to be one incredible processor, the GPU, and now we get hundreds of GPUs working together.

One of the most exciting computing challenges we're dealing with now is inference-time scaling: essentially generating tokens at incredibly low latency, because the model is self-reflecting, as you just mentioned. It will be doing tree search, doing chain of thought, probably doing some amount of simulation in its head, reflecting on its own answers, prompting itself and generating text internally, silently, and still hopefully responding within a second. The only way to do that is if your latency is extremely low.

Meanwhile, the data center is still about producing high-throughput tokens, because you want to keep cost down, keep throughput high, and generate a return. These two fundamental properties of a factory, low latency and high throughput, are at odds with each other. So in order to create something that is really great at both, we had to invent something new, and NVLink is our way of doing that. You now have a virtual GPU with an incredible amount of flops, because you need it for context, a huge amount of working memory, and incredible bandwidth for token generation, all at the same time.
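The latency/throughput tension Huang describes can be sketched with a toy batching model. All numbers below are invented for illustration, not measurements of any real system:

```python
# Toy model of the latency/throughput tension in token generation.
# Each decode step pays a fixed cost (weight streaming, kernel launches)
# plus a small incremental cost per sequence in the batch.
# All numbers are invented for illustration.

OVERHEAD_MS = 5.0   # fixed cost per decode step
PER_SEQ_MS = 0.05   # added cost per sequence in the batch

def step_time_ms(batch: int) -> float:
    """Wall-clock time of one decode step for a given batch size."""
    return OVERHEAD_MS + PER_SEQ_MS * batch

for batch in (1, 8, 64, 512):
    t = step_time_ms(batch)
    throughput = batch / t * 1000  # tokens per second across the whole batch
    print(f"batch {batch:4d}: {t:6.2f} ms per token, {throughput:8.0f} tok/s total")
```

Larger batches amortize the fixed per-step cost and raise aggregate throughput, but every request in the batch waits longer per token; serving systems have to pick a point on that curve, which is the conflict NVLink-scale hardware tries to soften.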

Host: The people building the models are also optimizing things pretty dramatically. David and my team pulled data showing that over the last 18 months or so, the cost of a million tokens going into a GPT-4-equivalent model has dropped roughly 240x. So there's massive optimization and compression happening on that side as well.
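Annualizing the host's 240x figure is a one-line calculation, assuming a steady exponential decline (an idealization of what is really a lumpy process):

```python
# Annualizing the quoted 240x cost drop over 18 months, assuming a
# steady exponential decline (an idealization of a lumpy real process).
TOTAL_DROP = 240.0
MONTHS = 18

per_month = TOTAL_DROP ** (1 / MONTHS)  # steady monthly multiplier
per_year = per_month ** 12              # implied drop per 12 months

print(f"~{per_month:.2f}x cheaper per month, ~{per_year:.0f}x cheaper per year")
```

Under that assumption, 240x in 18 months implies costs falling by roughly an order of magnitude more per year than the 2-3x hardware curve alone, with the rest coming from model- and software-level optimization.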

Huang: And that's just in our layer, just the layer we work on. One of the things we care a lot about, of course, is the ecosystem of our stack and the productivity of our software. People forget that because you have CUDA as a foundation, and it's a solid foundation, everything above it can change. If the foundation is changing underneath you, it's hard to build anything on top, hard to create anything interesting. CUDA has made it possible for us to iterate very quickly, just in the last year. We went back and benchmarked: since Llama first came out, we've improved the performance of Hopper by a factor of five without the algorithm, without the layer on top, ever changing. A factor of five in one year is impossible using traditional computing approaches. But with this way of co-design, we're able to unlock all kinds of new things.

Host: How much are your biggest customers thinking about the interchangeability of their infrastructure between large-scale training and inference?

Huang: Infrastructure is disaggregated these days. Sam was just telling me he had decommissioned Volta recently. They have Pascals, they have Amperes, and all the different configurations of Blackwell coming; some optimized for air cooling, some for liquid cooling. Your services have to take advantage of all of it. Nvidia's advantage, of course, is that the infrastructure you build today for training will be wonderful for inference tomorrow. Most of ChatGPT, I believe, is inferenced on the same type of systems it was recently trained on. So you can train on it and you can inference on it. You're leaving behind a trail of infrastructure you know is going to be incredibly good at inference, and you have complete confidence that you can take the return on that investment and put it into new infrastructure to scale with. You know you're leaving behind something of use, and you know Nvidia and the rest of the ecosystem will keep improving the algorithms, so that the infrastructure you leave behind improves by a factor of five in just a year. That motion will never change.

Huang: So the way people will think about infrastructure is: even though I built it for training today, and it has to be great for training, we know it's going to be great for inference. And inference is going to be multi-scale. You're still going to create these incredible frontier models. They'll be used for the groundbreaking work, of course; you'll use them for synthetic data generation; you'll use the big models to teach smaller models and distill down to smaller models. So there's a whole bunch of different things you can do, but in the end you'll have giant models all the way down to little tiny models. The tiny models will be quite effective, not as generalizable, but quite effective: they'll perform one very specific task incredibly well. We're going to see superhuman performance on one task, in one tiny domain, from a tiny model. Maybe it's not a small language model but a tiny language model, a TLM, or whatever we end up calling it. So I think we'll see all kinds of sizes, much like software today.
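The "big models teach small models" idea Huang refers to is knowledge distillation. A minimal sketch with NumPy, using random logits as stand-ins for real model outputs:

```python
import numpy as np

# Minimal sketch of knowledge distillation: the student is trained to
# match the teacher's softened output distribution. The logits below are
# random stand-ins, not outputs of any real model.

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=np.float64) / temperature
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two dense distributions."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=32)
student_logits = teacher_logits + rng.normal(scale=0.5, size=32)

T = 4.0  # higher temperature exposes more of the teacher's soft structure
loss = kl(softmax(teacher_logits, T), softmax(student_logits, T))
print(f"distillation loss (KL at T={T}): {loss:.4f}")
```

In real training this KL term is minimized with respect to the student's parameters, usually mixed with an ordinary cross-entropy loss on ground-truth labels.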

Huang: In a lot of ways, artificial intelligence lets us break new ground in how easy it is to create new applications, but everything else about computing has largely stayed the same. For example, the cost of maintaining software is extremely high. Once you build it, you'd like it to run on as large an install base as possible; you'd like not to write the same software twice; you'd like to take your engineers and move them forward. To the extent that the architecture lets software you create today run even better tomorrow on new hardware, that's great; and AI you create tomorrow running on a large install base, that's great too. That way of thinking about software is not going to change.

Host: Nvidia has moved into larger and larger units of support for customers, going from a single chip to a server to a rack, to NVL72. How do you think about that progression? What's next? Should Nvidia do full data centers?

Huang: In fact, we build full data centers; that's the way we build everything. If you're developing software, you need the computer in its full manifestation. We don't build PowerPoint slides and ship the chips; we build a whole data center. Until you get the whole data center built up, how do you know the software works? How do you know your fabric works, that all the efficiencies you expected will hold, that it's really going to work at scale? That's why it's not unusual to see somebody's actual performance be dramatically lower than their peak performance as shown in PowerPoint slides. Computing is just not what it used to be. I say that the new unit of computing is the data center. That's what you have to deliver; that's what we build.

And we build the whole thing like that, for every single combination: air-cooled, x86, liquid-cooled, Grace, Ethernet, InfiniBand, NVLink, no NVLink, you know what I'm saying? We build every single configuration. We have five supercomputers in our company today; next year we'll easily build five more. If you're serious about software, you build your own computers; if you're serious about software, you build your whole computer. And we build it all at scale.

Huang: This is the part that's really interesting. We build it at scale and very vertically integrated, we optimize it full stack, and then we disaggregate everything and sell it in parts. That's the part that is completely, utterly remarkable about what we do; the complexity is just insane. The reason is that we want to be able to graft our infrastructure into GCP, AWS, Azure, OCI. Their control planes and security planes are all different, and the way they think about cluster sizing is all different, yet we make it possible for all of them to accommodate Nvidia's architecture, so that it can be everywhere.

In the end, that's really the singular thought: we'd like a computing platform developers can use that is largely consistent, modular, maybe 10% different here and there because people's infrastructures are optimized slightly differently, but everything they build runs everywhere. This is one of the principles of software that should never be given up, and we protect it quite dearly. It makes it possible for our software engineers to build once and run everywhere, because we recognize that the investment in software is the most expensive investment.

Huang: Look at the size of the whole hardware industry, and then look at the size of the world's industries: $100 trillion sitting on top of this one-trillion-dollar industry. That tells you something. The software you build, you basically maintain for as long as you shall live. We've never given up on a piece of software. The reason CUDA is used is that I told everybody: we will maintain this for as long as we shall live. And we're serious; we still maintain it. I just saw a review the other day of Nvidia Shield, our Android TV. It's the best Android TV in the world. We shipped it seven years ago, and it's still the number one Android TV for anybody who enjoys TV; we updated its software just last week, and people wrote a news story about it. GeForce: we have 300 million gamers around the world, and we've never stranded a single one of them. The fact that our architecture is compatible across all of these different areas is what makes this possible. Otherwise, we would have software teams a hundred times the size of our company today if not for this architectural compatibility. So we're very serious about that, and it translates into benefits for developers.

Host: One impressive substantiation of that recently was how quickly you brought up a cluster for xAI. Can you talk about that? It was striking in both the scale and the speed of what you did.

Huang: A lot of that credit you've got to give to Elon. First, to decide to do something, select the site, bring cooling and power to it, and then decide to build this 100,000-GPU supercluster, the largest of its kind in one unit. Then we worked backwards: the date he was going to stand everything up was fixed quite a few months in advance, and we started planning together from there. All the components, all the OEMs, all the systems, all the software integration we did with their team; we simulated all the network configurations and pre-staged everything as a digital twin. We pre-staged his entire supply chain, pre-staged all the network wiring. We even set up a small version of it first, a first instance, a ground truth, a reference system 0, before everything else showed up. So by the time everything arrived, everything was staged, all the practicing was done, all the simulations were done.

Huang: And then the massive integration itself was a monument: gargantuan teams of humanity crawling over each other, wiring everything up 24/7, and within a few weeks the cluster was up. It's really a testament to his willpower, and to how he's able to think through the mechanical and electrical issues and overcome what were clearly extraordinary obstacles. What was done there was the first time a computer of that scale has ever been stood up at that speed. It took our two teams working together, from the networking team to the compute team to the software team to the training team to the infrastructure team, from the electrical engineers to the software engineers, all working as one. It was really quite something to watch.

Host: Was there a challenge that felt most likely to be blocking, from an engineering perspective?

Huang: The sheer tonnage of electronics that had to come together. It would probably be worth just measuring it; it's tons and tons of equipment. It's just abnormal. Usually with a supercomputer system like that, you plan for a couple of years, and from the moment the first systems are delivered to the time you've committed everything to serious work, don't be surprised if it's a year. That happens all the time; it's not abnormal.

But we couldn't afford to do that. So a few years ago we created an initiative in our company called "data center as a product." We don't sell it as a product, but we have to treat it like one: everything about planning for it, standing it up, optimizing it, tuning it, keeping it operational. The goal is that it should be like opening up a beautiful new iPhone: you open it up and everything just works.

Huang: Now, of course, it's a miracle of technology to make it work like that, but we now have the skills to do it. So if you're interested in a data center, you just have to give me some space, some power, and some cooling, and we'll help you set it up within, call it, 30 days. It's pretty extraordinary.

Host: That's wild. If you look ahead to 200,000, 500,000, a million chips in a supercluster, whatever you call it, what do you think is the biggest blocker at that point? Capital? Energy? Supply in one area?

Huang: Everything. Nothing about the scales you just talked about is normal.

Host: But nothing is impossible?

Huang: Nothing. Yeah, there are no laws of physics in the way, but everything is going to be hard. And is it worth it? You can't believe how much. To get to something we would recognize as a computer that can so easily do what we ask of it, a general intelligence of some kind, and even if we could argue about whether it's really general intelligence, just getting close to it is going to be a miracle. We know that. So I think there are five or six endeavors trying to get there, right? OpenAI, Anthropic, xAI, and of course Google, Meta, and Microsoft. This frontier, the next couple of clicks up that mountain, is just so vital. Who doesn't want to be first on that mountain? The prize for reinventing intelligence altogether is just too consequential not to attempt it. So: no laws of physics in the way, but everything is going to be hard.

Host: A year ago when we spoke, we asked what applications you were most excited for Nvidia to serve next, in AI and otherwise, and you talked about how your most extreme customers lead you there, and about some of the scientific applications. That has become much more mainstream over the last year. Is it still science, and AI's application to science, that most excites you?

Huang: I love the fact that we have AI chip designers here at Nvidia. I love that. And we have AI software engineers.

Host: How effective are your AI chip designers today?

Huang: Super good. We couldn't have built Hopper without them. The reason is that they can explore a much larger design space than we can, and they have infinite time; they're running on a supercomputer. With human engineers we have so little time that we don't explore as much of the space as we should, and we can't explore it combinatorially: I can't explore my part of the space while also folding in your exploration and someone else's. And our chips are so large that it's not designed as one chip; it's designed almost like a thousand chips, and we have to optimize each one of them somewhat in isolation. What you really want is to optimize a lot of them together, co-designing across modules and optimizing across a much larger space. Clearly there are better local maxima hidden behind local minima somewhere, and clearly we can find better answers. You can't do that without AI. Engineers simply can't; we just don't have enough time.

Host: One other thing has changed since we last spoke: I looked it up, and at the time Nvidia's market cap was about $500 billion. It's now over $3 trillion. So in the last 18 months you've added two and a half trillion dollars plus of market cap, which is effectively $100 billion plus a month, or two and a half Snowflakes, or a Stripe plus a little bit, however you want to think about it. A country or two. Obviously, a lot has stayed consistent in terms of your focus on what you're building. And walking through here earlier today, I felt the buzz, the way I felt the energy and the vibe of excitement at Google 15 years ago. What has changed during that period, if anything? What's different in how Nvidia functions, how you think about the world, or the size of bets you can take?

Jensen Huang: Well, our company can't change as fast as a stock price, let's be clear about that. So in a lot of ways we haven't changed that much. I think the thing to do is to take a step back and ask ourselves: what are we doing? That's really the big observation, the realization, the awakening for companies and countries: what's actually happening. As we discussed earlier, from our industry's perspective, we reinvented computing, and computing hadn't been reinvented in 60 years. That's how big a deal it is. We've driven the marginal cost of computing down, probably by a million x over the last 10 years, to the point that we can simply say: let's let the computer go exhaustively write the software. That's the big realization.

In a lot of ways, we were saying the same thing about chip design. We would love for the computer to discover something about our chips that we couldn't have found ourselves, to explore our chips and optimize them in a way we couldn't do ourselves, just as we'd love to do in digital biology or any other field of science.
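Taken at face value, that "million-x over ten years" figure implies a steep compound rate. As a quick, illustrative sanity check (this calculation is mine, not from the interview):

```python
# If marginal cost falls by 1,000,000x over 10 years at a constant
# yearly rate, the implied per-year improvement factor is the 10th root.
annual_factor = 1_000_000 ** (1 / 10)
print(f"~{annual_factor:.2f}x improvement per year")
```

That works out to roughly 4x per year, comfortably ahead of the classic Moore's-law cadence of 2x every two years.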

So I think people are starting to realize that we reinvented computing, but what does that even mean? All of a sudden we created this thing called intelligence, and what happened to computing? Data centers used to be multi-tenant stores of files. These new data centers we're creating are not really data centers: they tend to be single tenant, and they're not storing any of our files. They're producing something. They're producing tokens, and those tokens are reconstituted into what appears to be intelligence, intelligence of all different kinds. It could be the articulation of robotic motion, sequences of amino acids, chemical chains, all kinds of interesting things. So what are we really doing? We've created a new instrument, a new machinery that in a lot of ways is the noun behind the adjective "generative AI": an AI factory, a factory that generates AI. And we're doing it at extremely large scale. What people are starting to realize is that maybe this is a new industry. It generates tokens, it generates numbers, but those numbers are constituted in a way that is quite valuable, and whole industries will benefit from it.

Then you take a step back and ask yourself again: what's going on? On the one hand, Nvidia reinvented computing as we know it, so there's a trillion dollars of infrastructure that needs to be modernized. That's just one layer of it. The bigger layer is that this instrument we're building isn't just for the data centers we're modernizing; it's being used to produce a new commodity. How big can this new commodity industry be? Hard to say, but it's probably worth trillions.

So I think the thing is to take a step back: we don't build computers anymore, we build factories. Every country is going to need one, every company is going to need one. Name me a company or an industry that would say, "you know what, we don't need to produce intelligence, we've got plenty of it." That's the big idea. It's an abstracted, industrial view. And someday people will realize that, in a lot of ways, the semiconductor industry wasn't about building chips; it was about building the foundational fabric for society. And then all of a sudden: I get it. This is a big deal. It's not just about chips.

Host: How do you think about embodiment now, bringing intelligence into the physical world, into robots and other devices?

Jensen Huang: Well, the thing I'm super excited about is that in a lot of ways we're close to artificial general intelligence, but we're also close to artificial general robotics. Tokens are tokens. The question is: can you tokenize it? Of course, tokenizing things is not easy, as you know. But if you can tokenize things and align them with large language models and other modalities, then if I can generate a video of Jensen reaching out to pick up a coffee cup, why can't I prompt a robot to generate the tokens to actually pick up the cup? Intuitively, you'd think the problem statement is rather similar for the computer. So I think we're that close, and that's incredibly exciting.
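"Tokens are tokens" can be made concrete: a continuous robot action can be discretized into the same kind of integer tokens a language model emits for text. A minimal sketch (purely illustrative, not any real robotics pipeline; the function names and binning scheme are my own):

```python
# Discretize continuous joint angles into integer "action tokens",
# the same currency a language model uses for text.
def tokenize_action(joint_angles, bins=256, lo=-3.14, hi=3.14):
    """Map each joint angle in [lo, hi] to one of `bins` token ids."""
    step = (hi - lo) / bins
    return [min(bins - 1, max(0, int((a - lo) / step))) for a in joint_angles]

def detokenize_action(tokens, bins=256, lo=-3.14, hi=3.14):
    """Invert the mapping: token id -> bin-center joint angle."""
    step = (hi - lo) / bins
    return [lo + (t + 0.5) * step for t in tokens]

tokens = tokenize_action([0.0, 1.57, -0.5])   # e.g. a 3-joint arm pose
recovered = detokenize_action(tokens)          # close to the original pose
print(tokens, recovered)
```

Once actions live in a shared token space, the same sequence model that predicts the next word can, in principle, predict the next motion, which is the intuition behind the "prompt a robot" analogy.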

Now, there are two brownfield robotics systems, where "brownfield" means you don't have to change the environment for them: self-driving cars and humanoid robots. With digital chauffeurs in cars and humanoid robots, we could literally bring robotics to the world without changing the world, because we built the world for those two things. It's probably not a coincidence that Elon is focused on those two forms; they're likely to have the largest potential scale. So I think that's exciting. But the digital version of it is equally exciting. We're talking about digital, AI employees. There's no question we're going to have AI employees of all kinds; our workforce will be partly biological and partly artificial intelligence, and we'll prompt them the same way. Mostly, I prompt my employees, right? I provide them context and ask them to perform a mission. They go recruit other team members, they come back, and we work back and forth. How is that going to be any different with digital, AI employees of all kinds? So we're going to have AI marketing people, AI chip designers, AI supply chain people, and so on. I'm hoping that Nvidia is someday biologically bigger, but also, from an artificial intelligence perspective, much bigger. That's our future company.

Host: If we came back and talked to you a year from now, what part of the company do you think would be the most artificially intelligent?

Jensen Huang: I'm hoping it's chip design.

Host: Okay. And the most...

Jensen Huang: The most important part, that's right. And the reason is that we should start where it moves the needle most and where we can make the biggest impact. It's such an insanely hard problem. I work with Sassine at Synopsys and Anirudh at Cadence. I can totally imagine them having Synopsys chip designers that I can rent: they know something about a particular module of their tool, and they've trained an AI to be incredibly good at it. We'll just hire a whole bunch of them whenever we're in that phase of chip design. I might rent a million Synopsys engineers to come help me out, then go rent a million Cadence engineers to help me out. And what an exciting future for them: all these agents that sit on top of their tools platform, use the tools platform, and collaborate with other platforms. Christian will do that at SAP, and Bill will do that at ServiceNow.

Now, people say these SaaS platforms are going to be disrupted. I actually think the opposite: they're sitting on a gold mine. There's going to be a flourishing of agents specialized in Salesforce (I think they call theirs Lightning), specialized in SAP (theirs is ABAP); everybody's got their own language. We've got CUDA, and we've got OpenUSD for Omniverse. And who's going to create an AI agent that's awesome at OpenUSD? We are, because nobody cares about it more than we do. So I think in a lot of ways these platforms are going to flourish with agents; we're going to introduce them to each other, and they're going to collaborate and solve problems.

Host: You see a wealth of different people working in every domain of AI. What do you think is under-noticed, or what do you wish more entrepreneurs, engineers, or business people would work on?

Jensen Huang: Well, first of all, I think what is misunderstood, and maybe underestimated, is the below-the-surface activity: the groundbreaking science, computer science, and engineering being transformed by AI and machine learning. You can't walk into a science department anywhere, or a theoretical math department anywhere, that the AI and machine learning we're talking about today isn't going to transform tomorrow. If you take all the engineers in the world, all the scientists in the world, and accept that the way they're working today is an early indication of the future (because obviously it is), then you're going to see a tidal wave of generative AI, a tidal wave of machine learning, change everything we do in some short period of time.

I had the benefit of working with Alex and Ilya and Hinton in Toronto, and with Yann LeCun, and of course Andrew Ng here at Stanford. I saw the early indications of it, and we were fortunate to extrapolate from what was observed, detecting cats, into a profound change in computer science and computing altogether. That extrapolation was fortunate for us. We were so excited, so inspired by it, that we changed everything about how we did things. But how long did that take? It took literally six years from observing that toy, AlexNet, which by today's standards would be considered a toy, to superhuman levels of capability in object recognition. That was only a few years.

Now look at what is happening right now: the groundswell in all the fields of science, with not one field of science left behind. Just to be very clear, everything from quantum computing to quantum chemistry, every field of science is involved in the approaches we're talking about. They've been at it for two or three years, and if we give ourselves another two or three years, the world is going to change. There's not going to be one paper, one breakthrough in science, one breakthrough in engineering, where generative AI isn't at its foundation. I'm fairly certain of it. Every so often I hear the question of whether this is a fad. You just have to go back to first principles and observe what is actually happening.

The computing stack, the way we do computing, has changed. The way you write software has changed. That is pretty cool: software is how humans encode knowledge, how we encode our algorithms, and we now encode it in a very different way. That's going to affect everything; nothing will ever be the same. I think I'm talking to the converted here, and we all see the same thing: all the startups you work with, the scientists I work with, the engineers I work with. Nothing will be left behind. We're going to take everybody with us.

Host: I think one of the most exciting things, coming from the computer science world and looking at all these other fields of science, is that I can go to a robotics conference now, a materials science conference, a biotech conference, and think, "oh, I understand this." Not at every level of the science, but in the driving of discovery, it's all the same algorithms.

Jensen Huang: They're general, and there are some universal, unifying concepts.

Host: And I think that's incredibly exciting, when you see how effective it is in every domain.

Jensen Huang: Yep, absolutely. And I'm so excited that I'm using it myself every day. I don't know about you guys, but it's my tutor now. I don't learn anything without first going to an AI. Why learn the hard way? Just go directly to an AI. I go directly to ChatGPT, or sometimes Perplexity, depending on the formulation of my question, and I just start learning from there. Then you can always fork off and go deeper if you like. But holy cow, it's just incredible.

And almost everything I know, I double-check. Even when I know something to be a fact, what I consider ground truth, where I'm the expert, I'll still go to an AI and double-check. It's so great. Almost everything I do, I involve it.

Host: I think that's a great note to end on. Thanks so much for your time today.

Jensen Huang: Really enjoyed it. Nice to see you guys.

Risk Notice and Disclaimer

Markets carry risk; invest with caution. This article does not constitute personal investment advice and does not take into account any individual user's particular investment objectives, financial situation, or needs. Users should consider whether any opinion, view, or conclusion in this article fits their particular circumstances. Anyone who invests on this basis does so at their own risk.
