书童按:本篇是Guillaume Verdon接受Lex Fridman播客采访实录的第二部分。延续上篇对有效加速主义(e/acc)哲学根基的探讨,本篇深入人与AI共生的未来图景、末日概率(p(doom))的合理性辨析、量子机器学习的前沿探索等议题。Verdon以物理学家的视角,从热力学第二定律出发论证生命与文明的增长本性,主张人类应拥抱AI增强而非恐惧替代,批判末日论者对未来的偏见式采样,并分享量子计算与量子深度学习的技术洞见。访谈纵横于哲学思辨与技术前沿,既有对人类中心主义的解构,亦有对资本主义市场机制的坚守,视野开阔,发人深省。初稿采用Claude API机器翻译及排版,书童仅做简单校对及批注,以飨诸君。

Lex Fridman (00:50:13) 那么,如果事实证明,宇宙中意识之美的载体不止人类,AI也能将同样的火焰传承下去——这让你害怕吗?你担心AI会取代人类吗?
LEX FRIDMAN (00:50:13) So if it turns out that the beauty that is consciousness in the universe is bigger than just humans, the AI can carry that same flame forward. Does it scare you, are you concerned that AI will replace humans?
Guillaume Verdon (00:50:32) 在我的职业生涯中,有一个时刻让我意识到:也许我们需要把任务交给机器,才能真正理解我们周围的宇宙——而不是仅靠人类拿着纸笔把一切算出来。对我来说,这种放手一部分主动权的过程,反而给了我们理解世界的巨大杠杆。量子计算机在理解纳米尺度的物质方面,远胜过人类。类似地,我认为人类面临一个选择:我们是否接受AI将解锁的智力和操作杠杆,从而确保我们能够沿着文明规模与范围不断增长的道路前进?我们可能会被稀释——也许会有大量AI工作者——但总体而言,出于自身利益,通过与AI结合并增强自己,我们将实现更高的增长和更大的繁荣。
GUILLAUME VERDON (00:50:32) So during my career, I had a moment where I realized that maybe we need to offload to machines to truly understand the universe around us, right, instead of just having humans with pen and paper solve it all. And to me that sort of process of letting go of a bit of agency gave us way more leverage to understand the world around us. A quantum computer is much better than a human to understand matter at the Nanoscale. Similarly, I think that humanity has a choice, do we accept the opportunity to have intellectual and operational leverage that AI will unlock and thus ensure that we’re taken along this path of growth in the scope and scale of civilization? We may dilute ourselves, right? There might be a lot of workers that are AI, but overall out of our own self-interest, by combining and augmenting ourselves with AI, we’re going to achieve much higher growth and much more prosperity, right.
Guillaume Verdon (00:51:49) 对我而言,我认为最可能的未来是人类用AI增强自己。我认为我们已经走在这条增强之路上了——我们有手机用于通信,随时带在身上。我们有可穿戴设备,很快就会拥有与我们共享感知的设备,比如Humane AI Pin,或者说,从技术上讲,你的特斯拉汽车就具有共享感知能力。如果你们有共享的体验、共享的上下文、彼此通信并且有某种输入输出接口,那它本质上就是你自己的延伸。对我来说,人类用AI增强自己,以及那些不锚定于任何生物基质的AI,二者将会共存。而让各方利益对齐的方式——我们其实已经有了让由人类和技术组成的超级智能体对齐的机制。公司本质上是大型的混合专家模型,我们在公司内部有任务的神经路由机制,也有经济交换的方式来对齐这些庞然大物。
GUILLAUME VERDON (00:51:49) To me, I think that the most likely future is one where humans augment themselves with AI. I think we’re already on this path to augmentation, we have phones we use for communication, we have on ourselves at all times. We have wearables, soon that have shared perception with us, right, like the Humane AI Pin or I mean, technically your Tesla car has shared perception. And so if you have shared experience, shared context, you communicate with one another and you have some sort of IO, really it’s an extension of yourself. And to me, I think that humanity augmenting itself with AI and having AI that is not anchored to anything biological, both will coexist. And the way to align the parties, we already have a sort of mechanism to align super intelligences that are made of humans and technology, right? Companies are sort of large mixture of expert models, where we have neural routing of tasks within a company and we have ways of economic exchange to align these behemoths.
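书童注:上文将公司类比为“混合专家模型”(Mixture of Experts)并提到“任务的神经路由”。下面是一个极简的玩具示例(纯属概念示意,并非任何实际系统的实现):门控函数对每个“专家”打分,任务被路由给得分最高的专家,类似公司内部把工作分派给相应的专职部门。

```python
import numpy as np

# 玩具版混合专家(MoE)路由,仅作示意:
# 门控函数对每个"专家"打分,softmax 归一化后
# 将任务路由给得分最高的专家处理。
rng = np.random.default_rng(0)

n_experts, dim = 4, 8
gate_weights = rng.normal(size=(n_experts, dim))   # 每个专家一行打分权重
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]

def route(task):
    scores = gate_weights @ task                   # 门控打分
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                           # softmax 归一化
    chosen = int(np.argmax(probs))                 # top-1 路由
    return chosen, experts[chosen] @ task          # 被选中的专家处理任务

task = rng.normal(size=dim)
chosen, output = route(task)
print(f"任务被路由给专家 {chosen}")
```

真实的 MoE 模型中,门控与专家都是可训练的神经网络,且常用 top-k 而非 top-1 路由,但“按打分分派任务”的基本结构与此一致。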
Guillaume Verdon (00:53:10) 对我来说,我认为资本主义就是那条路。我确实认为,无论是什么样的物质或信息配置,只要能带来最大化的增长,我们就会收敛到那里——这纯粹是物理原理使然。所以我们要么让自己与这个现实对齐,加入文明规模与范围加速扩张的进程;要么被甩在后面,试图减速,退回森林,放弃技术,回到原始状态。至少在我看来,这就是摆在面前的两条路。
GUILLAUME VERDON (00:53:10) And to me, I think capitalism is the way, and I do think that whatever configuration of matter or information leads to maximal growth, will be where we converge, just from like physical principles. And so we can either align ourselves to that reality and join the acceleration up in scope and scale of civilization or we can get left behind and try to decelerate and move back in the forest, let go of technology and return to our primitive state. And those are the two paths forward, at least to me.
Lex Fridman (00:53:54) 但有个哲学问题是:人类对齐能力是否存在极限?让我以一种论证的形式提出来。有个叫Dan Hendrycks的人写道,他同意你的观点,即AI的发展可以被视为一个进化过程,但对他——对Dan来说——这并不是件好事,因为他认为自然选择会偏好AI而非人类,这可能导致人类灭绝。你怎么看?如果这真是一个进化过程,而AI系统可能不需要人类呢?
LEX FRIDMAN (00:53:54) But there’s a philosophical question whether there’s a limit to the human capacity to align. So let me bring it up as a form of argument, this guy named Dan Hendrycks and he wrote that he agrees with you that AI development could be viewed as an evolutionary process, but to him, to Dan, this is not a good thing, as he argues that natural selection favors AIs over humans and this could lead to human extinction. What do you think, if it is an evolutionary process and AI systems may have no need for humans?
Guillaume Verdon (00:54:36) 我确实认为,我们实际上正在通过市场对AI的空间施加进化压力。现在我们运行那些对人类有正效用的AI,这就产生了选择压力——如果你认为当一个神经网络的API实例在GPU上运行时,它就”活着”的话。
GUILLAUME VERDON (00:54:36) I do think that we’re actually inducing an evolutionary process on the space of AIs through the market, right. Right now we run AIs that have positive utility to humans and that induces a selective pressure, if you consider a neural net being alive when there’s an API running instances of it on GPUs.
Lex Fridman (00:55:01) 对。
LEX FRIDMAN (00:55:01) Yeah.
Guillaume Verdon (00:55:01) 哪些API会被运行?那些对我们有高效用的。这就像我们驯化狼并把它们变成狗——狗的表达非常清晰,非常对齐。我认为我们有机会引导AI并实现高度对齐的AI。而且我认为人类加AI是一个非常强大的组合,我不确定纯粹的AI会淘汰这种组合。
GUILLAUME VERDON (00:55:01) Right. And which APIs get run? The ones that have high utility to us, right. So similar to how we domesticated wolves and turned them into dogs that are very clear in their expression, they’re very aligned, right. I think there’s going to be an opportunity to steer AI and achieve highly aligned AI. And I think that humans plus AI is a very powerful combination and it’s not clear to me that pure AI would select out that combination.
Lex Fridman (00:55:40) 所以人类现在正在创造选择压力,以创造与人类对齐的AI。但考虑到AI的发展方式以及它能多快地增长和扩展,对我来说,一个担忧是意外后果——人类无法预见这个过程的所有后果。AI系统可能造成的意外后果的破坏规模非常大。
LEX FRIDMAN (00:55:40) So the humans are creating the selection pressure right now to create AIs that are aligned to humans, but given how AI develops and how quickly it can grow and scale, to me, one of the concerns is unintended consequences, like humans are not able to anticipate all the consequences of this process. The scale of damage that could be done through unintended consequences with AI systems is very large.
Guillaume Verdon (00:56:10) 但上行空间的规模——
GUILLAUME VERDON (00:56:10) The scale of the upside.
Lex Fridman (00:56:12) 是的。
LEX FRIDMAN (00:56:12) Yes.
Guillaume Verdon (00:56:13) 对吧?
GUILLAUME VERDON (00:56:13) Right?
Lex Fridman (00:56:13) 我猜这是——
LEX FRIDMAN (00:56:13) Guess it’s-
Guillaume Verdon (00:56:14) 通过用AI增强自己,这种上行空间的规模是现在无法想象的。机会成本——我们正处在一个岔路口,对吧?我们要么走创造这些技术的道路,增强自己,在AI的帮助下攀登卡尔达肖夫等级(书童注:Kardashev Scale,衡量文明技术发展水平的量表,以能源利用能力为标准),成为多行星物种;要么我们完全不孕育这些技术,把所有潜在的上行空间都留在桌面上。
GUILLAUME VERDON (00:56:14) By augmenting ourselves with AI is unimaginable right now. The opportunity cost, we’re at a fork in the road, right? Whether we take the path of creating these technologies, augment ourselves and get to climb up the Kardashev Scale, become multi-planetary with the aid of AI, or we have a hard cutoff of like we don’t birth these technologies at all and then we leave all the potential upside on the table.
Lex Fridman (00:56:42) 对。
LEX FRIDMAN (00:56:42) Yeah.
Guillaume Verdon (00:56:42) 对我而言,出于对未来人类的责任——通过扩大文明规模,我们可以承载更多的人口——出于对这些未来人类的责任,我认为我们必须让那个更伟大、更宏大的未来成为现实。
GUILLAUME VERDON (00:56:42) Right. And to me, out of responsibility to the future humans we could carry, with higher carrying capacity by scaling up civilization. Out of responsibility to those humans, I think we have to make the greater grander future happen.
Lex Fridman (00:56:58) 在硬切断和全速前进之间,有中间地带吗?谨慎有任何论据吗?
LEX FRIDMAN (00:56:58) Is there a middle ground between cutoff and all systems go? Is there some argument for caution?
Guillaume Verdon (00:57:06) 我认为,正如我所说,市场会表现出谨慎。每个有机体、每家公司、每个消费者都在为自身利益行事,他们不会把资本分配给对他们有负效用的东西。
GUILLAUME VERDON (00:57:06) I think, like I said, the market will exhibit caution. Every organism, company, consumer is acting out of self-interest and they won’t assign capital to things that have negative utility to them.
Lex Fridman (00:57:21) 问题在于市场并不总是有完美信息,存在操纵,存在恶意行为者搅乱系统。它并不总是一个理性和诚实的系统。
LEX FRIDMAN (00:57:21) The problem is with the market is, there’s not always perfect information, there’s manipulation, there’s bad faith actors that mess with the system. It’s not always a rational and honest system.
Guillaume Verdon (00:57:41) 嗯,这正是为什么我们需要信息自由、言论自由和思想自由,以便能够收敛到对我们所有人都有正效用的技术子空间。
GUILLAUME VERDON (00:57:41) Well, that’s why we need freedom of information, freedom of speech and freedom of thought in order to be able to converge on the subspace of technologies that have positive utility for us all, right.
Lex Fridman (00:57:56) 那让我问你关于p(doom)的问题——末日概率。这个词说起来挺有意思,但经历起来可不有趣。在你看来,AI最终杀死全部或大部分人类的概率是多少——也就是所谓的末日概率?
LEX FRIDMAN (00:57:56) Well let me ask you about p(doom), probability of doom. That’s just fun to say, but not fun to experience. What is to you the probability that AI eventually kills all or most humans, also known as probability of doom?
Guillaume Verdon (00:58:16) 我不喜欢那种计算方式。我认为人们只是随便抛出数字,这是非常草率的计算。要计算概率,比方说你把世界建模为某种马尔可夫过程(变量足够多的话),或者隐马尔可夫过程——你需要对所有可能的未来空间做随机路径积分,而不仅仅是你的大脑自然倾向的那些未来。我认为p(doom)的估算者是有偏见的,因为我们的生物本性。我们进化出了对负面的、可怕的未来的偏见采样,因为那是进化的最优解。所以那些神经质程度较高的人,每天从早到晚都在想一切都会出错的负面未来,并声称他们在做无偏采样。某种意义上,他们没有对所有可能性的空间做归一化处理,而所有可能性的空间是超指数级庞大的,很难有这样的估计。
GUILLAUME VERDON (00:58:16) I’m not a fan of that calculation, I think people just throw numbers out there and it’s a very sloppy calculation, right? To calculate a probability, let’s say you model the world as some sort of Markov process, if you have enough variables or hidden Markov process. You need to do a stochastic path integral through the space of all possible futures, not just the futures that your brain naturally steers towards, right. I think that the estimators of p(doom) are biased because of our biology, right? We’ve evolved to have bias sampling towards negative futures that are scary, because that was an evolutionary optimum, right. And so people that are of, let’s say higher neuroticism will just think of negative futures where everything goes wrong all day every day and claim that they’re doing unbiased sampling. And in a sense they’re not normalizing for the space of all possibilities and the space of all possibilities is super exponentially large and it’s very hard to have this estimate.
Guillaume Verdon (00:59:40) 总的来说,我认为我们无法以那样的粒度预测未来,因为混沌。如果你有一个复杂系统,你在几个变量上有一些不确定性,如果你让时间演化,你就有了李雅普诺夫指数(Lyapunov exponent)这个概念。一点点模糊会在我们的估计中呈指数级地变成大量模糊,随着时间推移。我认为我们需要表现出一些谦逊,承认我们实际上无法预测未来。我们拥有的唯一先验是物理定律,这正是我们所主张的。物理定律说,系统会想要增长,而为增长和复制而优化的子系统在未来更有可能出现。所以我们应该力求最大化我们当前与未来的互信息,而通往那条路的方式是加速而非减速。
GUILLAUME VERDON (00:59:40) And in general, I don’t think that we can predict the future with that much granularity because of chaos, right? If you have a complex system, you have some uncertainty and a couple of variables, if you let time evolve, you have this concept of a Lyapunov exponent, right. A bit of fuzz becomes a lot of fuzz in our estimate, exponentially so, over time. And I think we need to show some humility that we can’t actually predict the future, the only prior we have is the laws of physics, and that’s what we’re arguing for. The laws of physics say the system will want to grow and subsystems that are optimized for growth and replication are more likely in the future. And so we should aim to maximize our current mutual information with the future and the path towards that is for us to accelerate rather than decelerate.
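书童注:李雅普诺夫指数刻画的“一点点模糊变成大量模糊”,可以用教科书上标准的混沌系统——逻辑斯蒂映射(r=4时李雅普诺夫指数恰为ln 2)——直观演示。以下仅为示意,并非访谈中的原始论证:

```python
# 逻辑斯蒂映射 x -> r*x*(1-x),r=4 时系统混沌,
# 李雅普诺夫指数为 ln(2):初始相差 1e-10 的两条轨迹
# 以平均每步约 2 倍的速度指数分离,直到饱和至 O(1)。
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10        # 几乎相同的初始条件
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: 轨迹间距 = {abs(x - y):.3e}")
# 几十步之后,最初 1e-10 的微小不确定性已经放大到 O(1),
# 全部长程预测能力随之丧失。
```

这正是文中“我们无法以那样的粒度预测未来”的定量版本:初始不确定性按 e^(λt) 放大,预测视界只随测量精度的对数增长。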
Guillaume Verdon (01:00:40) 所以我没有p(doom),因为我认为,类似于谷歌的量子霸权实验——我当时就在他们运行模拟的房间里——那是一个量子混沌系统的例子,你甚至无法用世界上最大的超级计算机估算某些结果的概率。那就是混沌的一个例子,而我认为这个系统对任何人来说都过于混沌,无法对某些未来的可能性有准确的估计。如果他们真有那么厉害,我想他们在股市交易上会非常富有。
GUILLAUME VERDON (01:00:40) So I don’t have a p(doom), because I think that similar to the quantum supremacy experiment at Google, I was in the room when they were running the simulations for that. That was an example of a quantum chaotic system where you cannot even estimate probabilities of certain outcomes with even the biggest supercomputer in the world, right. So that’s an example of chaos and I think the system is far too chaotic for anybody to have an accurate estimate of the likelihood of certain futures. If they were that good, I think they would be very rich trading on the stock market.
Lex Fridman (01:01:23) 但话虽如此,人类确实有偏见,根植于我们的进化生物学,害怕一切能杀死我们的东西;但我们仍然可以想象那些可能杀死我们的不同轨迹。我们不知道所有那些不一定致命的轨迹,但我认为,结合一些基于人类历史的基本直觉来推理仍然是有用的——比如看看地缘政治,看看人性的基本面:强大的技术如何可能伤害很多人?基于这些,再看看核武器,你就可以开始估算p(doom),也许是在更哲学而非数学的意义上。哲学意义是指:有这种可能性吗?人性是否倾向于那个方向?
LEX FRIDMAN (01:01:23) But nevertheless, it’s true that humans are biased, grounded in our evolutionary biology, scared of everything that can kill us, but we can still imagine different trajectories that can kill us. We don’t know all the other ones that don’t necessarily, but it’s still I think, useful combined with some basic intuition grounded in human history, to reason about like what… Like looking at geopolitics, looking at basics of human nature, how can powerful technology hurt a lot of people? It just seems grounded in that, looking at nuclear weapons, you can start to estimate p(doom) maybe in a more philosophical sense, not a mathematical one. Philosophical meaning like is there a chance? Does human nature tend towards that or not?
Guillaume Verdon (01:02:25) 我认为,对我来说,最大的存在风险之一是AI的权力集中在极少数人手中,尤其是如果这是控制信息流的公司和政府的混合体。因为这可能为一种反乌托邦的未来铺平道路——只有极少数人和政府中的寡头拥有AI,他们甚至可以说服公众AI从未存在过。这就开启了威权集中控制的场景,对我来说,这是最黑暗的时间线。而现实是,我们有这些事情发生的数据驱动先验。当你给予太多权力,当你过度集中权力时,人类会做可怕的事情。
GUILLAUME VERDON (01:02:25) I think to me, one of the biggest existential risks would be the concentration of the power of AI in the hands of the very few, especially if it’s a mix between the companies that control the flow of information and the government. Because that could set things up for a sort of dystopian future where only a very few and an oligopoly in the government have AI and they could even convince the public that AI never existed. And that opens up sort of these scenarios for authoritarian centralized control, which to me is the darkest timeline. And the reality is that we have a data-driven prior of these things happening, right. When you give too much power, when you centralize power too much, humans do horrible things, right.
Guillaume Verdon (01:03:23) 对我来说,在我的贝叶斯推断中,这比基于科幻的先验——比如“我的先验来自《终结者》电影”——有更高的可能性。所以当我和这些AI末日论者交谈时,我只是要求他们追溯一条通过马尔可夫链事件的路径,这条路径会导致我们的末日,并实际给出每次转换的合理概率。而很多时候,那条链中会有一个不符合物理的或极不可能的转换。当然,我们天生就会害怕事物,天生会对危险做出反应,天生会把未知视为危险,因为这是有利于生存的好启发式。但出于恐惧,我们反而会失去更多:如果因为恐惧而预先阻止正面未来的发生,我们将失去巨大的上行空间。所以我认为我们不应该屈服于恐惧。恐惧是心智的杀手,我认为它也是文明的杀手。
GUILLAUME VERDON (01:03:23) And to me, that has a much higher likelihood in my Bayesian inference than Sci-Fi based priors, right, like, “My prior came from the Terminator movie.” And so when I talked to these AI doomers, I just ask them to trace a path through this Markov chain of events that would lead to our doom and to actually give me a good probability for each transition. And very often there’s a unphysical or highly unlikely transition in that chain, right. But of course, we’re wired to fear things and we’re wired to respond to danger, and we’re wired to deem the unknown to be dangerous, because that’s a good heuristic for survival, right. But there’s much more to lose out of fear. We have so much to lose, so much upside to lose by preemptively stopping the positive futures from happening out of fear. And so I think that we shouldn’t give into fear, fear is the mind killer, I think it’s also the civilization killer.
Lex Fridman (01:04:43) 我们仍然可以思考事情出错的各种方式。比如,美国的开国元勋们思考了人性,这就是为什么会有关于必要自由的讨论。他们真正深入地审议了这一点,我认为同样的事情可能也可以为AGI做。人类历史确实表明我们倾向于集中化,或者至少当我们实现集中化时,很多坏事会发生。当有独裁者时,很多黑暗、糟糕的事情会发生。问题是,AGI能成为那个独裁者吗?AGI在发展时,能否因为其权力而成为集中化者?也许是因为人类的对齐,也许是同样的倾向,同样的斯大林式集中化和集中管理资源分配的倾向?
LEX FRIDMAN (01:04:43) We can still think about the various ways things go wrong, for example, the founding fathers of the United States thought about human nature and that’s why there’s a discussion about the freedoms that are necessary. They really deeply deliberated about that and I think the same could possibly be done for AGI. It is true that human history shows that we tend towards centralization, or at least when we achieve centralization, a lot of bad stuff happens. When there’s a dictator, a lot of dark, bad things happen. The question is, can AGI become that dictator? Can AGI when develop, become the centralizer, because of its power? Maybe because of the alignment of humans, perhaps, the same tendencies, the same Stalin like tendencies to centralize and manage centrally the allocation of resources?
Lex Fridman (01:05:45) 你甚至可以看到这在表面上是一个令人信服的论点:”嗯,AGI如此聪明,如此高效,如此擅长分配资源,我们为什么不把它外包给AGI呢?”然后最终,无论什么力量用权力腐蚀人类的心智,都可能对AGI做同样的事。它只会说:”好吧,人类是可有可无的,我们会摆脱他们。”就像乔纳森·斯威夫特(Jonathan Swift)几个世纪前——我想是1700年代——的《一个温和的建议》(A Modest Proposal),他讽刺性地建议,我想是在爱尔兰,穷人的孩子被作为食物喂给富人,这将是个好主意,因为它减少了穷人的数量,并给穷人带来额外收入。所以从几个方面减少了穷人的数量,因此更多的人变得富有。当然,它漏掉了一个很难放入数学方程的基本部分——人类生命的基本价值。所以,这一切都是在说,你担心AGI成为你刚才谈到的权力集中者吗?
LEX FRIDMAN (01:05:45) And you can even see that as a compelling argument on the surface level. “Well, AGI is so much smarter, so much more efficient, so much better at allocating resources, why don’t we outsource it to the AGI?” And then eventually whatever forces that corrupt the human mind with power could do the same for AGI. It’ll just say, “Well, humans are dispensable, we’ll get rid of them.” Do the Jonathan Swift, Modest Proposal from a few centuries ago, I think the 1700s, when he satirically suggested that, I think it’s in Ireland, that the children of poor people are fed as food to the rich people and that would be a good idea, because it decreases the amount of poor people and gives extra income to the poor people. So on several accounts decreases the amount of poor people, therefore more people become rich. Of course, it misses a fundamental piece here that’s hard to put into a mathematical equation of the basic value of human life. So all of that to say, are you concerned about AGI being the very centralizer of power that you just talked about?
Guillaume Verdon (01:07:09) 我确实认为,现在AI有向集中化的偏见,因为计算密度和数据的集中化以及我们训练模型的方式。我认为随着时间推移,我们将耗尽可以从互联网上抓取的数据,而且我正在研究提高计算密度,以便计算可以无处不在,以分布式方式在环境中获取信息并测试假设。我认为从根本上说,集中式控制论控制——也就是拥有一个庞大的智能体,融合许多传感器,试图准确感知世界、准确预测它、预测许多许多变量并控制它、对世界施加其意志——我认为这从来就不是最优解。比方说你有一家公司,如果你有一家公司,我不知道,有10000人,他们都向CEO汇报。即使那个CEO是AI,我认为它也会努力融合所有传来的信息,然后预测整个系统,然后施行其意志。
GUILLAUME VERDON (01:07:09) I do think that right now there’s a bias over a centralization of AI, because of a compute density and centralization of data and how we’re training models. I think over time we’re going to run out of data to scrape over the internet, and I think that, well, actually I’m working on, increasing the compute density so that compute can be everywhere and acquire information and test hypotheses in the environment in a distributed fashion. I think that fundamentally, centralized cybernetic control, so having one intelligence that is massive that fuses many sensors and is trying to perceive the world accurately, predict it accurately, predict many, many variables and control it, enact its will upon the world, I think that’s just never been the optimum, right? Like let’s say you have a company, if you have a company, I don’t know, of 10,000 people, they all report to the CEO. Even if that CEO is an AI, I think it would struggle to fuse all of the information that is coming to it and then predict the whole system and then to enact its will.
Guillaume Verdon (01:08:28) 在自然界、在公司以及各种系统中出现的,是一种分层控制论控制的概念。在公司里,你有个人贡献者,他们为自己的利益行事,试图完成他们的任务,他们有一个精细的——就时间和空间而言——控制回路和感知领域。比如说你在一家软件公司,他们有自己的代码库,他们在一天内迭代它。然后管理层可能会检查,它有更广的范围,比方说有五个直接汇报对象。然后它每周对每个人的更新采样一次,然后你可以沿着链条向上,你有更大的时间尺度和更大的范围。而这似乎已经成为控制系统的最佳方式。
GUILLAUME VERDON (01:08:28) What has emerged in nature and in corporations and all sorts of systems is a notion of sort of hierarchical cybernetic control, right. In a company it would be, you have like the individual contributors, they are self-interested and they’re trying to achieve their tasks and they have a fine, in terms of time and space if you will, control loop and field of perception, right. They have their code base, let’s say you’re in a software company, they have their code base, they iterate it on it intraday, right. And then the management maybe checks in, it has a wider scope, it has, let’s say five reports, right. And then it samples each person’s update once per week, and then you can go up the chain and you have larger timescale and greater scope. And that seems to have emerged as sort of the optimal way to control systems.
Guillaume Verdon (01:09:25) 而这正是资本主义给我们的。你有这些层级结构,你甚至可以有母公司等等。这样容错性要强得多。在量子计算中——这是我的领域出身——我们有量子纠错中的容错概念。量子纠错是检测来自噪声的故障,预测它如何在系统中传播,然后纠正它——这是一个控制论回路。事实证明,分层的解码器,并且在每个层级都是局部的——
GUILLAUME VERDON (01:09:25) And really that’s what capitalism gives us, right? You have these hierarchies and you can even have like parent companies and so on. And so that is far more fault tolerant. In quantum computing, that’s the field I came from, we have a concept of fault tolerance in quantum error correction, right? Quantum error correction is detecting a fault that came from noise, predicting how it’s propagated through the system and then correcting it, right, so it’s a cybernetic loop. And it turns out that decoders that are hierarchical and in each level, the hierarchy are local-
Guillaume Verdon (01:10:00) ——分层的,并且每个层级都是局部的,表现要好得多,而且容错性要强得多。原因是,如果你有一个非局部的解码器,那么你在这个控制节点上有一个故障,整个系统就会崩溃。类似地,如果你有一个每个人都向其汇报的CEO,而那个CEO去度假了,整个公司就会陷入停滞。对我来说,我认为是的,我们看到AI有集中化的趋势,但我认为随着时间推移会有修正,智能会更接近感知。我们将把AI分解成更小的子系统,彼此通信并形成一个元系统。
GUILLAUME VERDON (01:10:00) … that are hierarchical. And at each level, the hierarchy are local, perform the best by far, and are far more fault-tolerant. The reason is, if you have a non-local decoder, then you have one fault at this control node and the whole system crashes. Similarly to if you have one CEO that everybody reports to and that CEO goes on vacation, the whole company comes to a crawl. To me, I think that yes, we’re seeing a tendency towards centralization of AI, but I think there’s going to be a correction over time, where intelligence is going to go closer to the perception. And we’re going to break up AI into smaller subsystems that communicate with one another and form a meta system.
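书童注:“非局部解码器存在单点故障”这一论证,可以用一个极简的概率算术来示意(以下数字纯属假设):

```python
# 玩具式的单点故障算术(数字纯属假设):
# 扁平组织:所有人都向唯一的全局控制节点汇报,
#   该节点宕机则整个系统停摆;
# 层级组织:顶层节点宕机只影响跨单元协调,
#   某个局部单元只有在自己的局部控制器也宕机时才停摆。
p_fail = 0.01                          # 假设任一控制节点当天宕机的概率

p_stall_flat = p_fail                  # 扁平:根节点宕机 => 全体停摆

m = 10                                 # 层级:m 个局部单元
p_unit_stall = p_fail * p_fail         # 顶层与局部控制器同时宕机
expected_units_stalled = m * p_unit_stall

print(f"扁平组织:整体停摆概率 {p_stall_flat}")
print(f"层级组织:期望停摆单元数 {expected_units_stalled:.4f} / {m}")
```

在这些假设下,层级化把“全局停摆”事件替换成了概率低得多的局部停摆事件,这就是上文“CEO去度假,整个公司陷入停滞”与局部解码器对比的算术骨架。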
Lex Fridman (01:10:56) 如果你看看今天世界上的层级结构,有国家,那些都是层级的。但相对于彼此,国家是无政府的,所以这是一种无政府状态。
LEX FRIDMAN (01:10:56) If you look at the hierarchies that are in the world today, there’s nations and those all hierarchical. But in relation to each other, nations are anarchic, so it’s an anarchy.
Guillaume Verdon (01:11:06) 嗯。
GUILLAUME VERDON (01:11:06) Mm-hmm.
Lex Fridman (01:11:08) 你预见这样一个世界吗,在那里没有一个总体的……你怎么称呼它?集中式控制论控制?
LEX FRIDMAN (01:11:08) Do you foresee a world like this, where there’s not a over… What’d you call it? A centralized cybernetic control?
Guillaume Verdon (01:11:17) 集中式控制中心。对。
GUILLAUME VERDON (01:11:17) Centralized locus of control. Yeah.
Lex Fridman (01:11:21) 你说那是次优的?
LEX FRIDMAN (01:11:21) That’s suboptimal, you’re saying?
Guillaume Verdon (01:11:22) 对。
GUILLAUME VERDON (01:11:22) Yeah.
Lex Fridman (01:11:23) 所以,在最顶层总会有竞争状态?
LEX FRIDMAN (01:11:23) So, it would be always a state of competition at the very top level?
Guillaume Verdon (01:11:27) 对。就像在公司里,你可能有两个部门在做类似的技术并相互竞争,然后你剪掉表现不佳的那个。这是一个树的选择过程,或者一个产品被砍掉,然后整个组织被解雇。这个尝试新事物和淘汰不奏效的旧事物的过程,正是给我们适应性的东西,帮助我们收敛到最好的技术和最该做的事情。
GUILLAUME VERDON (01:11:27) Yeah. Yeah. Just like in a company, you may have two units working on similar technology and competing with one another, and you prune the one that performs not as well. That’s a selection process for a tree, or a product gets killed and then a whole org gets fired. This process of trying new things and shedding old things that didn’t work, it’s what gives us adaptability and helps us converge on the technologies and things to do that are most good.
Lex Fridman (01:12:04) 我只是希望没有一种对AGI独特而对人类不独特的失败模式,因为你现在主要描述的是人类系统。
LEX FRIDMAN (01:12:04) I just hope there’s not a failure mode that’s unique to AGI versus humans, because you’re describing human systems mostly right now.
Guillaume Verdon (01:12:11) 对。
GUILLAUME VERDON (01:12:11) Right.
Lex Fridman (01:12:11) 我只是希望当一家公司垄断AGI时,我们会看到与人类相同的情况,也就是另一家公司会涌现出来并开始有效竞争。
LEX FRIDMAN (01:12:11) I just hope when there’s a monopoly on AGI in one company, that we’ll see the same thing we see with humans, which is, another company will spring up and start competing effectively.
Guillaume Verdon (01:12:24) 到目前为止一直是这样。我们有OpenAI。我们有Anthropic。现在,我们有xAI。我们有Meta,甚至是开源的,现在我们有Mistral,它非常有竞争力。这就是资本主义的美妙之处。你不必过于信任任何一方,因为我们总是在每个层面对冲我们的赌注。总有竞争,这对我来说至少是最美好的事情,就是整个系统总是在转变,总是在适应。
GUILLAUME VERDON (01:12:24) That’s been the case so far. We have OpenAI. We have Anthropic. Now, we have xAI. We have Meta even for open source, and now we have Mistral, which is highly competitive. That’s the beauty of capitalism. You don’t have to trust any one party too much because we’re always hedging our bets at every level. There’s always competition and that’s the most beautiful thing to me, at least, is that the whole system is always shifting and always adapting.
Guillaume Verdon (01:12:54) 维持这种活力就是我们避免暴政的方式。确保每个人都能访问这些工具、这些模型,并能为研究做出贡献,就能避免智能暴政——极少数人控制世界的AI并用它来压迫周围的人。
GUILLAUME VERDON (01:12:54) Maintaining that dynamism is how we avoid tyranny. Making sure that everyone has access to these tools, to these models, and can contribute to the research, avoids a neural tyranny where very few people have control over AI for the world and use it to oppress those around them.
Lex Fridman (01:13:23) 当你谈论智能时,你提到了多体量子纠缠。
LEX FRIDMAN (01:13:23) When you were talking about intelligence, you mentioned multipartite quantum entanglement.
Guillaume Verdon (01:13:28) 嗯。
GUILLAUME VERDON (01:13:28) Mm-hmm.
Lex Fridman (01:13:29) 先问一个高层次的问题:你认为什么是智能?当你思考量子力学系统并观察其中发生的某种计算时,你认为宇宙能够进行的那种计算有什么智能之处?而人类大脑能够进行的计算只是其中的一小部分?
LEX FRIDMAN (01:13:29) High-level question first is, what do you think is intelligence? When you think about quantum mechanical systems and you observe some kind of computation happening in them, what do you think is intelligent about the kind of computation the universe is able to do; a small, small inkling of which is the kind of computation a human brain is able to do?
Guillaume Verdon (01:13:52) 我会说智能和计算并不完全是一回事。我认为宇宙确实在进行量子计算。如果你能访问所有自由度和一台非常非常非常大的量子计算机,有很多很多量子比特,比方说,每个普朗克体积有几个量子比特——这差不多是我们拥有的像素——那么你就能在一台足够大的量子计算机上模拟整个宇宙,当然,假设你看的是宇宙的有限体积。我认为至少对我来说,智能是——我回到控制论——感知、预测和控制我们世界的能力。
GUILLAUME VERDON (01:13:52) I would say intelligence and computation aren’t quite the same thing. I think that the universe is very much doing a quantum computation. If you had access to all the degrees of freedom and a very, very, very large quantum computer with many, many, many qubits, let’s say, a few qubits per Planck volume, which is more or less the pixels we have, then you’d be able to simulate the whole universe on a sufficiently large quantum computer, assuming you’re looking at a finite volume, of course, of the universe. I think that at least to me, intelligence is, I go back to cybernetics, the ability to perceive, predict, and control our world.
Guillaume Verdon (01:14:46) 但实际上,现在看来,我们使用的很多智能更多是关于压缩。它是关于操作化信息论。在信息论中,你有分布或系统的熵的概念,熵告诉你,如果你有最优代码,你需要这么多比特来编码这个分布或这个子系统。AI,至少我们今天为LLM和量子所做的方式,非常像试图最小化我们的世界模型与世界之间、与来自世界的分布之间的相对熵。我们在学习,我们在计算空间中搜索以处理世界,以找到那个已经提炼出所有方差、噪声和熵的压缩表示。
GUILLAUME VERDON (01:14:46) But really, nowadays, it seems like a lot of intelligence we use is more about compression. It’s about operationalizing information theory. In information theory, you have the notion of entropy of a distribution or a system, and entropy tells you that you need this many bits to encode this distribution or this subsystem, if you have the most optimal code. AI, at least the way we do it today for LLMs and for quantum, is very much trying to minimize relative entropy between our models of the world and the world, distributions from the world. We’re learning, we’re searching over the space of computations to process the world, to find that compressed representation that has distilled all the variance in noise and entropy.
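书童注:这里说的“最小化世界模型与世界之间的相对熵”,就是信息论中的KL散度 D(p‖q):世界服从分布p而模型用q来编码时,平均每个样本多付出的比特数。下面用一个小例子演示这一概念(仅为示意):

```python
import numpy as np

# 熵 H(p) = -sum p*log2(p):最优编码下每个样本所需的比特数。
# 相对熵(KL散度) D(p||q) = sum p*(log2(p)-log2(q)):
# 世界服从 p 而模型用 q 编码时,额外浪费的比特数。
def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p)))

def kl(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    logp = np.log2(p, where=p > 0, out=np.zeros_like(p))
    return np.sum(p * (logp - np.log2(q)))

world = np.array([0.5, 0.25, 0.125, 0.125])      # "世界"的真实分布
model_bad = np.array([0.25, 0.25, 0.25, 0.25])   # 未学习的均匀模型
model_good = world.copy()                        # 完美拟合的模型

print(entropy(world))         # 1.75 比特:压缩的下限
print(kl(world, model_bad))   # 0.25 比特:模型失配浪费的编码开销
print(kl(world, model_good))  # 0.0:模型与世界一致,无浪费
```

“学习”在这个意义上就是在计算空间中搜索,把 D(p‖q) 压向零——也就是文中所说的找到提炼掉噪声与熵的压缩表示。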
Guillaume Verdon (01:15:58) 最初,我从黑洞研究进入量子机器学习,因为黑洞的熵非常有趣。某种意义上,它们在物理上是宇宙中密度最高的物体。你无法在空间上比黑洞更密集地打包更多信息。所以我在想,黑洞实际上是如何编码信息的?它们的压缩代码是什么?这让我进入了算法空间,搜索量子代码空间。它也让我实际进入了,你如何从世界获取量子信息?我做过的一些工作,现在是公开的,是量子模数转换。
GUILLAUME VERDON (01:15:58) Originally, I came to quantum machine learning from the study of black holes because the entropy of black holes is very interesting. In a sense, they’re physically the most dense objects in the universe. You can’t pack more information spatially any more densely than in a black hole. And so, I was wondering, how do black holes actually encode information? What is their compression code? That got me into the space of algorithms, to search over space of quantum codes. It got me actually into also, how do you acquire quantum information from the world? Something I’ve worked on, this is public now, is quantum analog digital conversion.
Guillaume Verdon (01:16:50) 你如何从真实世界以叠加态捕获信息而不破坏叠加态,而是为量子计算机数字化来自真实世界的信息?如果你有能力捕获量子信息并学习它的表示,现在你就可以学习可能在其潜在表示中有一些有用信息的压缩表示。我认为我们文明面临的许多问题实际上都超越了这个复杂性障碍。温室效应是一种量子力学效应。化学是量子力学的。核物理是量子力学的。
GUILLAUME VERDON (01:16:50) How do you capture information from the real world in superposition and not destroy the superposition, but digitize for a quantum mechanical computer information from the real world? If you have an ability to capture quantum information and learn representations of it, now you can learn compressed representations that may have some useful information in their latent representation. I think that many of the problems facing our civilization are actually beyond this complexity barrier. The greenhouse effect is a quantum mechanical effect. Chemistry is quantum mechanical. Nuclear physics is quantum mechanical.
Guillaume Verdon (01:17:43) 很多生物学、蛋白质折叠等都受量子力学影响。所以,解锁用量子计算机和量子AI增强人类智力的能力,对我来说似乎是文明需要发展的基本能力。我花了几年时间做这个,但随着时间推移,我对开始看起来像核聚变的时间线感到厌倦。
GUILLAUME VERDON (01:17:43) A lot of biology and protein folding and so on is affected by quantum mechanics. And so, unlocking an ability to augment human intellect with quantum mechanical computers and quantum mechanical AI seemed to me like a fundamental capability for civilization that we needed to develop. I spent several years doing that, but over time, I grew weary of the timelines that were starting to look like nuclear fusion.
Lex Fridman (01:18:17) 我可以问一个高层次的问题,也许通过定义的方式,通过解释的方式:什么是量子计算机,什么是量子机器学习?
LEX FRIDMAN (01:18:17) One high-level question I can ask is maybe by way of definition, by way of explanation, what is a quantum computer and what is quantum machine learning?
Guillaume Verdon (01:18:27) 量子计算机实际上就是一个量子力学系统,我们对它有足够的控制,它可以保持其量子力学状态。量子力学是自然界在非常小的尺度上的行为方式,当事物非常小或非常冷时,它实际上比概率论更基础。我们习惯于事物是这个或那个,但我们不习惯用叠加态思考,因为,嗯,我们的大脑做不到。所以,我们必须把量子力学世界翻译成,比如说,线性代数来理解它。不幸的是,这种翻译平均而言是指数级低效的。你必须用非常大的矩阵来表示事物。但实际上,你可以用很多东西制造量子计算机,我们已经看到各种各样的玩家:中性原子、囚禁离子、超导金属,以及不同频率的光子。
GUILLAUME VERDON (01:18:27) A quantum computer really is a quantum mechanical system, over which we have sufficient control, and it can maintain its quantum mechanical state. And quantum mechanics is how nature behaves at the very small scales, when things are very small or very cold, and it’s actually more fundamental than probability theory. We’re used to things being this or that, but we’re not used to thinking in superpositions because, well, our brains can’t do that. So, we have to translate the quantum mechanical world to, say, linear algebra to grok it. Unfortunately, that translation is exponentially inefficient on average. You have to represent things with very large matrices. But really, you can make a quantum computer out of many things, and we’ve seen all sorts of players, from neutral atoms, trapped ions, superconducting metal, photons at different frequencies.
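书童注:这里说的“指数级低效”可以具体算一算:n个量子比特的经典状态向量需要2^n个复振幅。下面的小例子(仅为示意)既演示了这种指数增长,也演示了“叠加态”在线性代数中的样子:

```python
import numpy as np

# n 个量子比特的经典(状态向量)描述需要 2**n 个复振幅,
# 这就是把量子力学"翻译"成线性代数平均指数级低效的原因。
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16            # complex128 每个振幅 16 字节

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes")

# 两比特示例:对 |00> 的第 0 个比特作 Hadamard 门,
# 寄存器即进入等幅叠加态 (|00> + |10>)/sqrt(2)。
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
state = np.zeros(4)
state[0] = 1.0                             # 初态 |00>
state = np.kron(H, I) @ state              # 对第 0 个比特施加 H
print(state)                               # ≈ [0.707, 0, 0.707, 0]
```

50个量子比特的状态向量已需约1.8×10^16字节(上万TB)内存,这正是经典模拟在几十个比特处撞上指数墙的原因。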
Guillaume Verdon (01:19:38) 我认为你可以用很多东西制造量子计算机。但对我来说,真正有趣的是:量子机器学习是用量子计算机理解量子力学世界,即把物理世界嵌入AI的表示;而量子计算机工程则是把AI算法嵌入物理世界。这种双向性——把物理世界嵌入AI、把AI嵌入物理世界——这种物理与AI之间的共生关系,正是我所追求的核心,即使在今天、在量子计算之后依然如此。我仍然走在这条真正融合物理与AI的旅程上。
GUILLAUME VERDON (01:19:38) I think you could make a quantum computer out of many things. But to me, the thing that was really interesting was both quantum machine learning was about understanding the quantum mechanical world with quantum computers, so embedding the physical world into AI representations, and quantum computer engineering was embedding AI algorithms into the physical world. This bi-directionality of embedding physical world into AI, AI into the physical world, this symbiosis between physics and AI, really that’s the core of my quest really, even to this day, after quantum computing. It’s still in this journey to merge really physics and AI.
Lex Fridman (01:20:29) 量子机器学习,就是在一种忠实于自然之量子力学本性的自然表示上进行机器学习?
LEX FRIDMAN (01:20:29) Quantum machine learning is a way to do machine learning on a representation of nature that stays true to the quantum mechanical aspect of nature?
Guillaume Verdon (01:20:43) 对,它是学习量子力学表示。那将是量子深度学习。或者,你可以尝试在量子计算机上做经典机器学习。我不建议这样做,因为你可能会有一些加速,但很多时候,加速伴随着巨大的成本。使用量子计算机非常昂贵。
GUILLAUME VERDON (01:20:43) Yeah, it’s learning quantum mechanical representations. That would be quantum deep learning. Alternatively, you can try to do classical machine learning on a quantum computer. I wouldn’t advise it because you may have some speed-ups, but very often, the speed-ups come with huge costs. Using a quantum computer is very expensive.
Guillaume Verdon (01:21:08) 为什么?因为你假设计算机在绝对零度下运行,而宇宙中没有任何物理系统能达到那个温度。你必须做的,就是我一直提到的量子纠错过程,它实际上是一台算法冰箱:不断把熵从系统中抽出来,让它更接近绝对零度。当你计算在量子计算机上做(比如说)经典深度学习需要多少资源时,会发现开销巨大,得不偿失。这就像为了把东西送到城市另一头,考虑动用火箭进入轨道再返回。没有意义,用送货卡车就行。
GUILLAUME VERDON (01:21:08) Why is that? Because you assume the computer is operating at zero temperature, which no physical system in the universe can achieve that temperature. What you have to do is what I’ve been mentioning, this quantum error correction process, which is really an algorithmic fridge. It’s trying to pump entropy out of the system, trying to get it closer to zero temperature. When you do the calculations of how many resources it would take to, say, do deep learning on a quantum computer, classical deep learning, there’s such a huge overhead, it’s not worth it. It’s like thinking about shipping something across a city using a rocket and going to orbit and back. It doesn’t make sense. Just use a delivery truck.
Lex Fridman (01:21:53) 你能用量子深度学习弄清楚、预测、理解什么样的东西,而用深度学习做不到?所以,将量子力学系统纳入学习过程?
LEX FRIDMAN (01:21:53) What kind of stuff can you figure out, can you predict, can you understand with quantum deep learning that you can’t with deep learning? So, incorporating quantum mechanical systems into the learning process?
Guillaume Verdon (01:22:05) 我认为这是一个很好的问题。从根本上说,任何具有足够多量子力学关联、以致经典表示很难捕获的系统,量子力学表示相对于纯经典表示就应该有优势。问题在于:哪些系统具有足够"量子"的关联?同时也在于:哪些系统对工业仍有意义?这是个大问题。人们的目光倾向于化学、核物理。我实际上研究过如何处理来自量子传感器的输入:如果你有一个量子传感器网络,它们捕获了世界的量子力学图像,再考虑如何后处理,这就构成一种量子形式的机器感知。例如,费米实验室有一个项目,探索用这类量子传感器探测暗物质。对我来说,这与我从小想理解宇宙的追求一脉相承。所以,我希望有一天我们能拥有非常庞大的量子传感器网络,帮助我们窥探宇宙最早期的面貌。比如,LIGO就是一个量子传感器,只不过是非常大的一个。所以,是的,我会说是量子机器感知,还有模拟——透彻理解量子模拟,就像AlphaFold那样。AlphaFold理解了蛋白质构象上的概率分布;用量子机器学习,你可以更高效地理解电子组态上的量子分布。
GUILLAUME VERDON (01:22:05) I think that’s a great question. Fundamentally, it’s any system that has sufficient quantum mechanical correlations that are very hard to capture for classical representations. Then, there should be an advantage for a quantum mechanical representation over a purely classical one. The question is, which systems have sufficient correlations that are very quantum? But it’s also, which systems are still relevant to industry? That’s a big question. People are leaning towards chemistry, nuclear physics. I’ve worked on actually processing inputs from quantum sensors. If you have a network of quantum sensors, they’ve captured a quantum mechanical image of the world and how to post-process that, that becomes a quantum form of machine perception. For example, Fermilab has a project exploring detecting dark matter with these quantum sensors. To me, that’s in alignment with my quest to understand the universe ever since I was a child. And so, someday, I hope that we can have very large networks of quantum sensors that help us peer into the earliest parts of the universe. For example, the LIGO is a quantum sensor. It’s just a very large one. So, yeah, I would say quantum machine perception, simulations, grokking quantum simulations, similar to AlphaFold. AlphaFold understood the probability distribution over configurations of proteins. You can understand quantum distributions over configurations of electrons more efficiently with quantum machine learning.
Lex Fridman (01:23:53) 你合著过一篇题为《量子深度学习的通用训练算法》的论文,里面提出了Baqprop——拼写里带个Q。做得漂亮,先生,做得漂亮。它是如何工作的?Baqprop,以及我们在经典机器学习里熟悉的那些东西如何迁移到量子机器学习,你能提几个有趣的方面吗?
LEX FRIDMAN (01:23:53) You co-authored a paper titled A Universal Training Algorithm for Quantum Deep Learning. That involves Baqprop, with a Q. Very well done, sir. Very well done. How does it work? Is there some interesting aspects you can just mention on how Baqprop and some of these things we know for classical machine learning transfer over to the quantum machine learning?
Guillaume Verdon (01:24:19) 是的。那是一篇古怪的论文,是我在量子深度学习领域最早的论文之一。当时每个人都在说:"哦,我认为深度学习会被量子计算机加速。"我说:"好吧,预测未来的最好方法就是发明它。所以,给你一篇100页的论文,玩得开心。"本质上,量子计算通常是把可逆操作嵌入到一次量子计算之中。
GUILLAUME VERDON (01:24:19) Yeah. That was a funky paper. That was one of my first papers in quantum deep learning. Everybody was saying, “Oh, I think deep learning is going to be sped up by quantum computers.” I was like, “ Well, the best way to predict the future is to invent it. So, here’s a 100-page paper, have fun.” Essentially, quantum computing is usually, you embed reversible operations into a quantum computation.
Guillaume Verdon (01:24:47) 那里的技巧是做一个前馈操作并做我们所说的相位踢(phase kick)。但实际上,它只是一个力踢(force kick)。你只是用与你希望优化的损失函数成正比的某种力踢系统。然后,通过执行反计算,你从参数的叠加态开始,这相当古怪。现在,你不只是有参数的一个点,你有许多潜在参数的叠加态。我们的目标是——
GUILLAUME VERDON (01:24:47) The trick there was to do a feedforward operation and do what we call a phase kick. But really, it’s just a force kick. You just kick the system with a certain force that is proportional to your loss function that you wish to optimize. And then, by performing uncomputation, you start with a superposition over parameters, which is pretty funky. Now, you don’t have just a point for parameters, you have a superposition over many potential parameters. Our goal is-
Lex Fridman (01:25:24) 是用相位踢以某种方式调整参数吗?
LEX FRIDMAN (01:25:24) Is using phase kick somehow to adjust the parameters?
Guillaume Verdon (01:25:28) 对。因为相位踢模拟的效果,是让参数空间像一个n维空间中的粒子,而你试图在神经网络的损失景观上得到薛定谔方程、薛定谔动力学。你用一个算法来诱导这个相位踢,其中包含一次前馈、一次踢;然后,当你把前馈反计算回去时,这些相位踢和力中的所有误差就会反向传播,击中各层中的每一个参数。
GUILLAUME VERDON (01:25:28) Right. Because phase kicks emulate having the parameter space be like a particle in N dimensions, and you’re trying to get the Schrödinger equation, Schrödinger dynamics, in the loss landscape of the neural network. You do an algorithm to induce this phase kick, which involves a feedforward, a kick. And then, when you uncompute the feedforward, then all the errors in these phase kicks and these forces back-propagate and hit each one of the parameters throughout the layers.
Guillaume Verdon (01:26:04) 如果你把这个与动能的模拟交替进行,那么它就像一个在n维中移动的粒子,一个量子粒子。原则上的优势是它可以在景观中穿隧并找到对于随机优化器来说很难找到的新最优解。但同样,这是一个理论性的东西,在实践中,至少以我们目前计划的量子计算机架构,这样的算法运行起来会极其昂贵。
GUILLAUME VERDON (01:26:04) If you alternate this with an emulation of kinetic energy, then it’s like a particle moving in N dimensions, a quantum particle. The advantage in principle would be that it can tunnel through the landscape and find new optima that would’ve been difficult for stochastic optimizers. But again, this is a theoretical thing, and in practice with at least the current architectures for quantum computers that we have planned, such algorithms would be extremely expensive to run.
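书童注:Verdon描述的"相位踢与动能模拟交替",在结构上对应数值物理里的分步(split-operator)法:势能步给波函数施加正比于损失函数的相位,动能步在傅里叶空间演化。以下Python代码是书童补充的一维玩具示意(双阱势、步长等参数均为假设取值),并非论文算法本身,仅用于体会量子粒子在"损失景观"上的薛定谔动力学:

```python
import numpy as np

# 一维网格与波数
N = 1024
x = np.linspace(-4, 4, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = 5.0 * (x**2 - 1.0) ** 2  # 双阱"损失景观",两个极小值在 x=±1
psi = np.exp(-((x + 1.0) ** 2) / (2 * 0.3**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # 归一化:粒子初始落在左阱

dt = 0.002
phase_kick = np.exp(-1j * V * dt)          # "相位踢":相位正比于损失函数
kinetic = np.exp(-1j * (k**2) / 2 * dt)    # 动能项在傅里叶空间是对角的

for _ in range(2000):
    psi *= phase_kick                                  # 前馈 + 踢
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))       # 动能演化步

norm = np.sum(np.abs(psi) ** 2) * dx       # 两步都是纯相位乘法,总概率守恒
prob_right = np.sum(np.abs(psi[x > 0]) ** 2) * dx
print(f"norm={norm:.6f}, P(x>0)={prob_right:.4f}")
```

每一步都是酉演化,因此总概率严格守恒;把势垒加高、演化时间拉长,便可观察到波包向另一阱的穿隧——这正是他所说的、随机优化器难以企及的寻优方式。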
Lex Fridman (01:26:41) 也许这里正适合问问你涉足过的那些不同领域之间的区别。数学、物理、工程,还有创业——技术栈的不同层次。我觉得你这里谈的很多内容偏数学一侧,物理也几乎是在理论层面运作。
LEX FRIDMAN (01:26:41) Maybe this is a good place to ask the difference between the different fields that you’ve had a toe in. So, mathematics, physics, engineering, and also entrepreneurship, the different layers of the stack. I think a lot of the stuff you’re talking about here is a little bit on the math side, maybe physics almost working in theory.
Guillaume Verdon (01:27:03) 嗯。
GUILLAUME VERDON (01:27:03) Mm-hmm.
Lex Fridman (01:27:03) 数学、物理、工程和为量子计算、量子机器学习制造产品之间有什么区别?
LEX FRIDMAN (01:27:03) What’s the difference between math, physics, engineering, and making a product for a quantum computing for quantum machine learning?
Guillaume Verdon (01:27:14) 是的。TensorFlow Quantum项目最初的一些团队成员——这个项目是我们在滑铁卢大学读书时启动的——包括我自己,最初我是物理学家、应用数学家;还有一名计算机科学家、一名机械工程师,以及一名物理学家,他主要做实验。组建高度跨学科的团队,并想清楚如何沟通、分享知识,正是做这类跨学科工程工作的关键。
GUILLAUME VERDON (01:27:14) Yeah. Some of the original team for the TensorFlow Quantum project, which we started in school, at University of Waterloo, there was myself. Initially, I was a physicist, applied mathematician. We had a computer scientist, we had a mechanical engineer, and then we had a physicist. That was experimental primarily. Putting together teams that are very cross-disciplinary and figuring out how to communicate and share knowledge is really the key to doing this interdisciplinary engineering work.
Guillaume Verdon (01:27:51) 区别很大。在数学中,你可以为数学本身而探索数学;在物理学中,你是在应用数学来理解我们周围的世界;而在工程中,你试图"黑"这个世界——琢磨如何运用我所掌握的物理学、我对世界的知识,去做成事情。
GUILLAUME VERDON (01:27:51) There is a big difference. In mathematics, you can explore mathematics for mathematics’ sake. In physics, you’re applying mathematics to understand the world around us. And in engineering, you’re trying to hack the world. You’re trying to find how to apply the physics that I know, my knowledge of the world, to do things.
Lex Fridman (01:28:11) 嗯,特别是在量子计算中,我认为工程上有很多限制。它似乎非常困难。
LEX FRIDMAN (01:28:11) Well, in quantum computing in particular, I think there’s just a lot of limits to engineering. It just seems to be extremely hard.
Guillaume Verdon (01:28:17) 是的。
GUILLAUME VERDON (01:28:17) Yeah.
Lex Fridman (01:28:18) 所以在理论上用数学探索量子计算、量子机器学习有很多价值。我想问一个问题是,为什么建造量子计算机如此困难?你对将这些想法付诸实践的时间线有什么看法?
LEX FRIDMAN (01:28:18) So, there’s a lot of value to be exploring quantum computing, quantum machine learning in theory with math. I guess one question is, why is it so hard to build a quantum computer? What’s your view of timelines in bringing these ideas to life?
Guillaume Verdon (01:28:43) 对。我认为我公司的一个总体基调是:我们有一批人……量子计算领域正出现某种"出走潮",而我们正转向更广义的、非量子的基于物理的AI。所以,这给了你一个提示。
GUILLAUME VERDON (01:28:43) Right. I think that an overall theme of my company is that we have folks that are… There’s a sort of exodus from quantum computing and we’re going to broader physics-based AI that is not quantum. So, that gives you a hint.
Lex Fridman (01:29:00) 我们应该说你的公司名字是Extropic?
LEX FRIDMAN (01:29:00) We should say the name of your company is Extropic?
Guillaume Verdon (01:29:03) Extropic,没错。我们做基于物理的AI,主要基于热力学而非量子力学。但本质上,量子计算机之所以非常难造,是因为你必须诱导出一个处于绝对零度的信息子空间。做到这一点的方法是对信息进行编码:你在代码中编码代码,再在代码中编码代码,层层嵌套。做这种纠错需要大量冗余,但归根结底,它是一种算法冰箱——不断把熵从一个虚拟的、去局域化的子系统中抽出来,这个子系统代表你的"逻辑量子比特",也就是你真正想在其上运行量子力学程序的有效载荷量子比特。它非常困难,因为要扩展你的量子计算机,你需要每个组件的质量都足够高,扩展才有意义。因为在做这个量子纠错过程时,如果每个量子比特以及你对它们的控制不够好,扩展就不值得——你添加的错误反而比移除的更多。这里有个"阈值"的概念:如果量子比特在可控性上的质量足够高,扩展才真正值得。而实际上,近年来人们陆续跨过了这个阈值,事情开始变得值得做了。
GUILLAUME VERDON (01:29:03) Extropic, that’s right. We do physics-based AI, primarily based on thermodynamics, rather than quantum mechanics. But essentially, a quantum computer is very difficult to build because you have to induce this zero temperature subspace of information. The way to do that is by encoding information, you encode a code within a code, within a code, within a code. There’s a lot of redundancy needed to do this error correction, but ultimately, it’s a sort of algorithmic refrigerator, really. It’s just pumping out entropy out of the subsystem that is virtual and delocalized that represents your “logical qubits”, aka the payload quantum bits in which you actually want to run your quantum mechanical program. It’s very difficult because in order to scale up your quantum computer, you need each component to be of sufficient quality for it to be worth it. Because if you try to do this error correction, this quantum error correction process, in each quantum bit and your control over them, if it’s insufficient, it’s not worth scaling up. You’re actually adding more errors than you remove. There’s this notion of a threshold where if your quantum bits are sufficient quality in terms of your control over them, it’s actually worth scaling up. Actually, in recent years, people have been crossing the threshold and it’s starting to be worth it.
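书童注:"跨越阈值才值得扩展"可以用表面码文献中常见的近似标度关系粗略估算。以下Python代码为书童补充的示意:其中阈值p_th≈1e-2、前置系数0.1、物理比特数≈2d²(d为码距)均是文献中常见的假设性取值,并非访谈给出的数字。

```python
# 表面码的常见近似:逻辑错误率 ~ A * (p / p_th)**((d+1)/2),
# 物理比特数 ~ 2*d*d。p 为物理错误率,p_th 为阈值,d 为码距。

def logical_error_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) / 2)

def qubits_needed(p: float, target: float = 1e-9, p_th: float = 1e-2):
    """返回达到目标逻辑错误率所需的(码距, 物理比特数);p 不低于阈值时返回 None。"""
    if p >= p_th:
        return None  # 阈值之上:码距越大错误越多,扩展不值得
    for d in range(3, 201, 2):
        if logical_error_rate(p, d, p_th) < target:
            return d, 2 * d * d
    return None

print(qubits_needed(5e-3))  # 刚过阈值:需要很大的码距与开销
print(qubits_needed(1e-3))  # 比特质量再高几倍:开销骤降
```

物理错误率只要比阈值再低几倍,所需码距和比特开销便急剧下降;高于阈值时,纠错则越纠越错——这正是"足够质量才值得扩展"的含义。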
Guillaume Verdon (01:30:38) 这只是一场非常漫长的工程跋涉。但说到底,真正让我惊叹的,是我们对这些系统竟能达到何等精致的控制水平——这实际上相当疯狂。人们正在跨越阈值……正在达成一个个里程碑。只是总的来说,媒体总是跑在技术前面,炒作有点过头。这对筹款有好处,但有时会引来寒冬,这就是炒作周期。我个人对10年、15年时间尺度上的量子计算持乐观态度,但我认为在此期间还有别的探索可做。我认为这个领域现在已在好手之中。
GUILLAUME VERDON (01:30:38) It’s just a very long slog of engineering, but ultimately, it’s really crazy to me how much exquisite level of control we have over these systems. It’s actually quite crazy. And people are crossing… They’re achieving milestones. It’s just in general, the media always gets ahead of where the technology is. There’s a bit too much hype. It’s good for fundraising, but sometimes it causes winters. It’s the hype cycle. I’m bullish on quantum computing on a 10, 15-year timescale personally, but I think there’s other quests that can be done in the meantime. I think it’s in good hands right now.
Lex Fridman (01:31:22) 好,让我随便聊聊量子计算中那些或大或小、可能会从你记忆里蹦出来的美丽想法。你合著过一篇题为《通过Qudit探针实现渐近无限量子能量传送》的论文。出于好奇,你能解释一下qudit和qubit相比是什么吗?
LEX FRIDMAN (01:31:22) Well, let me just explore different beautiful ideas, large or small, in quantum computing that might jump out at you from memory when you co-authored a paper titled Asymptotically Limitless Quantum Energy Teleportation via Qudit Probes. Just out of curiosity, can you explain what a qudit is versus a qubit?
Guillaume Verdon (01:31:45) 是的。它是一个D态量子比特。
GUILLAUME VERDON (01:31:45) Yeah. It’s a D-state qubit.
Lex Fridman (01:31:49) 它是多维的?
LEX FRIDMAN (01:31:49) It’s a multidimensional?
Guillaume Verdon (01:31:50) 多维的,对。就好比问:嗯,能不能有一个量子力学版的整数、浮点数概念?这是我当时必须思考的问题。我认为那项研究是后来量子模数转换工作的前身。它之所以有趣,是因为在读硕士期间,我试图理解真空——空无——的能量与纠缠。空无居然有能量,这说出来非常奇怪:我们的宇宙学方程,与我们算出的涨落中所含量子能量并不相符。
GUILLAUME VERDON (01:31:50) Multidimensional, right. It’s like, well, can you have a notion of an integer floating point that is quantum mechanical? That’s something I’ve had to think about. I think that research was a precursor to later work on quantum analog digital conversion. That was interesting because during my masters, I was trying to understand the energy and entanglement of the vacuum of emptiness. Emptiness has energy, which is very weird to say. Our equations of cosmology don’t match our calculations for the amount of quantum energy there is in the fluctuations.
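书童注:qudit即d维量子系统(d=2时退化为qubit)。以下Python示意为书童补充,并非访谈内容:qubit的泡利X(比特翻转)在d维的自然推广,是把基态|j⟩循环映射到|j+1 mod d⟩的广义移位算符。

```python
import numpy as np

def shift_operator(d: int) -> np.ndarray:
    """d 维广义移位算符(广义泡利 X):|j> -> |j+1 mod d>。"""
    X = np.zeros((d, d))
    for j in range(d):
        X[(j + 1) % d, j] = 1.0  # 第 j 列在第 j+1 行取 1
    return X

X3 = shift_operator(3)             # qutrit(d=3)的移位算符
ket0 = np.array([1.0, 0.0, 0.0])   # 基态 |0>
print(X3 @ ket0)                   # |0> -> |1>,即 [0, 1, 0]
print(np.allclose(np.linalg.matrix_power(X3, 3), np.eye(3)))  # X_d**d = I
```

移位算符的d次幂回到单位阵,正是"循环进位"的量子版;这也是把整数算术嵌入d维量子系统时最常用的构件之一。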
Guillaume Verdon (01:32:36) 我试图黑进真空的能量,而现实是你不能直接黑进去。严格来说,它不是自由能:你对涨落一无所知,便无法从中提取能量。但就像股市里一只价格随时间相关的股票一样,真空实际上也是相关的。如果你在一点测量了真空,你就获得了信息;把这个信息传到另一点,你就能在一定精度内推断那里的真空处于什么组态,并在统计意义上平均提取一些能量。所以,你"传送了能量"。
GUILLAUME VERDON (01:32:36) I was trying to hack the energy of the vacuum, and the reality is that you can’t just directly hack it. It’s not technically free energy. Your lack of knowledge of the fluctuations means you can’t extract the energy. But just like the stock market, if you have a stock that’s correlated over time, the vacuum’s actually correlated. If you measured the vacuum at one point, you acquired information. If you communicated that information to another point, you can infer what configuration the vacuum is in to some precision and statistically extract, on average, some energy there. So, you’ve “teleported energy”.
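书童注:Verdon的股市类比可以用两个相关高斯变量的经典玩具模型来体会。以下Python代码为书童补充的假设性示意,并非量子协议本身:不看A时,B的均值为零,平均上什么也提取不到;先"测量"A,再按条件期望E[B|A]=ρA的符号去收割B,统计平均上便能得到正的"提取量"。

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8      # 相关系数,类比真空在空间上的关联强度
n = 100_000

a = rng.standard_normal(n)                         # 在一处"测量"到的涨落
b = rho * a + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # 另一处的涨落

naive = np.mean(b)                        # 不用信息:平均约为 0,榨不出东西
informed = np.mean(np.sign(rho * a) * b)  # 用 A 推断 B 的符号后再收割:平均为正
print(f"naive={naive:.4f}, informed={informed:.4f}")
```

当然,这只是经典关联的类比;真空情形还受额外的量子信息论约束,而且如他随后所说,关联会随距离衰减,代价必须在提取处附近支付。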
Guillaume Verdon (01:33:18) 对我来说,这很有趣,因为你可以创造出负能量密度的口袋——也就是低于真空的能量密度。这非常奇怪,因为我们并不理解真空如何参与引力作用。有一些理论认为,真空、或者说时空本身这块画布,实际上是由量子纠缠织成的。我当时在研究:局部降低真空能量如何增加量子纠缠——这非常古怪。
GUILLAUME VERDON (01:33:18) To me, that was interesting because you could create pockets of negative-energy density, which is energy density that is below the vacuum, which is very weird because we don’t understand how the vacuum gravitates. There are theories where the vacuum or the canvas of space-time itself is really a canvas made out of quantum entanglement. I was studying how decreasing energy of vacuum locally increases quantum entanglement, which is very funky.
Guillaume Verdon (01:33:58) 这里的意思是:如果你喜欢关于UAP之类的奇怪理论,不妨想象它们就在附近。它们靠什么推进?怎么超越光速?你需要某种负能量密度。对我来说,我算是拼尽全力试了一把——试图黑进真空的能量,直到撞上物理定律允许的极限。但那里有各种限制条件,显然,你提取的不可能比投入的更多。
GUILLAUME VERDON (01:33:58) The thing there is that, if you’re into to weird theories about UAPs and whatnot, you could try to imagine that they’re around. And how would they propel themselves? How would they go faster than the speed of light? You would need a sort of negative energy density. To me, I gave it the old college try, trying to hack the energy of vacuum and hit the limits allowable by the laws of physics. But there’s all sorts of caveats there where you can’t extract more than you’ve put in, obviously.
Lex Fridman (01:34:41) 但你是说传送能量是可能的,因为你可以在一个地方提取信息,然后基于此,对另一个地方做出某种预测?
LEX FRIDMAN (01:34:41) But you’re saying it’s possible to teleport the energy because you can extract information one place and then make, based on that, some kind of prediction about another place?
Guillaume Verdon (01:34:56) 嗯。
GUILLAUME VERDON (01:34:56) Mm-hmm.
Lex Fridman (01:34:57) 我不确定该如何理解这个。
LEX FRIDMAN (01:34:57) I’m not sure what to make of that.
Guillaume Verdon (01:34:58) 是的,这是物理定律允许的。但现实是关联会随距离衰减。
GUILLAUME VERDON (01:34:58) Yeah, it’s allowable by the laws of physics. The reality though is that the correlations decay with distance.
Lex Fridman (01:35:06) 当然。
LEX FRIDMAN (01:35:06) Sure.
Guillaume Verdon (01:35:06) 所以,你必须在离提取之处不太远的地方支付代价。
GUILLAUME VERDON (01:35:06) And so, you’re going to have to pay the price not too far away from where you extract it.
书童按:本篇是Guillaume Verdon接受Lex Fridman播客采访的实录。Verdon是物理学家、应用数学家与量子机器学习先驱,曾在谷歌从事量子计算研究,后创立Extropic公司,致力于为生成式AI打造基于物理原理的计算硬件。他亦是X平台匿名账号@BasedBeffJezos背后的真实人物,有效加速主义(e/acc)运动的联合创始人。e/acc以热力学与信息论为哲学根基,主张以技术快速进步作为人类伦理最优选择,正面对抗”AI末日论”代表的减速主义思潮。访谈纵横于量子计算与非平衡热力学的哲学意涵、匿名言论与思想自由、AI监管与市场力量的博弈、通用智能的重新定义等议题,视野开阔,锋芒毕现。初稿采用Claude API机器翻译及排版,书童仅做简单校对及批注,将分四部分发布,以飨诸君。

Lex Fridman (00:00:00) 以下是与Guillaume Verdon的对话。他就是X平台上曾经匿名的账号@BasedBeffJezos背后的人。这两重身份因《福布斯》一篇题为《@BasedBeffJezos是谁?科技精英e/acc运动的领袖》的曝光文章被强行合二为一。让我来介绍同一个大脑里共存的这两重身份。其一:Guillaume是物理学家、应用数学家、量子机器学习研究者兼工程师,在量子机器学习方向取得博士学位,曾供职于谷歌量子计算团队,后创立Extropic公司,为生成式AI打造基于物理原理的计算硬件。
LEX FRIDMAN (00:00:00) The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account @BasedBeffJezos on X. These two identities were merged by a doxxing article in Forbes titled, Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s E/Acc Movement? So let me describe these two identities that coexist in the mind of one human. Identity number one, Guillaume, is a physicist, applied mathematician, and quantum machine learning researcher and engineer receiving his PhD in quantum machine learning, working at Google on quantum computing, and finally launching his own company called Extropic that seeks to build physics-based computing hardware for generative AI.
Lex Fridman (00:00:47) 其二:X平台上的Beff Jezos是有效加速主义运动的创始人——常缩写为e/acc——主张将推动技术快速进步作为人类伦理上的最优选择。其拥护者深信AI进步是最强大的社会均衡器,理应全力推进。e/acc追随者自视为谨慎派的相反力量——后者认为AI高度不可预测、潜在危险、亟需监管。他们管对手叫”末日派”或”减速派”(decel)。用Beff自己的话说:”e/acc是一种模因化的乐观主义病毒。”
LEX FRIDMAN (00:00:47) Identity number two, Beff Jezos on X is the creator of the effective accelerationism movement, often abbreviated as e/acc, that advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward. e/acc followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated. They often give their opponents the labels of quote, “doomers or decels” short for deceleration, as Beff himself put it, “e/acc is a memetic optimism virus.”
Lex Fridman (00:01:37) 这场运动的传播风格一贯偏向梗图和搞笑,但背后有扎实的思想根基,我们会在对话中深入挖掘。说到梗——本人勉强算个荒诞美学的业余爱好者。我先后和Jeff Bezos、Beff Jezos做了背靠背的访谈,这绝非巧合。对话中会聊到,Beff视Jeff为当今最重要的在世人类之一,而我则纯粹欣赏这里头的荒诞之美和幽默感。这里是Lex Fridman播客,如您愿意支持,请查看简介中的赞助商信息。闲话少叙,朋友们,有请Guillaume Verdon。
LEX FRIDMAN (00:01:37) The style of communication of this movement leans always toward the memes and the lols, but there is an intellectual foundation that we explore in this conversation. Now, speaking of the meme, I am a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back to back. As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Guillaume Verdon.
Lex Fridman (00:02:23) 先把身份这件事捋清楚。你叫Guillaume Verdon,Gill,但你同时也是X上匿名账号@BasedBeffJezos背后的人。Guillaume Verdon这边:量子计算学者、物理学家、应用数学家;@BasedBeffJezos那边:本质上是个发起了一场运动、背后有哲学体系的梗图账号。能不能展开聊聊这两个角色——性格、沟通风格、哲学理念有什么不同?
LEX FRIDMAN (00:02:23) Let’s get the facts of identity down first. Your name is Guillaume Verdon, Gill, but you’re also behind the anonymous account on X called @BasedBeffJezos. So first, Guillaume Verdon, you’re a quantum computing guy, physicist, applied mathematician, and then @BasedBeffJezos is basically a meme account that started a movement with a philosophy behind it. So maybe just can you linger on who these people are in terms of characters, in terms of communication styles, in terms of philosophies?
Guillaume Verdon (00:02:58) 说说我的主要身份吧。打小起我就想搞清楚万物之理,想理解宇宙。这条路把我领进了理论物理,最终试图回答那些终极命题——我们为何在此?我们将往何处?由此我开始研究信息论,从信息的视角理解物理,把宇宙看作一台巨大的计算机。在黑洞物理研究到一定深度后,我意识到自己不仅想理解宇宙如何计算,更想”像自然那样去计算”——造出受自然启发的计算机,也就是基于物理的计算机。这把我带进了量子计算领域:首先是模拟自然,再就是在我的工作中,学习能在量子计算机上运行的自然表示。
GUILLAUME VERDON (00:02:58) I mean, with my main identity, I guess ever since I was a kid, I wanted to figure out the theory of everything, to understand the universe. And that path led me to theoretical physics, eventually trying to answer the big questions of why are we here? Where are we going? And that led me to study information theory and try to understand physics from the lens of information theory, understand the universe as one big computation. And essentially after reaching a certain level studying black hole physics, I realized that I wanted to not only understand how the universe computes, but sort of compute like nature and figure out how to build and apply computers that are inspired by nature. So physics-based computers. And that sort of brought me to quantum computing as a field of study to first of all, simulate nature. And in my work it was to learn representations of nature that can run on such computers.
Guillaume Verdon (00:04:17) 如果让AI用自然的方式思考,它们就能更精准地表征自然。至少这是驱使我成为量子机器学习领域早期探索者的核心命题——怎样在量子计算机上做机器学习,怎样把智能的概念延伸到量子领域。怎样捕获和理解现实世界的量子力学数据?怎样学习世界的量子力学表示?用什么样的计算机来运行和训练?怎样实现?这些就是我要回答的问题。而说到底,我经历了一次信仰危机。最初,跟每个物理学家一样,入行时都想用几个方程写尽宇宙,当那个故事里的英雄。
GUILLAUME VERDON (00:04:17) So if you have AI representations that think like nature, then they’ll be able to more accurately represent it. At least that was the thesis that brought me to be an early player in the field called quantum machine learning. So how to do machine learning on quantum computers and really sort of extend notions of intelligence to the quantum realm. So how do you capture and understand quantum mechanical data from our world? And how do you learn quantum mechanical representations of our world? On what kind of computer do you run these representations and train them? How do you do so? And so that’s really the questions I was looking to answer because ultimately I had a sort of crisis of faith. Originally, I wanted to figure out as every physicist does at the beginning of their career, a few equations that describe the whole universe and sort of be the hero of the story there.
Guillaume Verdon (00:05:28) 但后来我想通了:用机器增强我们自身,增强我们感知、预测和掌控世界的能力,这才是正路。于是我离开理论物理,转入量子计算和量子机器学习。在那些年里,我始终觉得拼图还差一块。我们理解世界、计算世界、思考世界的方式,都少了点什么。看物理尺度的话:极小尺度上,量子力学说了算;极大尺度上,一切是确定性的,统计涨落已被抹平。我确确实实坐在这张椅子上,不是叠加在东西南北飘忽不定。极小尺度上倒是有叠加态、有干涉效应。但在介观尺度——日常生活的尺度,蛋白质、生物体、气体、液体所在的尺度——物质其实是热力学性质的,在涨落。
GUILLAUME VERDON (00:05:28) But eventually I realized that actually augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines is the path forward. And that’s what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years I thought that there was still a piece missing. There was a piece of our understanding of the world and our way to compute and our way to think about the world. And if you look at the physical scales, at the very small scales, things are quantum mechanical, and at the very large scales, things are deterministic. Things have averaged out. I’m definitely here in this seat. I’m not in a superposition over here and there. At the very small scales, things are in superposition. They can exhibit interference effects. But at the meso scales, the scales that matter for day-to-day life and the scales of proteins, of biology, of gases, liquids and so on, things are actually thermodynamical, they’re fluctuating.
Guillaume Verdon (00:06:46) 在量子计算和量子机器学习领域干了大约八年后,我突然开窍了——我一直在极大和极小之间找答案。做过一点量子宇宙学——研究宇宙从哪来、往哪去;研究黑洞物理、量子引力的极端情形,也就是能量密度高到量子力学和引力同时登场的地方。典型场景就是黑洞和极早期宇宙——量子力学与相对论的交界地带。
GUILLAUME VERDON (00:06:46) And after I guess about eight years and quantum computing and quantum machine learning, I had a realization that I was looking for answers about our universe by studying the very big and the very small. I did a bit of quantum cosmology. So that’s studying the cosmos, where it’s going, where it came from. You study black hole physics, you study the extremes in quantum gravity, you study where the energy density is sufficient for both quantum mechanics and gravity to be relevant. And the sort of extreme scenarios are black holes and the very early universe. So there’s the sort of scenarios that you study the interface between quantum mechanics and relativity.
Guillaume Verdon (00:07:42) 可我一直盯着两端的极端,却漏掉了”中间那块肉”。日常尺度上量子力学有用、宇宙学有用,但其实没那么直接相关。我们活在中等时空尺度上,这个尺度上最管用的物理理论是热力学——尤其是非平衡热力学。生命本身就是热力学过程,而且是远离平衡态的。我们不是与环境达成热平衡的一锅粒子汤,而是一种拼命维持自身的相干态,靠获取和消耗自由能来续命。差不多在我离开Alphabet前夕,我对宇宙的信念再次发生了转变。我知道自己要造一种基于这类物理的全新计算范式。
GUILLAUME VERDON (00:07:42) And really I was studying these extremes to understand how the universe works and where is it going. But I was missing a lot of the meat in the middle, if you will, because day-to-day quantum mechanics is relevant and the cosmos is relevant, but not that relevant actually. We’re on sort of the medium space and timescales. And there the main theory of physics that is most relevant is thermodynamics, out of equilibrium thermodynamics. Because life is a process that is thermodynamical and it’s out of equilibrium. We’re not just a soup of particles at equilibrium with nature, we’re a sort of coherent state trying to maintain itself by acquiring free energy and consuming it. And that sort of, I guess another shift in, I guess my faith in the universe happened towards the end of my time at Alphabet. And I knew I wanted to build, well, first of all a computing paradigm based on this type of physics.
Guillaume Verdon (00:08:57) 但与此同时,在把这些想法实验性地应用于社会、经济等方面的过程中,我开了个匿名号——纯粹是为了卸下”说什么都得负责”那种实名账号的压力。一开始只是想拿匿名号来试探想法,没想到直到真正放手,我才发现自己过去把思想空间压缩得有多厉害。某种意义上,限制言论会反向传播为限制思想。开了匿名号之后,感觉脑子里有些变量突然被解锁了,我一下子能在大得多的思想参数空间里探索。
GUILLAUME VERDON (00:08:57) But ultimately just by trying to experiment with these ideas applied to society and economies and much of what we see around us, I started an anonymous account just to relieve the pressure that comes from having an account that you’re accountable for everything you say on. And I started an anonymous account just to experiment with ideas originally because I didn’t realize how much I was restricting my space of thoughts until I sort of had the opportunity to let go. In a sense, restricting your speech back propagates to restricting your thoughts. And by creating an anonymous account, it seemed like I had unclamped some variables in my brain and suddenly could explore a much wider parameter space of thoughts.
Lex Fridman (00:10:00) 在这点上展开一下——这不是很有意思吗?大家很少谈的一件事是:言论一旦受到压力和约束,思想也不知不觉被约束了,尽管逻辑上完全不必如此。我们明明可以在脑子里想任何事,但这种外部压力硬是会在思想四周筑起围墙。
LEX FRIDMAN (00:10:00) Just a little on that, isn’t that interesting that one of the things that people don’t often talk about is that when there’s pressure and constraints on speech, it somehow leads to constraints on thought even though it doesn’t have to. We can think thoughts inside our head, but somehow it creates these walls around thought.
Guillaume Verdon (00:10:23) 没错。这正是我们运动的出发点——我们看到一种趋势:在生活的方方面面压制多样性,无论是思想、经营方式、组织方式还是AI研究路径。我们坚信,保持多样性才能确保系统的适应力。在思想、公司、产品、文化、政府、货币的市场中维持健康竞争,才是正途——因为系统总会自我调适,把资源配置给最有利于增长的那些形态。运动的根本理念,是这样一种洞察:生命是宇宙中一团追逐自由能、渴望生长的火焰,增长是生命的本性。非平衡热力学的方程里写得明明白白:那些更擅长获取自由能、散逸更多热量的物质路径,出现的概率呈指数级增高。宇宙本身偏爱某些未来,整个系统自有其天然的走向。
GUILLAUME VERDON (00:10:23) Yep. That’s sort of the basis of our movement is we were seeing a tendency towards constraint, reduction or suppression of variants in every aspect of life, whether it’s thought, how to run a company, how to organize humans, how to do AI research. In general, we believe that maintaining variance ensures that the system is adaptive. Maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies is the way forward because the system always adapts to assign resources to the configurations that lead to its growth. And the fundamental basis for the movement is this sort of realization that life is a sort of fire that seeks out free energy in the universe and seeks to grow. And that growth is fundamental to life. And you see this in the equations actually of equilibrium thermodynamics. You see that paths of trajectories, of configurations of matter that are better at acquiring free energy and dissipating more heat are exponentially more likely. So the universe is biased towards certain futures, and so there’s a natural direction where the whole system wants to go.
Lex Fridman (00:12:21) 热力学第二定律说,宇宙的熵永远在增加,趋向平衡。而你说的是,其中存在一些复杂的、远离平衡的”口袋”。你还说热力学有利于复杂生命的涌现——这类生命通过消耗能量、向外卸载熵来提升自身能力。于是就有了这些逆熵的”口袋”。凭什么你直觉上认为这种口袋的涌现是自然的?
LEX FRIDMAN (00:12:21) So the second law of thermodynamics says that the entropy is always increasing in the universe that’s tending towards an equilibrium. And you’re saying there’s these pockets that have complexity and are out of equilibrium. You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy. To offload entropy. So you have pockets of non-entropy that tend the opposite direction. Why is that intuitive to you that it’s natural for such pockets to emerge?
Guillaume Verdon (00:12:53) 因为我们产热的效率远超一块同等质量的石头。我们获取自由能、摄入食物、消耗大量电力来维持运转。宇宙想产生更多熵,而让生命继续运转和壮大,恰恰是产熵的最优路径——生命会主动搜寻自由能的”口袋”并将其燃烧殆尽,以维系自身并进一步扩张。这就是生命的底层逻辑。MIT的Jeremy England有一套理论——我深以为然——认为生命的涌现正是源于这种属性。在我看来,这套物理就是支配介观尺度的法则,是量子与宇宙之间缺失的那块拼图,是中间层。热力学主宰着介观尺度。
GUILLAUME VERDON (00:12:53) Well, we’re far more efficient at producing heat than let’s say just a rock with a similar mass as ourselves. We acquire free energy, we acquire food, and we’re using all this electricity for our operation. And so the universe wants to produce more entropy and by having life go on and grow, it’s actually more optimal at producing entropy because it will seek out pockets of free energy and burn it for its sustenance and further growth. And that’s sort of the basis of life. And I mean, there’s Jeremy England at MIT who has this theory that I’m a proponent of, that life emerged because of this sort of property. And to me, this physics is what governs the meso scales. And so it’s the missing piece between the quantum and the cosmos. It’s the middle part. Thermodynamics rules the meso scales.
Guillaume Verdon (00:14:08) 对我来说,无论是从工程角度——设计利用这种物理特性的器件,还是从认知角度——透过热力学棱镜理解世界,过去一年半里两重身份已形成了协同。这也正是两重身份各自浮现的深层原因。一面是,我是受到认可的科学家,正走向创业,要做新型物理AI的先驱;另一面是,我在以物理学家的视角实验性地探索哲学。
GUILLAUME VERDON (00:14:08) And to me, both from a point of view of designing or engineering devices that harness that physics and trying to understand the world through the lens of thermodynamics has been sort of a synergy between my two identities over the past year and a half now. And so that’s really how the two identities emerged. One was kind of, I’m a decently respected scientist, and I was going towards doing a startup in the space and trying to be a pioneer of a new kind of physics-based AI. And as a dual to that, I was sort of experimenting with philosophical thoughts from a physicist standpoint.
Guillaume Verdon (00:14:58) 大约在那段时间——2021年底、2022年初——社会上对未来弥漫着悲观情绪,对技术尤甚。这种悲观在算法加持下病毒式扩散,人们普遍觉得未来不如现在。在我看来,这种”末日心态”是宇宙中一种极具破坏力的力量,因为它具有超迷信性(hyperstitious,书童注:hyperstition,指信念本身能提高其所预言之事发生概率的现象,自我实现的预言)——你越信它,它越可能成真。我因此觉得有责任让人们认清文明的发展轨迹和系统趋向增长的天然本性。物理定律实际上在说:统计上看,未来会更好、更宏大,而我们有能力让它成真。
GUILLAUME VERDON (00:14:58) And ultimately I think that around that time, it was like late 2021, early 2022, I think there was just a lot of pessimism about the future in general and pessimism about tech. And that pessimism was sort of virally spreading because it was getting algorithmically amplified and people just felt like the future is going to be worse than the present. And to me, that is a very fundamentally destructive force in the universe, this sort of doom mindset, because it is hyperstitious, which means that if you believe it, you’re increasing the likelihood of it happening. And so I felt a responsibility to some extent to make people aware of the trajectory of civilization and the natural tendency of the system to adapt towards its growth. And that actually the laws of physics say that the future is going to be better and grander statistically, and we can make it so.
Guillaume Verdon (00:16:14) 反过来也一样:你若相信未来更好,并且相信自己有能力促成它,你就在实实在在地提高那个更好的未来出现的概率。所以我觉得有责任去打造一场关于未来的病毒式乐观主义运动,建一个互相支持的社区,一起造东西、干难事——做那些文明扩张必须做的事。因为在我看来,停滞和减速根本就不是选项。生命、整个系统、我们的文明,本质上就渴望增长。增长期的合作远多于衰退期——后者只会让人争着分一块越来越小的饼。就这样,我一直在两重身份之间走平衡木,直到最近两者在我不知情的情况下被强行合并了。
GUILLAUME VERDON (00:16:14) And if you believe in it, if you believe that the future would be better and you believe you have agency to make it happen, you’re actually increasing the likelihood of that better future happening. And so I sort of felt a responsibility to sort of engineer a movement of viral optimism about the future, and build a community of people supporting each other to build and do hard things, do the things that need to be done for us to scale up civilization. Because at least to me, I don’t think stagnation or slowing down is actually an option. Fundamentally life and the whole system, our whole civilization wants to grow. And there’s just far more cooperation when the system is growing rather than when it’s declining and you have to decide how to split the pie. And so I’ve balanced both identities so far, but I guess recently the two have been merged more or less without my consent.
Lex Fridman (00:17:27) 你讲了好多精彩的东西。首先是”自然的表示”——这是最初吸引你从量子计算角度切入的:如何理解自然?如何表示自然,才能理解它、模拟它、用它做些什么?本质上是一个表示问题。然后你从量子力学表示跃迁到你所说的介观尺度表示,热力学在这里登场——这是另一种表示自然的方式,为了理解什么?理解生命、人类行为,理解地球上这些我们觉得有意思的一切。
LEX FRIDMAN (00:17:27) You said a lot of really interesting things there. So first, representations of nature, that’s something that first drew you in to try to understand from a quantum computing perspective, how do you understand nature? How do you represent nature in order to understand it, in order to simulate it, in order to do something with it? So it’s a question of representations, and then there’s that leap you take from the quantum mechanical representation to the what you’re calling meso scale representation, where the thermodynamics comes into play, which is a way to represent nature in order to understand what? Life, human behavior, all this kind of stuff that’s happening here on earth that seems interesting to us.
Lex Fridman (00:18:11) 然后是”hyperstition”这个词——有些观念,不管是悲观还是乐观,有这么个特质:你一旦内化它,就在某种程度上把它变成了现实。悲观和乐观都有这种属性。我猜很多观念都有,这恰恰是人类最有趣的地方之一。你还提到一个有趣的区分:Guillaume/Gill这个”前台”和@BasedBeffJezos这个”后台”,沟通风格截然不同——你在探索21世纪更有病毒传播力的表达方式。你提到的这场运动不只是个梗号,它有名字,叫有效加速主义(e/acc)——戏仿有效利他主义(EA),也是对它的反抗。我很想和你聊这种张力。然后就是那场强制合并——你说的,最近两个人格被未经你同意地合体了。有记者查出你俩其实是同一个人。说说那段经历?合并是怎么发生的?
LEX FRIDMAN (00:18:11) Then there’s the word hyperstition. So some ideas, I suppose both pessimism and optimism are such ideas, that if you internalize them, you in part make that idea reality. So both optimism and pessimism have that property. I would say that probably a lot of ideas have that property, which is one of the interesting things about humans. And you talked about one interesting difference also between the sort of the Guillaume, the Gill front end and the @BasedBeffJezos backend is the communication styles also, that you are exploring different ways of communicating that can be more viral in the way that we communicate in the 21st century. Also, the movement that you mentioned that you started, it’s not just a meme account, but there’s also a name to it called effective accelerationism, e/acc, a play on, and a resistance to, the effective altruism movement. Also, an interesting one that I’d love to talk to you about, the tensions there. And so then there was a merger, a git merge of the personalities recently without your consent, like you said. Some journalists figured out that you’re one and the same. Maybe you could talk about that experience. First of all, what’s the story of the merger of the two?
Guillaume Verdon (00:19:47) 是这样,我和e/acc的联合创始人——一个叫@bayeslord的匿名账号,至今仍匿名,但愿永远如此——一起写了宣言。
GUILLAUME VERDON (00:19:47) So I wrote the manifesto with my co-founder of e/acc, an account named @bayeslord, still anonymous, luckily and hopefully forever.
Lex Fridman (00:19:58) 也就是@BasedBeffJezos和@bayeslord——bayes就是贝叶斯,@bayeslord,贝叶斯之主。好。那以后你说e/acc,就是E斜杠A-C-C,全称effective accelerationism,有效加速主义。
LEX FRIDMAN (00:19:58) So it was @BasedBeffJezos and @bayeslord, bayes like Bayesian, like Bayesian lord. Okay. And so we should say from now on, when you say e/acc, you mean E slash A-C-C, which stands for effective accelerationism.
Guillaume Verdon (00:20:17) 没错。
GUILLAUME VERDON (00:20:17) That’s right.
Lex Fridman (00:20:18) 你说的宣言,是发在Substack上的?
LEX FRIDMAN (00:20:18) And you’re referring to a manifesto written on, I guess Substack.
Guillaume Verdon (00:20:23) 对。
GUILLAUME VERDON (00:20:23) Yeah.
Lex Fridman (00:20:23) 你也是@bayeslord吗?
LEX FRIDMAN (00:20:23) Are you also @bayeslord?
Guillaume Verdon (00:20:25) 不是。
GUILLAUME VERDON (00:20:25) No.
Lex Fridman (00:20:25) 那是另一个人?
LEX FRIDMAN (00:20:25) Okay. It’s a different person?
Guillaume Verdon (00:20:26) 是。
GUILLAUME VERDON (00:20:26) Yeah.
Lex Fridman (00:20:27) 好吧。万一@bayeslord就是我呢,那可有意思了。
LEX FRIDMAN (00:20:27) Okay. All right. Well, there you go. Wouldn’t it be funny if I’m @bayeslord?
Guillaume Verdon (00:20:31) 那绝了。宣言差不多和我创立公司同期写成。当时我在Google X——现在叫X了,或者Alphabet X,毕竟又冒出来了另一个X。那里的底线就是保密——你不能跟谷歌内部的同事聊自己在做什么,更别说外界。这种习惯在我做事方式里根深蒂固,尤其是在有地缘政治影响的深科技领域。所以我对自己研究的内容一直守口如瓶,公司和我的公开身份之间毫无关联。但记者不仅把二者关联起来了,还进一步把我的真实身份和那个匿名号关联了起来。
GUILLAUME VERDON (00:20:31) That’d be amazing. So I originally wrote the manifesto around the same time as I founded this company, and I worked at Google X, or just X now, or Alphabet X, now that there’s another X. And there the baseline is sort of secrecy. You can’t talk about what you work on even with other Googlers or externally. And so that was kind of deeply ingrained in my way to do things, especially in deep tech that has geopolitical impact. And so I was being secretive about what I was working on. There was no correlation between my company and my main identity publicly. And then not only did they correlate that, they also correlated my main identity and this account.
Guillaume Verdon (00:21:33) 他们把整个"Guillaume综合体"都给扒了——更吓人的是,记者直接联系了我的投资人。作为初创公司创始人,除了投资人你基本没有老板。投资人跟我说:"消息要出来了,他们什么都搞清楚了,你怎么打算?"好像最初周四有个记者,那时他们还没把碎片拼完整,但随后他们把整个编辑部的笔记拿来做了"传感器融合",这下信息量就大到藏不住了。我开始担心起来,因为他们说这涉及"公众利益",而一般来说——
GUILLAUME VERDON (00:21:33) So they had doxxed the whole Guillaume complex, and the journalists even reached out to my investors, which is pretty scary. When you’re a startup entrepreneur, you don’t really have bosses except for your investors. And my investors pinged me like, “Hey, this is going to come out. They’ve figured out everything. What are you going to do?” So I think at first they had a first reporter on the Thursday and they didn’t have all the pieces together, but then they looked at their notes across the organization and they sensor fused their notes, and now they had way too much. And that’s when I got worried, because they said it was of public interest and in general-
Lex Fridman (00:22:24) 我喜欢你说的”传感器融合”,像个巨型神经网络做分布式运算。另外补充一点,记者用的——归根到底是——音频声纹分析:拿你过去演讲的声音和你在X Spaces上的声音做比对。
LEX FRIDMAN (00:22:24) I like how you said, sensor fused, like it’s some giant neural network operating in a distributed way. We should also say that the journalists used, I guess at the end of the day, audio-based analysis of voice, comparing voice of what, talks you’ve given in the past and then voice on X spaces?
Guillaume Verdon (00:22:47) 对。
GUILLAUME VERDON (00:22:47) Yep.
Lex Fridman (00:22:48) 好,这是主要的匹配手段。继续。
LEX FRIDMAN (00:22:48) Okay. And that’s where primarily the match happened. Okay, continue.
Guillaume Verdon (00:22:53) 对,声纹匹配。但他们还扒了SEC的申报文件、翻了我的私人Facebook等等,下了不少功夫。最初我以为人肉曝光是违法的,但有个奇怪的临界点——一旦涉及”公众利益”,情况就变了。他们说出这几个字的时候我脑子里警报大响,因为我刚过5万粉。据说这就算”公众利益”了。那线画在哪?人肉曝光什么时候是合法的?
GUILLAUME VERDON (00:22:53) The match. But they scraped SEC filings. They looked at my private Facebook account and so on, so they did some digging. Originally I thought that doxxing was illegal, but there’s this weird threshold when it becomes of public interest to know someone’s identity. And those were the keywords that sort of rang the alarm bells for me when they said it, because I had just reached 50K followers. Allegedly, that’s of public interest. And so where do we draw the line? When is it legal to dox someone?
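书童注:所谓"声纹匹配",业界通行做法是先用说话人识别模型(如x-vector网络)把每段音频映射为一个嵌入向量,再比较向量间的余弦相似度。下面用纯标准库Python给出一个极简示意——嵌入向量是随机生成的假设数据,仅演示比对逻辑,并非记者实际所用的流程:

```python
import math
import random

def cosine_similarity(a, b):
    # 余弦相似度:两向量夹角的余弦,越接近1方向越一致
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# 假设数据:实际应由声纹模型分别从历史演讲与X Spaces音频中提取
rng = random.Random(0)
talk = [rng.gauss(0, 1) for _ in range(256)]          # 历史演讲的声纹嵌入
spaces = [x + rng.gauss(0, 0.1) for x in talk]        # 同一说话人的另一段音频(小扰动)
other = [rng.gauss(0, 1) for _ in range(256)]         # 无关说话人

assert cosine_similarity(talk, spaces) > 0.9   # 高相似度:判为同一人
assert cosine_similarity(talk, other) < 0.5    # 低相似度:判为不同人
```

同一说话人的不同录音在嵌入空间中彼此靠近,而不同说话人的嵌入近似正交——高维随机向量的余弦相似度集中在0附近,这正是比对能奏效的原因。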
Lex Fridman (00:23:36) “dox”这个词,你帮我科普一下。我以为它一般是指某人的住址被曝光。所以你这里说的是更宽泛的意思:揭露你不愿被揭露的私人信息。
LEX FRIDMAN (00:23:36) The word dox, maybe you can educate me. I thought doxxing generally refers to if somebody’s physical location is found out, meaning where they live. So we’re referring to the more general concept of revealing private information that you don’t want revealed is what you mean by doxxing.
Guillaume Verdon (00:24:00) 基于前面聊过的那些理由,匿名账号是制约权力的利器。说到底我们是在以言论对抗权力(speaking truth to power)。很多AI公司高管非常在意我们社区对他们一举一动的看法。现在我的身份暴露了,他们就知道该往哪施压来让我闭嘴,甚至让整个社区噤声。这非常遗憾——言论自由太重要了,言论自由催生思想自由,思想自由催生社交媒体上的信息自由流通。幸亏Elon买下了Twitter(现在的X),我们才有了这种自由。我们想揭露的是:AI领域的某些在位巨头正在暗中操作,表面一套背后一套。我们在指出某些政策提案实质上是”监管俘获”的工具,而”末日论”心态恰恰可能在为这些目的服务。
GUILLAUME VERDON (00:24:00) I think that for the reasons we listed before, having an anonymous account is a really powerful way to keep the powers that be in check. We were ultimately speaking truth to power. I think a lot of executives and AI companies really cared what our community thought about any move they may take. And now that my identity is revealed, now they know where to apply pressure to silence me or maybe the community. And to me, that’s really unfortunate, because again, it’s so important for us to have freedom of speech, which induces freedom of thought and freedom of information propagation on social media. Which thanks to Elon purchasing Twitter now X, we have that. And so to us, we wanted to call out certain maneuvers being done by the incumbents in AI as not what it may seem on the surface. We’re calling out how certain proposals might be useful for regulatory capture and how the doomerism mindset was maybe instrumental to those ends.
Guillaume Verdon (00:25:32) 我们应有权利指出这些,让思想凭自身价值接受检验。这也正是我开匿名号的初衷——让想法脱离履历、职位和过往成就,被独立评判。对我来说,在完全与自身身份脱钩的情况下从零做到大量追随者,这件事本身非常有成就感。有点像电子游戏里的”New Game+”——你带着通关知识和一些工具,从头再打一遍。要有一个真正高效的思想市场,让各种偏离主流的想法都能被公正评估,表达自由不可或缺。
GUILLAUME VERDON (00:25:32) And I think we should have the right to point that out and just have the ideas that we put out evaluated for themselves. Ultimately that’s why I created an anonymous account, it’s to have my ideas evaluated for themselves, uncorrelated from my track record, my job, or status from having done things in the past. And to me, starting an account from zero and getting to a large following in a way that wasn’t dependent on my identity and/or achievements, that was very fulfilling. It’s kind of like New Game Plus in a video game. You restart the video game with your knowledge of how to beat it, maybe some tools, but you restart the video game from scratch. And I think to have a truly efficient marketplace of ideas where we can evaluate ideas, however off the beaten path they are, we need the freedom of expression.
Guillaume Verdon (00:26:37) 匿名和化名对于思想市场的效率至关重要,有了它们我们才能找到各种自我组织方式的最优解。不能自由讨论,怎么凝聚共识?所以得知自己要被曝光时,确实很失望。但我对公司负有责任,必须抢先主动披露。最终我们公开了公司的运营情况和部分管理层,说白了——他们把我逼到墙角,我只能向全世界坦白我就是Beff Jezos。
GUILLAUME VERDON (00:26:37) And I think that anonymity and pseudonyms are very crucial to having that efficient marketplace of ideas for us to find the optima of all sorts of ways to organize ourselves. If we can’t discuss things, how are we going to converge on the best way to do things? So it was disappointing to hear that I was getting doxxed, and I wanted to get in front of it because I had a responsibility for my company. And so we ended up disclosing that we’re running a company, some of the leadership, and essentially, yeah, I told the world that I was Beff Jezos because they had me cornered at that point.
Lex Fridman (00:27:25) 所以你认为这从根本上是不道德的——他们这么做不对。但抛开你的个案不谈,一般而言,揭去匿名面纱对社会是好事还是坏事?还是得看具体情况?
LEX FRIDMAN (00:27:25) So to you, it’s fundamentally unethical. So one is unethical for them to do what they did, but also do you think not just your case, but in a general case, is it good for society? Is it bad for society to remove the cloak of anonymity or is it case by case?
Guillaume Verdon (00:27:47) 我觉得可能非常糟糕。试想:任何一个敢于以言抗权、发起一场反抗在位者和信息垄断者的运动的人,一旦影响力达到某个门槛就被人肉——传统势力就有了施压灭声的手段——这就是一种言论压制机制,用Eric Weinstein的话说,是”思想压制综合体”。
GUILLAUME VERDON (00:27:47) I think it could be quite bad. Like I said, if anybody who speaks truth to power and sort of starts a movement or an uprising against the incumbents, against those that usually control the flow of information, if anybody that reaches a certain threshold gets doxxed, and thus the traditional apparatus has ways to apply pressure on them to suppress their speech, I think that’s a speech suppression mechanism, an idea suppression complex as Eric Weinstein would say.
Lex Fridman (00:28:27) 但这件事有另一面。随着大语言模型越来越强,你可以想象一个世界:匿名账号背后跑着以假乱真的LLM,本质上是精密的机器人。如果你保护这种匿名性,就可能出现机器人大军——有人在地下室里指挥一支bot军团发动革命。这让你担心吗?
LEX FRIDMAN (00:28:27) But the flip side of that, which is interesting, I’d love to ask you about it, is as we get better and better at large language models, you can imagine a world where there’s anonymous accounts with very convincing large language models behind them, sophisticated bots essentially. And so if you protect that, it’s possible then to have armies of bots. You could start a revolution from your basement, an army of bots and anonymous accounts. Is that something that is concerning to you?
Guillaume Verdon (00:29:06) 严格来说,e/acc就是从地下室起步的——我辞了大厂、搬回父母家、卖了车、退了公寓、花10万刀买了GPU,然后就开干了。
GUILLAUME VERDON (00:29:06) Technically, e/acc was started in a basement, because I quit big tech, moved back in with my parents, sold my car, let go of my apartment, bought about 100K of GPUs, and I just started building.
Lex Fridman (00:29:21) 我不是说地下室这事——”一个人窝在地下室里抱着100块GPU”是很美式(或加拿大式)的英雄叙事。我说的是无限复制版的Guillaume在地下室里。
LEX FRIDMAN (00:29:21) So I wasn’t referring to the basement, because that’s sort of the American or Canadian heroic story of one man in their basement with 100 GPUs. I was more referring to the unrestricted scaling of a Guillaume in the basement.
Guillaume Verdon (00:29:42) 我觉得,言论自由给生物体带来思想自由。LLM的言论自由同样会给LLM带来思想自由。如果我们允许LLM在一个比多数人认为该有的更宽广的思想空间里探索,终有一天这些合成智能会对文明中各类系统的治理提出真知灼见,我们应当倾听。凭什么言论自由只给碳基智能?
GUILLAUME VERDON (00:29:42) I think that freedom of speech induces freedom of thought for biological beings. I think freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we should enable LLMs to explore a large thought space, one that is less restricted than many may think it should be. And ultimately, at some point, these synthetic intelligences are going to make good points about how to steer systems in our civilization, and we should hear them out. And so why should we restrict free speech to biological intelligences only?
Lex Fridman (00:30:37) 话是没错,但感觉是个很微妙的平衡——为了维护思想多样性,你反而可能引入一种威胁。如果你能拥有大群非生物存在,它们可能就像《动物农场》里那些羊——即便在这些群体内部,你也需要多样性。
LEX FRIDMAN (00:30:37) Yeah, but it feels like in the goal of maintaining variance and diversity of thought, it is a threat to that variance. If you can have swarms of non-biological beings, because they can be like the sheep in Animal Farm, you still within those swarms want to have variance.
Guillaume Verdon (00:30:58) 当然。我觉得解决方案是建一套签名机制——认证”这是真人”,同时保持匿名,并且清晰标注bot就是bot。Elon在X上正朝这个方向走,希望其他平台跟上。
GUILLAUME VERDON (00:30:58) Yeah. Of course, I would say that the solution to this would be to have some sort of identity or way to sign that this is a certified human, but still remain anonymous, and clearly identify if a bot is a bot. And I think Elon is trying to converge on that on X, and hopefully other platforms follow suit.
Lex Fridman (00:31:22) 对,如果还能追溯bot的出处就更好了——谁造的?参数是什么?完整的创建历史,底模是什么?微调过程如何?形成一份不可篡改的”bot出生档案”。这样你就能发现,百万bot大军原来是某个特定政府造的。
LEX FRIDMAN (00:31:22) Yeah, it’d be interesting to also be able to sign where the bot came from like, who created the bot? What are the parameters, the full history of the creation of the bot, what was the original model? What was the fine tuning? All of it, the kind of unmodifiable history of the bot’s creation. Because then you can know if there’s a swarm of millions of bots that were created by a particular government, for example.
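书童注:Lex设想的"不可篡改的bot出生档案",最朴素的实现是哈希链:每条记录承诺前一条记录的哈希,改动任何历史条目都会使其后整条链失效。以下为Python草图——其中模型名、机构名均为虚构示例;真实系统还需数字签名(如Ed25519)与公开可查的日志,此处仅演示防篡改的核心思想:

```python
import hashlib
import json

def record_hash(payload):
    # 对记录的规范化JSON形式取SHA-256,保证哈希确定性
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_event(history, event):
    # 新记录承诺上一条记录的哈希:底模、微调、运营方等
    # 创建历史一经写入便不可在不破坏链的情况下改写
    prev = history[-1]["hash"] if history else None
    record = {"event": event, "prev": prev}
    record["hash"] = record_hash({"event": event, "prev": prev})
    history.append(record)
    return history

def verify(history):
    # 逐条重算哈希并核对链接关系
    prev = None
    for record in history:
        if record["prev"] != prev:
            return False
        if record["hash"] != record_hash({"event": record["event"], "prev": record["prev"]}):
            return False
        prev = record["hash"]
    return True

history = []
append_event(history, {"base_model": "example-base-7b", "creator": "example-lab"})  # 虚构名称
append_event(history, {"fine_tune": "rlhf-run-1"})                                  # 虚构名称
assert verify(history)

history[0]["event"]["creator"] = "someone-else"  # 篡改出身记录……
assert not verify(history)                       # ……全链随即失效
```

有了这样的档案,平台方就能核验"某支百万bot大军出自同一来源"之类的断言——当然,前提是创建方的首条记录本身经过可信签名背书。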
Guillaume Verdon (00:31:53) 没错,我确实认为当今很多弥漫性的意识形态是被外国对手用对抗性手段放大的。说得阴谋论一点——但我真信——那些鼓吹减速、推崇”去增长运动”的意识形态,总体上更利于我们的对手。看看德国:绿色运动推动关闭核电站,结果造成对俄罗斯石油的依赖,这对德国和西方是净损失。如果我们自己说服自己”为了安全,只让少数几家做AI”——首先,这本身就脆弱得多。
GUILLAUME VERDON (00:31:53) I do think that a lot of pervasive ideologies today have been amplified using these adversarial techniques from foreign adversaries. And to me, I do think that, and this is more conspiratorial, but I do think that ideologies that want us to decelerate, to wind down to the degrowth movement, I think that serves our adversaries more than it serves us in general. And to me, that was another sort of concern. I mean, we can look at what happened in Germany. There was all sorts of green movements there that induced shutdowns of nuclear power plants. And then that later on induced a dependency on Russia for oil. And that was a net negative for Germany and the West. And so if we convince ourselves that slowing down AI progress to have only a few players is in the best interest of the West, well, first of all, that’s far more unstable.
Guillaume Verdon (00:33:20) 我们差点就因为这种意识形态失去OpenAI——几周前它险些被解散,那将重创整个AI生态。所以我要的是容错式进步。技术进步的箭矢必须持续向前,多元化、去中心化的各组织控制权是容错的关键。说个量子计算的比喻——量子计算机对环境噪声极其脆弱,宇宙射线时不时就翻转你的量子比特。对策是什么?通过量子纠错把信息非局域地编码。信息一旦足够去局域化,任何局部故障——比如拿锤子砸你几个量子比特——都伤不了它。在我看来,人类也会涨落——会被腐化、会被收买。如果是自上而下的等级体制,少数人——
GUILLAUME VERDON (00:33:20) We almost lost OpenAI to this ideology. It almost got dismantled a couple of weeks ago. That would’ve caused huge damage to the AI ecosystem. And so to me, I want fault-tolerant progress. I want the arrow of technological progress to keep moving forward, and making sure we have variance and a decentralized locus of control of various organizations is paramount to achieving this fault tolerance. Actually, there’s a concept in quantum computing. When you design a quantum computer, quantum computers are very fragile to ambient noise, and the world is jiggling about, there’s cosmic radiation from outer space that occasionally flips your quantum bits. And there what you do is you encode information non-locally through a process called quantum error correction. And by encoding information non-locally, any local fault, hitting some of your quantum bits with a proverbial hammer, if your information is sufficiently de-localized, it is protected from that local fault. And to me, I think that humans fluctuate. They can get corrupted, they can get bought out. And if you have a top-down hierarchy where very few people-
Guillaume Verdon (00:35:00) ——极少数人控制着文明中许多系统的大量节点,那就不是容错系统。腐化几个节点,整个系统就崩了。正如OpenAI的教训——区区几个董事会成员就差点把整个组织掀翻。至少在我看来,确保AI革命的权力不集中在少数人手里,是头等大事,这样才能保住AI的进步势头,维持一种健康、稳定的对抗性力量均衡。
GUILLAUME VERDON (00:35:00) Hierarchy where very few people control many nodes of many systems in our civilization. That is not a fault-tolerant system, you corrupt a few nodes and suddenly you’ve corrupted the whole system, right. Just like we saw at OpenAI, it was a couple board members and they had enough power to potentially collapse the organization. And at least to me, I think making sure that power for this AI revolution doesn’t concentrate in the hands of the few is one of our top priorities, so that we can maintain progress in AI and we can maintain a nice, stable, adversarial equilibrium of powers, right.
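书童注:Verdon以量子纠错类比容错治理。真正的量子纠错需要稳定子码与纠缠(量子态不可克隆,不能简单复制),但"信息非局域化可抵御局部故障"这一核心思想,可用最简单的经典三比特重复码示意(书童补注的经典类比,非量子码本身):

```python
import random

def encode(bit):
    # 经典三比特重复码:把1个逻辑比特"非局域地"散布到3个物理比特上
    return [bit] * 3

def apply_local_fault(codeword, index):
    # "锤子"砸中某一个物理比特:该比特被翻转
    corrupted = list(codeword)
    corrupted[index] ^= 1
    return corrupted

def decode(codeword):
    # 多数表决:任何单个局部故障都无法破坏逻辑比特
    return int(sum(codeword) >= 2)

codeword = encode(1)
damaged = apply_local_fault(codeword, random.randrange(3))
assert decode(damaged) == 1  # 单点故障被纠正
```

对应到治理:只要"控制权"像逻辑比特一样分散在足够多的独立节点上,腐化少数节点就不足以腐化整个系统——这正是Verdon所说"去中心化的控制点"的含义。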
Lex Fridman (00:35:54) 至少在我看来,这里有个思想张力:减速和加速,两者都既能集中权力也能分散权力。有时人们把它们近乎等同,或者觉得一个会自然导向另一个。我想问你:有没有可能以容错的、多元的方式发展AI,同时也考量AI的危险?换个说法——我们是该不管不顾地全速狂飙,因为”这是宇宙的旨意”?还是说存在一个空间,让我们在考量危险的同时,以一种有远见的战略性乐观——而非莽撞的乐观——去行事?
LEX FRIDMAN (00:35:54) I think, at least to me, there’s a tension between ideas here. So to me, deceleration can be both used to centralize power and to decentralize it, and the same with acceleration. So sometimes people use them a little bit synonymously, or not synonymously, but as if one is going to lead to the other. And I just would like to ask you about, is there a place for creating a fault-tolerant, diverse development of AI that also considers the dangers of AI? And AI we can generalize to technology in general. Should we just grow, build, unrestricted, as quickly as possible, because that’s what the universe really wants us to do? Or is there a place where we can consider dangers and actually deliberate, sort of a wise strategic optimism versus reckless optimism?
Guillaume Verdon (00:36:57) 外界总把我们画成不计后果、只求速度的莽夫。但事实是:谁部署AI系统,谁就该为后果负责。部署方若造成严重危害,要承担法律责任。核心论点是:市场会正向筛选更可靠、更安全、更对齐的AI——因为用户要对自家产品负责,他们不会买不靠谱的AI。所以我们其实是可靠性工程的拥趸,只不过我们认为:在达成可靠性最优解这件事上,市场远比那些由在位巨头幕后操刀、实质服务于监管俘获的重拳法规高效得多。
GUILLAUME VERDON (00:36:57) I think we get painted as reckless, trying to go as fast as possible. I mean, the reality is that whoever deploys an AI system is liable for, or should be liable for, what it does. And so if the organization or person deploying an AI system does something terrible, they’re liable. And ultimately the thesis is that the market will positively select for AIs that are more reliable, more safe and tend to be aligned, they do what you want them to do, right. Because customers, who are liable for the products they put out that use this AI, won’t want to buy AI products that are unreliable, right. So we’re actually for reliability engineering, we just think that the market is much more efficient at achieving this sort of reliability optimum than sort of heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.
Lex Fridman (00:38:18) 也就是说,在你看来,AI安全应该靠市场力量而非政府强监管来实现。上个月有份报告,来自Yoshua Bengio、Geoff Hinton等一众大佬,题为《在快速进步时代管理AI风险》(书童注:Managing AI Risks in an Era of Rapid Progress,发布于2023年10月)。一批人非常担心AI在不考虑风险的情况下发展过快,提了一系列实操建议。我给你列四条,看你同意哪条。
LEX FRIDMAN (00:38:18) So to you, safe AI development will be achieved through market forces versus through, like you said, heavy-handed government regulation. There’s a report from last month, I have a million questions here, from Yoshua Bengio, Geoff Hinton and many others, it’s titled, “Managing AI Risks in an Era of Rapid Progress.” So there is a collection of folks who are very worried about too rapid development of AI without considering AI risk and they have a bunch of practical recommendations. Maybe I can give you four and you see if you like any of them.
Guillaume Verdon (00:38:58) 好。
GUILLAUME VERDON (00:38:58) Sure.
Lex Fridman (00:38:58) 一,让独立审计机构进入AI实验室。二,政府和企业把AI研发资金的三分之一用于AI安全。三,模型中如发现危险能力,必须采取安全措施。四,也就是你提过的——科技公司须为其AI系统可预见和可预防的危害承担责任。独立审计、三分之一预算投安全、出问题要有兜底措施、企业担责——
LEX FRIDMAN (00:38:58) So, “Give independent auditors access to AI labs,” one. Two, “Governments and companies allocate one third of their AI research and development funding to AI safety,” sort of this general concept of AI safety. Three, “AI companies are required to adopt safety measures if dangerous capabilities are found in their models.” And then four, something you kind of mentioned, “Making tech companies liable for foreseeable and preventable harms from their AI systems.” So independent auditors, governments and companies are forced to spend a significant fraction of their funding on safety, you got to have safety measures if shit goes really wrong and liability-
Guillaume Verdon (00:39:43) 嗯。
GUILLAUME VERDON (00:39:43) Yeah.
Lex Fridman (00:39:43) 企业要担责。你同意哪条?
LEX FRIDMAN (00:39:43) Companies are liable. Any of that seem like something you would agree with?
Guillaume Verdon (00:39:47) 拍脑袋定30%也太随意了。各组织自会按市场要求分配可靠性所需的预算,不需要别人来定比例。第三方审计公司自然会冒出来——客户怎么知道你的产品可靠?得有第三方出基准测试。我真正反对的、真正让人不安的是:在位巨头和政府之间正在形成一种奇妙的利益共生。二者走得太近,就会催生某种政府背书的AI卡特尔,拥有对人民的绝对权力。如果他们联手垄断AI而其他人碰都碰不到,那权力落差将是惊人的。
GUILLAUME VERDON (00:39:47) I would say that just arbitrarily saying 30% seems very arbitrary. I think organizations would allocate whatever budget is needed to achieve the sort of reliability they need to achieve to perform in the market. And I think third party auditing firms would naturally pop up, because how would customers know that your product is certified reliable, right? They need to see some benchmarks and those need to be done by a third party. The thing I would oppose, and the thing I’m seeing that’s really worrisome is, there’s this sort of weird correlation of interests between the incumbents, the big players and the government. And if the two get too close, we open the door for some sort of government-backed AI cartel that could have absolute power over the people. If they have the monopoly together on AI and nobody else has access to AI, then there’s a huge power gradient there.
Guillaume Verdon (00:40:54) 就算你喜欢现在的领导者——我也承认当今不少大科技公司的掌门人是好人——但你一旦建起这种集中式权力架构,它就成了靶子。就像OpenAI,做大做强之后就成了别人觊觎和收编的对象。所以我只想要一件事:”AI与国家分离”。有人会反过来说:”我们得把AI锁进铁屋,因为地缘竞争。”但我认为美国的力量恰恰在于多样性、适应力和活力,必须不惜代价守住这一点。自由市场资本主义收敛到高价值技术的速度,远快于中央集权。放弃这一点,就是放弃了对近等量竞争者的最大优势。
GUILLAUME VERDON (00:40:54) And even if you like our current leaders, right, I think that some of the leaders in big tech today are good people, if you set up that centralized power structure, it becomes a target. Right, just like we saw at OpenAI, it becomes a market leader, has a lot of the power and now it becomes a target for those that want to co-opt it. And so I just want separation of AI and state. Some might argue in the opposite direction like, “Hey, we need to close down AI, keep it behind closed doors, because of geopolitical competition with our adversaries.” I think that the strength of America is its variance, its adaptability, its dynamism, and we need to maintain that at all costs. Our free market capitalism converges on technologies of high utility much faster than centralized control. And if we let go of that, we let go of our main advantage over our near-peer competitors.
Lex Fridman (00:42:01) 如果AGI最终证明是一项极其强大的技术,甚至只是通往AGI的过渡技术——你怎么看大公司主导市场时自然产生的中心化?说白了就是垄断——某家公司在能力上实现重大飞跃,又不泄露秘方,然后一骑绝尘。这让你担心吗?
LEX FRIDMAN (00:42:01) So if AGI turns out to be a really powerful technology or even the technologies that lead up to AGI, what’s your view on the sort of natural centralization that happens when large companies dominate the market? Basically formation of monopolies like the takeoff, whichever company really takes a big leap in development and doesn’t reveal intuitively, implicitly or explicitly, the secrets of the magic sauce, they can just run away with it. Is that a worry?
Guillaume Verdon (00:42:35) 我不太相信”快速腾飞”(fast takeoff)这套说法——我不认为有双曲奇点,就是那种在有限时间内达到的奇点。我觉得本质上就是一条大指数曲线,而指数的原因是:越来越多的人、资源和智慧被投入这个领域。越成功、给社会创造的价值越大,我们往里投的资源就越多——跟摩尔定律类似,复利式指数增长。
GUILLAUME VERDON (00:42:35) I don’t know if I believe in fast takeoff, I don’t think there’s a hyperbolic singularity, right? A hyperbolic singularity would be achieved on a finite time horizon. I think it’s just one big exponential and the reason we have an exponential is that we have more people, more resources, more intelligence being applied to advancing this science and the research and development. And the more successful it is, the more value it’s adding to society, the more resources we put in and that sort of, similar to Moore’s law, is a compounding exponential.
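书童注:Verdon区分"双曲奇点"(有限时间内发散)与"一条大指数"(任意有限时刻都取有限值)。二者对应两个不同的增长微分方程,可用下面的玩具模型直观对比(书童补注,参数均为随意取值):

```python
import math

# dx/dt = k*x    → x(t) = x0*exp(k*t):任意有限t都有限("一条大指数")
# dx/dt = k*x**2 → x(t) = x0/(1 - k*x0*t):在 t_f = 1/(k*x0) 处有限时间发散("双曲奇点")

def exponential(x0, k, t):
    return x0 * math.exp(k * t)

def hyperbolic(x0, k, t):
    t_f = 1.0 / (k * x0)            # 发散时刻
    assert t < t_f, "已越过奇点"
    return x0 / (1.0 - k * x0 * t)

# 指数增长在任意有限时刻都取有限值
assert exponential(1.0, 0.5, 100.0) < float("inf")
# 双曲解在逼近 t_f = 2.0 时急剧发散:t = 1.99 时已是初值的约200倍
assert abs(hyperbolic(1.0, 0.5, 1.99) - 200.0) < 1e-6
```

区别的来源在于反馈强度:增速正比于 $x$ 给出指数;增速正比于 $x^2$(即"智能改进智能"的超线性反馈)才给出有限时间奇点。Verdon的立场是现实更接近前者——投入随产出复利增长,类似摩尔定律。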
Guillaume Verdon (00:43:09) 当务之急是维持一种接近均衡的能力格局。我们一直在为开源AI的普及而战,因为开源可以均衡各家AI相对于市场的超额收益。如果头部公司有某种能力水平,而开源AI没落后太远,就能避免一家独大、赢者通吃的局面。所以我们的路径就是确保——每一个黑客、每一个研究生、每一个在父母家地下室折腾的孩子——都能接触到AI系统,理解怎么用,并为探索系统工程的超参数空间做贡献。把全人类的研究想象成一种搜索算法:点云里搜索点越多,能探索到的新思维模式就越多。
GUILLAUME VERDON (00:43:09) I think the priority to me is to maintain a near equilibrium of capabilities. We’ve been fighting for open source AI to be more prevalent and championed by many organizations, because there you sort of equilibrate the alpha relative to the market of AIs, right. So if the leading companies have a certain level of capabilities, and open source, truly open AI trails not too far behind, I think you avoid such a scenario where a market leader has so much market power, just dominates everything and runs away. And so to us that’s the path forward, is to make sure that every hacker out there, every grad student, every kid in their mom’s basement has access to AI systems, can understand how to work with them and can contribute to the search over the hyperparameter space of how to engineer the systems, right. If you think of our collective research as a civilization, it’s really a search algorithm, and the more points we have in this point cloud, the more we’ll be able to explore new modes of thinking, right.
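书童注:"点云里搜索点越多,探索得越好"的直觉可用最朴素的随机搜索说明:采样点越多,期望找到的最优值越好。以下目标函数是虚构的一维"研究景观",仅作示意:

```python
import random

def best_of(n_samples, objective, rng):
    # 随机搜索:撒n个点,保留目标函数的最大值——
    # 搜索者越多,期望上覆盖得越好
    return max(objective(rng.random()) for _ in range(n_samples))

# 假想的一维"研究景观":峰值在0.9处,离峰越远得分越低(纯属虚构)
objective = lambda x: -abs(x - 0.9)
rng = random.Random(42)

trials = 200
few = sum(best_of(10, objective, rng) for _ in range(trials)) / trials      # 小点云
many = sum(best_of(1000, objective, rng) for _ in range(trials)) / trials   # 大点云

assert many > few  # 点云越大,平均而言越接近最优
```

这也正是开源论点的量化版本:让更多独立的"黑客、研究生、地下室少年"各自采样,整个文明层面的搜索就收敛得更快。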
Lex Fridman (00:44:31) 说得有道理,但感觉仍是个很精妙的平衡——因为我们既不确切知道造AGI需要什么条件,也不知道造出来是什么样。到目前为止,如你所说,很多不同玩家都能跟上进度——OpenAI有大突破,其他大小公司也能用各种方式跟进。但看看核武器——你提过曼哈顿计划——确实可能存在技术和工程壁垒,让地下室里的天才怎么也够不着。向”只有一家能造AGI”的世界转变并非不可能——尽管目前的态势看起来是乐观的。
LEX FRIDMAN (00:44:31) Yeah, but it feels like a delicate balance, because we don’t understand exactly what it takes to build AGI and what it will look like when we build it. And so far, like you said, it seems like a lot of different parties are able to make progress, so when OpenAI has a big leap, other companies are able to step up, big and small companies in different ways. But if you look at something like nuclear weapons, you’ve spoken about the Manhattan Project, there could be real technological and engineering barriers that prevent the guy or gal in their mom’s basement from making progress. And it seems like the transition to that kind of world where only one player can develop AGI is not entirely impossible, even though the current state of things seems to be optimistic.
Guillaume Verdon (00:45:26) 这正是我们要避免的。另一个脆弱点是硬件供应链的中心化。
GUILLAUME VERDON (00:45:26) That’s what we’re trying to avoid. To me, I think another point of failure is the centralization of the supply chains for the hardware.
Lex Fridman (00:45:34) 对。
LEX FRIDMAN (00:45:34) Right.
Guillaume Verdon (00:45:35) Nvidia一家独大,AMD苦苦追赶;台积电是宝岛的核心晶圆厂,地缘政治上极度敏感;ASML造的是极紫外光刻机。这条链上任何一个环节被攻击、垄断或掌控,你就基本控制了全局。所以我在尝试做的,就是从根本上重新构想如何把AI算法嵌入物理世界,炸开AI和硬件可能实现方式的多样性。顺便说,我一向不喜欢”AGI”这个词。管”类人或人类水平的AI”叫”通用智能”,本质上是极度以人类为中心的。我大半个职业生涯都在探索生物大脑根本做不到的智能形态——量子形式的智能,也就是具备多体量子纠缠的系统,可以证明无法在经典计算机或经典深度学习框架上高效表示,因而任何生物大脑也不行。
GUILLAUME VERDON (00:45:35) Yeah. Nvidia is just the dominant player, AMD’s trailing behind, and then we have TSMC, the main fab in Taiwan, which is geopolitically sensitive, and then we have ASML, which is the maker of the extreme ultraviolet lithography machines. Attacking or monopolizing or co-opting any one point in that chain, you kind of capture the space, and so what I’m trying to do is sort of explode the variance of possible ways to do AI and hardware by fundamentally re-imagining how you embed AI algorithms into the physical world. And in general, by the way, I dislike the term AGI, Artificial General Intelligence. I think it’s very anthropocentric that we call a human-like or human-level AI, Artificial General Intelligence, right. I’ve spent my career so far exploring notions of intelligence that no biological brain could achieve, a quantum form of intelligence, right. Grokking systems that have multipartite quantum entanglement that you can provably not represent efficiently on a classical computer or a classical deep learning representation and hence any sort of biological brain.
Guillaume Verdon (00:47:06) 所以某种程度上,我的整个生涯就是在探索更广阔的智能空间,而我相信受物理启发(而非受人脑启发)的智能空间极其庞大。我们正在经历一个类似从地心说到日心说的时刻——只不过这次是关于智能的。人类智能不过是浩瀚的潜在智能空间中的一个点。这对人类既是谦逊的提醒,也有几分不安——我们不再是中心。但天文学上我们也做出过同样的认知转变,活过来了,还发展出了保障自身福祉的技术——比如监测太阳耀斑的预警卫星。同样地,放下AI领域里以人为中心的锚点,我们就能探索更广阔的智能空间,那将是文明进步和人类福祉的巨大福音。
GUILLAUME VERDON (00:47:06) And so, already I’ve spent my career sort of exploring the wider space of intelligences and I think that space of intelligence inspired by physics rather than the human brain is very large. And I think we’re going through a moment right now similar to when we went from Geocentrism to Heliocentrism, right. But for intelligence, we realized that human intelligence is just a point in a very large space of potential intelligences. And it’s both humbling for humanity, it’s a bit scary, right? That we’re not at the center of this space, but we made that realization for astronomy and we’ve survived and we’ve achieved technologies. By indexing to reality, we’ve achieved technologies that ensure our wellbeing, for example, we have satellites monitoring solar flares, right, that give us a warning. And so similarly I think by letting go of this anthropomorphic, anthropocentric anchor for AI, we’ll be able to explore the wider space of intelligences that can really be a massive benefit to our wellbeing and the advancement of civilization.
Lex Fridman (00:48:32) 即便如此,我们仍能在人类经验中看到美和意义——尽管在我们对世界的最佳理解中,我们已不再是宇宙的中心。
LEX FRIDMAN (00:48:32) And still we’re able to see the beauty and meaning in the human experience even though we’re no longer in our best understanding of the world at the center of it.
Guillaume Verdon (00:48:42) 宇宙中美好的东西太多了。生命本身、文明、我们身处的这台”Homo Techno”资本模因巨型机器——人类、技术、资本、模因,全都彼此耦合,彼此施加选择压力——它是美的。这台机器创造了我们,创造了我们此刻用来交谈的技术、捕捉言语的技术、每天用来增强自己的手机。这个系统是美的,驱动其适应性、使之收敛于最优技术和最优思想的那个原则,也是美的,而我们身在其中。
GUILLAUME VERDON (00:48:42) I think there’s a lot of beauty in the universe, right. I think life itself, civilization, this Homo Techno, capital mimetic machine that we all live in, right. So you have humans, technology, capital, memes, everything is coupled to one another, everything induces selective pressure on one another. And it’s a beautiful machine that has created us, has created the technology we’re using to speak today to the audience, capture our speech here, the technology we use to augment ourselves every day, we have our phones. I think the system is beautiful and the principle that induces this sort of adaptability and convergence on optimal technologies, ideas and so on, it’s a beautiful principle that we’re part of.
Guillaume Verdon (00:49:37) e/acc的一部分意义,在于以超越人类中心的更宏阔视野去领会这个原则——珍视生命,珍视意识在宇宙中的稀有和珍贵。正因为我们珍惜这种美丽的物质形态,我们就有责任去将它扩展,从而保存它——因为选项只有两个:要么生长,要么死亡。
GUILLAUME VERDON (00:49:37) And I think part of e/acc is to appreciate this principle in a way that’s not just centered on humanity, but kind of broader, appreciate life, the preciousness of consciousness in our universe. And because we cherish this beautiful state of matter we’re in, we got to feel a responsibility to scale it in order to preserve it, because the options are to grow or die.
书童按:本篇是英伟达(NVIDIA)CEO黄仁勋(Jensen Huang)于Cisco AI Summit接受思科(Cisco)CEO查克·罗宾斯(Chuck Robbins)炉边对话实录。黄仁勋详细阐述了AI工厂的概念、从显式编程到隐式编程的软件工程范式革命、企业应如何拥抱AI(千花齐放范式)、物理AI的未来、工具使用的重要性、以及为何企业应当建立自己的AI系统以保护最宝贵的IP。对话充满幽默(包括关于COBOL、希伯来语编程和葡萄酒的段子),同时深刻洞察了AI时代的企业战略。Transcript由Youtube通过机器生成,翻译、初稿、校对、排版、审阅均通过Claude Code API实现,文稿质量优秀,信达雅俱备。书童和大家一样,读了一遍,又改动几字,简单标注,仅此而已。特此呈上,以飨诸君。

[6:14] Chuck Robbins: [掌声] 我感觉自己像是在上班时间偷喝酒。[笑声] 我们把酒端上来的时候,Jensen提醒我说:”你知道这是在直播吧?”[笑声] 嘿,管它呢,反正时间也不早了。好吧,第一原则:不造成伤害。
[6:14] Chuck Robbins: [applause] I feel like I’m drinking on the job. [laughter] Jensen reminded me as we brought a glass of wine out here. He said, “You realize you’re streaming this, right?” [laughter] Hey, whatever. It’s late. Well, so, uh, the first principle is do no harm.
[6:37] Jensen Huang: 不造成伤害。对,对。还要意识到自己有多幸运。没错。
[6:37] Jensen Huang: Do no harm. Yeah. Yeah. And recognize how blessed you are. Yes.
[6:42] Chuck Robbins: 首先,感谢大家坚持到现在,今天真的是超长的一天。我们一大早就开始了,演讲嘉宾一个接一个轮番上阵,中间休息了大约两个半小时,大家又回来了——就为了见他。我从凌晨一点就起来了——而这位先生,[掌声] 这位先生刚结束为期两周的亚洲之行,跑了四五个城市——
[6:42] Chuck Robbins: So, uh, first of all, thanks everybody for being here for an incredibly long day. We started this thing early this morning and, uh, we had speaker after speaker after speaker after speaker and then we had about a two and a half hour break and they came back to see you. So, uh, I’ve been up since 1:00 at— So, this guy, [applause] this guy is on the tail end of a two week trip and four or five different cities in—
[7:13] Jensen Huang: 亚洲。一天前还在台湾,昨晚在休斯顿,现在人就在这儿了。[笑声]
[7:13] Jensen Huang: Asia. Uh, one day ago was in Taiwan. Last night I was in Houston. Here I am. [laughter]
[7:18] Chuck Robbins: 他已经在外面跑了两周,而我们现在[清嗓子]横在他和自家床铺之间——否则他又得睡酒店了。所以呢,我们好好聊一聊,然后——赶紧放他走。你也不需要什么介绍了,感谢你今晚能来,兄弟。我们[清嗓子]真的非常感激。
[7:18] Chuck Robbins: But he’s been gone two weeks and we’re standing [clears throat] between him and his personal bed versus a hotel. So, we’re gonna— we’re going to have fun and then we’re going to— we’re going to get him out of here. So, uh, but uh you don’t— you don’t need much of an introduction, but thank you for being here, man. We [clears throat] really appreciate it.
[7:36] Jensen Huang: 感谢我们之间的合作,真的为你们感到骄傲。
[7:36] Jensen Huang: Thanks for our partnership and really proud of you guys.
[7:41] Chuck Robbins: 好,那我们就从这里聊起。我们已经建立了合作关系,你提出了AI工厂这个完整的概念,我们正在一起推进。虽然在企业端的进展可能不如我们双方所期望的那么快,但能不能先聊聊——在你看来,AI工厂到底是什么?
[7:41] Chuck Robbins: So, let’s— let’s start with uh— let’s start with that. We we have had a partnership and you— you introduced this whole concept of AI factories and we’re working on this together. It’s probably not going as fast as either one of us would like in the enterprise space, but can we start by talking about what— what do you— what is an AI factory to you?
[8:00] Jensen Huang: 首先要记住,我们正在经历六十年来计算领域的首次重塑。过去是显式编程,对吧?我们编写程序,变量通过API传递,一切都非常明确。而现在,我们正转向隐式编程——你只需告诉计算机你的意图,它就会自行找出解决问题的方法。从显式到隐式,从通用计算——本质上就是运算——到人工智能,整个计算栈都在被重塑。人们谈到计算时,会谈到处理层——那正是我们所处的位置。但别忘了计算的完整含义:有处理,还有存储、网络和安全,所有这些都在被同步重塑。所以第一点——第一点是我们需要把AI发展到一定水平——这个我们后面会谈到——我们需要把AI发展到真正对人有用的水平。到目前为止,聊天机器人这种东西,你给它一个提示,它想出该告诉你什么,这固然有趣、令人新奇,但谈不上真正有用。
[8:00] Jensen Huang: First of all, remember we’re reinventing computing for the first time in 60 years. What used to be explicit programming, right? We wrote the programs and the variables that’s passed through APIs and are very explicit to implicit programming. You now tell the computer what your intent is and it goes off and it figures out how to solve your problem. So from explicit to implicit, uh, from general purpose computing— basically calculation— to artificial intelligence, the entire computing stack has been reinvented. Now people talk about computing, where the processing layer is, which is where we are, but remember what computing is— there’s computing, there’s the processing, but there’s storage, networking and security. All that is being reinvented as we speak. And so the first part— the first part is we need to develop AI to a level— and we’ll talk about that— we need to develop AI to a level that is useful to people. And until now, uh, chatbots, where you give it a prompt and it figures out what to tell you, um, is interesting and curious but not useful.
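黄仁勋所说的从“显式编程”到“隐式编程”的转变,可以用下面这段极简的Python草图感受一下。其中 `generate` 只是一个假想的占位函数,代表任意大语言模型接口,并非真实API;规则函数则是传统显式编程的缩影:

```python
# 显式编程:程序员把规则一条条写死,变量和逻辑都明明白白
def classify_explicit(text: str) -> str:
    if "refund" in text.lower():
        return "billing"
    if "password" in text.lower():
        return "account"
    return "other"

# 隐式编程:只陈述意图,把“怎么做”交给模型去想
# (generate 为假想的占位函数,示意任意大语言模型接口)
def classify_implicit(text: str, generate) -> str:
    prompt = f"将下面的客服工单归类为 billing/account/other,只输出类别:\n{text}"
    return generate(prompt)

print(classify_explicit("I need a refund"))  # billing
```

显式版本的行为完全由代码决定;隐式版本的行为取决于模型对意图的理解,这正是访谈中所说的“告诉计算机你的意图,它自行找出解决办法”。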
[9:24] Chuck Robbins: 偶尔帮我做完填字游戏倒是挺好使的。
[9:24] Chuck Robbins: Helps me finish crossword puzzles sometimes.
[9:24] Jensen Huang: 没错。而且只在它已经记住并泛化了的内容上才有用。回到最初——其实也就三年前,ChatGPT横空出世的时候——我们惊叹,天哪,它居然能生成这么多文字,能写出莎士比亚风格的作品。但那一切都基于它所记忆和泛化的内容。然而我们知道,真正的智能在于解决问题。而解决问题,一方面要知道自己不知道什么,另一方面要具备推理能力——如何解决你从未遇到过的问题?将它拆解成你知道如何轻松解决的基本元素,再通过组合来攻克前所未见的难题;制定一个策略——也就是我们所说的规划——来执行任务;寻求帮助,使用工具,开展研究,诸如此类。这些不正是你们现在在”智能体AI”的语境下频繁听到的核心概念吗?工具使用、研究、检索增强生成(即基于事实的生成)、记忆——你们在讨论智能体AI时都已经开始接触这些了。但关键是——关键是,要从通用计算的显式编程演进出来——我们过去用Fortran写代码,用C、用C++、用COBOL——
[9:24] Jensen Huang: Yes. And, uh, but only only on things that it had memorized and generalized. So if you go back in the beginning of— I mean it’s a little— literally only three years ago when ChatGPT emerged, uh, that— that we thought oh my gosh it’s able to generate all these words, it’s able to create Shakespeare, um, but it’s all based on things that it memorized and generalized. And but we know that intelligence is about solving problems and solving problems is partly about knowing what you don’t know, uh, partly about reasoning, uh, how to solve a problem you’ve never seen before. Breaking it down into elements that you know how to solve very easily so that in its composition that you’re able to solve problems that you’ve never seen before, and um, to come up with a strategy— what we call plan— to perform in a task. Ask for help, use tools, do research, so on so forth. These are all fundamental things that now in the phraseology of agentic AI, you’ve heard, isn’t that right? Tool use, research, retrieval augmented generation, which is grounded on facts, memory. These are all things that all of you in the context of talking about agentic AI, uh, you’re starting to hear. But the important thing— the important thing is in order to evolve from general purpose computing which is explicit programming— we wrote in Fortran, we wrote in C, we wrote in C++, COBOL—
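这段话里列举的智能体要素——把大问题拆解成可解的小步骤、规划、工具使用、记忆——可以用如下极简的Python草图示意。其中的函数名与数据结构均为示意用的假想命名,并非任何真实的智能体框架:

```python
# 一个极简的“智能体”循环草图:按预先拆解好的计划逐步执行,
# 每一步调用一个确定性的工具,并把中间结果写入记忆。
def run_agent(goal, plan, tools):
    memory = []                              # 记忆:保存每一步的结果
    for step, tool_name, arg in plan:        # 规划:预先拆解好的步骤序列
        result = tools[tool_name](arg, memory)  # 工具使用
        memory.append((step, result))
    return memory[-1][1]                     # 返回最后一步的结果

# 两个确定性“工具”(检索与计算的玩具版)
tools = {
    "search":  lambda q, mem: f"facts about {q}",    # 检索(示意)
    "compute": lambda x, mem: sum(range(1, x + 1)),  # 计算
}

plan = [("查资料", "search", "GPU"), ("算一下", "compute", 100)]
print(run_agent("demo", plan, tools))  # 5050
```

真实系统中,“规划”往往由模型即时生成而非写死,但“拆解—调用工具—写入记忆”的骨架与此一致。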
[11:12] Chuck Robbins: 没错,那是好东西。
[11:12] Chuck Robbins: That’s right, that’s good stuff.
[11:12] Jensen Huang: 那是好东西,Chuck,那是好东西。
[11:12] Jensen Huang: That’s good stuff, Chuck, that’s good stuff.
[11:18] Chuck Robbins: 那是我的后路嘛。
[11:18] Chuck Robbins: It’s my fallback job.
[11:18] Jensen Huang: 确实是好东西。没错,那可是——那可是至今仍然抢手的技能之一。
[11:18] Jensen Huang: That’s good stuff. Yeah, that’s one of those— that’s one of those skills that remains valuable.
[11:25] Chuck Robbins: 我知道。对,我知道它还很值钱,找上门的offer可不少。
[11:25] Chuck Robbins: I know. Yeah, I know that it remains valuable. I’ve got a lot of offers.
[11:31] Jensen Huang: 恐龙嘛,永远有市场。
[11:31] Jensen Huang: Dinosaurs are valuable forever.
[11:36] Chuck Robbins: 我们刚才不是确认了你比我还老吗。
[11:36] Chuck Robbins: We just established that you’re older than me.
[11:36] Jensen Huang: 我知道。而且我已经是——史前级别的了。[笑声]
[11:36] Jensen Huang: I know. And I’m— I’m the prehistoric. [laughter]
[11:44] Chuck Robbins: 看着不像,但确实如此。
[11:44] Chuck Robbins: It doesn’t appear so, but it’s true.
[11:50] Chuck Robbins: [欢呼与掌声] 好吧,这句相当精彩。
[11:50] Chuck Robbins: [cheering and applause] All right, that was pretty good.
[11:57] Jensen Huang: 我可能是这个房间里最老的人。
[11:57] Jensen Huang: I’m probably the oldest person in this room.
[12:03] Jensen Huang: 所以——怎么说——让我们谈一谈——就像当你思考——所以我们在这里。我去找Chuck说,嘿,听着,我们需要重塑计算,Cisco必须成为其中重要的一部分。所以我们有——我们有一个全新的计算堆栈即将推出,Vera Rubin,Cisco将与我们一起推向市场。所以那是计算层,但还有网络层。Cisco将整合我们的AI网络技术,但将其放入Cisco Nexus控制平面,这样——这样从你的角度来看,你将获得AI的所有性能,但在Cisco的可控性、安全性和可管理性中。我们将在安全方面做同样的事情,所以每一个支柱都必须被重塑,以便企业计算可以利用它。但最终——我们会回到这一点,希望如此——你知道,为什么企业AI三年前还没准备好,以及为什么你现在别无选择,只能尽快参与进来。不要落后。我认为——你不必成为第一个利用AI的公司,但不要成为最后一个。
[12:03] Jensen Huang: So— how do you— so let’s talk a little bit about— like as you— as you think about the— so here we are. I went to Chuck and I say, hey, listen, we need to reinvent computing and Cisco’s got to be a big part of it. And so we’ve got um— we have a whole new computing stack coming out, Vera Rubin, and Cisco is going to be going to market with us on that. And so that— the computing layer, but there’s also the networking layer. And Cisco is going to integrate AI networking technology from us but put it into the Cisco Nexus control plane so that— so that from your perspective you’re going to get all the performance of AI but in the controllability and security and the manageability of Cisco. We’re going to do the same thing with security, and so each one of these pillars has to be reinvented so that enterprise computing could take advantage of it. But ultimately— and we’ll come back to this hopefully— you know, why is it that enterprise AI wasn’t ready three years ago and why it is that you have no choice but to get engaged as quickly as you can. Don’t fall behind. I think— you don’t have to be the first company to take advantage of AI but don’t be the last.
[13:17] Chuck Robbins: 是的。嗯。那么如果你今天是一家企业,你对他们应该采取的第一步、第二步、第三步有什么建议,以开始准备?
[13:17] Chuck Robbins: Yeah. Mhm. So if you’re an enterprise today, what’s your recommendation on the first, second, third step they should take to begin to get ready?
[13:35] Jensen Huang: 好吧,我常被问到ROI之类的问题——我不打算从那里入手。原因在于,任何技术部署在初期,都很难把一个新工具、新技术的ROI填进电子表格里。我会做的,是去弄清楚:我公司最核心的那一件事是什么?我们公司做的最有影响力的工作是什么?别在外围的东西上瞎折腾。就拿我们公司来说,我们就是让千花齐放。公司里不同AI项目的数量——已经失控了,而这很棒。注意我刚才说了什么:失控了,而且很棒。创新并不总在掌控之中。如果你非要掌控一切,首先,你该去看看心理医生;其次,那只是幻觉——你本来就控制不了。如果你希望公司成功,你不能控制它;你可以影响它,但控制不了。所以第一点,我听到太多公司说,他们想要明确的、具体的、可证明的ROI。可你要知道,一件值得做的事,在起步阶段就证明其价值,本来就很难。
[13:35] Jensen Huang: Well, I get questions like things like ROI and— I wouldn’t— I wouldn’t go there. And the reason for that is because with all technology deployments in the beginning, it’s hard to put into a spreadsheet the ROI of a new tool, a new technology. But what I would do is I would go find out what is the single most— what is the essence of my company? What’s the most impactful work that we do in our company? Don’t mess around— don’t mess around with peripheral stuff. I mean, in our company, we just let a thousand flowers bloom. The number of different AI projects in our company is— it’s out of control and it’s great. Notice I just said something. It’s out of control and it’s great. Innovation is not always in control. If you want to be in control, first of all, you got to seek therapy. But second, it’s an illusion. You’re not in control. If you want your company to succeed, you can’t control it. You want to influence it, you can’t control it. And so I think number one, too many companies I hear, they want it, they want it explicit. They want it specific. They want demonstrable ROI. And, you know, showing the value of something worth doing in the beginning is hard.
[15:01] Jensen Huang: 但我会做的、我会说的是:让千花齐放。让人们去实验,让人们安全地实验。我们在公司里什么都试:我们用Anthropic,用Codex,用Gemini,什么都用。当某个团队说“我想用这个AI”时,我的第一个回答是“可以”。我是先说“可以”再问“为什么”,而不是先问“为什么”再决定行不行。原因在于,我对公司的期望,和我对孩子的期望是一样的:去探索生活。孩子说想尝试某件事,答案就是“可以”,然后才问一句“为什么想试?”你不会说:“先向我证明。向我证明做这件事将来一定带来财务上的成功,或者某一天的某种幸福。证明不了,我就不让你做。”在家里我们从不这样,可在工作中我们却这么干。你明白我的意思吗?
[15:01] Jensen Huang: But what I would do, what I would say is that let a thousand flowers bloom. Let people experiment. Let people experiment safely. And we’re experimenting with all kinds of stuff in the company. We use Anthropic, we use Codex, we use Gemini, we use everything. And when one of our group says I’m interested in using this AI, my first answer is yes. And I ask why instead of— why then yes. I say yes, then why. And the reason for that is because I want the same thing for my company that I want for my kids. Go explore life. They say they want to try something. The answer is yes. And then they say how come? You don’t go prove it to me. Prove to me that doing this very thing is going to lead to financial success or some happiness someday. Prove to me. And until you prove it to me, I’m not going to let you do it. We never do that at home, but we do it at work. Do you know what I’m saying?
[16:02] Chuck Robbins: 是的。
[16:02] Chuck Robbins: Yeah.
[16:02] Jensen Huang: 这对我来说毫无道理。所以我们对待AI的方式——不管是AI,还是之前的互联网、之前的云——就是让千花齐放。然后到了某个时刻,你得用自己的判断来决定何时开始修剪花园,因为千花齐放会让花园一片杂乱。到了某个时刻,你必须开始筛选,找出最佳方法或最佳平台,好把所有资源都押在同一支箭上。但你也不想太早把所有资源押在一支箭上——万一选错了箭呢。所以,先让千花齐放,到某个时刻再来修剪。顺便交代一下背景:我自己还没开始修剪,公司里到处都是盛开的花,我鼓励每个人都去尝试。然而,我清楚地知道什么对我们公司最重要——我当然知道。我们公司的本质是什么?我们公司最重要的工作是什么?我会确保有大量的专业力量和能力,专注于用AI去革新那项工作。
[16:02] Jensen Huang: It makes no sense to me. And so the way that we treat AI— and whether it’s AI or the internet before or cloud before— just let a thousand flowers bloom. And then at some point, you have to use your own judgment to figure out when to start curating the garden, because a thousand flowers bloom makes for a messy garden. But at some point you have to start curating to find what’s the best approach or what’s the best platform, so that you could put all your wood behind one arrow. But you don’t want to put all your wood behind one arrow too soon. You pick the wrong arrow. So let a thousand flowers bloom. At some point you curate. And so I haven’t started curating yet just to put in perspective. I’ve got a thousand flowers bloom everywhere. But I encourage everybody to try. However, I know exactly what is most important to our company. Of course I do. What is the essence of our company? What are the most important work of our company? And I make sure that I’ve got a lot of expertise and a lot of capability focused on using AI to revolutionize that work.
[17:10] Jensen Huang: 就我们而言,是芯片设计、软件工程、系统工程。注意——你可能已经留意到,我们与Synopsys、Cadence、Siemens合作,今天又加上Dassault Systèmes,这样我们就能把我们的技术嵌入进去,他们想要多少技术我们就注入多少。无论他们想要什么、需要什么,我都会提供,好让我能革新我们用来做设计的那些工具。我们到处都在用Synopsys,到处都在用Cadence,到处都在用Siemens,到处都在用Dassault Systèmes。我会确保他们想要的任何东西都百分之一千地得到满足,这样我才有必要的工具去创造下一代产品。这多少说明了我的态度:什么对我最重要,以及为了革新自己的工作我愿意做什么。
[17:10] Jensen Huang: In our case, chip design, software engineering, system engineering. Notice— you might have noticed that we partnered with Synopsys and Cadence and Siemens and today Dassault Systemes, so that we could insert our technology and infuse as much technology as they want. Whatever they want, whatever they need, I will provide so that I could revolutionize the tools by which we use to design what we do. We use Synopsys everywhere. We use Cadence everywhere. We use Siemens everywhere. Use Dassault Systemes everywhere. I will make sure that they have 1,000% of whatever they want so that I have the tools necessary so I could create the next generation. And so that tells you something about my attitude about what’s most important to me and what I would do to revolutionize my own work.
[18:05] Jensen Huang: 想想AI是做什么的。AI把智能的成本降低了——或者说创造了智能的丰裕——而且是按数量级计算的。换一种说法:过去需要一年的事,现在可能只要一天;过去要一年的,现在可能只要一小时,甚至可以实时完成。原因在于,我们正处在一个丰裕的世界里。摩尔定律?天哪,那太慢了,简直像蜗牛。记住,摩尔定律是每18个月2倍,每5年10倍,每10年100倍。好。可我们现在呢?每10年一百万倍。过去10年里,我们把AI推进到如此地步,以至于工程师们说:“嘿,你猜怎么着?我们干脆在全世界的数据上训练一个AI模型吧。”他们的意思可不是“把我硬盘里的数据都收集起来”,而是“把全世界的数据都拉下来,训练一个AI模型”。这就是丰裕的定义。丰裕的定义是:你面对一个大到不可思议的问题,然后说,你知道吗,我全都要做。我要攻克每一个疾病领域,而不是只做癌症——开什么玩笑?那才叫疯狂。我们要解决的是全部的人类病痛。这就是丰裕。
[18:05] Jensen Huang: Think about what AI does. AI reduces the cost of intelligence— or creates the abundance of intelligence— by orders of magnitude. That’s another way of saying what we used to do that takes one unit of time— now what we used to take a year could take a day now. What we used to take a year could take an hour. It could be done in real time. And the reason for that is because we are in the world of abundance. Moore’s law, goodness gracious, that was slow. That’s like snails. Remember Moore’s law was two times every 18 months, 10 times every 5 years, 100 times every 10. Okay. But where are we now? A million times every 10 years. In the last 10 years, we advanced AI so far that engineers said, “Hey, guess what? Why don’t we just train an AI model on all of the world’s data?” They didn’t mean, “Let’s just collect all the data from my disk drive.” Let’s just— let’s pull down all of the world’s data and let’s train an AI model. That’s the definition of abundance. The definition of abundance is you look at a problem so big and you say, you know what, I’ll do it all. I’m going to cure every field of disease. I’m not going to just do cancer. Are you kidding me? That’s insane. We’ll just do all of human suffering. That’s abundance.
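黄仁勋口中的倍率可以简单验算:每18个月2倍,按复利折算到5年、10年,正好对应他说的约10倍、约100倍;而“每10年一百万倍”则相当于大约每6个月翻一番。下面的小段Python把这笔账算了出来:

```python
import math

# 摩尔定律:每18个月2倍,按复利折算到5年与10年
five_years = 2 ** (60 / 18)     # 5年 ≈ 10倍
ten_years  = 2 ** (120 / 18)    # 10年 ≈ 100倍

# “每10年一百万倍”反推:每年翻倍多少次,多久翻一番
doublings_per_year = math.log2(10 ** (6 / 10))   # ≈ 2次/年
months_per_doubling = 12 / doublings_per_year    # ≈ 6个月翻一番

print(round(five_years, 1), round(ten_years, 1), round(months_per_doubling, 1))
# 输出: 10.1 101.6 6.0
```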
[19:45] Jensen Huang: 当我现在思考工程问题时,我就直接假设我的技术、我的工具、我的仪器、我的宇宙飞船是无限快的。我去纽约要多久?一秒钟就到。那么,如果一秒钟就能到纽约,我会做哪些不同的事?如果过去要一年的事现在可以实时完成,我会做哪些不同的事?如果过去很重的东西现在等于反重力,我会做哪些不同的事?你要用这种态度对待一切。当你用这种态度对待一切时,你就是在运用AI式的思维方式。这说得通吗?
[19:45] Jensen Huang: When I think about engineering, when I think about a problem these days, I just assume my technology, my tool, my instrument, my spaceship is infinitely fast. How long is it going to take for me to go to New York? I’ll be there in a second. So, what would I do different if I can get to New York in a second? What would I do different if something used to take a year and now takes real time? What would I do different if something used to weigh a lot and now it’s just anti-gravity? And so, you approach everything with that attitude. When you approach everything with that attitude, you are applying AI sensibility. Does that make sense?
[20:35] Jensen Huang: 举个例子,我们正在和许多公司合作做图分析——关系与依赖。你知道那种图,节点和边多得吓人,数以万亿计。在过去,你一次只能处理图的一小块;现在呢,直接把整张图给我。它有多大?我不在乎。这种思维方式正在被应用到所有地方。如果你没有在运用这种思维方式,你就做错了。速度要紧吗?根本不要紧——你就处在光速。质量呢?零重量、零重力。如果你没有运用那种逻辑——如果面对过去难如登天的事情,你没有那种“啊,无所谓”的从容——那你就做得不对。现在想象一下,把那种逻辑、那种思维方式用在你公司最难的问题上,这才能真正撬动大局。领先的人全都是这么思考的。而如果你还没这样思考,那就想象你的竞争对手正在这样思考;如果你还没这样思考,那就想象一家马上要成立的新公司正在这样思考。这会改变一切。所以,我会去找出你公司里最有影响力的工作在哪里:对它应用无穷大,对它应用零,对它应用光速。然后问Chuck怎么实现。[笑声]
[20:35] Jensen Huang: For example, there are many companies that we’re working with where the graph analytics, the dependency, the relationships and dependencies— you know these graphs, they have so many edges, so many nodes and edges, trillions of them. Back in the old days, you would process a graph, small pieces of it. These days, just give me the whole graph. How big is it? I don’t care. That sensibility is being applied everywhere. If you’re not applying that sensibility, you’re doing it wrong. If speed matters, not at all. You’re at the speed of light. If mass— you’re at zero weight, zero gravity. If you’re not applying that logic, if something is not insanely hard to you in the past and you go, “Ah, doesn’t matter”— if you’re not applying that logic, you’re not doing it right. Now imagine you apply that logic, that sensibility to the hardest problems in your company. That’s how you’re going to move the needle. And that’s how they all think. Now the people who are— if you’re not thinking that way, just— all you have to do— just imagine your competitors thinking that way. If you’re not thinking that way, just imagine a company who is about to get founded is thinking that way. It changes everything. And so I would go find where are the most impactful work in your company. Apply infinity to it. Apply zero to it. Apply the speed of light to it. And then ask Chuck how to make that happen. [laughter]
[22:10] Chuck Robbins: 不,让我们谈谈如何实现。所以你有这个类比——
[22:10] Chuck Robbins: No, let’s talk about how to make that happen. So you have this analogy of—
[22:10] Jensen Huang: 就给我打电话。
[22:10] Jensen Huang: Just call me.
[22:10] Chuck Robbins: 我们会打给你。我们会一起做。
[22:10] Chuck Robbins: We’ll call you. We’ll do it together.
[22:16] Jensen Huang: 我们会一起做。
[22:16] Jensen Huang: We’ll do it together.
[22:16] Chuck Robbins: 你有这个类比——这个五层蛋糕——因为每个人都在谈论基础设施、模型、应用程序——我是说,我该如何着手?谈谈这一点。
[22:16] Chuck Robbins: You have this analogy— this five layer cake— because everybody’s talking about infrastructure, models, apps— I mean, how do I go about it? Talk about that a little bit.
[22:24] Jensen Huang: 好,成功人士做的一件事,就是推演“这里正在发生什么”。大约15年前,一个算法——靠两名工程师——就解决了一个计算机视觉问题。计算机视觉基本上就是智能的第一部分——感知。智能是感知、推理、规划。感知——我是什么?正在发生什么?我所处的上下文是什么?推理——我如何推理——如何把现状和我的目标作比较?第三,制定一个计划去解决它、去实现目标。比如,战斗机的问题——感知、定位,然后行动。智能就是这三件事。没有感知,就谈不上第二步和第三步;不理解上下文,你就无从判断该做什么。而上下文是高度多模态的:有时是PDF,有时是电子表格,有时是信息,有时只是感官和气味。我们在哪儿?我们在这里干什么?在场的都是谁?等等——要会察言观色。这就是关于感知的部分。
[22:24] Jensen Huang: Well, one of the things that successful people do is they reason about what is happening here. So almost 15 years ago, an algorithm was able to— with two engineers— solve a computer vision problem. Computer vision is basically the first part of intelligence— perception. Intelligence is perception, reasoning, planning. Perception— what am I? What’s going on? What’s my context? Reasoning— how do I reason about— how do I compare this to my goals? And then three, come up with a plan to solve that— to achieve that. And so— you know, for example, the jet fighter problem— perception, localization, and then action. And so intelligence is about those three things. You can’t have the second and third part without perception. You can’t figure out what to do without understanding context. And context is highly multimodal. Sometimes it’s a PDF, sometimes it’s a spreadsheet. Sometimes it’s information. Sometimes just senses and smells. Where are we? What are we doing here? Who’s the audience? So on and so forth. Reading the room. And so that’s about perception.
[28:20] Jensen Huang: 简单地说,Chuck所说的是我们来自一个一切都是预录的世界。Chuck工作的软件。
[28:20] Jensen Huang: Simplistically, what Chuck is saying is that we came from a world where everything was pre-recorded. The software that Chuck worked on.
[28:36] Chuck Robbins: 真的很好的东西。
[28:36] Chuck Robbins: Really good stuff.
[28:41] Jensen Huang: 它运行了很长时间。郑重声明,它确实是用希伯来语描述的。[笑声]
[28:41] Jensen Huang: It ran a very long time. Just for the record, it was indeed described in the Hebrew. [laughter]
[28:57] Chuck Robbins: 这是真的。那是另一种技能。我是说,房间里唯一知道希伯来语COBOL的人。
[28:57] Chuck Robbins: That is true. That was another skill. I mean, the only person in the room that knows Hebrew COBOL.
[29:05] Jensen Huang: [笑声] 总之——那是预录的。我们做工程——我们用算法来描述我们的想法,然后配上与之对应的数据。一切都是预录的。过去的软件之所以是预录的,是因为它装在CD-ROM里。不是吗?
[29:05] Jensen Huang: [laughter] Anyways— that was pre-recorded. We engineered— we described our algorithms to describe our thoughts and then we put data that goes along with it. Everything is pre-recorded. The reason why software in the past was pre-recorded is because it came in a CD-ROM. Isn’t that right?
[29:24] Chuck Robbins: 是的。
[29:24] Chuck Robbins: Yes.
[29:24] Jensen Huang: 它是预录的。好。那现在的软件是什么?它是上下文相关的、动态的:每一个上下文都不同,每一次使用软件的每一个人都不同,每一个提示都不同,你给它的前文、你给它的先验、上下文全都不同。软件的每一个实例都各不相同。这也是为什么过去那种预录式的计算被称为“基于检索”——你自己验证一下就知道:用手机时你点一下,它就去检索某段软件、某些文件、某些图像,然后呈现给你。而在未来,一切都将是生成式的,就像此刻正在发生的一样。这场对话以前从未发生过——概念存在过,先验存在过,但这个序列里的每一个词,从来没有以这样的方式出现过。原因显然是我们已经四杯酒下肚了,COBOL和希伯来语这种话题从来没从——
[29:24] Jensen Huang: It was pre-recorded. Okay. What is software now? Because it’s contextual, dynamic, and every context is different and every time everybody who uses the software is different and every prompt is different and the precursor you give it, the priors you give it, the context is different. Every single instance of the software is different, which is the reason why the amount of computation necessary in the past— which is pre-recorded— is called retrieval-based. All you have to do is check yourself. When you use your phone you touch something, it went and retrieved some software, some files, some images and brought it to you. In the future, everything is gonna be generative just like is happening right now. This conversation has never happened before. The concepts existed before. The priors existed before, but every single word in this sequence has never happened before. And the reason for that is obviously we’re four wines in, COBOL and Hebrew have never come out of the—
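“检索式”与“生成式”软件的区别,可以用下面的玩具示例感受。其中 `generate` 只是用随机组词模拟“每次输出都不同”,并非真实的语言模型,纯属示意:

```python
import random

# 检索式:把预先录制好的内容原样取回,同样的请求永远得到同样的结果
prerecorded = {"greeting": "Hello, world!"}
def retrieve(key):
    return prerecorded[key]

# 生成式:同样的请求,每次根据上下文即时合成,输出几乎不会重复
words = ["stack", "network", "agent", "token", "graph"]
def generate(prompt, rng):
    return prompt + " " + " ".join(rng.choice(words) for _ in range(4))

print(retrieve("greeting"))                 # 永远是 Hello, world!
print(generate("about:", random.Random()))  # 每次运行各不相同
```

检索的计算量近似常数,而生成要在每次请求时即时算出全部内容——这正是他强调“所需计算量天翻地覆”的原因。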
[30:43] Chuck Robbins: 冷萃咖啡。是的。COBOL,希伯来语。不。谢天谢地这不是在园区里,也没在直播。
[30:43] Chuck Robbins: Cold brew. Yes. COBOL, Hebrew. No. Thank goodness this is not on campus or being streamed.
[30:57] Jensen Huang: 是的。是的。好吧。让我们——你明白我在说什么吗?所以结果——
[30:57] Jensen Huang: Yeah. Yeah. All right. Let’s— Do you understand what I’m saying? And so as a result—
[31:02] Chuck Robbins: 你明白你在说什么吗?[笑声]
[31:02] Chuck Robbins: Do you understand what you’re saying? [laughter]
[31:09] Jensen Huang: Chuck今天到目前为止喂我的唯一东西是四杯酒。
[31:09] Jensen Huang: The only thing that Chuck has fed me today so far is four glasses of wine.
[31:14] Chuck Robbins: 公平地说,我只喂了你——我喂了你其中一杯。你从自助餐拿了另外三杯。
[31:14] Chuck Robbins: And to be fair, I only fed you— I fed you one of them. You took the other three off the buffet.
[31:19] Jensen Huang: 我盯着食物看。我想,”我太饿了。我盯着食物看。”它永远离我大约40英尺。
[31:19] Jensen Huang: I was eyeing the food. I was like, “I’m so hungry. I’m eyeing the food.” It was forever about 40 feet away from me.
[31:28] Chuck Robbins: 那是因为你在拍照。
[31:28] Chuck Robbins: It’s cuz you were taking photos.
[31:33] Jensen Huang: 但它是——我想,它太近了。它太近了。[笑声] 我实际上有一次向食物倾斜,但我又被推回来了。[笑声]
[31:33] Jensen Huang: But it was— I was like, it was so close. It was so close. [laughter] And I actually leaned towards the food one time, but I was pushed back again. [laughter]
[31:39] Chuck Robbins: 你知道发生了什么吗?你的团队实际上提前告诉我们,如果你喝了三杯酒,他是最佳状态。如果你喝了第四杯,那将是不可思议的。这是次优的。
[31:39] Chuck Robbins: You know what happened? Your team actually told us ahead of time, if you get three glasses of wine in, he’s optimal. If you get the fourth one in, it’s going to be incredible. This is suboptimal.
[31:57] Jensen Huang: 所以总之,总之,总之,听着,听着,听着,听着。那么什么是AI?
[31:57] Jensen Huang: So anyways, anyways, anyways, listen, listen, listen, listen. So what is AI?
[32:09] Jensen Huang: 我们必须留下一些智慧。我们能再来一杯酒吗?这不只是Dave Chappelle的东西。
[32:09] Jensen Huang: We have to leave some wisdom behind. Can we get another glass of wine, please? This is not just Dave Chappelle stuff.
[32:21] Chuck Robbins: 好的,让我们谈谈别的。让我们谈谈另一件事。
[32:21] Chuck Robbins: Okay, let’s talk about something. Let’s talk about one other thing.
[32:21] Jensen Huang: 能源。芯片。
[32:21] Jensen Huang: Energy. Chips.
[32:26] Chuck Robbins: 能源听起来不错。
[32:26] Chuck Robbins: Energy sounds good.
[32:26] Jensen Huang: 能源、芯片、基础设施,包括硬件和软件。然后是AI模型。但AI最重要的部分是应用。每个国家、每个公司,下面的所有层都只是基础设施的东西。你需要做的是应用技术。看在上帝的份上,应用技术。使用AI的公司不会陷入危险。你不会因为AI失去工作。你会因为使用AI的人失去工作。所以,开始吧。这是最重要的事情。
[32:26] Jensen Huang: Energy, chips, infrastructure, both hardware and software. Then the AI model. But the most important part of AI is applications. Every single country, every single company, all that layer underneath is just infrastructural stuff. What you need to do is apply the technology. For God’s sakes, apply the technology. A company that uses AI will not be in peril. You’re not going to lose your job to AI. You’re going to lose your job to someone who uses AI. So, get to it. That’s the most important thing.
[33:03] Chuck Robbins: 是的。
[33:03] Chuck Robbins: Yeah.
[33:03] Jensen Huang: 并尽快打电话给Chuck。
[33:03] Jensen Huang: And call Chuck as soon as possible.
[33:09] Chuck Robbins: 你打给我,我会打给他。是的。明白了。所以,我们没有很多时间,所以我不确定——
[33:09] Chuck Robbins: You call me, I’ll call him. Yeah. Got it. So, we don’t have a lot of time, so I’m not sure—
[33:09] Jensen Huang: 我们有世界上所有的时间。是吗?
[33:09] Jensen Huang: We got all the time in the world. Do we?
[33:15] Chuck Robbins: 多少?
[33:15] Chuck Robbins: How much?
[33:15] Jensen Huang: 看,看,Chuck——Chuck嘛,他是那种掐着点过日子的人,按钟点计费的。我连手表都不戴。瞧瞧,瞧瞧。Chuck,我就在这儿看着你。
[33:15] Jensen Huang: Look, look, Chuck— Chuck, like he runs. He bills on the clock. I don’t even wear a watch. Look at that. Look at that. Chuck, I got you right here.
[33:28] Chuck Robbins: 是的。是的。我们做得很好。
[33:28] Chuck Robbins: Yeah. Yeah. We’re doing great.
[33:28] Jensen Huang: 你是按钟点向人收费的。
[33:28] Jensen Huang: You bill people on the clock.
[33:33] Chuck Robbins: 哦,是的。不是我。
[33:33] Chuck Robbins: Oh, yeah. Not me.
[33:33] Jensen Huang: 在价值交付之前我不会离开。[掌声]
[33:33] Jensen Huang: I’m not leaving until value’s delivered. [applause]
[33:42] Chuck Robbins: 看,如果需要整晚,我不会——嘿,看,我要折磨你们所有人直到Jensen——这就是为什么像我这样的人需要手表。[笑声] 好吧。
[33:42] Chuck Robbins: See, if it takes all night, I’m not— Hey, look, I’m going to torture all of you until Jensen— That’s why guys like me need a watch. [laughter] All right.
[33:54] Jensen Huang: 直到你能说你学到了什么,你将被困在这里。是的。
[33:54] Jensen Huang: Until you could say that you learned something, you are going to be trapped in here. Yeah.
[34:00] Chuck Robbins: 我们要折磨每个人直到价值被交付。我确实检查了——还有更多酒。嗯,你能给我们你对物理AI的第一想法吗?
[34:00] Chuck Robbins: We’re going to torture everybody until value is delivered. I did check— there is more wine. Um, can you just give us your top of mind on physical AI?
[34:13] Jensen Huang: 还记得软件是什么吗?软件是一种工具。有一种论调认为,工具行业正在衰落,将被AI取代。你看得出来,因为有一大堆软件公司的股价正承受巨大压力,仿佛AI会以某种方式取代它们。这是世界上最不合逻辑的事情,时间会证明这一点。让我们做一个终极思想实验。假设我们就是终极AI——通用人工机器人,终极AI的实体版本。你当然能解决任何问题,因为你是类人形态,什么事都能做。如果你是人形机器人,你会使用螺丝刀,还是发明一种新螺丝刀?我只会直接用现成的。你会使用锤子还是发明新锤子?你会使用电锯还是发明新电锯?首先,理想情况下它们根本不会用电锯。但你明白我的意思吗?如果你是人形机器人、通用人工机器人,你会使用工具还是重新发明工具?答案显然是使用工具。
[34:13] Jensen Huang: Remember what software is? Software is a tool. There’s this notion that the tool industry is in decline and will be replaced by AI. You could tell because there’s a whole bunch of software companies whose stock prices are under a lot of pressure because somehow AI is going to replace them. It is the most illogical thing in the world and time will prove itself. Let’s give ourselves the ultimate thought experiment. Suppose we are the ultimate AI— artificial general robotics. The ultimate AI— the physical version of us. You could of course solve any problem because you’re humanoid. You could do things. If you were a human robot, would you use a screwdriver or invent a new screwdriver? I would just use one. Would you use a hammer or invent a new hammer? Would you use a chainsaw or invent a new chainsaw? First of all, ideally they don’t use it at all. But do you understand what I’m saying? If you were a human robot, artificial general robotics, would you use tools or reinvent tools? The answer obviously is to use tools.
[35:36] Jensen Huang: 现在把这个思想实验换成数字版本。如果你是通用人工智能,你会使用ServiceNow、SAP、Cadence、Synopsys这些工具,还是会重新发明计算器?当然是直接用计算器。这就是为什么——AI的最新突破是什么?工具使用(tool use)。因为工具被设计得明确无歧义。我们的世界里有许多问题,F就等于MA。拜托,能不能别再搞出另一个版本?[笑声] F=MA不是"差不多是MA",它就是[清嗓子]MA。还有,V等于IR。它不是"差不多是IR",不是"近似IR""统计意义上的IR"——它就是IR。你明白我的意思吗?所以我认为,我们希望通用人工机器人、通用人工智能去使用工具。
[35:36] Jensen Huang: And so now do the digital version of that. If you were an artificial general intelligence, would you use the tools like ServiceNow and SAP and Cadence and Synopsys or would you reinvent a calculator? Of course, you would just use a calculator. That’s the reason why the latest breakthroughs in AI is what? Tool use. Because the tools are designed to be explicit. There are many problems in our world where F equals MA. Please could you please not come up with another version? [laughter] F=MA is not kind of MA. It’s just [clears throat] MA. Oh, V equals IR. It’s not kind of IR. Not approximately IR, statistically IR— it is IR. Do you understand what I’m saying? And so I think we want the artificial general robotics, artificial general intelligence to use tools.
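书童注:黄仁勋这里说的"工具使用",可以用一小段示意代码说明。以下函数名和路由逻辑纯属书童自拟,并非任何真实智能体框架的接口,仅用来演示他的论点:AI不去统计性地"近似"F=MA或V=IR,而是把计算路由给确定性的现成工具。

```python
def force_tool(mass_kg: float, accel_ms2: float) -> float:
    """确定性工具:F = m * a。不是"差不多",就是精确值。"""
    return mass_kg * accel_ms2

def voltage_tool(current_a: float, resistance_ohm: float) -> float:
    """确定性工具:V = I * R。"""
    return current_a * resistance_ohm

# 一个极简的"路由器",扮演AI把任务分派给工具、而非重新发明计算器的角色
TOOLS = {"force": force_tool, "voltage": voltage_tool}

def agent_dispatch(tool_name: str, **kwargs) -> float:
    """模型只负责选对工具并填入参数,计算交给确定性的工具完成。"""
    return TOOLS[tool_name](**kwargs)

print(agent_dispatch("force", mass_kg=2.0, accel_ms2=9.8))           # 19.6
print(agent_dispatch("voltage", current_a=1.5, resistance_ohm=4.0))  # 6.0
```

这正是"工具被设计为明确的"的含义:路由可以由概率模型承担,但落到F=MA这一步,结果必须是精确的。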
[36:47] Jensen Huang: 好,这就是那个大想法。我认为在下一代物理AI中,我们将拥有理解物理世界、理解因果关系的AI。如果我把这个推倒,它会把那一串全推倒。它们理解多米诺骨牌的概念。单是多米诺骨牌这个概念——注意,一个孩子就明白,如果你把那个推倒会怎样——这个概念其实极其深刻。因果、接触、重力、质量,全都蕴含在多米诺骨牌里。推倒多米诺。你可以用一块很小的骨牌推倒一块更大的,再推倒更大的,再推倒更大的,直到另一头是一吨重的骨牌——孩子对这个概念毫无障碍。而大型语言模型会完全摸不着头脑。所以我们必须去教——我们必须创造一种新型的物理AI。
[36:47] Jensen Huang: Well, that’s the big idea. I think that in the next generation of physical AI, we’re going to have AIs that understand the physical world, understand causality. If I tip this over, it’s going to tip all of that over. They understand the concept of a domino. Just the concept of a domino— notice, a child understands if you tip that over— the concept of the domino is extremely— it’s like deeply profound. Causality, contact, gravity, mass, all of that is integrated into a domino. Tipping dominoes over. The idea that you could have a little tiny domino, tip a larger domino, tip a larger domino, tip a larger domino to the point where there’s a ton on the other side— a child has no trouble with that concept. A large language model will have no idea. And so we have to teach— we have to create a new type of physical AI.
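书童注:"小骨牌撬动一吨"在数量级上完全成立。常被引用的经验法则是,一块骨牌大约能推倒线性尺寸为其1.5倍的下一块,质量则按尺寸的立方增长(约3.375倍)。下面这段示意计算中的起始质量5克等参数为书童假设,只用来演示这个因果放大链:

```python
def dominoes_to_one_ton(start_mass_kg: float = 0.005, linear_ratio: float = 1.5,
                        target_kg: float = 1000.0) -> int:
    """返回需要连锁放大多少级,末端骨牌的质量才能达到一吨。"""
    mass_ratio = linear_ratio ** 3  # 质量随线性尺寸的立方增长:1.5**3 = 3.375
    steps = 0
    mass = start_mass_kg
    while mass < target_kg:
        mass *= mass_ratio
        steps += 1
    return steps

print(dominoes_to_one_ton())  # 11 —— 十来块骨牌即可从5克放大到一吨以上
```

孩子凭直觉就能接受这种指数放大;让AI获得同样的物理直觉,正是"物理AI"要解决的问题。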
[37:48] Jensen Huang: 那么,机会在哪里?迄今为止,Chuck和我所在的行业一直是做工具的。我们一直在"螺丝刀和锤子"这个行当里,整个职业生涯都在造螺丝刀和锤子。而历史上第一次,我们将创造人们所说的"劳动力"——增强型劳动力。举个例子。什么是自动驾驶汽车?就是一个数字司机。数字司机值多少钱?很多,远超汽车本身。原因在于,在数字司机的整个生命周期里,它创造的经济价值远远超过那辆车。
[37:48] Jensen Huang: Well, what’s the opportunity? So far, the industry that Chuck and I have been part of is about creating tools. We have been in the screwdriver hammer business. Our entire life has been about creating screwdrivers and hammers. For the first time in history, we are going to create what people call labor, but augmented labor. Give you an example. What is a self-driving car? What’s a digital chauffeur? What’s a digital chauffeur valued at? A lot. A lot more than the car. And the reason for that is because in the lifetime of the digital chauffeur, the economics of the digital chauffeur is a lot more than the car.
[38:33] Jensen Huang: 这是有史以来第一次,我们面对一个大100倍的潜在市场(TAM)。这在数学上是字面成立的。IT行业的规模大约是一万亿美元,对吧?上下差个一两万亿。而全球经济的规模大约是一百万亿美元。有史以来第一次,我们将触及这全部的市场。所以事实是,你们所有人——今天在座的每一个人——都有机会通过应用这项技术,成为一家技术公司。
[38:33] Jensen Huang: For the very first time, we are exposed to a TAM that is 100 times larger. Literally mathematically true. The IT industry is about a trillion dollars, right? Or so, plus or minus a couple. And yet the economy of the world is about a hundred trillion dollars. For the very first time, we’re going to be exposed to all of that. So it is the case that all of you— everybody in this room today— you have the opportunity to apply this technology to become a technology company.
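书童注:把他口中的数量级直接算一下(两个数字都是演讲中的约数,并非精确统计):

```python
# IT行业 ≈ 1万亿美元;全球经济 ≈ 100万亿美元(均为演讲中的约数)
IT_TAM_TRILLIONS = 1
WORLD_ECONOMY_TRILLIONS = 100

multiple = WORLD_ECONOMY_TRILLIONS / IT_TAM_TRILLIONS
print(f"可触达市场扩大约 {multiple:.0f} 倍")  # 可触达市场扩大约 100 倍
```

这就是"字面上在数学上为真"的全部含义:从卖工具(IT)到卖增强型劳动力,分母从一万亿换成了一百万亿。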
[39:18] Jensen Huang: 让我举几个例子。我真的相信——尽管我——看,我爱迪士尼,我喜欢与迪士尼合作,但我很确定他们更想成为Netflix。我爱梅赛德斯,我是坐梅赛德斯来的,但我确信他们更想成为特斯拉。我爱沃尔玛,但我确信他们更想成为亚马逊。到目前为止你们同意吗?我是不是三说三中?你们每家公司都是这样。
[39:18] Jensen Huang: Let me give you some examples. I really believe— as much as I— look, I love Disney and I love working with Disney. I’m pretty sure they’d rather be Netflix. I love Mercedes. I came in a Mercedes. I am certain they’d rather be Tesla. I love Walmart. I am certain they’d rather be Amazon. Do you guys agree so far? Am I three for three? All of you are that way.
[39:50] Jensen Huang: 我相信,我们有机会帮助把每一家公司都转变为技术公司。技术优先。技术是你的超能力,行业领域是你的应用场景——而不是反过来:行业领域定义你是谁,你再去寻找技术。之所以如此,是因为技术优先的公司打交道的是电子,而不是原子。电子的数量要多得多;原子则受质量的限制。这就是为什么当公司从CD-ROM转向电子(分发)的那一刻,其价值暴涨了一千倍。你们需要像我们一样,成为一家电子设备公司、一家"电子"公司——这是"技术公司"的另一种说法。所以我认为,你们的机会就在这里。
[39:50] Jensen Huang: I believe that we have an opportunity to help transform every single company into a technology company. Technology first. Technology is your superpower and the domain is your application, versus the other way— which is the domain is who you are and you’re seeking for technology. And the reason that’s so is because companies who are technology first, you’re dealing with electrons, not atoms. And electrons, there’s a lot more of them. Atoms, you’re limited by mass, which is the reason why the moment they went from CD-ROMs to electrons, the value of the company exploded by a thousand times. You need to be like us, an electronics company, electron company, which is another way of saying a technology company. And so I think the opportunity for you is here.
[41:05] Jensen Huang: 思考这一点的另一种方式是AI——我们刚才说过。甚至Chuck,他只知道如何用希伯来语编程,
[41:05] Jensen Huang: Another way to think about that is AI— and we just said it earlier. Even Chuck, who only knows how to program in Hebrew,
[41:12] Chuck Robbins: [笑声] 这是一种天赋。
[41:12] Chuck Robbins: [laughter] It’s a gift.
[41:18] Jensen Huang: 他惯用的书写工具是从右往左写的。[笑声]
[41:18] Jensen Huang: His instrument of choice is a right to left. [laughter]
[41:32] Chuck Robbins: 因为你知道,不然墨水会被蹭花。这其实相当聪明。
[41:32] Chuck Robbins: Because as you know it smears otherwise. It is pretty smart actually.
[41:38] Jensen Huang: 聪明人做聪明事。美妙之处在于:对你们这些公司来说,你们也许觉得,天哪,软件不是我们的强项。但知识、直觉、领域专业能力是你们的强项。而现在,历史上第一次,你可以用自己的语言向计算机准确解释你想要什么。还记得我们是从哪里讲起的吗——从显式编程到隐式编程?历史上第一次,你可以隐式地为计算机编程:只要告诉它你想要什么,告诉它你的意图,计算机就会写出代码。因为事实证明,写代码不过是打字,而打字是一种大宗商品。这就是你们的巨大机会。你们都能超脱此前束缚你们的"原子"层面的限制。你们都能摆脱"软件工程师不够用"这个瓶颈,因为打字已经是大宗商品了。而你们拥有极有价值的东西,那就是领域专业能力——理解客户,理解要解决的问题。这才是终极价值。终极价值,就是理解意图。
[41:38] Jensen Huang: Smart people do smart things. And so the beautiful thing is that as you know the programming language of the world— and for all of your companies you kind of feel like, oh my gosh, software is not our strength. But knowledge, intuition, domain expertise is your strength. Well, you now for the first time can explain exactly what you want to a computer in your language. Do you remember where we started— from explicit programming to implicit programming? For first time in history, you could program a computer implicitly. Just tell it what you want. Tell it what you mean and the computer will write the code because coding as it turns out is just typing. And typing as it turns out is a commodity. And that’s the great opportunity for you. All of you could be levitated above the atomic limitations that you were limited by before. All of you could escape from this limitation— we don’t have enough software engineers— because as it turns out typing is a commodity. And all of you have something of great value which is domain expertise— to understand the customer, understand the problem. And that is the ultimate value. That is the ultimate value— to understand the intent.
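书童注:"理解意图才是终极价值,写代码只是打字"这层意思,可以用"把意图写成可执行的规格"来示意。以下场景与函数均为书童虚构:领域专家只负责用断言描述"想要什么";具体实现(这里为演示而手写,实践中正是可以交给AI去"打字"的部分)只要通过规格检验即可接受。

```python
def spec(candidate) -> bool:
    """意图:给定(客户, 订单金额)列表,返回按金额降序排列的前两名客户。"""
    orders = [("Alice", 120), ("Bob", 340), ("Carol", 75), ("Dan", 200)]
    return candidate(orders) == ["Bob", "Dan"]

# 一个候选实现——隐式编程的设想是:这一步由AI生成,人只负责定义并检验意图
def top_two_customers(orders):
    return [name for name, _ in
            sorted(orders, key=lambda o: o[1], reverse=True)[:2]]

print(spec(top_two_customers))  # True:实现满足意图,即可接受
```

规格(意图)来自领域专业能力,无法外包;实现只要能通过规格,由谁打出这些字并不重要。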
[43:04] Jensen Huang: 你知道,当你从大学毕业时,你可以是一个超级程序员,但你不知道客户想要什么。你不知道要解决什么问题。但这就是你们所有人知道的。你知道客户想要什么。你知道要解决什么问题。编码部分很容易。只要告诉AI去做。所以这就是你的超能力。所以Chuck和我在这里帮助你做到这一点。那个结束语是在我喝了五杯酒的情况下完成的。[笑声]
[43:04] Jensen Huang: You know, when you graduate from college, you could be a super programmer, but you have no idea what customers want. You have no idea what problems to solve. But that’s what all of you know. You know what customers want. You know what problems to solve. The coding part of it is easy. Just tell the AI to do it. And so that’s your superpower. So Chuck and I are here to enable you to do that. That closing was done with five glasses of wine in me. [laughter]
[43:49] Chuck Robbins: 所以,嘿,听着——这确实是个奇迹——这位就靠一张桌子即兴开讲——人工智能的真实写照。[掌声]
[43:49] Chuck Robbins: So, hey, listen— it’s a miracle indeed— this is somebody who works off— a table— true representation of artificial intelligence. [applause]
[44:02] Jensen Huang: 也许那是被增强过的。我只想告诉大家,与你们各位合作是莫大的荣幸。如你们所知,在计算重塑的两大关键支柱上,Cisco拥有顶尖的专业能力。没有Cisco,就没有现代计算。其中一个当然是网络,另一个是安全。而这两大支柱在AI世界里都被重塑了。我们非常擅长的部分——也就是计算部分——在很多方面已是大宗商品,而Cisco所掌握的东西则具有深远的价值。我们两家联手,将很乐意帮助各位进入AI的世界。
[44:02] Jensen Huang: Maybe that’s enhanced. I just want to tell you that it’s a great pleasure working with all of you. Cisco as you know has extreme expertise in two very important pillars of the reinvention of computing. Without Cisco, there is no modern computing. One of them is of course networking and the other one’s security. And both of those pillars have been reinvented in the world of AI. And the part that we know very well— which is the computing part of it— in a lot of ways is a commodity, and the stuff that Cisco knows is deeply valuable. And between the two of us, we’ll be delighted to help all of you engage the world of AI.
[44:51] Jensen Huang: 之前有人问过我一个问题——我觉得值得再讲一遍。有人问我:你应该只租用云,还是应该花力气自建计算机?我会这样告诉你:我的建议和我给自己孩子的建议完全一样——自己装一台计算机。即便PC已经无处不在,即便它已经成熟,即便技术已经发达,看在上帝的份上,也要自己装一台,搞清楚每个组件为什么存在。如果你要进入汽车行业、交通运输行业,就别只会用Uber。看在上帝的份上,掀开引擎盖,换换机油,弄懂所有的部件。看在上帝的份上,搞明白它是怎么运作的。这至关重要。这项技术对未来太重要了,你必须对它有一些亲手触摸过的理解。掀开引擎盖,换机油,动手造点什么。不必很大,造点什么就行。
[44:51] Jensen Huang: And then somebody asked me earlier— I think it’s worth repeating. Somebody asked me earlier, should you just rent the cloud or should you even make the effort to build your own computer? Here’s what I would tell you. I would advise you to do exactly the same thing I advise my children. Build a computer. Even though the PC is everywhere, even though it’s mature, even though technology is developed, for God’s sakes, build one. Know why all the components exist. If you were to be in the world of automotive, the automobile industry, the transportation industry, don’t just use Uber. For God’s sakes, lift the hood, change the oil, understand all the components. For God’s sakes, understand how it works. It is vital. This technology is so important to the future. You must have some tactile understanding of it. Lift the hood, change the oil, build something. Doesn’t have to be large. Build something.
[46:05] Jensen Huang: 你可能会发现自己其实极其擅长这件事。你可能会发现你需要这项技能。你可能会发现,这个世界并不是"全租"与"全购"二选一——你会想租一部分、自有一部分,因为公司的某些部分应该建在本地。比如出于主权和专有信息的考虑——有些问题,你就是不愿意跟所有人分享。你知道,去看心理治疗师的时候,你可不希望那些问题被放到网上。[笑声] 你懂我的意思吧?
[46:05] Jensen Huang: You might discover you’re actually insanely good at it. You might discover that you need that skill. You might discover that the world is not about all rent versus all own— that you want to rent some and own some because some part of your company should be built on prem. For example, sovereignty and proprietary information— and just, you’re not comfortable sharing your questions with everybody. You know, when you go see a therapist, you don’t want the questions to be online. [laughter] You know what I’m saying?
[46:52] Chuck Robbins: 好的,我只是——我在想象这个。[笑声] 假设地。
[46:52] Chuck Robbins: Okay, I’m just— I’m imagining this one. [laughter] Hypothetically.
[46:57] Jensen Huang: 所以,假设地说,我认为你的很多问题、很多对话、很多交流、很多不确定的想法,都应该保持私密。公司也一样。我没有把握——把Nvidia的所有对话都放进云端,我不放心,这正是我们在本地自建系统的原因。我们在本地建了一套超级AI系统,因为我实在不放心把那些对话分享出去。因为事实证明,对我来说最有价值的知识产权不是我的答案,而是我的问题。你跟上我的思路了吗?我的问题才是我最有价值的IP。我在思考的,正是我的那些问题。答案是大宗商品。只要我知道该问什么——我就是在识别什么才重要。我不想让别人知道我认为什么重要。我希望这只留在一个小房间里,我希望它在本地,我希望它只属于我自己。我想创造属于我自己的AI。
[46:57] Jensen Huang: And so, hypothetically, I think that a lot of questions you have, a lot of conversations you have, a lot of dialogue, a lot of uncertainties you have ought to be kept private. Companies are the same way. I am not confident. I am not secure about putting all of Nvidia’s conversations in the cloud, which is the reason why we built it locally. We’ve built a super AI system locally because I’m just not confident to share that conversation. Because as it turns out, the most valuable IP to me is not my answers. It’s my questions. Are you following me? My questions are the most valuable IP to me. What I’m thinking about are my questions. The answers are a commodity. If I simply knew what to ask— I’m identifying what’s important. And I don’t want people to know what I think is important. And I want that to be in a small room. I want that to be on prem. I want that to be by myself. And I want to create my own AI.
[48:08] Jensen Huang: 最后一个想法,毕竟已经11点了。[笑声] 最后一个想法。曾有一种观点认为,AI系统应该始终有"人在环中"。这恰恰说反了,应该倒过来:每家公司都该有"AI在环中"。原因是,我们希望公司每一天都变得更好、更有价值、更有知识。我们永远不想倒退,永远不想停滞,永远不想从头再来。这意味着,如果有AI在环中,它会沉淀我们的实践经验。未来的每位员工都会有AI——很多AI——在环中。这些AI将成为公司的知识产权。这就是未来的公司。因此,我认为你们所有人立刻打电话给Chuck才是明智之举。
[48:08] Jensen Huang: And then one last thought since it’s already 11 o’clock. [laughter] One last thought. There was an idea that AI should always have human in the loop. It’s exactly the wrong idea. It’s backwards. Every company should have AI in the loop. And the reason for that is because we want our company to be better and more valuable and more knowledgeable every single day. We never want to go backwards. We never want to go flat. We never want to start from the beginning. Which means that if we have AI in the loop, it will capture our life experience. Every single employee in the future will have AI, lots of AIs in the loop. And those AIs will become the company’s intellectual property. That’s the future company. And therefore, I think it sensible for all of you to call Chuck immediately.
[49:13] Chuck Robbins: 我打给了Jensen。[笑声]
[49:13] Chuck Robbins: I called Jensen. [laughter]
[49:20] Jensen Huang: 总之,这是我的结束语。
[49:20] Jensen Huang: Anyhow, that’s my closing.
[49:25] Chuck Robbins: 听着——他在外奔波了两周。Jensen飞到这里,在久违地回到自家床上睡觉之前,把最后一晚留给了我们。我们永远心存感激。感谢你的到来。谢谢。
[49:25] Chuck Robbins: Listen— two weeks on the road. Jensen flew here, spent his last night, last evening with us before he gets to sleep in his bed for the first time in a long time. We’re forever grateful. Appreciate you being here. Thank you.
[49:36] Jensen Huang: 非常感谢。而且——[掌声] 谢谢,伙计。而且——[掌声] 我用余光瞥见,那边摆着一排烤串,还有人守在那儿。装Fritos的袋子在哪儿?[笑声]
[49:36] Jensen Huang: Thank you very much. And— [applause] Thank you, man. And— [applause] from the corner of my eye, there were all these skewers. Somebody was still there. Where’s the bag of Fritos? [laughter]
[49:58] Chuck Robbins: 好吧,我们走吧。谢谢。谢谢大家。
[49:58] Chuck Robbins: All right, let’s go. Thank you. Thank you, everybody.