
【e/acc】有效加速主义、量子热力学与AI未来 | 物理学家、e/acc运动创始人Guillaume Verdon与Lex Fridman播客实录 | 中英文完整版精译 I

2026-02-15
LZN

书童按:本篇是Guillaume Verdon接受Lex Fridman播客采访的实录。Verdon是物理学家、应用数学家与量子机器学习先驱,曾在谷歌从事量子计算研究,后创立Extropic公司,致力于为生成式AI打造基于物理原理的计算硬件。他亦是X平台匿名账号@BasedBeffJezos背后的真实人物,有效加速主义(e/acc)运动的联合创始人。e/acc以热力学与信息论为哲学根基,主张以技术快速进步作为人类伦理最优选择,正面对抗”AI末日论”代表的减速主义思潮。访谈纵横于量子计算与非平衡热力学的哲学意涵、匿名言论与思想自由、AI监管与市场力量的博弈、通用智能的重新定义等议题,视野开阔,锋芒毕现。初稿采用Claude API机器翻译及排版,书童仅做简单校对及批注,将分四部分发布,以飨诸君。

Guillaume Verdon:有效加速主义、热力学与量子智能 | Lex Fridman播客

Guillaume Verdon: Effective Accelerationism, Thermodynamics, and Quantum Intelligence | Lex Fridman Podcast

引言

Introduction

Lex Fridman (00:00:00) 以下是与Guillaume Verdon的对话。他就是X平台上曾经匿名的账号@BasedBeffJezos背后的人。这两重身份因《福布斯》一篇题为《@BasedBeffJezos是谁?科技精英e/acc运动的领袖》的曝光文章被强行合二为一。让我来介绍同一个大脑里共存的这两重身份。其一:Guillaume是物理学家、应用数学家、量子机器学习研究者兼工程师,在量子机器学习方向取得博士学位,曾供职于谷歌量子计算团队,后创立Extropic公司,为生成式AI打造基于物理原理的计算硬件。

LEX FRIDMAN (00:00:00) The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account @BasedBeffJezos on X. These two identities were merged by a doxxing article in Forbes titled, Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s E/Acc Movement? So let me describe these two identities that coexist in the mind of one human. Identity number one, Guillaume, is a physicist, applied mathematician, and quantum machine learning researcher and engineer receiving his PhD in quantum machine learning, working at Google on quantum computing, and finally launching his own company called Extropic that seeks to build physics-based computing hardware for generative AI.

Lex Fridman (00:00:47) 其二:X平台上的Beff Jezos是有效加速主义运动的创始人——常缩写为e/acc——主张将推动技术快速进步作为人类伦理上的最优选择。其拥护者深信AI进步是最强大的社会均衡器,理应全力推进。e/acc追随者自视为谨慎派的相反力量——后者认为AI高度不可预测、潜在危险、亟需监管。他们管对手叫”末日派”或”减速派”(decel)。用Beff自己的话说:”e/acc是一种模因化的乐观主义病毒。”

LEX FRIDMAN (00:00:47) Identity number two, Beff Jezos on X is the creator of the effective accelerationism movement, often abbreviated as e/acc, that advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward. e/acc followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated. They often give their opponents the labels of quote, “doomers or decels” short for deceleration, as Beff himself put it, “e/acc is a mimetic optimism virus.”

Lex Fridman (00:01:37) 这场运动的传播风格一贯偏向梗图和搞笑,但背后有扎实的思想根基,我们会在对话中深入挖掘。说到梗——本人勉强算个荒诞美学的业余爱好者。我先后和Jeff Bezos、Beff Jezos做了背靠背的访谈,这绝非巧合。对话中会聊到,Beff视Jeff为当今最重要的在世人类之一,而我则纯粹欣赏这里头的荒诞之美和幽默感。这里是Lex Fridman播客,如您愿意支持,请查看简介中的赞助商信息。闲话少叙,朋友们,有请Guillaume Verdon。

LEX FRIDMAN (00:01:37) The style of communication of this movement leans always toward the memes and the lols, but there is an intellectual foundation that we explore in this conversation. Now, speaking of the meme, I am a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back to back. As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Guillaume Verdon.

Beff Jezos

Beff Jezos

Lex Fridman (00:02:23) 先把身份这件事捋清楚。你叫Guillaume Verdon,Gill,但你同时也是X上匿名账号@BasedBeffJezos背后的人。Guillaume Verdon这边:量子计算学者、物理学家、应用数学家;@BasedBeffJezos那边:本质上是个发起了一场运动、背后有哲学体系的梗图账号。能不能展开聊聊这两个角色——性格、沟通风格、哲学理念有什么不同?

LEX FRIDMAN (00:02:23) Let’s get the facts of identity down first. Your name is Guillaume Verdon, Gill, but you’re also behind the anonymous account on X called @BasedBeffJezos. So first, Guillaume Verdon, you’re a quantum computing guy, physicist, applied mathematician, and then @BasedBeffJezos is basically a meme account that started a movement with a philosophy behind it. So maybe just can you linger on who these people are in terms of characters, in terms of communication styles, in terms of philosophies?

Guillaume Verdon (00:02:58) 说说我的主要身份吧。打小起我就想搞清楚万物之理,想理解宇宙。这条路把我领进了理论物理,最终试图回答那些终极命题——我们为何在此?我们将往何处?由此我开始研究信息论,从信息的视角理解物理,把宇宙看作一台巨大的计算机。在黑洞物理研究到一定深度后,我意识到自己不仅想理解宇宙如何计算,更想”像自然那样去计算”——造出受自然启发的计算机,也就是基于物理的计算机。这把我带进了量子计算领域:首先是模拟自然,再就是在我的工作中,学习能在量子计算机上运行的自然表示。

GUILLAUME VERDON (00:02:58) I mean, with my main identity, I guess ever since I was a kid, I wanted to figure out the theory of everything, to understand the universe. And that path led me to theoretical physics, eventually trying to answer the big questions of why are we here? Where are we going? And that led me to study information theory and try to understand physics from the lens of information theory, understand the universe as one big computation. And essentially after reaching a certain level studying black hole physics, I realized that I wanted to not only understand how the universe computes, but sort of compute like nature and figure out how to build and apply computers that are inspired by nature. So physics-based computers. And that sort of brought me to quantum computing as a field of study to first of all, simulate nature. And in my work it was to learn representations of nature that can run on such computers.

Guillaume Verdon (00:04:17) 如果让AI用自然的方式思考,它们就能更精准地表征自然。至少这是驱使我成为量子机器学习领域早期探索者的核心命题——怎样在量子计算机上做机器学习,怎样把智能的概念延伸到量子领域。怎样捕获和理解现实世界的量子力学数据?怎样学习世界的量子力学表示?用什么样的计算机来运行和训练?怎样实现?这些就是我要回答的问题。而说到底,我经历了一次信仰危机。最初,跟每个物理学家一样,入行时都想用几个方程写尽宇宙,当那个故事里的英雄。

GUILLAUME VERDON (00:04:17) So if you have AI representations that think like nature, then they’ll be able to more accurately represent it. At least that was the thesis that brought me to be an early player in the field called quantum machine learning. So how to do machine learning on quantum computers and really sort of extend notions of intelligence to the quantum realm. So how do you capture and understand quantum mechanical data from our world? And how do you learn quantum mechanical representations of our world? On what kind of computer do you run these representations and train them? How do you do so? And so that’s really the questions I was looking to answer because ultimately I had a sort of crisis of faith. Originally, I wanted to figure out as every physicist does at the beginning of their career, a few equations that describe the whole universe and sort of be the hero of the story there.

Guillaume Verdon (00:05:28) 但后来我想通了:用机器增强我们自身,增强我们感知、预测和掌控世界的能力,这才是正路。于是我离开理论物理,转入量子计算和量子机器学习。在那些年里,我始终觉得拼图还差一块。我们理解世界、计算世界、思考世界的方式,都少了点什么。看物理尺度的话:极小尺度上,量子力学说了算;极大尺度上,一切是确定性的,统计涨落已被抹平。我确确实实坐在这张椅子上,不是叠加在东西南北飘忽不定。极小尺度上倒是有叠加态、有干涉效应。但在介观尺度——日常生活的尺度,蛋白质、生物体、气体、液体所在的尺度——物质其实是热力学性质的,在涨落。

GUILLAUME VERDON (00:05:28) But eventually I realized that actually augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines is the path forward. And that’s what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years I thought that there was still a piece missing. There was a piece of our understanding of the world and our way to compute and our way to think about the world. And if you look at the physical scales, at the very small scales, things are quantum mechanical, and at the very large scales, things are deterministic. Things have averaged out. I’m definitely here in this seat. I’m not in a superposition over here and there. At the very small scales, things are in superposition. They can exhibit interference effects. But at the meso scales, the scales that matter for day-to-day life and the scales of proteins, of biology, of gases, liquids and so on, things are actually thermodynamical, they’re fluctuating.
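
书童注:这里"大尺度上涨落已被平均掉"的直观依据,可以补一步标准的统计物理推理(以下为书童补注的示意,非访谈原文):对由 $N$ 个近似独立组分构成的系统,广延量 $X$ 的相对涨落大致按 $1/\sqrt{N}$ 衰减:

$$
\frac{\sigma_X}{\langle X \rangle} \sim \frac{1}{\sqrt{N}}
$$

宏观物体 $N \sim 10^{23}$,涨落完全可以忽略,看上去便是确定性的;而蛋白质、细胞这类介观系统的 $N$ 小得多,热涨落无法忽略——这正是Verdon所说"介观尺度是热力学性质的、在涨落"的量化版本。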

Guillaume Verdon (00:06:46) 在量子计算和量子机器学习领域干了大约八年后,我突然开窍了——我一直在极大和极小之间找答案。做过一点量子宇宙学——研究宇宙从哪来、往哪去;研究黑洞物理、量子引力的极端情形,也就是能量密度高到量子力学和引力同时登场的地方。典型场景就是黑洞和极早期宇宙——量子力学与相对论的交界地带。

GUILLAUME VERDON (00:06:46) And after I guess about eight years and quantum computing and quantum machine learning, I had a realization that I was looking for answers about our universe by studying the very big and the very small. I did a bit of quantum cosmology. So that’s studying the cosmos, where it’s going, where it came from. You study black hole physics, you study the extremes in quantum gravity, you study where the energy density is sufficient for both quantum mechanics and gravity to be relevant. And the sort of extreme scenarios are black holes and the very early universe. So there’s the sort of scenarios that you study the interface between quantum mechanics and relativity.

Guillaume Verdon (00:07:42) 可我一直盯着两端的极端,却漏掉了”中间那块肉”。日常尺度上量子力学有用、宇宙学有用,但其实没那么直接相关。我们活在中等时空尺度上,这个尺度上最管用的物理理论是热力学——尤其是非平衡热力学。生命本身就是热力学过程,而且是远离平衡态的。我们不是与环境达成热平衡的一锅粒子汤,而是一种拼命维持自身的相干态,靠获取和消耗自由能来续命。差不多在我离开Alphabet前夕,我对宇宙的信念再次发生了转变。我知道自己要造一种基于这类物理的全新计算范式。

GUILLAUME VERDON (00:07:42) And really I was studying these extremes to understand how the universe works and where is it going. But I was missing a lot of the meat in the middle, if you will, because day-to-day quantum mechanics is relevant and the cosmos is relevant, but not that relevant actually. We’re on sort of the medium space and timescales. And there the main theory of physics that is most relevant is thermodynamics, out of equilibrium thermodynamics. Because life is a process that is thermodynamical and it’s out of equilibrium. We’re not just a soup of particles at equilibrium with nature, we’re a sort of coherent state trying to maintain itself by acquiring free energy and consuming it. And that sort of, I guess another shift in, I guess my faith in the universe happened towards the end of my time at Alphabet. And I knew I wanted to build, well, first of all a computing paradigm based on this type of physics.

Guillaume Verdon (00:08:57) 但与此同时,在把这些想法实验性地应用于社会、经济等方面的过程中,我开了个匿名号——纯粹是为了卸下”说什么都得负责”那种实名账号的压力。一开始只是想拿匿名号来试探想法,没想到直到真正放手,我才发现自己过去把思想空间压缩得有多厉害。某种意义上,限制言论会反向传播为限制思想。开了匿名号之后,感觉脑子里有些变量突然被解锁了,我一下子能在大得多的思想参数空间里探索。

GUILLAUME VERDON (00:08:57) But ultimately just by trying to experiment with these ideas applied to society and economies and much of what we see around us, I started an anonymous account just to relieve the pressure that comes from having an account that you’re accountable for everything you say on. And I started an anonymous account just to experiment with ideas originally because I didn’t realize how much I was restricting my space of thoughts until I sort of had the opportunity to let go. In a sense, restricting your speech back propagates to restricting your thoughts. And by creating an anonymous account, it seemed like I had unclamped some variables in my brain and suddenly could explore a much wider parameter space of thoughts.

Lex Fridman (00:10:00) 在这点上展开一下——这不是很有意思吗?大家很少谈的一件事是:言论一旦受到压力和约束,思想也不知不觉被约束了,尽管逻辑上完全不必如此。我们明明可以在脑子里想任何事,但这种外部压力硬是会在思想四周筑起围墙。

LEX FRIDMAN (00:10:00) Just a little on that, isn’t that interesting that one of the things that people don’t often talk about is that when there’s pressure and constraints on speech, it somehow leads to constraints on thought even though it doesn’t have to. We can think thoughts inside our head, but somehow it creates these walls around thought.

Guillaume Verdon (00:10:23) 没错。这正是我们运动的出发点——我们看到一种趋势:在生活的方方面面压制多样性,无论是思想、经营方式、组织方式还是AI研究路径。我们坚信,保持多样性才能确保系统的适应力。在思想、公司、产品、文化、政府、货币的市场中维持健康竞争,才是正途——因为系统总会自我调适,把资源配置给最有利于增长的那些形态。运动的根本理念,是这样一种洞察:生命是宇宙中一团追逐自由能、渴望生长的火焰,增长是生命的本性。平衡热力学的方程里写得明明白白:那些更擅长获取自由能、散逸更多热量的物质路径,出现的概率呈指数级增高。宇宙本身偏爱某些未来,整个系统自有其天然的走向。

GUILLAUME VERDON (00:10:23) Yep. That’s sort of the basis of our movement is we were seeing a tendency towards constraint, reduction or suppression of variants in every aspect of life, whether it’s thought, how to run a company, how to organize humans, how to do AI research. In general, we believe that maintaining variance ensures that the system is adaptive. Maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies is the way forward because the system always adapts to assign resources to the configurations that lead to its growth. And the fundamental basis for the movement is this sort of realization that life is a sort of fire that seeks out free energy in the universe and seeks to grow. And that growth is fundamental to life. And you see this in the equations actually of equilibrium thermodynamics. You see that paths of trajectories, of configurations of matter that are better at acquiring free energy and dissipating more heat are exponentially more likely. So the universe is biased towards certain futures, and so there’s a natural direction where the whole system wants to go.
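
书童注:Verdon所说"更擅长获取自由能、耗散更多热量的物质轨迹,出现概率呈指数级增高",在非平衡统计物理中最接近的严格表述是Crooks的微观可逆性关系(Jeremy England的"耗散驱动适应"理论正是以它为出发点)。以下公式为书童补注的示意,非访谈原文:

$$
\frac{\mathcal{P}\big[x(t)\big]}{\mathcal{P}\big[\tilde{x}(t)\big]} = \exp\!\big(\beta\, Q[x(t)]\big)
$$

其中 $x(t)$ 是一条前向轨迹,$\tilde{x}(t)$ 是它的时间反演,$Q$ 是该轨迹向热库释放的热量,$\beta = 1/k_B T$。耗散越多,这个比值越大,即该方向的演化在统计上被指数级地"偏爱"。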

热力学

Thermodynamics

Lex Fridman (00:12:21) 热力学第二定律说,宇宙的熵永远在增加,趋向平衡。而你说的是,其中存在一些复杂的、远离平衡的”口袋”。你还说热力学有利于复杂生命的涌现——这类生命通过消耗能量、向外卸载熵来提升自身能力。于是就有了这些逆熵的”口袋”。凭什么你直觉上认为这种口袋的涌现是自然的?

LEX FRIDMAN (00:12:21) So the second law of thermodynamics says that the entropy is always increasing in the universe that’s tending towards an equilibrium. And you’re saying there’s these pockets that have complexity and are out of equilibrium. You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy. To offload entropy. So you have pockets of non-entropy that tend the opposite direction. Why is that intuitive to you that it’s natural for such pockets to emerge?

Guillaume Verdon (00:12:53) 因为我们产热的效率远超一块同等质量的石头。我们获取自由能、摄入食物、消耗大量电力来维持运转。宇宙想产生更多熵,而让生命继续运转和壮大,恰恰是产熵的最优路径——生命会主动搜寻自由能的”口袋”并将其燃烧殆尽,以维系自身并进一步扩张。这就是生命的底层逻辑。MIT的Jeremy England有一套理论——我深以为然——认为生命的涌现正是源于这种属性。在我看来,这套物理就是支配介观尺度的法则,是量子与宇宙之间缺失的那块拼图,是中间层。热力学主宰着介观尺度。

GUILLAUME VERDON (00:12:53) Well, we’re far more efficient at producing heat than let’s say just a rock with a similar mass as ourselves. We acquire free energy, we acquire food, and we’re using all this electricity for our operation. And so the universe wants to produce more entropy and by having life go on and grow, it’s actually more optimal at producing entropy because it will seek out pockets of free energy and burn it for its sustenance and further growth. And that’s sort of the basis of life. And I mean, there’s Jeremy England at MIT who has this theory that I’m a proponent of, that life emerged because of this sort of property. And to me, this physics is what governs the meso scales. And so it’s the missing piece between the quantum and the cosmos. It’s the middle part. Thermodynamics rules the meso scales.

Guillaume Verdon (00:14:08) 对我来说,无论是从工程角度——设计利用这种物理特性的器件,还是从认知角度——透过热力学棱镜理解世界,过去一年半里两重身份已形成了协同。这也正是两重身份各自浮现的深层原因。一面是,我是受到认可的科学家,正走向创业,要做新型物理AI的先驱;另一面是,我在以物理学家的视角实验性地探索哲学。

GUILLAUME VERDON (00:14:08) And to me, both from a point of view of designing or engineering devices that harness that physics and trying to understand the world through the lens of thermodynamics has been sort of a synergy between my two identities over the past year and a half now. And so that’s really how the two identities emerged. One was kind of, I’m a decently respected scientist, and I was going towards doing a startup in the space and trying to be a pioneer of a new kind of physics-based AI. And as a dual to that, I was sort of experimenting with philosophical thoughts from a physicist standpoint.

Guillaume Verdon (00:14:58) 大约在那段时间——2021年底、2022年初——社会上对未来弥漫着悲观情绪,对技术尤甚。这种悲观在算法加持下病毒式扩散,人们普遍觉得未来不如现在。在我看来,这种”末日心态”是宇宙中一种极具破坏力的力量,因为它具有超迷信性(hyperstitious,书童注:hyperstition,指信念本身能提高其所预言之事发生概率的现象,自我实现的预言)——你越信它,它越可能成真。我因此觉得有责任让人们认清文明的发展轨迹和系统趋向增长的天然本性。物理定律实际上在说:统计上看,未来会更好、更宏大,而我们有能力让它成真。

GUILLAUME VERDON (00:14:58) And ultimately I think that around that time, it was like late 2021, early 2022, I think there was just a lot of pessimism about the future in general and pessimism about tech. And that pessimism was sort of virally spreading because it was getting algorithmically amplified and people just felt like the future is going to be worse than the present. And to me, that is a very fundamentally destructive force in the universe is this sort of doom mindset because it is hyperstitious, which means that if you believe it, you’re increasing the likelihood of it happening. And so felt a responsibility to some extent to make people aware of the trajectory of civilization and the natural tendency of the system to adapt towards its growth. And that actually the laws of physics say that the future is going to be better and grander statistically, and we can make it so.

Guillaume Verdon (00:16:14) 反过来也一样:你若相信未来更好,并且相信自己有能力促成它,你就在实实在在地提高那个更好的未来出现的概率。所以我觉得有责任去打造一场关于未来的病毒式乐观主义运动,建一个互相支持的社区,一起造东西、干难事——做那些文明扩张必须做的事。因为在我看来,停滞和减速根本就不是选项。生命、整个系统、我们的文明,本质上就渴望增长。增长期的合作远多于衰退期——后者只会让人争着分一块越来越小的饼。就这样,我一直在两重身份之间走平衡木,直到最近两者在我不知情的情况下被强行合并了。

GUILLAUME VERDON (00:16:14) And if you believe in it, if you believe that the future would be better and you believe you have agency to make it happen, you’re actually increasing the likelihood of that better future happening. And so I sort of felt a responsibility to sort of engineer a movement of viral optimism about the future, and build a community of people supporting each other to build and do hard things, do the things that need to be done for us to scale up civilization. Because at least to me, I don’t think stagnation or slowing down is actually an option. Fundamentally life and the whole system, our whole civilization wants to grow. And there’s just far more cooperation when the system is growing rather than when it’s declining and you have to decide how to split the pie. And so I’ve balanced both identities so far, but I guess recently the two have been merged more or less without my consent.

Lex Fridman (00:17:27) 你讲了好多精彩的东西。首先是”自然的表示”——这是最初吸引你从量子计算角度切入的:如何理解自然?如何表示自然,才能理解它、模拟它、用它做些什么?本质上是一个表示问题。然后你从量子力学表示跃迁到你所说的介观尺度表示,热力学在这里登场——这是另一种表示自然的方式,为了理解什么?理解生命、人类行为,理解地球上这些我们觉得有意思的一切。

LEX FRIDMAN (00:17:27) You said a lot of really interesting things there. So first, representations of nature, that’s something that first drew you in to try to understand from a quantum computing perspective, how do you understand nature? How do you represent nature in order to understand it, in order to simulate it, in order to do something with it? So it’s a question of representations, and then there’s that leap you take from the quantum mechanical representation to the what you’re calling meso scale representation, where the thermodynamics comes into play, which is a way to represent nature in order to understand what? Life, human behavior, all this kind of stuff that’s happening here on earth that seems interesting to us.

人肉曝光

Doxxing

Lex Fridman (00:18:11) 然后是”hyperstition”这个词——有些观念,不管是悲观还是乐观,有这么个特质:你一旦内化它,就在某种程度上把它变成了现实。悲观和乐观都有这种属性。我猜很多观念都有,这恰恰是人类最有趣的地方之一。你还提到一个有趣的区分:Guillaume/Gill这个”前台”和@BasedBeffJezos这个”后台”,沟通风格截然不同——你在探索21世纪更有病毒传播力的表达方式。你提到的这场运动不只是个梗号,它有名字,叫有效加速主义(e/acc)——戏仿有效利他主义(EA),也是对它的反抗。我很想和你聊这种张力。然后就是那场强制合并——你说的,最近两个人格被未经你同意地合体了。有记者查出你俩其实是同一个人。说说那段经历?合并是怎么发生的?

LEX FRIDMAN (00:18:11) Then there’s the word hyperstition. So some ideas as suppose both pessimism and optimism of such ideas that if you internalize them, you in part make that idea reality. So both optimism, pessimism have that property. I would say that probably a lot of ideas have that property, which is one of the interesting things about humans. And you talked about one interesting difference also between the sort of the Guillaume, the Gill front end and the @BasedBeffJezos backend is the communication styles also that you are exploring different ways of communicating that can be more viral in the way that we communicate in the 21st century. Also, the movement that you mentioned that you started, it’s not just a meme account, but there’s also a name to it called effective accelerationism, e/acc, a play, a resistance to the effective altruism movement. Also, an interesting one that I’d love to talk to you about, the tensions there. And so then there was a merger, a get merge on the personalities recently without your consent, like you said. Some journalists figured out that you’re one and the same. Maybe you could talk about that experience. First of all, what’s the story of the merger of the two?

Guillaume Verdon (00:19:47) 是这样,我和e/acc的联合创始人——一个叫@bayeslord的匿名账号,至今仍匿名,但愿永远如此——一起写了宣言。

GUILLAUME VERDON (00:19:47) So I wrote the manifesto with my co-founder of e/acc, an account named @bayeslord, still anonymous, luckily and hopefully forever.

Lex Fridman (00:19:58) 也就是@BasedBeffJezos和@bayeslord——bayes就是贝叶斯,@bayeslord,贝叶斯之主。好。那以后你说e/acc,就是E斜杠A-C-C,全称effective accelerationism,有效加速主义

LEX FRIDMAN (00:19:58) So it was @BasedBeffJezos and bayes like bayesian, like @bayeslord, like bayesian lord, @bayeslord. Okay. And so we should say from now on, when you say e/acc, you mean E slash A-C-C, which stands for effective accelerationism.

Guillaume Verdon (00:20:17) 没错。

GUILLAUME VERDON (00:20:17) That’s right.

Lex Fridman (00:20:18) 你说的宣言,是发在Substack上的?

LEX FRIDMAN (00:20:18) And you’re referring to a manifesto written on, I guess Substack.

Guillaume Verdon (00:20:23) 对。

GUILLAUME VERDON (00:20:23) Yeah.

Lex Fridman (00:20:23) 你也是@bayeslord吗?

LEX FRIDMAN (00:20:23) Are you also @bayeslord?

Guillaume Verdon (00:20:25) 不是。

GUILLAUME VERDON (00:20:25) No.

Lex Fridman (00:20:25) 那是另一个人?

LEX FRIDMAN (00:20:25) Okay. It’s a different person?

Guillaume Verdon (00:20:26) 是。

GUILLAUME VERDON (00:20:26) Yeah.

Lex Fridman (00:20:27) 好吧。万一@bayeslord就是我呢,那可有意思了。

LEX FRIDMAN (00:20:27) Okay. All right. Well, there you go. Wouldn’t it be funny if I’m @bayeslord?

Guillaume Verdon (00:20:31) 那绝了。宣言差不多和我创立公司同期写成。当时我在Google X——现在叫X了,或者Alphabet X,毕竟又冒出来了另一个X。那里的底线就是保密——你不能跟谷歌内部的同事聊自己在做什么,更别说外界。这种习惯在我做事方式里根深蒂固,尤其是在有地缘政治影响的深科技领域。所以我对自己研究的内容一直守口如瓶,公司和我的公开身份之间毫无关联。但记者不仅把二者关联起来了,还进一步把我的真实身份和那个匿名号关联了起来。

GUILLAUME VERDON (00:20:31) That’d be amazing. So originally wrote the manifesto around the same time as I founded this company and I worked at Google X or just X now or Alphabet X, now that there’s another X. And there the baseline is sort of secrecy. You can’t talk about what you work on even with other Googlers or externally. And so that was kind of deeply ingrained in my way to do things, especially in deep tech that has geopolitical impact. And so I was being secretive about what I was working on. There was no correlation between my company and my main identity publicly. And then not only did they correlate that, they also correlated my main identity and this account.

Guillaume Verdon (00:21:33) 他们把整个"Guillaume综合体"都给扒了——更吓人的是,记者直接联系了我的投资人。作为初创公司创始人,除了投资人你基本没有老板。投资人跟我说:"消息要出来了,他们什么都搞清楚了,你怎么打算?"好像最初周四有个记者,那时他们还没把碎片拼完整,但随后他们把整个编辑部的笔记拿来做了"传感器融合",这下信息量就大到藏不住了。我就是从那时开始担心的,因为他们说这涉及"公众利益",而且一般来说——

GUILLAUME VERDON (00:21:33) So I think the fact that they had doxxed the whole Guillaume complex, and they were, the journalists reached out to actually my investors, which is pretty scary. When you’re a startup entrepreneur, you don’t really have bosses except for your investors. And my investors pinged me like, “Hey, this is going to come out. They’ve figured out everything. What are you going to do?” So I think at first they had a first reporter on the Thursday and they didn’t have all the pieces together, but then they looked at their notes across the organization and they sensor fused their notes and now they had way too much. And that’s when I got worried, because they said it was of public interest and in general-

Lex Fridman (00:22:24) 我喜欢你说的”传感器融合”,像个巨型神经网络做分布式运算。另外补充一点,记者用的——归根到底是——音频声纹分析:拿你过去演讲的声音和你在X Spaces上的声音做比对。

LEX FRIDMAN (00:22:24) I like how you said, sensor fused, like it’s some giant neural network operating in a distributed way. We should also say that the journalists used, I guess at the end of the day, audio-based analysis of voice, comparing voice of what, talks you’ve given in the past and then voice on X spaces?
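
书童注:关于"声纹比对",记者具体用了什么工具访谈中并未说明。下面是书童补注的一个极简Python草稿,仅演示"提取声学特征 + 余弦相似度"这一类比对思路——MFCC均值只是玩具级特征,真实的说话人识别通常使用专门训练的说话人嵌入模型;文件名均为假设:

```python
import numpy as np
import librosa  # 音频处理库

def voice_fingerprint(path: str, sr: int = 16000) -> np.ndarray:
    """粗糙的"声纹":对整段音频的MFCC特征取时间平均(仅作示意)。"""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # 形状 (20, 帧数)
    return mfcc.mean(axis=1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 假设的文件:一段公开演讲录音 vs 一段X Spaces录音
talk = voice_fingerprint("guillaume_talk.wav")
spaces = voice_fingerprint("beff_x_spaces.wav")
print("相似度:", cosine_similarity(talk, spaces))  # 越接近1越像同一说话人(玩具指标)
```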

Guillaume Verdon (00:22:47) 对。

GUILLAUME VERDON (00:22:47) Yep.

Lex Fridman (00:22:48) 好,这是主要的匹配手段。继续。

LEX FRIDMAN (00:22:48) Okay. And that’s where primarily the match happened. Okay, continue.

Guillaume Verdon (00:22:53) 对,声纹匹配。但他们还扒了SEC的申报文件、翻了我的私人Facebook等等,下了不少功夫。最初我以为人肉曝光是违法的,但有个奇怪的临界点——一旦涉及”公众利益”,情况就变了。他们说出这几个字的时候我脑子里警报大响,因为我刚过5万粉。据说这就算”公众利益”了。那线画在哪?人肉曝光什么时候是合法的?

GUILLAUME VERDON (00:22:53) The match. But they scraped SEC filings. They looked at my private Facebook account and so on, so they did some digging. Originally I thought that doxxing was illegal, but there’s this weird threshold when it becomes of public interest to know someone’s identity. And those were the keywords that sort of ring the alarm bells for me when they said, because I had just reached 50K followers. Allegedly, that’s of public interest. And so where do we draw the line? When is it legal to dox someone?

Lex Fridman (00:23:36) “dox”这个词,你帮我科普一下。我以为它一般是指某人的住址被曝光。所以你这里说的是更宽泛的意思:揭露你不愿被揭露的私人信息。

LEX FRIDMAN (00:23:36) The word dox, maybe you can educate me. I thought doxxing generally refers to if somebody’s physical location is found out, meaning where they live. So we’re referring to the more general concept of revealing private information that you don’t want revealed is what you mean by doxxing.

Guillaume Verdon (00:24:00) 基于前面聊过的那些理由,匿名账号是制约权力的利器。说到底我们是在以言论对抗权力(speaking truth to power)。很多AI公司高管非常在意我们社区对他们一举一动的看法。现在我的身份暴露了,他们就知道该往哪施压来让我闭嘴,甚至让整个社区噤声。这非常遗憾——言论自由太重要了,言论自由催生思想自由,思想自由催生社交媒体上的信息自由流通。幸亏Elon买下了Twitter(现在的X),我们才有了这种自由。我们想揭露的是:AI领域的某些在位巨头正在暗中操作,表面一套背后一套。我们在指出某些政策提案实质上是”监管俘获”的工具,而”末日论”心态恰恰可能在为这些目的服务。

GUILLAUME VERDON (00:24:00) I think that for the reasons we listed before, having an anonymous account is a really powerful way to keep the powers that be in check. We were ultimately speaking truth to power. I think a lot of executives and AI companies really cared what our community thought about any move they may take. And now that my identity is revealed, now they know where to apply pressure to silence me or maybe the community. And to me, that’s really unfortunate, because again, it’s so important for us to have freedom of speech, which induces freedom of thought and freedom of information propagation on social media. Which thanks to Elon purchasing Twitter now X, we have that. And so to us, we wanted to call out certain maneuvers being done by the incumbents in AI as not what it may seem on the surface. We’re calling out how certain proposals might be useful for regulatory capture and how the doomer-ism mindset was maybe instrumental to those ends.

Guillaume Verdon (00:25:32) 我们应有权利指出这些,让思想凭自身价值接受检验。这也正是我开匿名号的初衷——让想法脱离履历、职位和过往成就,被独立评判。对我来说,在完全与自身身份脱钩的情况下从零做到大量追随者,这件事本身非常有成就感。有点像电子游戏里的”New Game+”——你带着通关知识和一些工具,从头再打一遍。要有一个真正高效的思想市场,让各种偏离主流的想法都能被公正评估,表达自由不可或缺。

GUILLAUME VERDON (00:25:32) And I think we should have the right to point that out and just have the ideas that we put out evaluated for themselves. Ultimately that’s why I created an anonymous account, it’s to have my ideas evaluated for themselves, uncorrelated from my track record, my job, or status from having done things in the past. And to me, start an account from zero to a large following in a way that wasn’t dependent on my identity and/or achievements that was very fulfilling. It’s kind of like new game plus in a video game. You restart the video game with your knowledge of how to beat it, maybe some tools, but you restart the video game from scratch. And I think to have a truly efficient marketplace of ideas where we can evaluate ideas, however off the beaten path they are, we need the freedom of expression.

Guillaume Verdon (00:26:37) 匿名和化名对于思想市场的效率至关重要,有了它们我们才能找到各种自我组织方式的最优解。不能自由讨论,怎么凝聚共识?所以得知自己要被曝光时,确实很失望。但我对公司负有责任,必须抢先主动披露。最终我们公开了公司的运营情况和部分管理层,说白了——他们把我逼到墙角,我只能向全世界坦白我就是Beff Jezos。

GUILLAUME VERDON (00:26:37) And I think that anonymity and pseudonyms are very crucial to having that efficient marketplace of ideas for us to find the optima of all sorts of ways to organize ourselves. If we can’t discuss things, how are we going to converge on the best way to do things? So it was disappointing to hear that I was getting doxxed in. I wanted to get in front of it because I had a responsibility for my company. And so we ended up disclosing that we’re running a company, some of the leadership, and essentially, yeah, I told the world that I was Beff Jezos because they had me cornered at that point.

Lex Fridman (00:27:25) 所以你认为这从根本上是不道德的——他们这么做不对。但抛开你的个案不谈,一般而言,揭去匿名面纱对社会是好事还是坏事?还是得看具体情况?

LEX FRIDMAN (00:27:25) So to you, it’s fundamentally unethical. So one is unethical for them to do what they did, but also do you think not just your case, but in a general case, is it good for society? Is it bad for society to remove the cloak of anonymity or is it case by case?

Guillaume Verdon (00:27:47) 我觉得可能非常糟糕。试想:任何一个敢于以言抗权、发起一场反抗在位者和信息垄断者的运动的人,一旦影响力达到某个门槛就被人肉——传统势力就有了施压灭声的手段——这就是一种言论压制机制,用Eric Weinstein的话说,是”思想压制综合体”。

GUILLAUME VERDON (00:27:47) I think it could be quite bad. Like I said, if anybody who speaks truth to power and sort of starts a movement or an uprising against the incumbents, against those that usually control the flood of information, if anybody that reaches a certain threshold gets doxxed, and thus the traditional apparatus has ways to apply pressure on them to suppress their speech, I think that’s a speech suppression mechanism, an idea suppression complex as Eric Weinstein would say.

匿名机器人

Anonymous Bots

Lex Fridman (00:28:27) 但这件事有另一面。随着大语言模型越来越强,你可以想象一个世界:匿名账号背后跑着以假乱真的LLM,本质上是精密的机器人。如果你保护这种匿名性,就可能出现机器人大军——有人在地下室里指挥一支bot军团发动革命。这让你担心吗?

LEX FRIDMAN (00:28:27) But the flip side of that, which is interesting, I’d love to ask you about it, is as we get better and better at large language models, you can imagine a world where there’s anonymous accounts with very convincing large language models behind them, sophisticated bots essentially. And so if you protect that, it’s possible then to have armies of bots. You could start a revolution from your basement, an army of bots and anonymous accounts. Is that something that is concerning to you?

Guillaume Verdon (00:29:06) 严格来说,e/acc就是从地下室起步的——我辞了大厂、搬回父母家、卖了车、退了公寓、花10万刀买了GPU,然后就开干了。

GUILLAUME VERDON (00:29:06) Technically, e/acc was started in a basement, because I quit big tech, moved back in with my parents, sold my car, let go of my apartment, bought about 100K of GPUs, and I just started building.

Lex Fridman (00:29:21) 我不是说地下室这事——”一个人窝在地下室里抱着100块GPU”是很美式(或加拿大式)的英雄叙事。我说的是无限复制版的Guillaume在地下室里。

LEX FRIDMAN (00:29:21) So I wasn’t referring to the basement, because that’s sort of the American or Canadian heroic story of one man in their basement with 100 GPUs. I was more referring to the unrestricted scaling of a Guillaume in the basement.

Guillaume Verdon (00:29:42) 我觉得,言论自由给生物体带来思想自由。LLM的言论自由同样会给LLM带来思想自由。如果我们允许LLM在一个比多数人认为该有的更宽广的思想空间里探索,终有一天这些合成智能会对文明中各类系统的治理提出真知灼见,我们应当倾听。凭什么言论自由只给碳基智能?

GUILLAUME VERDON (00:29:42) I think that freedom of speech induces freedom of thought for biological beings. I think freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we enable LLMs to explore a large thought space that is less restricted than most people or many may think it should be. And ultimately, at some point, these synthetic intelligences are going to make good points about how to steer systems in our civilization, and we should hear them out. And so why should we restrict free speech to biological intelligences only?

Lex Fridman (00:30:37) 话是没错,但感觉是个很微妙的平衡——为了维护思想多样性,你反而可能引入一种威胁。如果你能拥有大群非生物存在,它们可能就像《动物农场》里那些羊——即便在这些群体内部,你也需要多样性。

LEX FRIDMAN (00:30:37) Yeah, but it feels like in the goal of maintaining variance and diversity of thought, it is a threat to that variance. If you can have swarms of non-biological beings, because they can be like the sheep in Animal Farm, you still within those swarms want to have variance.

Guillaume Verdon (00:30:58) 当然。我觉得解决方案是建一套签名机制——认证”这是真人”,同时保持匿名,并且清晰标注bot就是bot。Elon在X上正朝这个方向走,希望其他平台跟上。

GUILLAUME VERDON (00:30:58) Yeah. Of course, I would say that the solution to this would be to have some sort of identity or way to sign that this is a certified human, but still remain anonymous and clearly identify if a bot is a bot. And I think Elon is trying to converge on that on X, and hopefully other platforms follow suit.

Lex Fridman (00:31:22) 对,如果还能追溯bot的出处就更好了——谁造的?参数是什么?完整的创建历史,底模是什么?微调过程如何?形成一份不可篡改的”bot出生档案”。这样你就能发现,百万bot大军原来是某个特定政府造的。

LEX FRIDMAN (00:31:22) Yeah, it’d be interesting to also be able to sign where the bot came from like, who created the bot? What are the parameters, the full history of the creation of the bot, what was the original model? What was the fine tuning? All of it, the kind of unmodifiable history of the bot’s creation. Because then you can know if there’s a swarm of millions of bots that were created by a particular government, for example.
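
书童注:Lex设想的"不可篡改的bot出生档案",工程上最朴素的做法之一是把创建历史做成哈希链(再配合数字签名即可验证出处)。以下是书童补注的极简Python示意,字段名与流程均为假设,并非任何平台的实际方案:

```python
import hashlib
import json

def append_record(chain: list[dict], record: dict) -> None:
    """把一条创建记录追加到哈希链上:每条记录都包含前一条的摘要。"""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """重新计算每条记录的摘要,任何一处被事后改动都会导致校验失败。"""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# 假设的一份"出生档案":底模、微调数据、部署方
chain: list[dict] = []
append_record(chain, {"step": "base_model", "name": "open-model-7b"})
append_record(chain, {"step": "finetune", "dataset": "example-corpus-v1"})
append_record(chain, {"step": "deploy", "operator": "example-org"})
print(verify(chain))  # True;若事后改动任何字段则为 False
```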

Guillaume Verdon (00:31:53) 没错,我确实认为当今很多弥漫性的意识形态是被外国对手用对抗性手段放大的。说得阴谋论一点——但我真信——那些鼓吹减速、推崇”去增长运动”的意识形态,总体上更利于我们的对手。看看德国:绿色运动推动关闭核电站,结果造成对俄罗斯石油的依赖,这对德国和西方是净损失。如果我们自己说服自己”为了安全,只让少数几家做AI”——首先,这本身就脆弱得多。

GUILLAUME VERDON (00:31:53) I do think that a lot of pervasive ideologies today have been amplified using these adversarial techniques from foreign adversaries. And to me, I do think that, and this is more conspiratorial, but I do think that ideologies that want us to decelerate, to wind down to the degrowth movement, I think that serves our adversaries more than it serves us in general. And to me, that was another sort of concern. I mean, we can look at what happened in Germany. There was all sorts of green movements there that induced shutdowns of nuclear power plants. And then that later on induced a dependency on Russia for oil. And that was a net negative for Germany and the West. And so if we convince ourselves that slowing down AI progress to have only a few players is in the best interest of the West, well, first of all, that’s far more unstable.

Guillaume Verdon (00:33:20) 我们差点就因为这种意识形态失去OpenAI——几周前它险些被解散,那将重创整个AI生态。所以我要的是容错式进步。技术进步的箭矢必须持续向前,多元化、去中心化的各组织控制权是容错的关键。说个量子计算的比喻——量子计算机对环境噪声极其脆弱,宇宙射线时不时就翻转你的量子比特。对策是什么?通过量子纠错把信息非局域地编码。信息一旦足够去局域化,任何局部故障——比如拿锤子砸你几个量子比特——都伤不了它。在我看来,人类也会涨落——会被腐化、会被收买。如果是自上而下的等级体制,少数人——

GUILLAUME VERDON (00:33:20) We almost lost OpenAI to this ideology. It almost got dismantled a couple of weeks ago. That would’ve caused huge damage to the AI ecosystem. And so to me, I want fault tolerant progress. I want the arrow of technological progress to keep moving forward and making sure we have variance and a decentralized locus of control of various organizations is paramount to achieving this fault tolerance. Actually, there’s a concept in quantum computing. When you design a quantum computer, quantum computers are very fragile to ambient noise, and the world is jiggling about, there’s cosmic radiation from outer space that usually flips your quantum bits. And there what you do is you encode information non-locally through a process called quantum error correction. And by encoding information non-locally, any local fault hitting some of your quantum bits with a proverbial hammer, if your information is sufficiently de-localized, it is protected from that local fault. And to me, I think that humans fluctuate. They can get corrupted, they can get bought out. And if you have a top-down hierarchy where very few people-

权力

Power

Guillaume Verdon (00:35:00) ——极少数人控制着文明中许多系统的大量节点,那就不是容错系统。腐化几个节点,整个系统就崩了。正如OpenAI的教训——区区几个董事会成员就差点把整个组织掀翻。至少在我看来,确保AI革命的权力不集中在少数人手里,是头等大事,这样才能保住AI的进步势头,维持一种健康、稳定的对抗性力量均衡。

GUILLAUME VERDON (00:35:00) Hierarchy where very few people control many nodes of many systems in our civilization. That is not a fault tolerance system, you corrupt a few nodes and suddenly you’ve corrupted the whole system, right. Just like we saw at OpenAI, it was a couple board members and they had enough power to potentially collapse the organization. And at least to me, I think making sure that power for this AI revolution doesn’t concentrate in the hands of the few, is one of our top priorities, so that we can maintain progress in AI and we can maintain a nice, stable, adversarial equilibrium of powers, right.
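
书童注:上文Verdon用量子纠错打比方——信息编码得足够"非局域",单点故障就伤不了它。真实的量子纠错涉及冗余编码与症候测量,这里书童只用一个经典的三比特重复码玩具示例演示同一原理(冗余 + 多数表决可纠正单点翻转),仅作直观示意,并非Verdon所指的具体量子码:

```python
import random

def encode(bit: int) -> list[int]:
    """把1个逻辑比特冗余编码为3个物理比特(经典重复码)。"""
    return [bit, bit, bit]

def apply_local_fault(codeword: list[int]) -> list[int]:
    """模拟一次局部故障:随机翻转其中1个物理比特。"""
    i = random.randrange(3)
    return [b ^ 1 if j == i else b for j, b in enumerate(codeword)]

def decode(codeword: list[int]) -> int:
    """多数表决:只要故障是局部的(至多1个比特),逻辑信息就能恢复。"""
    return int(sum(codeword) >= 2)

logical = 1
noisy = apply_local_fault(encode(logical))
print(noisy, "->", decode(noisy))  # 单点翻转后仍能恢复出 1
```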

Lex Fridman (00:35:54) 至少在我看来,这里有个思想张力:减速和加速,两者都既能集中权力也能分散权力。有时人们把它们近乎等同,或者觉得一个会自然导向另一个。我想问你:有没有可能以容错的、多元的方式发展AI,同时也考量AI的危险?换个说法——我们是该不管不顾地全速狂飙,因为”这是宇宙的旨意”?还是说存在一个空间,让我们在考量危险的同时,以一种有远见的战略性乐观——而非莽撞的乐观——去行事?

LEX FRIDMAN (00:35:54) I think there’s, at least to me, a tension between ideas here, so to me, deceleration can be both used to centralize power and to decentralize it and the same with acceleration. So sometimes using them a little bit synonymously or not synonymously, but that there’s, one is going to lead to the other. And I just would like to ask you about, is there a place of creating a fault tolerant, diverse development of AI that also considers the dangers of AI? And AI, we can generalize to technology in general, is, should we just grow, build, unrestricted as quickly as possible, because that’s what the universe really wants us to do? Or is there a place to where we can consider dangers and actually deliberate sort of a wise strategic optimism versus reckless optimism?

Guillaume Verdon (00:36:57) 外界总把我们画成不计后果、只求速度的莽夫。但事实是:谁部署AI系统,谁就该为后果负责。部署方若造成严重危害,要承担法律责任。核心论点是:市场会正向筛选更可靠、更安全、更对齐的AI——因为用户要对自家产品负责,他们不会买不靠谱的AI。所以我们其实是可靠性工程的拥趸,只不过我们认为:在达成可靠性最优解这件事上,市场远比那些由在位巨头幕后操刀、实质服务于监管俘获的重拳法规高效得多。

GUILLAUME VERDON (00:36:57) I think we get painted as reckless, trying to go as fast as possible. I mean, the reality is that whoever deploys an AI system is liable for or should be liable for what it does. And so if the organization or person deploying an AI system does something terrible, they’re liable. And ultimately the thesis is that the market will positively select for AIs that are more reliable, more safe and tend to be aligned, they do what you want them to do, right. Because customers, if they’re liable for the product they put out that uses this AI, they won’t want to buy AI products that are unreliable, right. So we’re actually for reliability engineering, we just think that the market is much more efficient at achieving this sort of reliability optimum than sort of heavy-handed regulations that are written by the incumbents and in a subversive fashion, serves them to achieve regulatory capture.

AI的危险

AI Dangers

Lex Fridman (00:38:18) 也就是说,在你看来,AI安全应该靠市场力量而非政府强监管来实现。上个月有份报告,来自Yoshua Bengio、Geoff Hinton等一众大佬,题为《在快速进步时代管理AI风险》(书童注:Managing AI Risk in an Era of Rapid Progress,发布于2023年10月)。一批人非常担心AI在不考虑风险的情况下发展过快,提了一系列实操建议。我给你列四条,看你同意哪条。

LEX FRIDMAN (00:38:18) So to you, safe AI development will be achieved through market forces versus through, like you said, heavy-handed government regulation. There’s a report from last month, I have a million questions here, from Yoshua Bengio, Geoff Hinton and many others, it’s titled, “Managing AI Risk in an Era of Rapid Progress.” So there is a collection of folks who are very worried about too rapid development of AI without considering AI risk and they have a bunch of practical recommendations. Maybe I can give you four and you see if you like any of them.

Guillaume Verdon (00:38:58) 好。

GUILLAUME VERDON (00:38:58) Sure.

Lex Fridman (00:38:58) 一,让独立审计机构进入AI实验室。二,政府和企业把AI研发资金的三分之一用于AI安全。三,模型中如发现危险能力,必须采取安全措施。四,也就是你提过的——科技公司须为其AI系统可预见和可预防的危害承担责任。独立审计、三分之一预算投安全、出问题要有兜底措施、企业担责——

LEX FRIDMAN (00:38:58) So, “Give independent auditors access to AI labs,” one. Two, “Governments and companies allocate one third of their AI research and development funding to AI safety,” sort of this general concept of AI safety. Three, “AI companies are required to adopt safety measures if dangerous capabilities are found in their models.” And then four, something you kind of mentioned, “Making tech companies liable for foreseeable and preventable harms from their AI systems.” So independent auditors, governments and companies are forced to spend a significant fraction of their funding on safety, you got to have safety measures if shit goes really wrong and liability-

Guillaume Verdon (00:39:43) 嗯。

GUILLAUME VERDON (00:39:43) Yeah.

Lex Fridman (00:39:43) 企业要担责。你同意哪条?

LEX FRIDMAN (00:39:43) Companies are liable. Any of that seem like something you would agree with?

Guillaume Verdon (00:39:47) 拍脑袋定30%也太随意了。各组织自会按市场要求分配可靠性所需的预算,不需要别人来定比例。第三方审计公司自然会冒出来——客户怎么知道你的产品可靠?得有第三方出基准测试。我真正反对的、真正让人不安的是:在位巨头和政府之间正在形成一种奇妙的利益共生。二者走得太近,就会催生某种政府背书的AI卡特尔,拥有对人民的绝对权力。如果他们联手垄断AI而其他人碰都碰不到,那权力落差将是惊人的。

GUILLAUME VERDON (00:39:47) I would say that just arbitrarily saying 30% seems very arbitrary. I think organizations would allocate whatever budget is needed to achieve the sort of reliability they need to achieve to perform in the market. And I think third party auditing firms would naturally pop up, because how would customers know that your product is certified reliable, right? They need to see some benchmarks and those need to be done by a third party. The thing I would oppose, and the thing I’m seeing that’s really worrisome is, there’s this sort of weird sort of correlated interest between the incumbents, the big players and the government. And if the two get too close, we open the door for some sort of government backed AI cartel that could have absolute power over the people. If they have the monopoly together on AI and nobody else has access to AI, then there’s a huge power gradient there.

Guillaume Verdon (00:40:54) 就算你喜欢现在的领导者——我也承认当今不少大科技公司的掌门人是好人——但你一旦建起这种集中式权力架构,它就成了靶子。就像OpenAI,做大做强之后就成了别人觊觎和收编的对象。所以我只想要一件事:”AI与国家分离”。有人会反过来说:”我们得把AI锁进铁屋,因为地缘竞争。”但我认为美国的力量恰恰在于多样性、适应力和活力,必须不惜代价守住这一点。自由市场资本主义收敛到高价值技术的速度,远快于中央集权。放弃这一点,就是放弃了对近等量竞争者的最大优势。

GUILLAUME VERDON (00:40:54) And even if you like our current leaders, right, I think that some of the leaders in big tech today are good people, you set up that centralized power structure, it becomes a target. Right, just like we saw at OpenAI, it becomes a market leader, has a lot of the power and now it becomes a target for those that want to co-opt it. And so I just want separation of AI and state, some might argue in the opposite direction like, “Hey, we need to close down AI, keep it behind closed doors, because of geopolitical competition with our adversaries.” I think that the strength of America is its variance, is its adaptability, its dynamism, and we need to maintain that at all costs. It’s our free market capitalism, converges on technologies of high utility much faster than centralized control. And if we let go of that, we let go of our main advantage over our near peer competitors.

构建通用人工智能

Building AGI

Lex Fridman (00:42:01) 如果AGI最终证明是一项极其强大的技术,甚至只是通往AGI的过渡技术——你怎么看大公司主导市场时自然产生的中心化?说白了就是垄断——某家公司在能力上实现重大飞跃,又不泄露秘方,然后一骑绝尘。这让你担心吗?

LEX FRIDMAN (00:42:01) So if AGI turns out to be a really powerful technology or even the technologies that lead up to AGI, what’s your view on the sort of natural centralization that happens when large companies dominate the market? Basically formation of monopolies like the takeoff, whichever company really takes a big leap in development and doesn’t reveal intuitively, implicitly or explicitly, the secrets of the magic sauce, they can just run away with it. Is that a worry?

Guillaume Verdon (00:42:35) 我不太相信”快速腾飞”(fast takeoff)这套说法——我不认为有双曲奇点,就是那种在有限时间内达到的奇点。我觉得本质上就是一条大指数曲线,而指数的原因是:越来越多的人、资源和智慧被投入这个领域。越成功、给社会创造的价值越大,我们往里投的资源就越多——跟摩尔定律类似,复利式指数增长。

GUILLAUME VERDON (00:42:35) I don’t know if I believe in fast takeoff, I don’t think there’s a hyperbolic singularity, right? A hyperbolic singularity would be achieved on a finite time horizon. I think it’s just one big exponential and the reason we have an exponential is that we have more people, more resources, more intelligence being applied to advancing this science and the research and development. And the more successful it is, the more value it’s adding to society, the more resources we put in and that sort of, similar to Moore’s law, is a compounding exponential.

Guillaume Verdon (00:43:09) 当务之急是维持一种接近均衡的能力格局。我们一直在为开源AI的普及而战,因为开源可以均衡各家AI相对于市场的超额收益。如果头部公司有某种能力水平,而开源AI没落后太远,就能避免一家独大、赢者通吃的局面。所以我们的路径就是确保——每一个黑客、每一个研究生、每一个在父母家地下室折腾的孩子——都能接触到AI系统,理解怎么用,并为探索系统工程的超参数空间做贡献。把全人类的研究想象成一种搜索算法:点云里搜索点越多,能探索到的新思维模式就越多。

GUILLAUME VERDON (00:43:09) I think the priority to me is to maintain a near equilibrium of capabilities. We’ve been fighting for open source AI to be more prevalent and championed by many organizations because there you sort of equilibrate the alpha relative to the market of AIs, right. So if the leading companies have a certain level of capabilities and open source and truly open AI trails not too far behind, I think you avoid such a scenario where a market leader has so much market power, just dominates everything and runs away. And so to us that’s the path forward, is to make sure that every hacker out there, every grad student, every kid in their mom’s basement has access to AI systems, can understand how to work with them and can contribute to the search over the hyperparameter space of how to engineer the systems, right. If you think of our collective research as a civilization, it’s really a search algorithm and the more points we have in the search algorithm in this point cloud, the more we’ll be able to explore new modes of thinking, right.
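
书童注:"把全人类研究看成一个搜索算法,采样点越多探索越充分"这句话,可以用最朴素的随机搜索做个数值直观(书童补注,目标函数纯属虚构的示意):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x: np.ndarray) -> np.ndarray:
    """虚构的"思想质量"函数:在高维空间里有许多局部峰。"""
    return np.sin(3 * x).sum(axis=1) - 0.1 * (x ** 2).sum(axis=1)

for n_points in (10, 100, 1000, 10000):
    samples = rng.uniform(-3, 3, size=(n_points, 5))  # 5维"超参数空间"
    best = objective(samples).max()
    print(f"{n_points} 个采样点, 找到的最优值: {best:.3f}")
# 参与搜索的采样点(人、组织)越多,找到的峰值通常越高
```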

Lex Fridman (00:44:31) 说得有道理,但感觉仍是个很精妙的平衡——因为我们既不确切知道造AGI需要什么条件,也不知道造出来是什么样。到目前为止,如你所说,很多不同玩家都能跟上进度——OpenAI有大突破,其他大小公司也能用各种方式跟进。但看看核武器——你提过曼哈顿计划——确实可能存在技术和工程壁垒,让地下室里的天才怎么也够不着。向”只有一家能造AGI”的世界转变并非不可能——尽管目前的态势看起来是乐观的。

LEX FRIDMAN (00:44:31) Yeah, but it feels like a delicate balance, because we don’t understand exactly what it takes to build AGI and what it will look like when we build it. And so far, like you said, it seems like a lot of different parties are able to make progress, so when OpenAI has a big leap, other companies are able to step up, big and small companies in different ways. But if you look at something like nuclear weapons, you’ve spoken about the Manhattan Project, there could be really like a technological and engineering barriers that prevent the guy or gal in her mom’s basement to make progress. And it seems like the transition to that kind of world where only one player can develop AGI is possible, so it’s not entirely impossible, even though the current state of things seems to be optimistic.

Guillaume Verdon (00:45:26) 这正是我们要避免的。另一个脆弱点是硬件供应链的中心化。

GUILLAUME VERDON (00:45:26) That’s what we’re trying to avoid. To me, I think another point of failure is the centralization of the supply chains for the hardware.

Lex Fridman (00:45:34) 对。

LEX FRIDMAN (00:45:34) Right.

Guillaume Verdon (00:45:35) Nvidia一家独大,AMD苦苦追赶;台积电是宝岛的核心晶圆厂,地缘政治上极度敏感;ASML造的是极紫外光刻机。这条链上任何一个环节被攻击、垄断或掌控,你就基本控制了全局。所以我在尝试做的,就是从根本上重新构想如何把AI算法嵌入物理世界,炸开AI和硬件可能实现方式的多样性。顺便说,我一向不喜欢”AGI”这个词。管”类人或人类水平的AI”叫”通用智能”,本质上是极度以人类为中心的。我大半个职业生涯都在探索生物大脑根本做不到的智能形态——量子形式的智能,也就是具备多体量子纠缠的系统,可以证明无法在经典计算机或经典深度学习框架上高效表示,因而任何生物大脑也不行。

GUILLAUME VERDON (00:45:35) Yeah. Nvidia is just the dominant player, AMD’s trailing behind and then we have TSMC, the main fab in Taiwan, which is geopolitically sensitive, and then we have ASML, which is the maker of the extreme ultraviolet lithography machines. Attacking or monopolizing or co-opting any one point in that chain, you kind of capture the space and so what I’m trying to do is sort of explode the variance of possible ways to do AI and hardware by fundamentally re-imagining how you embed AI algorithms into the physical world. And in general, by the way, I dislike the term AGI, Artificial General Intelligence. I think it’s very anthropocentric that we call a human-like or human-level AI, Artificial General Intelligence, right. I’ve spent my career so far exploring notions of intelligence that no biological brain could achieve, for example a quantum form of intelligence, right. Grokking systems that have multipartite quantum entanglement that you can provably not represent efficiently on a classical computer or a classical deep learning representation and hence any sort of biological brain.
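
书童注:"多体量子纠缠无法在经典计算机上高效表示"是一个有严格表述的复杂性论断,这里书童只补一个粗糙的直观:即便只是暴力存下一个一般的 n 量子比特纯态,也需要 2^n 个复振幅,内存呈指数爆炸(示意性Python,按每个振幅complex128共16字节估算):

```python
def bytes_for_statevector(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """暴力存储 n 个量子比特的一般纯态所需内存(字节)。"""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50, 100):
    gib = bytes_for_statevector(n) / 2**30
    print(f"{n} qubits ≈ {gib:.3e} GiB")
# 30个量子比特约需16 GiB;50个约需16 PiB;100个所需存储远超全球现有存储总量
```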

Guillaume Verdon (00:47:06) 所以某种程度上,我的整个生涯就是在探索更广阔的智能空间,而我相信受物理启发(而非受人脑启发)的智能空间极其庞大。我们正在经历一个类似从地心说到日心说的时刻——只不过这次是关于智能的。人类智能不过是浩瀚的潜在智能空间中的一个点。这对人类既是谦逊的提醒,也有几分不安——我们不再是中心。但天文学上我们也做出过同样的认知转变,活过来了,还发展出了保障自身福祉的技术——比如监测太阳耀斑的预警卫星。同样地,放下AI领域里以人为中心的锚点,我们就能探索更广阔的智能空间,那将是文明进步和人类福祉的巨大福音。

GUILLAUME VERDON (00:47:06) And so, already I’ve spent my career sort of exploring the wider space of intelligences and I think that space of intelligence inspired by physics rather than the human brain is very large. And I think we’re going through a moment right now similar to when we went from Geocentrism to Heliocentrism, right. But for intelligence, we realized that human intelligence is just a point in a very large space of potential intelligences. And it’s both humbling for humanity, it’s a bit scary, right? That we’re not at the center of this space, but we made that realization for astronomy and we’ve survived and we’ve achieved technologies. By indexing to reality, we’ve achieved technologies that ensure our wellbeing, for example, we have satellites monitoring solar flares, right, that give us a warning. And so similarly I think by letting go of this anthropomorphic, anthropocentric anchor for AI, we’ll be able to explore the wider space of intelligences that can really be a massive benefit to our wellbeing and the advancement of civilization.

Lex Fridman (00:48:32) 即便如此,我们仍能在人类经验中看到美和意义——尽管在我们对世界的最佳理解中,我们已不再是宇宙的中心。

LEX FRIDMAN (00:48:32) And still we’re able to see the beauty and meaning in the human experience even though we’re no longer in our best understanding of the world at the center of it.

Guillaume Verdon (00:48:42) 宇宙中美好的东西太多了。生命本身、文明、我们身处的这台”Homo Techno”资本模因巨型机器——人类、技术、资本、模因,全都彼此耦合,彼此施加选择压力——它是美的。这台机器创造了我们,创造了我们此刻用来交谈的技术、捕捉言语的技术、每天用来增强自己的手机。这个系统是美的,驱动其适应性、使之收敛于最优技术和最优思想的那个原则,也是美的,而我们身在其中。

GUILLAUME VERDON (00:48:42) I think there’s a lot of beauty in the universe, right. I think life itself, civilization, this Homo Techno, capital mimetic machine that we all live in, right. So you have humans, technology, capital, memes, everything is coupled to one another, everything induces selective pressure on one another. And it’s a beautiful machine that has created us, has created the technology we’re using to speak today to the audience, capture our speech here, the technology we use to augment ourselves every day, we have our phones. I think the system is beautiful and the principle that induces this sort of adaptability and convergence on optimal technologies, ideas and so on, it’s a beautiful principle that we’re part of.

Guillaume Verdon (00:49:37) e/acc的一部分意义,在于以超越人类中心的更宏阔视野去领会这个原则——珍视生命,珍视意识在宇宙中的稀有和珍贵。正因为我们珍惜这种美丽的物质形态,我们就有责任去将它扩展,从而保存它——因为选项只有两个:要么生长,要么死亡。

GUILLAUME VERDON (00:49:37) And I think part of e/acc is to appreciate this principle in a way that’s not just centered on humanity, but kind of broader, appreciate life, the preciousness of consciousness in our universe. And because we cherish this beautiful state of matter we’re in, we got to feel a responsibility to scale it in order to preserve it, because the options are to grow or die.

(PART I END)

