Original post: https://openai.com/blog/openai-elon-musk
The translation follows.
We are dedicated to the OpenAI mission and have pursued it every step of the way.
The mission of OpenAI is to ensure AGI (artificial general intelligence) benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits. We are now sharing what we've learned in pursuit of that mission, and some facts about our relationship with Elon. We intend to move to dismiss all of Elon's claims.
Elon said we should announce an initial $1B funding commitment to OpenAI. In total, the non-profit has raised less than $45M from Elon and more than $90M from other donors.
When starting OpenAI in late 2015, Greg and Sam had initially planned to raise $100M. Elon said in an email: "We need to go with a much bigger number than $100M to avoid sounding hopeless… I think we should say that we're starting with a $1B funding commitment… I will cover whatever anyone else doesn't provide." [1]
We spent a lot of time trying to envision a plausible path to AGI. In early 2017, we came to the realization that building AGI would require vast quantities of compute, and we began calculating how much an AGI might plausibly need. We all understood we were going to need far more capital to succeed at our mission—billions of dollars per year, which was much more than any of us, especially Elon, thought we'd be able to raise as the non-profit.
As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control. Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself. He said he'd be supportive of us finding our own path.
In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations.
We couldn't agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI. He then suggested instead merging OpenAI into Tesla. In early February 2018, Elon forwarded us an email suggesting that OpenAI should "attach to Tesla as its cash cow", commenting that it was "exactly right… Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn't zero." [2]
Elon soon chose to leave OpenAI, saying that our probability of success was 0, and that he planned to build an AGI competitor within Tesla. When he left in late February 2018, he told our team he was supportive of us finding our own path to raising billions of dollars. In December 2018, Elon sent us an email saying "Even raising several hundred million won't be enough. This needs billions per year immediately or forget it." [3]
We're making our technology broadly usable in ways that empower people and improve their daily lives, including via open-source contributions.
We provide broad access to today's most powerful AI, including a free version that hundreds of millions of people use every day. For example, Albania is using OpenAI's tools to accelerate its EU accession by as much as 5.5 years; Digital Green is helping boost farmer income in Kenya and India by dropping the cost of agricultural extension services 100x by building on OpenAI; Lifespan, the largest healthcare provider in Rhode Island, uses GPT-4 to simplify its surgical consent forms from a college reading level to a 6th-grade one; and Iceland is using GPT-4 to preserve the Icelandic language.
Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: "As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science…", to which Elon replied: "Yup". [4]
We're sad that it's come to this with someone whom we've deeply admired: someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI's mission without him.
We are focused on advancing our mission and have a long way to go. As we continue to make our tools better and better, we are excited to deploy these systems so they empower every individual.
Translator's note: the emails below are left in their original English.
[1]
From: Elon Musk <▆▆▆▆>
To: Greg Brockman <▆▆▆▆>
CC: Sam Altman <▆▆▆▆>
Date: Sun, Nov 22, 2015 at 7:48 PM
Subject: follow up from call

Blog sounds good, assuming adjustments for neutrality vs being YC-centric.

I'd favor positioning the blog to appeal a bit more to the general public -- there is a lot of value to having the public root for us to succeed -- and then having a longer, more detailed and inside-baseball version for recruiting, with a link to it at the end of the general public version.

We need to go with a much bigger number than $100M to avoid sounding hopeless. I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn't provide.

Template seems fine, apart from shifting to a vesting cash bonus as default, which can optionally be turned into YC or potentially SpaceX (need to understand how much this will be) stock.
[2]
From: Elon Musk <▆▆▆▆>
To: Ilya Sutskever <▆▆▆▆>, Greg Brockman <▆▆▆▆>
Date: Thu, Feb 1, 2018 at 3:52 AM
Subject: Fwd: Top AI institutions today

▆▆▆▆ is exactly right. We may wish it otherwise, but, in my and ▆▆▆▆’s opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn't zero.

Begin forwarded message:

From: ▆▆▆▆ <▆▆▆▆>
To: Elon Musk <▆▆▆▆>
Date: January 31, 2018 at 11:54:30 PM PST
Subject: Re: Top AI institutions today

Working at the cutting edge of AI is unfortunately expensive. For example, ▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆

In addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).

I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems - compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.

It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in open, you might in fact be making things worse and helping them out “for free”, because any advances are fairly easy for them to copy and immediately incorporate, at scale.
A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it's unclear if a company could “catch up” to Google scale, and the investors might exert too much pressure in the wrong directions.

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the “first stage” of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The “second stage” would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.

▆▆▆▆
[3]
From: Elon Musk <▆▆▆▆>
To: Ilya Sutskever <▆▆▆▆>, Greg Brockman <▆▆▆▆>
CC: Sam Altman <▆▆▆▆>, ▆▆▆▆ <▆▆▆▆>
Date: Wed, Dec 26, 2018 at 12:07 PM
Subject: I feel I should reiterate

My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise.

Even raising several hundred million won't be enough. This needs billions per year immediately or forget it.

Unfortunately, humanity's future is in the hands of ▆▆▆▆.

▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆

And they are doing a lot more than this.

▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆

▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆

I really hope I'm wrong.

Elon
[4]
Fwd: congrats on the falcon 9 (3 messages)
From: Elon Musk <▆▆▆▆>
To: Sam Altman <▆▆▆▆>, Ilya Sutskever <▆▆▆▆>, Greg Brockman <▆▆▆▆>
Date: Sat, Jan 2, 2016 at 8:18 AM
Subject: Fwd: congrats on the falcon 9

Begin forwarded message:

From: ▆▆▆▆ <▆▆▆▆>
To: Elon Musk <▆▆▆▆>
Date: January 2, 2016 at 10:12:32 AM CST
Subject: congrats on the falcon 9

Hi Elon

Happy new year to you, ▆▆▆▆▆▆▆▆▆▆▆▆!

Congratulations on landing the Falcon 9, what an amazing achievement. Time to build out the fleet now!

I've seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem? There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world. Some of the more obvious points are well articulated in this blog post, that I'm sure you've seen, but there are also other important considerations:

http://slatestarcodex.com/2015/12/17/should-ai-be-open/

I’d be interested to hear your counter-arguments to these points.

Best
▆▆▆▆
From: Ilya Sutskever <▆▆▆▆>
To: Elon Musk <▆▆▆▆>, Sam Altman <▆▆▆▆>, Greg Brockman <▆▆▆▆>
Date: Sat, Jan 2, 2016 at 9:06 AM
Subject: Fwd: congrats on the falcon 9

The article is concerned with a hard takeoff scenario: if a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by opensorucing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).
From: Elon Musk <▆▆▆▆>
To: Ilya Sutskever <▆▆▆▆>
Date: Sat, Jan 2, 2016 at 9:11 AM
Subject: Fwd: congrats on the falcon 9

Yup