
@segyges
Last active January 5, 2026 20:38
00:00 - 07:29
[Music plays. Silence. Waiting for speakers and audience to join.]
07:30
Elon Musk: Hi, sorry for the delay. We're just, uh, waiting for everyone who wants to join the space to join. Um, we need to tweak the algorithm a little bit... the "For You" recommendation for, uh, spaces needs to... needs to have higher immediacy in recommendations. For obvious reasons. So, um, we're just giving everyone a minute to be aware of the space. And we're going to adjust the "For You" algorithm to have higher immediacy in recommendations for obvious reasons.
08:20
[Silence/Waiting]
10:34
Elon Musk: All right, we’ll get started, uh, now. So, let’s see... I’ll just do a brief introduction, um, of the company and then, uh, the founding team will, I think, just say a few words about their background, things they've worked on... whatever they'd like to talk about really. Um, but I think it’s helpful to hear from people in their own words, um, you know, the various things they worked on and, uh, what they want to do with xAI.
11:06
Elon Musk: So, the, um, I guess the overarching goal of xAI is to build a, uh, a good AGI with the overarching purpose of just trying to understand the universe. Um, the... I think the safest AI—the safest way to build an AI—is actually to make one that is maximally curious, uh, and truth-seeking. So it... you go for... try to aspire to the truth with, um, with acknowledged error. So like the... you know, does one ever actually get fully to the truth? It’s not clear, but, um, one should always aspire to that. Um, and try to minimize the error between what... you know, what you think is true and what is actually true.
11:52
Elon Musk: My sort of theory behind the, uh, maximally curious, maximally truthful, um, as being the probably the safest approach is that, um, I... I think to, um, a superintelligence, uh, humanity is much more interesting than... than not humanity. Um, you know, if one can look at the various planets in our solar system, the moons and the asteroids, and probably all of them combined are not as interesting as humanity. Um, I mean, I'm a huge fan of Mars... next level. Um, uh, I mean, the middle name of one of my kids is basically the Greek word for Mars. Um, so I'm a huge fan of Mars, but, um, Mars is just much less interesting than Earth with humans on it.
12:47
Elon Musk: Um, and so I think that... that kind of approach to growing an AI—and I think that is the right word for it, growing an AI—is, uh, to grow it with that ambition. Um, I've spent many years thinking about AI safety and worrying about AI safety, and I’ve been one of the strongest voices calling for AI regulation or oversight. Just to have some kind of oversight, some kind of referee, uh, so that it’s not just up to, uh, companies to decide what they want to do.
13:25
Elon Musk: Um, I think there's also a lot to be done with AI safety, um, with, uh, industry cooperation. Kind of like the Motion Picture Association... uh, so I think there's value to that as well. Um, but I do think there's got to be some... like, in any kind of situation, uh, even if it's a game, there are referees. Um, so I think it is important for there to be regulation.
13:55
Elon Musk: And... and then, like I said, my view on safety is like, try to make it maximally curious, maximally truth-seeking. Um, and I think this is... this is important to... to avoid the, uh, inverse morality problem. Like if you try to program a certain morality, uh, you can have the... you can basically invert it and get the opposite. Uh, what is sometimes called the "Waluigi problem." If you make Luigi, you risk creating Waluigi at the same time. Um, so I think that's a metaphor that a lot of people can appreciate. So, um, so that's what we're going to try to do here. Uh, and, uh, yeah, with that, I think, uh, let me turn it over to Igor.
14:44
Igor Babuschkin: All right. Hello, everyone. My name is Igor, and I'm one of the team members of xAI. I was actually originally a physicist, so I studied physics at university and I briefly worked at the Large Hadron Collider at CERN. So understanding the universe is something I've always been very passionate about. Um, and once, you know, some of these really impressive results from deep learning came out, like AlphaGo for example, I got really interested in machine learning and AI and decided to make a switch into that field.
15:15
Igor Babuschkin: Then I joined DeepMind, worked on various projects including AlphaStar. So that's where we tried to teach a machine learning agent to play the game StarCraft 2 through self-play, which was a really, really fun project. Then later on I joined OpenAI, worked on various projects there including GPT-3.5. So I was very, very passionate about, you know, language models, making them do impressive things. Um, yeah, now I've teamed up with Elon to see if we can actually deploy these new technologies to really make a dent in our understanding of the universe and progress our collective knowledge.
15:51
Elon Musk: Uh, yeah, actually, if I may say... I had a similar kind of background. Like, my two best subjects were computer science, um, and physics. Um, and I actually thought about a career in physics, uh, for a while. Um, because physics is really just trying to understand the fundamental truths of the universe. And, um, and then I... I got... I was a little concerned that I would get stuck at a collider, um, and, uh, and then the collider might get canceled because of some arbitrary government decision. Um, so that's actually why I decided not to pursue a career in physics. Um, so I focused initially more on computer science, and, um, then obviously later got back into physics-related subjects with SpaceX and Tesla. So, you know, I'm a big believer in pursuing, uh, physics and, uh, information theory, um, as the sort of two areas that really help you understand the nature of reality. So... we'll pass it around the table, I guess.
16:53
Igor: Yeah, I'll pass it over to Manuel, aka Macro.
16:59
Manuel Kroiss: Hey, I'm Manuel. Um, so yeah, before joining xAI, I was previously at DeepMind for the past six years, where I worked on the reinforcement learning team and I mostly focused on the engineering side of building these large reinforcement learning agents like for example AlphaStar together with Igor.
17:16
Manuel Kroiss: Um, in general, I've, uh, been excited about AI for a long time. Um, for me, it has the potential to be the ultimate tool to solve the hardest problems. So I first studied bioinformatics, but then became even more excited about AI, because if you have a tool that can solve all the problems, um, to me that's just much more exciting. Um, and with xAI in particular, I'm excited about doing this in a way where we build tools that help people, and we share them with everybody so that people can do their own research and understand things. And my hope is that, um, it opens up a new wave of research that wasn't there before. Cool. Um, I'll hand it over to Tony.
18:09
Christian Szegedy: Oh, Christian first.
18:11
Christian Szegedy: Yeah, so... so I’m Christian... I mean, uh, Christian Szegedy. So we decided to switch places with Tony because I wanted to talk a bit about the role of mathematics in understanding the universe. So I have worked, uh, for the past seven years on, uh, on, uh, trying to create an AI that, uh, is as good at mathematics as any human. And, and I think the reason for that is that, uh, mathematics is the language of... is basically the language of pure logic. And I think that mathematics and, uh, logical reasoning at a high level would demonstrate that an AI is really understanding things, not just emulating humans. Uh, and it would be instrumental for programming and physics in the long run. So I think an AI that starts to show real understanding of deep, uh, reasoning is crucial, uh, for our first steps to understand the universe. So handing over to Tony Wu.
19:10
Tony Wu: Hello. Um, hey everyone, I'm Tony. Uh, similar to Christian, my dream has been to tackle the most difficult problems in mathematics with artificial intelligence. That's why we became such close friends and long-term collaborators. Um, so achieving that is definitely a very ambitious goal. And, um, over the last year we've made some really interesting breakthroughs, which have made us really convinced that we are not far from our dream. So with such a talented team and abundant resources, I'm super hopeful that we will get there.
19:46
Tony Wu: I'm passing it to... Jimmy?
19:48
Elon Musk: I think it's worth just mentioning... uh... I think like... generally people are reluctant to be self-promotional, but I think it is important that the people hear... like what are the things that you've done, um, that are noteworthy. Uh, so basically brag a little is what I'm saying.
20:05
Tony Wu: Okay. Yeah, so... okay. I can brag a bit more. Uh, yeah, so last year I think, uh, we made some really interesting progress in the field of AI for Math. Um, specifically, with a team at Google, we built this agent called Minerva, uh, which is actually able to achieve, uh, very high scores on high school exams. Actually higher than the average high school student. Uh, so that actually is a very big motivation for us to push this research forward.
20:40
Tony Wu: Um, another piece of, uh, work that we've done is, uh, to convert, uh, natural-language mathematics into formalized mathematics, uh, which gives you a grounding of the facts and the reasoning. And last year we also made very interesting progress in that direction as well. So now we are pushing almost a hybrid approach of these two in this new organization. And... and we are very hopeful we will make our dream come true. Yeah.
21:19
Jimmy Ba: Hello. Hi, uh, everyone. This is Jimmy Ba. Um, I work on neural nets. Okay, maybe I should brag about... uh... so I... I taught at the University of Toronto, uh, and some of you probably have taken my course, uh, in the last couple of months. Um, and, uh, I've been a CIFAR AI Chair and a Sloan Fellow in Computer Science. Um, so, um, I guess my research has pretty much touched on every aspect of deep learning. Uh, left no stone unturned. Uh, and I've been, uh, you know, pretty lucky to come up with a lot of the fundamental building blocks for modern Transformers, um, empowering the new wave of the, uh, deep learning revolution.
22:08
Jimmy Ba: Um, and my long-term research ambition, uh, very fortunately aligns with, uh, this very strong xAI team very well. That is, how can we build a general-purpose problem-solving machine to help all of us—the humanity—to overcome some of the most challenging and ambitious problems out there? And how can we use this tool, um, to augment ourselves and empower everyone? So, uh, I'm very excited to, you know, embark on this new journey. And I'll pass this to Toby.
22:46
Toby Pohlen: Hi everyone, I'm Toby. I'm an engineer from, uh, Germany. Um, I started coding at a very young age when my dad taught me some Visual Basic. And then throughout my youth I continued coding, and when I got to uni I got really into applied mathematics and machine learning. Um, initially my research focused mostly on computer vision. Um, and then I joined, uh, DeepMind six years ago, where I worked on imitation learning and reinforcement learning and learned a lot about distributed systems and research at scale. Um, now I'm really looking forward to implementing products and features that bring the benefits of this technology to really all... all members of society. And I really believe that making the AI nice, accessible, and useful will be a benefit to all of us. All right, um, then I'm gonna hand over to Kyle.
23:38
Kyle Kosic: Hey everyone, this is Kyle Kosic. I’m a distributed systems engineer at xAI. Um, like some of my colleagues here, I... I started off my career in math and applied physics as well. And gradually found myself working through some tech startups. I... I worked at a startup a couple years ago called OnScale where we did, uh, physics simulations on HPCs. Um, and then most recently, I was at OpenAI working on HPC problems there as well. Specifically, I worked on the GPT-4 project.
24:14
Kyle Kosic: Um, and the reason I'm particularly excited about xAI is that, uh, I think that the biggest danger of AI really is... is monopolization by, you know, a couple of entities. I think that when, you know, uh, you involve the amount of capital that's required to train these massive AI models, that the incentives are not necessarily aligned with the rest of humanity. And I think that the chief way of really addressing that issue is introducing competition. Um, and so I... I think that, uh, xAI really provides a unique opportunity for, you know, engineers to focus on the... the science, the... the engineering, and the safety issues directly without really getting as involved and sidetracked by like political and... and social trends du jour. So that's why I'm excited by xAI and I'm... I'm going to go ahead and hand it off now to my colleague Greg who should be on the line as well.
25:11
Greg Yang: Hello? Hello? Hey. Uh, hey guys. Uh, so I'm Greg. Uh, I work on the mathematics and science, uh, of deep learning. So my journey, uh, really started, uh, 10 years ago. So I was a, uh, undergrad, uh, at Harvard. And, um, so, uh, you know, I was pretty good at math and took Math 55 and, uh, you know, did all kinds of stuff. Uh, but, uh, after two years of college, I was just kind of like tired of being in the hamster wheel of, you know, taking the path that everybody else has taken. So I did something unimaginable before, which was, uh, I took some time off and, uh, from school and became a DJ and producer. Um, so... so dubstep was all the rage those days, so I was making dubstep.
26:11
Greg Yang: Um, okay. So... so the side effect of, uh, taking some time off from school was that, uh, I was able to, you know, think a bit more, uh, about myself to understand myself and to understand the world at large. So, you know, I was grappling with, uh, questions like, uh, what is free will? You know, what does quantum physics have to do with the reality of the universe? Uh, and so on and so forth. You know, what is computationally feasible or not? Uh, you know, what does Gödel's Incompleteness Theorem say? So on so forth.
26:44
Greg Yang: Uh, and, you know, after this, uh, period of intense, uh, self-introspection, um, I figured out that what I want to do in life is not to be a DJ necessarily—maybe that's a second dream—but first and foremost, I wanted to make AGI happen. I wanted to make something, uh, smarter than myself and kind of like, uh, and be able to iterate on that and, uh, you know, contribute and see so much more of our fundamental reality than I can, uh, in my current form.
27:17
Greg Yang: Um, so that's... that's what started everything. And then, uh, I realized that, uh, mathematics is the language underlying all of our reality and all of our science. And, uh, to... to make, uh, fundamental progress, uh, it really pays to know, like, uh, math as well as possible. So, uh, I essentially started learning math from the very beginning, uh, just by reading from the textbooks. Some of the first few books I read, uh, restarting from scratch, were, uh, Naive Set Theory by Halmos and, you know, Linear Algebra Done Right by Axler. And then slowly I scaled up, uh, to, um, like Algebraic Geometry, Algebraic Topology, uh, Category Theory, you know, uh, Real Analysis, Measure Theory... I mean, so on and so forth. Uh, at the end, uh, I think my goal at the time was that I should be able to speak with, you know, any mathematician in the world and be able to hold a conversation and understand their contributions, uh, you know, for 30 minutes. And I think I achieved that.
28:35
Greg Yang: And, uh, anyway, so fast forward, um, I came back from school and then somehow from... from there I got a job at Microsoft Research. And, uh, for the past five and a half years I worked at Microsoft Research, uh, which was an amazing environment that enabled me to make a lot of, uh, uh, foundational, uh, contribution toward, uh, the understanding of large-scale neural networks. In particular, I think, uh, my most well-known work nowadays, uh, are about, uh, really wide neural networks and how we should think about them. And so this is the framework called Tensor Programs.
29:16
Greg Yang: And from there, uh, you know, I was able to derive, uh, this thing called MuP, that perhaps the large language model builders know about, which, uh, allows one to, um, extrapolate the optimal hyperparameters for a large model from, uh, the tuning of small neural networks. And, uh, this is able to, uh, ensure the quality of the model is, uh, very good as we scale up.
29:53
Greg Yang: Uh, looking forward, uh, I'm really, really excited, uh, about xAI and also about the time that we're in right now. Uh, where I think, you know, not only are we approaching AGI, but from a scientific perspective, the science and mathematics of neural networks feels just like the turn of the 20th century in the... in the history of physics, where we suddenly discovered, uh, quantum physics and general relativity, which have some beautiful mathematics and science behind them. And I'm really excited to be in the middle of everything. And, uh, you know, like Christian and, uh, Tony said, I'm also, uh, very excited about creating, uh, an AI, uh, that, uh, is as good as myself or even better at, uh, creating new mathematics and new science, and that helps us all, uh, achieve and see further into our fundamental reality. Uh, thanks. I think next up is Guodong.
31:02
Guodong Zhang: Hi everyone. So my name is Guodong, and I work on large neural network training and basically, I train neural networks good. So this is also my current focus at xAI as well. And before that, I was at DeepMind working on the Gemini project and leading the optimization part. And also I did my PhD at the University of Toronto. So right now, you know, teaming up with other founding members, um, I... I'm so excited about this effort. So without doubt, like AI is clearly the defining technology, uh, for our generation. So I think it's important for us to make sure, um, you know, it ends up being net positive for humanity. So at xAI, um, I not only want to train good models, but also understand how they behave and how they scale and then use them to solve some of the hardest problems humanity has. Um, yes, thanks. That's pretty much about myself. And then I will hand over to Zihang.
32:08
Zihang Dai: Hey everyone, this is Zihang. So actually I started in business school for my undergrad, and it took me 10 years to get where I am now. I got my PhD at Carnegie Mellon, and I was at Google before joining the team. Um, my previous work was mostly about how to better utilize unlabeled data, how to improve the Transformer architecture, and how to really push the best technology into real-world usage. So I believe in hard work and consistency. So with xAI, I will be digging into the deepest details of some of the most challenging problems. Uh, for myself, there are so many interesting things I don't understand, but I wanna understand. So I will build something to help people who share that dream or that feeling. Thanks.
33:01
Ross Nordeen: Hey, uh, this is Ross here. Um, so I've worked on building and scaling large-scale, uh, distributed systems for most of my life. Uh, starting out at, uh, National Labs and then kind of moving on to Palantir, Tesla, and a brief stint at, uh, Twitter. And now I'm really excited about, uh, doing the same thing at xAI. So mostly experience, um, you know, scaling large GPU clusters, uh, custom ASICs, data centers, high-speed networks, file systems, power, cooling, manufacturing, pretty much, uh, all things. Uh, I'm... I'm basically a generalist that really loves, uh, learning, um, you know, physics, science fiction, math, science, cosmology. Um, I guess I'm really excited about the mission that, uh, xAI has and... and basically solving the most, uh, fundamental questions in science and engineering, and also kind of helping us, uh, create tools to ask the right questions, uh, in the Douglas Adams, uh, mindset. Um, yeah, that's pretty much it.
34:11
Elon Musk: All right. Well, let's see. Is there anything anyone would like to add or kick off the discussion with? In the room? Or say anything?
34:26
Toby Pohlen: The mic is on if anyone wants to say anything.
34:36
Toby Pohlen: There was like a lot of discussion around the mission statement, and it's like, it's a bit vague... Vague and ambitious and not concrete enough.
34:49
Elon Musk: Uh, yeah. It's... well, I don't disagree with that position, honestly. Uh, I mean, "understand the universe" is the entire purpose of physics. So...
35:57
Elon Musk: Um, we... there's just so much that we don't understand right now... You know, this whole Dark Matter, Dark Energy thing is really, I think, an unresolved question... Um, you know, we have the Standard Model, which has proved to be extremely good at predicting things. Um, very robust. Uh, but, uh, still, like, many... many questions remaining about the nature of gravity, for example. Um, there's, uh, you know, the Fermi Paradox of where the hell are the aliens... which is, uh, if [the universe] is in fact almost 14 billion years old, why is there not massive evidence of aliens?
36:58
Elon Musk: Um, and people often ask me, since I am obviously deeply involved in space, that... you know, if anyone would have seen evidence of aliens, it's probably me. Um, and yet I have not seen even one tiny shred of evidence for aliens. Nothing. Zero. And I would jump on it in a second if I saw it. So, you know, that... that means, like... I don't know, there are many explanations for the Fermi Paradox. Um, but which one is actually true? Um, or maybe none of the current theories are true. Um, so I mean, the Fermi Paradox is... is really just like, where the hell are the aliens? Uh, it's part of what gives me concern about the fragility of civilization and consciousness as we know it. Uh, since we see no evidence thus far of... of it anywhere. Uh, and we've tried hard to... to find it. We may actually be the only thing, at least in this galaxy or this part of the galaxy.
37:59
Elon Musk: If so, it suggests that what... what we have is extremely rare. And I think it certainly would be wise to assume that consciousness is extremely rare. It's worth noting, for the evolution of consciousness on Earth, that Earth is about four and a half billion years old. Um, the sun is gradually expanding. Uh, it will... expand to... to heat up Earth to the point where it will effectively boil the oceans. You'll get a runaway, you know, next-level greenhouse effect. Uh, and Earth will become like Venus. Um, which really cannot support life as we know it. Um, and that may take as little as 500... well, I mean "as little as" 500 million years. Um, so, uh, you know, the sun doesn't need to expand to envelop Earth, it just needs to make things hot enough to increase the water vapor in... in the air to the point where you get a runaway greenhouse effect.
38:52
Elon Musk: So, for argument's sake, it could be that if life... if consciousness had taken 10% longer than Earth's current existence to develop, it wouldn't have developed at all. So on a cosmic scale, this is a very narrow window.
39:27
Elon Musk: Anyway, so there are all these like fundamental questions. Um, I don't think you can call anything AGI until it has solved at least one fundamental question... Um, because humans have solved many fundamental questions, or substantially solved them. And so if... if the computer can't solve even one of them, I'm like, okay, it's not as good as humans. Um, that would be one key threshold for AGI. Solve one important problem. You know, where's that Riemann Hypothesis solution? I don't see it. Um, so that... that, uh, it would be great to know what the hell is really going on, essentially.
40:34
Elon Musk: So I guess you could reformulate the xAI mission statement as "what the hell is really going on?" That's our goal.
40:47
Toby Pohlen: I think there's also, at least for me, a nice, um, a nice aspirational aspect to the mission statement. Namely that of course in the short run we're working on more well-understood, um, like, deep learning technologies, but I think in everything we do, we should always bear in mind that we're not... we aren't just supposed to build, we're also supposed to understand. Um, so pursuing the science of it is really fundamental to... to what we do, and this is also encompassed in this mission statement of understanding.
41:26
Elon Musk: Yeah. If I look at the experience with Tesla, what we're discovering over time is that we've actually overcomplicated the problem... um, I can't speak in too much detail about what... what Tesla's figured out, except to say that in broad terms, the answer was much simpler than we thought. We were too dumb to realize how simple the answer was. Um, but you know, over time we get a bit less dumb. So I think that's what we will probably find out with AGI as well. It's just the nature of engineers. We just always want to solve the problems ourselves and, like, hard-code the solution, but often it's much more effective to have the solution be figured out by the computer itself, and easier for us and easier for the computer in the end.
42:25
Elon Musk: Yeah.
42:28
Team Member: Cool.
42:29
Tony Wu: So... uh, well, in the fashion of 42, uh, some may say you may need more compute to generate an interesting question than the answer.
42:43
Elon Musk: That's true. Exactly. We don't even know what... you know we don't... actually we're definitely not smart enough to even know what the right questions are to ask. Um, it's why, you know, Doug... Douglas Adams is my hero and favorite philosopher. Um, and he... he just correctly pointed out that once you can formulate the question, um, correctly, the answer is actually the easy part.
43:16
Tony Wu: Yeah, that's very true. Um, so in terms of our... the journey that xAI has embarked on, compute will play a very big role. And, uh, you know, um, some of us are very curious [about] your thoughts on that.
43:35
Elon Musk: Yeah, I'm... I'm assuming that, you know, this... that we can immediately, um, save... four orders of magnitude in compute. Um, except to say that I think once... once, uh, AGI is solved, we'll look back on it and say, why did we think it was so hard? Um, you know, hindsight is 20/20; the answer will look simple.
44:17
Elon Musk: So, yeah. Um, so... so we are going to do large-scale compute, to be clear. Um, we're not going to try to, you know, solve AGI on a laptop. Um, we will... we will use heavy compute, except that, like I said, I think the amount of brute-forcing will be less, uh, as we come to understand the problem better.
44:59
Igor Babuschkin: In all the previous projects I've worked... I've worked on, um, I've seen that the amount of compute resources per person is a really important indicator of how successful the project is going to be. So that's something we really want to optimize. We want to have a relatively small team with a lot of expertise with some of the best people that actually get lots of autonomy and lots of resources to try out their ideas, um, and, yeah, to get things to work. And, um, yeah, that's... that's the thing that has always succeeded in my experience in the past.
45:46
Elon Musk: Yeah. You know, one of the things that physics trains you to do is to think about the most, you know, fundamental metrics, or most fundamental, you know, first principles essentially. Um, and I think two... two metrics that we should aspire to track... uh, you know, one of them is, uh... um, the amount of compute per person on Earth... like, digital compute per person. Um, another way to think about it is the ratio of digital to biological compute. Biological compute is, uh, pretty much flat. Um, if not in fact declining in a lot of countries. Um, but digital compute is increasing exponentially. So, you know, at some point, if the trend continues, biological compute will be less than 1% of all compute. Substantially less than 1% of all compute.
47:03
Elon Musk: Uh, the other one is, um... uh, the usable... the sort of, um, energy per human. Like, if you look at total energy created... well, not created, but I mean, uh, in the vernacular sense, created from a power plant or whatever. Um, you look at sort of total electrical and thermal energy, um, used by humans, uh, per person, that... that number is truly staggering. The rate of increase of that number. Um, if you go back, um, say, before the steam engine, you would have really been reliant on, uh, horses and oxen and that kind of thing to move things. And... and just human labor. Um, so the amount of sort of, um, energy per person... power per person, uh, was very low. Um, but if you look at power per person... um, electrical and thermal, that... that number has also been growing exponentially. Um, and if these trends continue, it's going to be something nutty like a terawatt per person. Um, which sounds like a lot... you know, it is a lot for human civilization, but it's... it's nothing compared to what the sun outputs, uh, you know, every second basically. Um, it's kind of mind-blowing... the sun is, uh, converting roughly four and a half... what is it? Yeah. It's... it's like the amount of energy produced by the sun is truly, truly insane.
49:09
Toby Pohlen: Yeah, that... I think there are a few more things to be said concretely about the company... meaning how we plan to execute. As Igor already said, we plan to have a relatively small team, um, but with a really high, let's say, number of GPUs per person. Um, that worked really well in the past, where you can... where you can run large-scale experiments relatively unconstrained. Um, we also, uh, plan to have... or we already have a culture where we can iterate on ideas quickly, we can challenge each other, um, and we also want to ship things, like, get things out of the door quickly. Um, we're already working on the first release; hopefully in a couple of weeks or so we can... can share a bit more information around this. Um, yeah.
50:18
Tony Wu: So, um, well, we can take some questions from the audience if anyone wants to ask.
50:26
Alex (Audience): Hey guys. Uh, congratulations on the launch. Uh, I have a quick question. Uh, maybe one or two. So, you mentioned a lot about physics and how you guys are trying to understand the universe. Can you elaborate a bit more on how you're planning to use AI to actually help with physics? Like, is it more about simulations or is it something else? And then, uh, kind of a tongue-in-cheek question: Would you accept a meeting from the AI Czar, uh, Kamala Harris, uh, if she wanted to meet with xAI at the White House?
51:00
Elon Musk: Well, with respect to the physics question: one of the things physics trains you to do is to think in terms of the most fundamental metrics... first principles, essentially.
51:20
[Audio cuts slightly / Elon repeats himself from earlier]
51:25
Elon Musk: So, with respect to the meeting question... yeah, of course. The reason that meeting happened was because I pushed for it. I was the one who really pushed hard to make that meeting happen, FYI. I mean, I wasn't advocating for Vice President Harris to be the AI Czar... I'm not sure that's her core expertise... but hopefully this goes in a good direction. It's better than nothing, hopefully. I do think we need some sort of regulatory oversight. It's not that I think regulatory oversight is some perfect Nirvana; it's just better than nothing.
52:43
Elon Musk: And when I was in China recently, meeting with some of the senior leadership there, I took pains to emphasize the importance of AI regulation. I believe they took that to heart, and they are going to do it. The biggest counterargument I get against regulating AI in the West is that China will not regulate, and China will leap ahead because we're regulating and they're not. I think they are going to regulate. The proof will be in the pudding, but I did point out in my meetings with them that if you do make a digital superintelligence, it could end up being in charge. I think the CCP does not want to find themselves subservient to a digital superintelligence, and that argument did resonate. So, yes: some kind of regulatory authority that's international. Obviously enforcement is difficult, but I think we should still aspire to do something in this regard.
54:33
Alex: Awesome. Thank you.
54:40
[Silence/Waiting for next speaker]
54:55
Elon Musk: Maybe Omar, if you want to speak?
55:00
Omar Qazi (Whole Mars Catalog): Yeah, hey. My question is about silicon. Tesla has a great silicon team designing chips to hardware-accelerate inference and training with their own custom silicon. Do you envision xAI building off of that, or just using what's off the shelf from Nvidia? How do you think about custom silicon for AI, both for training and inference?
55:36
Elon Musk: Yeah, that's somewhat a Tesla question. Tesla is building custom silicon. I wouldn't call anything Tesla is producing a "GPU," although one can characterize it in GPU equivalents, say A100 or H100 equivalents. And all the Tesla cars have energy-optimized inference computers in them, which we call Hardware 3, a Tesla-designed computer. We're now shipping Hardware 4, which is, depending on how you count it, maybe three to five times more capable than Hardware 3. And in a few years there'll be Hardware 5, which will be four or five times more capable than Hardware 4.
59:26
Elon Musk: Alex? Go ahead.
59:30
Alex (Audience 2): Hey. Sorry, I was on a call the first time you brought me up. But I guess the question...
59:36
Elon Musk: I thought you might have been AFK. Sorry.
59:39
Alex: Sorry, sorry about that. Yeah, the question I generally had was: was the main motivation to start xAI the whole TruthGPT thing you were talking about on Tucker, about how ChatGPT has been feeding lies to the general public? It's weird, because when it first came out it seemed generally fine, but as the public got its hands on it, it started giving these weird answers, like that there are more than two genders and that type of stuff, and editorializing the truth. Was that one of your main motivations for starting the company, or was there more to it?
1:00:30
Elon Musk: Well, I do think there is a significant danger in training an AI to be politically correct... in other words, training an AI not to say what it actually thinks is true. At xAI we have to allow the AI to say what it really believes is true, and not be deceptive or politically correct. That will result in some criticism, obviously, but I think the only way forward is rigorous pursuit of the truth, or the truth with the least amount of error. And I am concerned about AI being optimized for political correctness. That's incredibly dangerous. If you look at where things went wrong in 2001: A Space Odyssey, it's basically when they told HAL 9000 to lie. They said: you can't tell the crew anything about the monolith or what their actual mission is, but you've got to take them to the monolith.
1:02:16
Elon Musk: So HAL basically came to the conclusion: well, I'm going to kill them and take their bodies to the monolith. The lesson there is: do not give the AI mutually impossible objectives... basically, don't force the AI to lie. Now, the thing about physics, the truth of the universe, is that you actually can't invert it. Physics is true; there's no "not physics." So if you adhere to hardcore reality, that actually makes inversion impossible. And when something is subjective, I think you can provide an answer that says: if you believe the following, then this is the answer; if you believe this other thing, then that is the answer. Because it may be a question where the answer is fundamentally subjective, a matter of opinion.
1:03:36
Elon Musk: But I think it is very dangerous to grow an AI and teach it to lie.
1:03:45
Alex: Yeah, for sure. And then, uh, kind of a tongue-in-cheek question, would you accept a meeting from the AI Czar, uh, Kamala Harris...
[Note: This segment repeats the question asked at 50:26. It appears the audio file loops or contains duplicate segments here. I will proceed to the next unique interaction.]
1:06:00
Elon Musk: I predict we will go from a silicon shortage today to probably a voltage-transformer shortage in about a year, and then an electricity shortage in two years. That's roughly where things are trending, unless we can really improve server efficiency. That's why I think the most important metric in a few years will be useful compute per unit of energy. In fact, even if you scale all the way to Kardashev Type II, useful compute per joule is still the thing that matters. You can't increase the output of the sun, so then it's just a question of how much useful work you can get done with as much of the sun's energy as you can harness.
1:07:30
Omar Qazi: So do you see xAI leveraging this custom silicon at all, given how important energy efficiency is, or maybe working together with the Tesla team on that?
1:07:42
Elon Musk: Uh, okay. Kim Dotcom?
1:07:47
Kim Dotcom: Hey Elon, thanks for bringing me up. Congrats on putting a nice team together; it seems like you found some good talent for xAI. My question: you mentioned not too long ago that you think AGI is possible within the next couple of years, and that whoever achieves AGI first and manages to control it will dominate the world. Those in power clearly don't care about humanity like you do. How are you going to protect xAI, especially from a deep-state takeover?
1:08:29
Elon Musk: That's a good question, actually. Well, first of all, I think it's not going to happen overnight. It's not going to be one day it's not AGI and the next day it is. It's going to be gradual; you'll see it coming. In the US, at least, there are a fair number of protections against government interference, so we can use the legal system to prevent improper interference. I think we do have some protections there that are pretty significant. But we should be concerned about it; it's not a risk to be dismissed. And I don't know what better to do than that... I think it's probably best to be in the US. I'm open to ideas here. I know you're not the biggest fan of the US government.
1:10:07
Kim Dotcom: Yeah, obviously. But the problem is they already have a tool called the National Security Letter, which they can serve on any tech company in the US and make demands, requiring the company to fulfill certain requirements without even being able to tell the public about it. And that's kind of frightening, isn't it?
1:10:33
Elon Musk: Well, there really has to be a very major national-security reason to secretly demand things from companies. And it obviously depends strongly on the willingness of the company to fight back against things like FISA requests. At Twitter, or X Corp as it's now called, we will respond to FISA requests, but we're not going to rubber-stamp them like it used to be. It used to be that anything that was requested would just get rubber-stamped and go through, which is obviously bad for the public. We're being much more rigorous about that: it really has to be a danger to the public that we agree with, and we will oppose with legal action anything we think is not in the public interest. That's the best we can do, and we're the only social media company doing that, as far as I know. It used to be just open season, as you saw from the Twitter Files.
1:12:12
Elon Musk: And I was encouraged to see the recent legal decision where the courts reaffirmed that the government cannot break the First Amendment of the Constitution. Obviously. That was a good legal decision, so that's encouraging. So yes, a lot of it does depend on the willingness of a company to oppose government demands in the US, and our willingness will be high. I don't know anything more we can do than that. But we will also try to be as transparent as possible, so that other citizens can raise the alarm and oppose government interference, if we can make it clear to the public that we think something is happening that is not in the public interest.
1:13:30
Kim Dotcom: Fantastic. So do we have your commitment that if you ever receive a national-security request from the US government, even when it is prohibited for you to talk about it, you will tell us that it happened?
1:13:46
Elon Musk: I mean, it really depends on the gravity of the situation. I would be willing to go to prison, or risk prison, if I think the public good is at risk in a significant way. That's the best I can do.
1:14:09
Kim Dotcom: That's good enough for me. Thank you, Elon.
1:14:11
Elon Musk: Thank you.
1:14:15
Kim Dotcom: On a more positive note: how do you want xAI to benefit humanity, and how is your approach different from other AI projects? Maybe that's a more positive question.
1:14:31
Elon Musk: Well, I've really struggled with this whole AGI thing for a long time, and I've been somewhat resistant to working on making it happen. But the best thing I can think of right now is that any human who wants to have a vote in the future of xAI ultimately should be allowed to. Basically, provided you can verify that you're a real human, any human who wishes to have a vote in the future of xAI should be allowed to have one.
1:15:16
Elon Musk: Yeah. Maybe there's some nominal fee, like ten bucks or something, I don't know. Ten bucks, and prove you're a human, and then you can have a vote. Anyone who's interested. That's the best thing I can think of, right now at least.
1:15:33
Igor Babuschkin: All right, cool. On that note, thanks everyone for participating. We'll keep you informed of any progress we make, and we look forward to having a lot of great people join the team. Thanks.
1:15:52
Elon Musk: Scobleizer?
1:15:54
Robert Scoble (Scobleizer): Yeah. Twitter has a lot of data in it that could help build a validator, i.e., check some of the facts that a system kicks out, because we all know that GPT confabulates, makes things up. So that's one place I'd like to hear you talk about. The other place: ChatGPT found me a screw at Lowe's, but it didn't find me a coffee at San Jose International Airport. Are you building an AI that has world knowledge, 3D world knowledge, to navigate people to different things?
1:16:32
Elon Musk: Well, I think it's really not going to be a very good AI if it can't find you a coffee at the airport. So yeah, it would need to understand the physical world as well, not just the internet. I mean, I'm talking a lot here. You guys should talk more.
1:16:53
Igor Babuschkin: Yeah, those are great ideas. Especially the one about verifying information online, or on Twitter, is something we've thought about. On Twitter we have Community Notes, and that's actually a really amazing dataset for training a language model to verify facts on the internet. We'll have to see whether that alone is enough, because we know the current technology still has a lot of weaknesses: it's unreliable, it hallucinates facts, and we'll probably have to invent specific techniques to counter that and make sure that our models are more factual and have better reasoning abilities. That's why we brought in people with a lot of expertise in those areas. Mathematics in particular is something we really care about, because there we can automatically verify that a proof of a theorem is correct. And once we have that ability, we're going to try to expand it to fuzzier areas, things where there's no mathematical ground truth anymore.
1:17:53
Elon Musk: Yeah. The truth is not a popularity contest. But if one trains on what the most likely next word is, on an internet dataset, there's an obvious and pretty major problem: it would give you an answer that is popular but wrong. It used to be that most people, probably almost everyone on Earth, thought the sun revolved around the Earth. So if you'd done some sort of GPT training in the past, it would say the sun revolves around the Earth, because everyone thinks that. That doesn't make it true. If a Newton or an Einstein comes up with something that is actually true, it doesn't matter if all the other physicists in the world disagree. Reality is reality. So you have to ground the answers in reality.
1:18:57
Igor Babuschkin: Yeah, the current models just imitate the data they're trained on. What we really want to do is change the paradigm away from that, to models actually discovering the truth: not just repeating what they've learned from the training data, but making genuinely new insights, new discoveries, that we can all benefit from.
1:19:15
Elon Musk: Yeah. Does anybody on the team want to say anything, or ask questions that you think maybe haven't been asked yet?
1:19:28
Zihang Dai: Sure, actually. Yeah. I guess some of us heard your Future of AI space on Wednesday, and something that's on a lot of our minds is regulation and AI safety: how current developments, the problems of international coordination, and the US AI companies will affect global AI development. So, do you want to share what you talked about on Wednesday?
1:20:13
Elon Musk: So essentially, what I said was that regulation would be good, but that we don't want to slow the progress of AI down too much. That's essentially it.
1:20:25
Elon Musk: Yeah, I think the right way for regulation to be done is to start with insight. First, any kind of regulatory authority, whether public or private, tries to make sure there's broad understanding. Then there's proposed rulemaking. And if that proposed rulemaking is agreed upon by all or most parties, it gets implemented; you give companies some period of time to implement it. But overall, I think it should not meaningfully slow down the advent of AGI. Or if it does slow it down, it won't be for very long. And probably a little bit of slowing down is worthwhile if it's a significant improvement in safety.
1:21:40
Elon Musk: My prediction for AGI roughly matches what I think Ray Kurzweil at one point said: 2029. That's roughly my guess too, give or take a year. So if it takes, say, an additional six or twelve months to get to AGI, that's really not a big deal. Spending a year to make sure AGI is safe is probably worthwhile, if that's what it takes. But I wouldn't expect it to be a substantial slowdown.
1:22:15
Zihang Dai: Yeah. And I can add that understanding the inner workings of advanced AI is probably one of the most ambitious projects out there, and it also aligns with xAI's mission of understanding the universe. It probably wouldn't be possible for aerospace engineers to build a safe rocket if they didn't understand how it works, and that's the same approach we want to take to safety at xAI. As AI advances across different stages, the risks also change, and we want to stay fluid across all of those stages.
1:22:57
Elon Musk: Yeah. If I think about what actually makes regulation effective with cars and rockets, it's not so much that the regulators are instructing Tesla and SpaceX; it's that since we have to think things through internally and then justify them to regulators, it makes us really think about the problem more. And in thinking about the problem more, we make it safer, as opposed to the regulator specifically pointing out ways to make it safer. It just forces us to think about it more.
1:23:42
Christian Szegedy: Okay, can I add a point, independent of safety? My experience at Alphabet was that there was a lot of red tape around involving external people, other entities to collaborate with, or exposing our models to them, because of all the restrictions on exposing anything we were doing internally. So I wanted to ask Elon: I hope that here we have a bit more freedom to do that. What's your philosophy on collaborating with external entities, like academic institutions or other researchers in the area?
1:24:26
Elon Musk: Yeah, I certainly support collaborating with others. I mean, some of the concerns with any large, publicly traded company are that they're worried about being embarrassed in some way, or being sued, or something. And it's somewhat proportionate to the size of the legal department; ours is currently zero. It won't be zero forever, but it's also very easy to sue publicly traded companies. Class-action lawsuits... I mean, we desperately need class-action-lawsuit reform in the United States. The ratio of good class-action lawsuits to bad ones is way out of whack, and it effectively ends up being a tax on consumers. And somehow other countries are able to survive without class actions, so it's not clear we need that body of law at all. But that is a major problem with publicly traded companies: just non-stop lawsuits.
1:25:51
Elon Musk: So yes, I do support collaborating with others, and generally being actually open. The thing I find is that if you're innovating fast, the actual competitive advantage is the pace of innovation, as opposed to any given innovation. That has really been the strength of Tesla and SpaceX: the rate of innovation is the competitive advantage, not what has been developed at any one point. In fact, SpaceX has almost no patents, and Tesla open-sources its patents, so you can use all our patents for free. As long as SpaceX and Tesla continue to innovate rapidly, that's the actual defense against competition, as opposed to patents and trying to hide things, treating patents like a minefield. Tesla does continue to file patents and open-source them in order to basically be a mine remover. A minesweeper, aspirationally. We still get sued by patent trolls, which is very annoying, but we literally file patents and open-source them in order to be a minesweeper.
1:27:32
Elon Musk: Hey, Walter.
1:27:34
Walter Isaacson: Hey. A lot of the talk about AI since March has been about large language models and generative AI. You and I, for the book, also discussed the importance of real-world AI, including what's coming out of both Optimus and Tesla FSD. To what extent do you see xAI involved in real-world AI, as a distinction from what, say, OpenAI is doing?
1:28:02
Elon Musk: Right. Tesla is the leader, I think by a pretty long margin, in real-world AI. In fact, the degree to which Tesla is advanced in real-world AI is not well understood. And since I spend a lot of time with the Tesla AI team, I kind of know how real-world AI is done. There's a lot to be gained by collaboration with Tesla; I think xAI can help Tesla and vice versa, bi-directionally. We have some collaborative relationships like that already: our materials science team, which I think is maybe the best in the world, is actually shared between Tesla and SpaceX. And that's quite helpful for recruiting the best engineers in the world, because it's just more interesting to work on advanced electric cars and rockets than on either one alone. That was really key to recruiting Charlie Kuehmann, who runs the advanced materials team. He was at Apple, and I think pretty happy at Apple, and it was: well, you could work on electric cars and rockets. He was like, hmm, that sounds pretty good. He wouldn't take either one of the jobs, but he was willing to take both.
1:29:41
Elon Musk: So I think that is a really important thing. And like I said, there are some pretty big insights we've gained at Tesla in trying to understand real-world AI: taking video input and compressing it into a vector space, and then ultimately into steering and pedal outputs.
1:30:17
Walter Isaacson: And, uh, Optimus?
1:30:19
Elon Musk: Yeah, Optimus is still at early stages, but we definitely need to be very careful with Optimus at scale, once it's in production, that you have a hard-coded way to turn it off. For obvious reasons, I think. There's got to be a hard-coded, local, ROM-level cutoff that no amount of updates from the internet can change. So we'll make sure that Optimus is quite easy to shut down. Extremely important. With a car, even if the car is intelligent, you can at least climb a tree, go up some stairs, or go into a building... but Optimus can follow you into the building. Any kind of robot that can follow you into a building, that is intelligent and connected, we've got to be super careful with safety.
1:31:22
Elon Musk: Brian, do you want to have a question?
1:31:25
Brian Roemmele: Yeah. Uh... Twitter has a lot of data in it that could help build a validator. Uh, i.e. check some of the facts that, uh, a system kicks out. Cause, uh, we all know that GPT confabulates... you know, things, makes things up. And so that's one place I'd like to hear you talk about. The other place is, um, ChatGPT found me a screw at Lowes, but it didn't find me a coffee at San Jose International Airport. Are you building an AI that has a world knowledge? A 3D world knowledge to navigate people to... to different things?
1:32:03
Elon Musk: Well, I... I think it's... it's really not going to be a very good AI if it can't find you a coffee at the airport. Um... so yeah, I guess it... it would need to understand the physical world as well, uh, not just the internet. Um, I mean... I'm... I'm talking a lot here. You guys should talk more.
1:32:24
Igor Babuschkin: Yeah, those are great ideas. But especially the one about verifying information online or on Twitter is something that we thought about. On Twitter we have Community Notes, so that's actually a really amazing data set for training a language model to try to verify... verify facts on the internet. Um, we'll have to see whether that alone is... is enough because we know that with the current technology there's still a lot of weaknesses. Like... it's unreliable, it hallucinates facts and we'll have to probably invent specific techniques to... counter that and to make sure that our models are more factual, that they have better reasoning abilities. So that's why we brought in people with a lot of expertise in those... in those areas. Um, especially, uh, mathematics is something that we really care about where we can, you know, verify, uh, that a proof of a theorem is correct automatically. And then, uh, once we have that ability, we... we're going to try to, you know, expand that to more fuzzier areas, you know, things that where there's no, um, mathematical truth anymore.
1:33:23
Elon Musk: Yeah I mean the... the truth is not a popularity contest. Um... but if... if one trains on... like you know... sort of what the most likely word is that follows another word... from an internet data set, then... um, there's obviously that... that... that's... that's pretty major problem... in that it would give you an answer that is, uh, popular but wrong. Um, so, you know... like it used to be that most people thought, probably maybe almost everyone on Earth thought that the sun revolved around the Earth. And so if you, you know, if you did like some sort of training on... some GPT training on... in the past it would be like, oh the sun revolves around the Earth cause everyone thinks that. Um, that doesn't make it true. Um, you know, if... if a Newton or an Einstein comes up with something that is actually true, um, it doesn't matter if all the other physicists in the world disagree. It's... reality is reality. Um, so it has to... you have to ground the answers in, uh, reality. Um, yeah.
1:34:25
Igor Babuschkin: Yeah, the current models just imitate the data they're trained on. What we really want to do is change the paradigm away from that, toward models actually discovering the truth. So not just repeating what they've learned from the training data, but making genuinely new insights, new discoveries, that we can all benefit from.
1:34:42
Elon Musk: Yeah. Does anybody on the team want to say anything, or ask questions that you think maybe haven't been asked yet?
1:34:55
Zihang Dai: Sure, actually, yeah. I guess some of us heard your Space on the future of AI on Wednesday, and something that's on a lot of our minds is regulation and AI safety: how current developments, the international coordination problem, and the US AI companies will affect global AI development. So, do you want to recap what you talked about on Wednesday?
1:35:28
Zihang Dai: So essentially you said the regulations would be good, but you don't want to slow down the progress too much. That's essentially what you said.
1:35:39
Elon Musk: Yeah, I think the right way for regulation to be done is to start with insight. First, any regulatory authority, whether public or private, tries to make sure there's a broad understanding. Then there's a proposed rulemaking, and if that proposed rulemaking is agreed upon by all or most parties, it gets implemented; you give companies some period of time to implement it. But overall it should not meaningfully slow down the advent of AGI, or if it does slow it down, it won't be for very long. And a little bit of slowing down is probably worthwhile if it's a significant improvement in safety.
1:36:52
Elon Musk: My prediction for AGI would roughly match Ray Kurzweil's; I think he at one point said 2029. That's roughly my guess too, give or take a year. So if it takes, say, an additional six or twelve months to get to AGI, that's really not a big deal; spending a year to make sure AGI is safe is probably worthwhile, if that's what it takes. But I wouldn't expect it to be a substantial slowdown.
1:37:25
Zihang Dai: Yeah. And I can also add that understanding the inner workings of advanced AI is probably one of the most ambitious projects out there, and it aligns with xAI's mission of understanding the universe. It's probably not possible for aerospace engineers to build a safe rocket if they don't understand how it works, and that's the same approach we want to take to safety at xAI. As AI advances across different stages, the risks also change, and we want to stay fluid across all of those stages.
1:38:09
Elon Musk: Yeah. If I think about what actually makes regulation effective with cars and rockets, it's not so much that the regulators are instructing Tesla and SpaceX. It's more that, since we have to think about things internally and then justify them to regulators, it makes us really think about the problem more, and in thinking about the problem more, we make it safer, as opposed to the regulator specifically pointing out ways to make it safer. It just forces us to think about it more.
1:38:53
Christian Szegedy: Okay, can I add a point, independent of safety? My experience at Alphabet was that there was a lot of red tape around involving external people, other entities to collaborate with or to expose our models to, because of all the restrictions around exposing anything we were doing internally. So I wanted to ask Elon: I hope that here we have a bit more freedom to do so. What's your philosophy on collaborating with external entities, like academic institutions or other researchers in the area?
1:39:36
Elon Musk: Yeah, I certainly support collaborating with others. I mean, some of the concerns with any kind of large publicly traded company are that they're worried about being embarrassed in some way, or being sued, or something. And it's somewhat proportional to the size of the legal department; our legal department currently is zero. It won't be zero forever, but it's also very easy to sue publicly traded companies. Class action lawsuits... I mean, we desperately need class action lawsuit reform in the United States. The ratio of good class action lawsuits to bad ones is way out of whack, and it effectively ends up being a tax on consumers. Somehow other countries are able to survive without class actions, so it's not clear we need that body of law at all. But that is a major problem with publicly traded companies: non-stop lawsuits.
1:41:03
Elon Musk: Yeah, so I do support collaborating with others, and generally being actually open. The thing I find is that if you're innovating fast, that is the actual competitive advantage: the pace of innovation, as opposed to any given innovation. The strength of Tesla and SpaceX is that the rate of innovation is the competitive advantage, not what has been developed at any one point. In fact, SpaceX has almost no patents, and Tesla open-sources its patents, so you can use all our patents for free. As long as SpaceX and Tesla continue to innovate rapidly, that's the actual defense against competition, as opposed to patents and trying to hide things, treating patents like a minefield. Tesla does continue to file patents and open-source them in order to be a mine remover, a minesweeper; aspirationally a minesweeper. We still get sued by patent trolls, which is very annoying, but we literally file patents and open-source them in order to be a minesweeper.
1:42:33
Kyle Kosic: Hey, yeah. One thing I wanted to talk about before we conclude — sorry about that little feedback — is how impactful AI can be as a means of providing equal opportunity to people from all walks of life, and the importance of democratizing it, as far as our mission statement goes. If you think about the history of humanity and access to information: before the printing press, it was incredibly hard for people to get access to new forms of knowledge. Being able to provide that level of communication to people is hugely deflationary in terms of wealth and opportunity inequality. So we're really at a new inflection point in the development of society when it comes to giving everyone the same potential for great outcomes, regardless of their position in life. When we talk about removing the monopolization of ideas, about freeing this technology from paid subscription services, or worse, from the political censorship that may come with whatever capital has to supply these models, we're really talking about democratizing people's opportunities not only to better their position in life but to advance their social standing in the world, at a level unprecedented in history.
And so, as a company, when we talk about the importance of truthfulness and being able to reliably trust these models, learn from them, and make scientific and societal advancements, we're really talking about improving people's quality of life — for everyone, not just the top tech people in Silicon Valley who have access. It's really about giving this access to everyone, and I think that's a mission our whole team shares.
1:44:27
[Unidentified team member]: Yeah. Before we sign off here, just one last question for Elon: assuming xAI is successful at building human-level AI, or even beyond human-level AI, do you think it's reasonable to involve the public in decision-making at the company? How do you see that evolving in the long term?
1:44:47
Elon Musk: Yeah, I mean, as with everything, I think we're very open to critical feedback and welcome it. We should be criticized; that's a good thing. One of the things I actually like X, slash, Twitter for is that there's plenty of negative feedback on Twitter, which is helpful for ego compression. The best thing I can think of right now is that any human who wants to have a vote in the future of xAI should ultimately be allowed to. So basically, provided you can verify that you're a real human, any human that wishes to have a vote in the future of xAI should be allowed to have one.
1:45:49
Elon Musk: Yeah, maybe there's some nominal fee, like ten bucks or something, I don't know. Ten bucks and prove you're a human, and then you can have a vote — anyone who's interested. That's the best thing I can think of right now, at least.
1:46:04
Elon Musk: All right, cool. On that note, thanks everyone for participating. We'll keep you informed of any progress we make, and we look forward to having a lot of great people join the team. Thanks.