- The following is a conversation all about the state-of-the-art in artificial intelligence, including some of the exciting technical breakthroughs and developments in AI that happened over the past year, and some of the interesting things we think might happen this upcoming year. At times, it does get super technical, but we do try to make sure that it remains accessible to folks outside the field without ever dumbing it down. It is a great honor and pleasure to be able to do this kind of episode with two of my favorite people in the AI community, Sebastian Raschka and Nathan Lambert. They are both widely respected machine learning researchers and engineers who also happen to be great communicators, educators, writers, and X posters. Sebastian is the author of two books I highly recommend for beginners and experts alike: Build a Large Language Model from Scratch and Build a Reasoning Model from Scratch. I truly believe that in the machine learning world, the best way to learn and understand something is to build it yourself from scratch. Nathan is the post-training lead at the Allen Institute for AI and author of the definitive book on Reinforcement Learning from Human Feedback. Both of them have great X accounts and great Substacks. Sebastian has courses on YouTube, Nathan has a podcast. And everyone should absolutely follow all of those. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here's Sebastian Raschka and Nathan Lambert. So I think one useful lens to look at all this through is the so-called DeepSeek moment. This happened about a year ago in January 2025, when the open-weight Chinese company DeepSeek released DeepSeek R1, which I think it's fair to say surprised everyone with near-state-of-the-art performance, with allegedly much less compute for much cheaper. And from then to today, the AI competition has gotten insane, both on the research and product level. It's just been accelerating. We'll discuss all of this today, and maybe let's start with some spicy questions if we can. Who's winning at the international level? Would you say it's the set of companies in China or the set of companies in the United States? And Sebastian, Nathan, it's good to see you guys. So Sebastian, who do you think is winning? - Winning is a very broad term. I would say you mentioned the DeepSeek moment, and I think DeepSeek is winning the hearts of the people who work on open-weight models because they share these as open models. Winning, I think, has multiple timescales to it. We have today, we have next year, we have in 10 years. One thing I know for sure is that I don't think nowadays, in 2026, there will be any company that has access to technology that no other company has access to. That is mainly because researchers are frequently changing jobs and labs. They rotate. I don't think there will be a clear winner in terms of technology access. However, I do think the differentiating factor will be budget and hardware constraints. I don't think the ideas will be proprietary, but rather the resources needed to implement them. I don't currently see a winner-take-all scenario. I can't see that at the moment. - Nathan, what do you think? 
- You see the labs put different energy into what they're trying to do, and to demarcate the point in time when we're recording this, the hype over Anthropic's Claude Opus 4.5 model has been absolutely insane. I've used it and built stuff with it in the last few weeks, and it's almost gotten to the point where it feels like a bit of a meme in terms of the hype. And it's kind of funny because this is very organic, and then if we go back a few months to the release date and the notes, Gemini 3 from Google got released, and it seemed like the marketing and just, like, wow factor of that release was super high. But then at the end of November, Claude Opus 4.5 was released, and the hype has been growing, even though Gemini 3 came before it. And it kind of feels like people don't really talk about it as much, even though when it came out, everybody was saying this is Gemini's moment to retake Google's structural advantages in AI. And Gemini 3 is a fantastic model, and I still use it; it's just that the differentiation is lower. And I agree with Sebastian on what you're saying with all of this: the idea space is very fluid, but culturally Anthropic is known for betting very hard on code, and the Claude Code thing is working out for them right now. So I think that even if the ideas flow pretty freely, so much of this is bottlenecked by human effort and the culture of organizations, where Anthropic seems to at least be presenting as the least chaotic. It's a bit of an advantage, if they can keep doing that for a while. But on the other side of things, there's a lot of ominous technology from China, where there are way more labs than DeepSeek. So DeepSeek kicked off a movement within China, I'd say kind of similar to how ChatGPT kicked off a movement in the US where everything had a chatbot. There are now tons of tech companies in China that are releasing very strong frontier open-weight models, to the point where I would say that DeepSeek is kind of losing its crown as the preeminent open model maker in China, and the likes of Z.ai with their GLM models, MiniMax's models, and Kimi from Moonshot, especially in the last few months, have shone more brightly. The new DeepSeek models are still very strong, but this could be looked back on as a big narrative point, where in 2025 DeepSeek came and provided this platform for way more Chinese companies to release these fantastic models and have this new type of operation. So these models from these Chinese companies are open-weight, and depending on the trajectory of business models that these American companies are pursuing, those could be at risk. But currently, a lot of people are paying for AI software in the US, and historically in China and other parts of the world, people don't pay a lot for software. - So some of these models like DeepSeek have the love of the people because they are open-weight. How long do you think the Chinese companies keep releasing open-weight models? - I would say for a few years. I think that, like in the US, there's not a clear business model for it. I have been writing about open models for a while, and these Chinese companies have realized the same thing. So I get inbound from some of them. And they're smart and realize the same constraints: a lot of top US tech companies and other IT companies won't pay for an API subscription to Chinese companies because of security concerns. 
This has been a long-standing habit in tech, and the people at these companies then see open-weight models as a way to influence and take part in a huge, growing AI expenditure market in the US. And they're very realistic about this, and it's working for them. I think the government will see that this is building a lot of influence internationally in terms of uptake of the technology, so there's going to be a lot of incentive to keep it going. But building these models and doing the research is very expensive, so at some point, I expect consolidation. But I don't expect that to be a story of 2026; there will be more open model builders throughout 2026 than there were in 2025. And a lot of the notable ones will be in China. - You were going to say something? - Yes. You mentioned DeepSeek losing its crown. I do think so to some extent, yes, but we also have to consider that they are still, I would say, slightly ahead. And the other ones—it's not that DeepSeek got worse, it's just that the other ones are using the ideas from DeepSeek. For example, you mentioned Kimi: same architecture, and they're training it. And then again, we have this leapfrogging where they might at some point in time be a bit better because they have the more recent model. And I think this comes back to the fact that there won't be a clear winner. It will just be like that: one person releases something, the other one comes in, and the most recent model is probably always the best model. - Yeah. We'll also see the Chinese companies have different incentives. DeepSeek is very secretive, whereas some of these startups are like the MiniMaxes and Z.ais of the world. Those two have literally filed IPO paperwork, and they're trying to get Western mindshare and do a lot of outreach there. So I don't know if these incentives will change the model development, because DeepSeek famously is built by a hedge fund, High-Flyer Capital, and we don't know exactly what they use the models for or if they care about this. - They're secretive in terms of communication; they're not secretive in terms of the technical reports that describe how their models work. They're still open on that front. And we should also say, on the Claude Opus 4.5 hype, there's the layer of something being the darling of the X or Twitter echo chamber, and then there's the actual number of people that are using the model. I think it's probably fair to say that ChatGPT and Gemini are focused on the broad user base that just wants to solve problems in their daily lives, and that user base is gigantic. So the hype about the coding may not be representative of the actual use. - I would say also a lot of the usage patterns are, like you said, name recognition, brand and stuff, but also muscle memory almost, where, you know, ChatGPT has been around for a long time. People just got used to using it, and it's almost like a flywheel: they recommend it to other users and so on. One interesting point is also the customization of LLMs. For example, ChatGPT has a memory feature, right? And so you may have a subscription and you use it for personal stuff, but I don't know if you want to use that same thing at work. Because there's a boundary between private and work. If you're working at a company, they might not allow that or you may not want that. And I think that's also an interesting point where you might have multiple subscriptions. One is just clean: it has nothing of your personal images or hobby projects in there. It's just the work thing. 
And then the other one is your personal thing. So I think that's also something where there are two different use cases, and it doesn't mean you only have to have one. I think the future is also multiple ones. - What model do you think won 2025, and what model do you think is going to win '26? - I think in the context of consumer chatbots, it's a question of: are you willing to bet on Gemini over ChatGPT? Which I would say, in my gut, feels like a bit of a risky bet, because OpenAI has been the incumbent, and there are so many benefits to that in tech. I think the momentum, if you look at 2025, was on Gemini's side, but they were starting from such a low point. And RIP Bard and these earlier attempts at getting started. Huge credit to them for powering through the organizational chaos to make that happen. But also it's hard to bet against OpenAI, because they always come off as so chaotic, but they're very good at landing things. And I think, personally, I have very mixed reviews of GPT-5, but it must have saved them so much money, with the headline feature being a router, so that most users are no longer incurring as much GPU cost. So I think it's very hard to dissociate the things that I like out of models from the things that are going to actually be a general public differentiator. - What do you think about 2026? Who's going to win? - I'll say something, even though it's risky. I think Gemini will continue to gain ground on ChatGPT. I think Google's scale, when both of these are operating at such extreme scales—and Google has the ability to separate research and product a bit better, whereas you hear so much about OpenAI being chaotic operationally and chasing the high-impact thing, which is a very startup culture. And then on the software and enterprise side, I think Anthropic will have continued success, as they've again and again been set up for that. And obviously Google Cloud has a lot of offerings, but I think this kind of Gemini name brand is important for them to build. Google Cloud will continue to do well, but that's a more complex thing to explain in the ecosystem, because that's competing with the likes of Azure and AWS rather than on the model provider side. - So in infrastructure, you think TPU is giving an advantage? - Largely because the margin on NVIDIA chips is insane, and Google can develop everything from top to bottom to fit their stack and not have to pay this margin. And they've had a head start in building data centers. So for all of these things that have both long lead times and very high costs, Google just has a kind of historical advantage there. And if there's going to be a new paradigm, it's most likely to come from OpenAI, where their research division again and again has shown this ability to land a new research idea or a product. Like Deep Research, Sora, o1 thinking models—all these definitional things have come from OpenAI, and that's got to be one of their top traits as an organization. So it's kind of hard to bet against that, but I think a lot of this year will be about scale and optimizing what could be described as low-hanging fruit in models. - And clearly there's a trade-off between intelligence and speed. This is what GPT-5 was trying to solve behind the scenes. It's like, do people, the broad public, actually want intelligence, or do they want speed? - I think it's a nice variety, or the option to have a toggle there. 
I mean, for my personal usage, most of the time when I look something up, I use ChatGPT to ask a quick question and get the information I want fast. For most daily tasks, I use the quick model. Nowadays, I think the auto mode is pretty good, where you don't have to specifically say thinking or non-thinking. Then again, I also sometimes want the pro mode. Very often what I do is, when I have something written, I put it into ChatGPT and say, "Hey, do a very thorough check. Are all my references correct? Are all my thoughts correct? Did I make any formatting mistakes, and are the figure numbers wrong?" Or something like that. And I don't need that right away. I finish my stuff, maybe have dinner, let it run, come back and go through it. I think this is where it's important to have this option. I would go crazy if for each query I had to wait 30 minutes, or even 10 minutes. - That's me. I'm sitting over here losing my mind that you use the router and the non-thinking model. I'm like, "How do you live with that?" That's my reaction. I've been heavily on ChatGPT for a while. I never touched GPT-5 non-thinking. I don't like its tone, and it has a higher propensity for errors. Some of this is from back when OpenAI released o3, which was the first model to do this deep search and find many sources and integrate them for you. I became habituated with that. So I will only use GPT-5.2 Thinking or Pro when I'm doing any sort of information query for work, whether that's a paper or some code reference that I found. And I will regularly have like five Pro queries going simultaneously, each looking for one specific paper or feedback on an equation or something. - I have a fun example where I needed the answer as fast as possible. Before I was going on this trip for the podcast, I had a local GPU running at home and I wanted to run a long RL experiment. And usually I also unplug things, because you never know; if you're not at home, you don't want things plugged in. And I accidentally unplugged the GPU. My wife was already in the car and it's like, "Oh dang." Then basically I wanted, as fast as possible, a Bash script that runs my different experiments and the evaluation. And it's something I know, I learned how to use the Bash interface or the Bash terminal, but in that moment I just needed it in 10 seconds: give me the command. - This is a hilarious situation, but yeah, so what did you use? - So I did the non-thinking, fastest model. It gave me the Bash command to chain the different scripts together, and then there's the tee thing where you want to route the output to a log file as well. Off the top of my head, I was just in a hurry; I could have thought about it myself. - By the way, I don't know if that's a representative case: wife waiting in the car, you have to run, you know, unplug the GPU, you have to generate a Bash script. This sounds like a movie, like Mission Impossible. - I use Gemini for that. So I use thinking for all the information stuff, and then Gemini for fast things or stuff that I could sometimes Google. It's good at explaining things, and I trust that it has this kind of background knowledge, and it's simple. And the Gemini app has gotten a lot better, and it's good for those sorts of things. And then for code and any sort of philosophical discussion, I use Claude Opus 4.5. Also always with extended thinking. Extended thinking and inference time scaling is just a way to make the models marginally smarter. 
And I will always err on that side when the progress is very high, because you don't know when that'll unlock a new use case. And then I sometimes use Grok for real-time information or for finding something on AI Twitter that I knew I saw and need to dig up and have just fixated on. Although when Grok 4 came out, the Grok 4 SuperGrok Heavy, which was like their pro variant, was actually very good and I was pretty impressed with it, and then, just kind of through muscle memory, I lost track of it with having the ChatGPT app open. So I use many different things. - Yeah. I actually do use Grok 4 Heavy for debugging. For hardcore debugging that the other ones can't solve, I find that it's the best. And it's interesting 'cause you say ChatGPT is the best interface. For me, for that same reason, but this could be just momentum, Gemini is the better interface. I think because I fell in love with its needle-in-a-haystack performance. If I ever put in something that has a lot of context but I'm looking for very specific kinds of information, to make sure it tracks all of it, I find, at least for me, that Gemini has been the best. So it's funny with some of these models: if they win your heart over for one particular feature on one particular day, for that particular query, that prompt, you're like, "This model's better." And so you'll just stick with it for a bit until it does something really dumb. There's like a threshold effect. It does some smart thing and then you fall in love with it, and then it does some dumb thing and you're like, "You know what? I'm gonna switch and try Claude or ChatGPT." And all that kind of stuff. - This is exactly it: you use it until it breaks, until you have a problem, and then you change the LLM. And I think it's the same as how we use anything, like our favorite text editor, operating systems, or the browser. I mean, there are many options: Safari, Firefox, Chrome. They're relatively similar, but then there are edge cases, extensions you want, and then you switch. But I don't think anyone types the same thing into different browsers and compares them. You only do that when something breaks. So that's a good point. You use it until it breaks, then you explore other options. - On the long context thing, I was also a Gemini user, but the GPT-5.2 release blog had crazy long context scores. People were like, "Did they just figure out some algorithmic change?" It went from 30% to 70% in this minor model update. It's very hard to keep track of all of these things, but now I look more favorably at GPT-5.2's long context. So it's just like, "How do I actually get to testing this?" It's a never-ending battle. - Well, it's interesting that none of us talked about the Chinese models from a usage perspective. What does that say? Does it mean the Chinese models are not as good, or are we just very biased and US-focused? - I think currently there's a discrepancy between the model and the platform. The open models are more known for the open weights, not their platform yet. - Many companies will sell you open-model inference at a very low cost. With OpenRouter, it's easy to look at multi-model things. You can run DeepSeek on Perplexity. Sitting here, we're like, "We use OpenAI GPT-5 Pro consistently." We're all willing to pay for the marginal intelligence gain. These models from the US are better in terms of the outputs. I think the question is, will they stay better for this year and for years to come? 
As long as they're better, I'm gonna pay for them. There's also analysis showing that the way the Chinese models are served—you could argue this is due to export controls—is that they use fewer GPUs per replica, which makes them slower and gives them different error profiles. If speed and intelligence both favor one option, a lot of users in the US will go for it. And I think that will spur these Chinese companies to want to compete in other ways, whether it's free or substantially lower costs, or it'll breed creativity in terms of offerings, which is good for the ecosystem. But the simple thing is: the US models are currently better, and we use them. I've tried these other open models, and I'm like, "Fun, but I don't go back to it." - We didn't really mention programming. That's another use case that a lot of people deeply care about. I use basically half-and-half Cursor and Claude Code, because they're fundamentally different experiences and both are useful. What do you guys... You program quite a bit, so what do you use? What's the current vibe? - So, I use the Codeium plugin for VS Code. You know, it's very convenient. It's just a plugin, and then it's a chat interface that has access to your repository. I know that Claude Code is, I think, a bit different. It is a bit more agentic. It touches more things. It does the whole project for you. I'm not quite there yet where I'm comfortable with that, because maybe I'm a control freak, but I still would like to see a bit of what's going on. And Codeium is kind of, right now, for me, the sweet spot, where it is helping me but it is not taking over completely. - I should mention, one of the reasons I do use Claude Code is to build the skill of programming with English. I mean, the experience is fundamentally different. As opposed to micromanaging the details of the code generation process (looking at the diff, which you can do in Cursor if that's the IDE you use, changing and altering things, reading the code and understanding it deeply as you progress), you're just thinking in this design space and guiding it at a macro level, which I think is another way of thinking about the programming process. Also, we should say that Claude Code just seems to be somehow a better utilization of Claude Opus 4.5. - It's a good side-by-side for people to do. You can have Claude Code open, you can have Cursor open, you can have VS Code open, and you can select the same models on all of them and ask questions, and it's very interesting. Claude Code is way better in that domain. It's remarkable. - All right, we should say that both of you are legit on multiple fronts: researchers, programmers, educators, Tweeters. And on the book front, too. So Nathan, at some point soon, hopefully, has an RLHF book coming out. - It's available for preorder, and there's a full digital preprint. I'm just making it pretty and better organized for the physical thing, which is a lot of why I do it, because it's fun to create things that you think are excellent in physical form when so much of our life is digital. - I should say, going to Perplexity here: Sebastian Raschka is a machine learning researcher and author known for several influential books. A couple of them I wanted to mention—books I highly recommend—are Build a Large Language Model from Scratch and the new one, Build a Reasoning Model from Scratch. So, I'm really excited about that. 
Building stuff from scratch is one of the most powerful ways of learning. - Honestly, building an LLM from scratch is a lot of fun. It's also a lot to learn. And like you said, it's probably the best way to learn how something really works, 'cause you can look at figures, but figures can have mistakes. You can look at concepts and explanations, but you might misunderstand them. But if there is code, and the code works, you know it's correct. I mean, there's no misunderstanding. It's precise. Otherwise, it wouldn't work. And I think that's the beauty behind coding. It doesn't lie. It's math, basically. Even with math, though, I think you can have mistakes in a book that you would never notice, because you are not running the math when you are reading the book; you can't verify it. And with code, what's nice is you can verify it. - Yeah, I agree with you about the Build an LLM from Scratch book. It's nice to tune out everything else, the internet and so on, and just focus on the book. But, you know, I read several history books with an LLM alongside. It's just less lonely somehow. It's really more fun. Like, for example, on the programming front, I think it's genuinely more fun to program with an LLM. And I think it's genuinely more fun to read with an LLM. But you're right. That distraction should be minimized. So you use the LLM to basically enrich the experience, maybe add more context. I just find the rate of aha moments for me is really high with LLMs. - 100%. I also want to correct myself: I'm not suggesting not to use LLMs. I suggest doing it in multiple passes. Like, one pass just offline, focus mode, and then after that... I mean, I also take notes, but I try to resist the urge to immediately look things up. I do a second pass. It's just more structured this way. Sometimes things are answered in the chapter, but sometimes also it just helps to let it sink in and think about it. Other people have different preferences. I highly recommend using LLMs when reading books. For me, it's just not the first thing to do; it's the second pass. - My recommendation is the opposite. I like to use the LLM at the beginning to lay out the full context of what is this world that I'm now stepping into. But I try to avoid clicking out of the LLM into the world of Twitter and blogs, because then you're down this rabbit hole. You're reading somebody's opinion. There's a flame war about a particular topic, and all of a sudden you're in the realm of the internet and Reddit and so on. But if you're purely letting the LLM give you the context of why this matters, what are the big picture ideas... sometimes books are good at doing that, but not always. - This is why I like the ChatGPT app, because it gives the AI a home on your computer where you can focus on it, rather than just being another tab in my mess of internet options. And I think Claude Code does a good job of making that a joy, where it seems very engaging as a product design to be an interface from which your AI will then go out into the world. It's something intangible between it and Codex; Claude Code just feels warm and engaging, where Codex from OpenAI can often be as good, but it just feels a little bit rough around the edges. Whereas Claude Code makes it fun to build things from scratch, where you just trust that it'll make something. Obviously this is good for websites and kind of refreshing tooling and stuff like this, which I use it for, or data analysis. On my blog, we scrape Hugging Face, so we keep download numbers for every dataset and model 
over time, so we have them. And Claude was just like, "Yeah, I've made use of that data, no problem." And I was like, "That would've taken me days." And then I have enough situational awareness to be like, "Okay, these trends obviously make sense." You can check things. But that's just a wonderful interface where you can have an intermediary and not have to do the kind of awful low-level work that you would have to do to maintain different web projects. - All right. So we just talked about a bunch of the closed-weight models. Let's talk about the open ones. Tell me about the landscape of open LLM models. Which are interesting? Which stand out to you, and why? We already mentioned DeepSeek R1. - Do you wanna see how many we can name off the top of our heads? - Yeah, without looking at notes. - DeepSeek, Kimi, MiniMax, Z.ai, Moonshot. We're just going Chinese. - Let's throw in Mistral AI, Gemma, gpt-oss, the open-weight model by OpenAI. Actually, NVIDIA had a really cool one, Nemotron 3. There's a lot of stuff, especially at the end of the year. Qwen might be the one— - Oh, yeah. Qwen was the obvious name I was gonna say. You can get at least 10 Chinese and at least 10 Western. I think that OpenAI released their first open model since GPT-2. When I was writing about OpenAI's open model release, they were like, "Don't forget about GPT-2," which I thought was really funny 'cause it's just such a different time. But gpt-oss-120b is actually a very strong model and does some things that other models don't do very well. Selfishly, I'll promote a bunch of Western companies in the US and Europe that have these fully open models. I work at the Allen Institute for AI, where we've been building OLMo, which releases data and code. And now we have actual competition from people who are trying to release everything so that others can train these models. There's the Institute of Foundation Models/LLM360, which has had its K2 models of various types. Apertus is a Swiss research consortium. Hugging Face has SmolLM, which is very popular. And NVIDIA's Nemotron 3 has started releasing data as well. And then there's Stanford's Marin community project, which is kind of making it so there's a pipeline for people to open a GitHub issue, implement a new idea, and then have it run in a stable language modeling stack. This space, that list, was way smaller in 2024—I think it was just AI2. So it's a great thing for more people to get involved and to understand language models, and it doesn't really have a Chinese analog. While I'm talking, I'll say that the Chinese open language models tend to be much bigger, and that gives them higher peak performance as MoEs, whereas a lot of these models that we like a lot, whether it's Gemma or Nemotron, have tended to be smaller models from the US, which is starting to change in the US and Europe. Mistral Large 3 came out in December, which was a giant MoE model, very similar to the DeepSeek architecture. And then a startup, Arcee AI, and NVIDIA with Nemotron have teased MoE models way bigger than 100 billion parameters, like this 400-billion-parameter range, coming in this Q1 2026 timeline. So I think this kind of balance is set to change this year in terms of what people are using the Chinese versus US open models for, which I'm personally going to be very excited to watch. - First of all, huge props for being able to name so many of these. Did you actually name LLaMA? - No. - I feel like... - RIP. - This was not on purpose. - RIP LLaMA. All right. 
Can you mention some interesting models that stand out? You mentioned Qwen 3 is obviously a standout. - So I would say the year was almost bookended by DeepSeek V3 and R1 on one end and, on the other end, DeepSeek-V3.2 in December. Because what I like about those is they always have an interesting architecture tweak that others don't have. But otherwise, if you want to go with the familiar but really good performance, Qwen 3 and, like Nathan said, also gpt-oss-120b. And I think what's interesting about it is that it's kind of the first public or open-weight model that was really trained with tool use in mind, which I do think is kind of a paradigm shift, where the ecosystem was not quite ready for it. By tool use, I mean that the LLM is able to do a web search or to call a Python interpreter. And I do think it's a standout because it's a huge unlock. Because one of the most common complaints about LLMs is, for example, hallucinations, right? And so, in my opinion, one of the best ways to solve hallucinations is to not try to always remember information or make things up. For math, why not use a calculator app or Python? If I ask the LLM, "Who won the soccer World Cup in 1998?" instead of just trying to recall it from memory, it could go do a search. I think mostly it's still a Google search. So ChatGPT and gpt-oss-120b would do a tool call to Google, maybe find the FIFA website, and find that, okay, it was France. It would get you that information reliably instead of just trying to memorize it. So I think it's a huge unlock, which right now is not fully utilized yet by the open-source, open-weight ecosystem. A lot of people don't use tool-call modes because, I think, first, it's a trust thing. You don't want to run this on your computer where it has access to tools and could wipe your hard drive or whatever. So you want to maybe containerize that. But I do think that is a really important step for the upcoming years, to have this ability. - So a few quick things. First of all, thank you for defining what you mean by tool use. I think that's a great thing to do in general for the concepts we're talking about. Even things as well-established as MoEs: you have to say that means mixture of experts, and you kind of have to build up an intuition for people of what that means, how it's actually utilized, what the different flavors are. So what does it mean that there's such an explosion of open models? What's your intuition? - If you're releasing an open model, you want people to use it; that is the first and foremost thing. And then after that come things like transparency and trust. I think when you look at China, the biggest reason is that they want people around the world to use these models. If you look outside of the US, a lot of people will not pay for software, but they might have computing resources where you can put a model on them and run it. I think there can also be data that you don't want to send to the cloud. So the number one thing is getting people to use models, use AI, or use your AI, and they might not be able to do that without having access to the model. - I guess we should state explicitly: we've been talking about these Chinese models and open-weight models, and oftentimes the way they're run is locally. So it's not like you're sending your data to China, or to Silicon Valley, or to whoever developed the model. - A lot of American startups make money by hosting these models from China and selling them. 
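To make the tool-use loop Sebastian describes concrete, here is a minimal sketch in Python. Everything in it (the `generate` stub, the tool names, the message format) is hypothetical and stands in for a real model API and real search or sandboxed-execution backends; it is not any particular vendor's interface.

```python
# Minimal sketch of an LLM tool-use loop (hypothetical API, not a specific vendor's).
# The model either answers, or requests a tool; the host runs the tool and feeds the result back.

def web_search(query: str) -> str:
    # Placeholder: a real implementation would call a search API.
    return "1998 FIFA World Cup winner: France (beat Brazil 3-0 in the final)."

def run_python(code: str) -> str:
    # Placeholder: a real implementation should execute this in a sandbox/container,
    # exactly because of the trust issues mentioned above.
    return str(eval(code))  # illustration only; never eval untrusted code outside a sandbox

TOOLS = {"web_search": web_search, "run_python": run_python}

def generate(messages):
    # Placeholder for the actual LLM call. Here we fake one tool request, then a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "web_search", "arguments": "soccer world cup 1998 winner"}
    return {"answer": "France won the 1998 World Cup."}

def answer_with_tools(question: str, max_steps: int = 4) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        out = generate(messages)
        if "answer" in out:                                     # model is done
            return out["answer"]
        result = TOOLS[out["tool"]](out["arguments"])           # execute the requested tool
        messages.append({"role": "tool", "content": result})    # feed the observation back
    return "No answer within the step budget."

print(answer_with_tools("Who won the soccer World Cup in 1998?"))
```

The point is the control flow rather than the stub implementations: the model decides whether to answer or to ask for a tool, the host executes the tool, and the observation goes back into the context before the model is called again.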
Hosting and selling access like that is called selling tokens, which means somebody will call the model to do some piece of work. I think the other reason is, for US companies like OpenAI, they are so GPU-deprived. They're at the limits of their GPUs. Whenever they make a release, they're always talking about how "our GPUs are hurting." And I think during one of these gpt-oss-120b release sessions, Sam Altman said, "Oh, we're releasing this because we can use your GPUs. We don't have to use our GPUs, and OpenAI can still get distribution out of this," which is another very real thing, because it doesn't cost them anything. - And for the user, I think also, there are users who just use the model locally the way they would use ChatGPT. But also for companies, I think it's a huge unlock to have these models, because you can customize them, you can train them, you can add post-training, add more data. Like, specialize them into, let's say, law models, medical models, whatever you have. And the appeal of the open-weight models from China (you mentioned Llama) is that their licenses are even friendlier. I think they are just unrestricted open-source licenses, whereas if you use something like Llama or Gemma, there are some strings attached. I think it's like an upper limit in terms of how many users you have. And then if you exceed, I don't know, so-and-so many million users, you have to report your financial situation to, let's say, Meta or something like that. And I think while it is a free model, there are strings attached, and people do like things where strings are not attached. So I think that's also one of the reasons, besides performance, why the open-weight models from China are so popular: you can just use them. There's no catch in that sense. - The ecosystem has gotten better on that front, but mostly downstream of these new providers providing such open licenses. That was funny when you pulled up Perplexity and it said, "Kimi K2 Thinking hosted in the US." I've never seen this, but it's an exact example of what we're talking about, where people are sensitive to this. But Kimi K2 Thinking and Kimi K2 is a model that is very popular. People say it has very good creative writing and is also good at doing some software things. So it's just these little quirks that people pick up on with different models that they like. - What are some interesting ideas that some of these models have explored that you can speak to, that are particularly interesting to you? - Maybe we can go chronologically. I mean, there was, of course, DeepSeek. DeepSeek R1 came out in January of 2025, if we just focus on 2025. However, this was based on DeepSeek-V3, which came out the year before, in December 2024. There are multiple things on the architecture side. What is fascinating is... I mean, that's what I do with my from-scratch coding projects. You can still start with GPT-2, and you can add things to that model to make it into this other model. So it's all still kind of the same lineage. It is a very close relationship between those. But top of my head, DeepSeek—what was unique there was the Mixture of Experts. Not that they invented Mixture of Experts—we can maybe talk a bit more about what Mixture of Experts means—but just to list these things first before we dive into detail. Mixture of Experts, but then they also had Multi-head Latent Attention, which is a tweak to the attention mechanism, and this was, I would say, the main distinguishing factor between these open-weight models. 
Different tweaks to make inference and the KV cache size... we can also define the KV cache in a few moments, but tweaks to make it more economical to have long context, to shrink the KV cache size. So what are the tweaks that we can do? Most of them focus on the attention mechanism. There is Multi-head Latent Attention in DeepSeek. There is Grouped Query Attention, which is still very popular. It's not invented by any of those models; it goes back a few years. But that would be the other option. Sliding window attention—I think OLMo 3 uses it, if I remember correctly. So there are these different tweaks that make the models different. Otherwise, I put them all together in an article once where I just compared them. They are, surprisingly, very similar. It's just different numbers in terms of how many repetitions of the transformer block you have in the center, and just little knobs that people tune. But what's so nice about it is it works no matter what. You can tweak things. You can move the normalization layers around to get some performance gains. And OLMo is always very good with ablation studies, showing what it actually does to the model if you move something around. Ablation studies: does it make it better or worse? But there are so many, let's say, ways you can implement a transformer and make it still work. The big ideas that are still prevalent are Mixture of Experts, Multi-head Latent Attention, sliding window attention, and Grouped Query Attention. And then at the end of the year, we saw a focus on making the attention mechanism scale linearly with inference token prediction. So there was Qwen3-Next, for example, which added a Gated DeltaNet. It's kind of inspired by state space models, where you have a fixed state that you keep updating. But it essentially makes attention cheaper, or it replaces attention with a cheaper operation. - And it may be useful to step back and talk about the transformer architecture in general. - Yeah, so maybe we should start with the GPT-2 architecture, the transformer that was derived from the "Attention Is All You Need" paper. The "Attention Is All You Need" paper had a transformer architecture that had two parts, an encoder and a decoder. And GPT just focused on the decoder part. It is essentially still a neural network, and it has this attention mechanism inside. And you predict one token at a time. You pass the input through an embedding layer, and then there's the transformer block. The transformer block has attention modules and a fully connected layer, and there are some normalization layers in between. But it's essentially neural network layers with this attention mechanism. So coming from GPT-2, when we move on to gpt-oss-120b, there is, for example, the Mixture of Experts layer. It's not invented by gpt-oss-120b; it's a few years old. But it is essentially a tweak to make the model larger without consuming more compute in each forward pass. So there is this fully connected layer, and if listeners are familiar with multi-layer perceptrons, you can think of a mini multi-layer perceptron, a fully connected neural network layer, inside the transformer. And it's very expensive, because it's fully connected. If you have a thousand inputs and a thousand outputs, that's like one million connections. And it's a very expensive part of this transformer. And the idea is to expand that into multiple feed-forward networks. 
So instead of having one, let's say you have 256. That alone would make it way more expensive, because now you have 256 of them, but the trick is that you don't use all of them at the same time. So you now have a router that says, "Okay, based on this input token, it would be useful to use this fully connected network." And in that context, it's called an expert. So a Mixture of Experts means you have multiple experts. And depending on what your input is, let's say it's more math-heavy, it would use different experts compared to, let's say, translating input text from English to Spanish, where it would maybe consult different experts. It's not quite as clear-cut as saying, "Okay, this is only an expert for math and this one for Spanish." It's a bit more fuzzy. But the idea is essentially that you pack more knowledge into the network, but not all the knowledge is used all the time. That would be very wasteful. So, during token generation, you are more selective. There's a router that selects which tokens should go to which expert. It adds more complexity. It's harder to train. There's a lot that can go wrong, like collapse and so on. So I think that's why OLMo 3 still uses dense... I mean, there are OLMo models with Mixture of Experts, but dense models, where dense means... So again, it's jargon. There's a distinction between dense and sparse. Mixture of Experts is considered sparse, because we have a lot of experts, but only a few of them are active. That's called sparse. And then dense would be the opposite, where you only have one fully connected module, and it's always utilized. - So maybe this is a good place to also talk about the KV cache. But actually, before that, even zooming out: fundamentally, how many new ideas have been implemented from GPT-2 to today? Like, how different really are these architectures? - Take the Mixture of Experts. The attention mechanism in gpt-oss-120b would be the Grouped Query Attention mechanism. So it's a slight tweak from Multi-Head Attention to Grouped Query Attention. And then I think they replaced LayerNorm with RMSNorm, but that's just a different normalization there, not a big change. It's just a tweak. The nonlinear activation function—for people familiar with deep neural networks, it's the same as swapping sigmoid for ReLU. It's not changing the network fundamentally. It's just a little tweak. And that's about it, I would say. It's not really fundamentally that different. It's still the same architecture. So you can go from one into the other by just adding these changes, basically. - It fundamentally is still the same architecture. - Yep. For example, you mentioned my book earlier. That's a GPT-2 model in the book, because it's simple and it's very small, approximately 124 million parameters. But in the bonus materials, I do have OLMo from scratch, Gemma 3 from scratch, and other types of from-scratch models. And I always start with my GPT-2 model and just tweak the—well, add different components, and you get from one to the other. It's kind of like a lineage in a sense. - Can you build up an intuition for people? Because when you zoom out, you look at it, there's so much rapid advancement in the AI world, and at the same time, fundamentally, the architectures have not changed. So where is all the turbulence, the turmoil of the advancement happening? Where are the gains to be had? - So there are different stages where you develop the network or train the network. You have the pre-training. 
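A minimal sketch of the sparse Mixture of Experts routing described above, assuming PyTorch; the sizes, the number of experts, and the top-k choice are illustrative, not any particular model's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expert(nn.Module):
    """One 'expert': just the usual fully connected (feed-forward) block."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))

    def forward(self, x):
        return self.net(x)

class SparseMoE(nn.Module):
    """Sparse MoE: many experts exist, but only top_k are active per token."""
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([Expert(d_model, d_hidden) for _ in range(num_experts)])
        self.router = nn.Linear(d_model, num_experts)   # scores each expert per token
        self.top_k = top_k

    def forward(self, x):                               # x: (num_tokens, d_model)
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the best-scoring experts
        weights = F.softmax(weights, dim=-1)            # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):                     # route each token only through its chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

moe = SparseMoE(d_model=64, d_hidden=256)
tokens = torch.randn(10, 64)
print(moe(tokens).shape)                                # torch.Size([10, 64])
total = sum(p.numel() for p in moe.experts.parameters())
active = total // len(moe.experts) * moe.top_k          # rough: only top_k experts' weights run per token
print(f"total expert params: {total}, roughly active per token: {active}")
```

The total parameter count grows with the number of experts, but each token only passes through its top-k experts, which is the dense-versus-sparse distinction drawn above.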
Back in the day, with GPT-2, it was just pre-training. Now you have pre-training, mid-training, and post-training. I think right now we are in the post-training-focused stage. Pre-training still gives you advantages if you scale it up with better, higher-quality data. But then we have capability unlocks that were not there with GPT-2. For example, ChatGPT is basically a GPT-3 model, and GPT-3 is the same as GPT-2 in terms of architecture. What was new was adding supervised fine-tuning and reinforcement learning from human feedback. So it's more on the algorithmic side than the architecture. - I would say that the systems also change a lot. If you listen to NVIDIA's announcements, they talk about things like, "You can now do FP8, you can now do FP4." What's happening is these labs are figuring out how to utilize more compute to put into one model, which lets them train faster and put more data in. And then you can find better configurations faster by doing this. So you can look at, essentially, tokens per second per GPU as a metric when you're doing large-scale training. You can go from 10k to 13k by turning on FP8 training, which means you're using less memory per parameter in the model. By saving less information, you do less communication and train faster. So all of these systems things underpin way faster experimentation on data and algorithms. It's a loop that keeps going, and it's hard to describe when you look at architectures and they're exactly the same, but the codebase used to train these models is vastly different. The GPUs are different, but you could probably train gpt-oss-20b way faster in wall-clock time than GPT-2 was trained at the time. - Yeah. Like you said, they had, for example, in the Mixture of Experts, this FP4 optimization where you get more throughput. This is true for speed, but it doesn't give the model new capabilities. It's just: how much can we make the computation coarser without suffering model performance degradation? But I do think... I mean, there are alternatives popping up to the transformer. Text diffusion models are a completely different paradigm (although text diffusion models might use transformer architectures internally, they're not autoregressive transformers). And there are also Mamba models, which are state space models. But they do have trade-offs, and nothing has yet replaced the autoregressive transformer as the state-of-the-art model. For state-of-the-art, you would still go with that, but there are now alternatives at the cheaper end, alternatives that are kind of making compromises. It's not just one architecture anymore. There are little ones coming up. But if we talk about the state of the art, it's pretty much still the transformer architecture, autoregressive, derived from GPT-2, essentially. - I guess the big question here is, we talked quite a bit about the architecture behind the pre-training. Are the scaling laws holding strong across pre-training, post-training, inference, context size, data, and synthetic data? - I'd like to start with the technical definition of a scaling law, which informs all of this. The scaling law is a power-law relationship: you can think of the x-axis, what you are scaling, as a combination of compute and data, which are kind of similar, and the y-axis as the held-out prediction accuracy over next tokens. We talked about models being autoregressive. 
It's like, if you keep a set of text that the model has not seen, how accurate does it get as you train? And the idea of scaling laws came when people figured out that this was a very predictable relationship. I think that technical result is continuing to hold, and then the question is, what do users get out of it? Then there are more types of scaling, where OpenAI's o1 was famous for introducing inference time scaling, and I think less famously for also showing that you can scale reinforcement learning training and get this log x-axis and then a linear increase in performance on the y-axis. So there are kind of these three axes now: the traditional scaling laws that are talked about for pre-training, which is how big your model is and how big your dataset is; then scaling reinforcement learning, which is how long you can do this trial-and-error learning that we'll talk about, and we'll define more of this; and then this inference time compute, which is just letting the model generate more tokens on a specific problem. So I'm kind of bullish; they're all really still working, but the low-hanging fruit has mostly been taken, especially in the last year, on reinforcement learning with verifiable rewards, which is this RLVR, and then on inference time scaling, which is why these models feel so different to use, where previously you would get that first token immediately, and now they'll go off for seconds, minutes, or even hours, generating these hidden thoughts before giving you the first word of your answer. And that's all about this inference time scaling, which is such a wonderful kind of step function in terms of how the models' abilities changed. It kind of enabled this tool use stuff and enabled this much better software engineering that we were talking about. And this, when we say enabled, is almost entirely downstream of the fact that this reinforcement learning with verifiable rewards training just kind of lets the models pick up these skills very easily. So if you look at the reasoning process when the models are generating a lot of tokens, what they'll often be doing is: try a tool, look at what comes back; try another API, see what comes back and whether it solves the problem. So the models, when you're training them, very quickly learn to do this. And then at the end of the day, that gives this kind of general foundation where the model can use CLI commands very nicely in your repo and handle Git for you and move things around and organize things, or search to find more information, which, if we were sitting in these chairs a year ago, is something that we didn't really think of the models doing. So this is just something that has happened this year and has totally transformed how we think of using AI, and it unlocks so much value. But it's not clear what the next avenue will be in terms of unlocking stuff like this. I think there's... we'll get to continual learning later, but there's a lot of buzz around certain areas of AI, and no one knows when the next step function will really come. - So you've actually said quite a lot of things there, and said profound things quickly. It would be nice to unpack them a little bit. You say you're bullish basically on every version of scaling. So can we just even start at the beginning? 
Pre-training: are we kind of implying that the low-hanging fruit on pre-training scaling has been picked? Has pre-training hit a plateau, or is even pre-training still something you're bullish on? - Pre-training has gotten extremely expensive. I think scaling up pre-training also implies that you're gonna serve a very large model to the users. So I think that it's been loosely established that the likes of GPT-4 and similar models were around one trillion parameters at the biggest size. There are a lot of rumors that they've actually gotten smaller as training has gotten more efficient. You want to make the model smaller because then your costs of serving go down proportionately. For these models, the cost of training them is really low relative to the cost of serving them to hundreds of millions of users. I think DeepSeek had this famous number of about five million dollars for pre-training at cloud market rates. In OLMo 3, section 2.4 in the paper, we just detailed how long we had the GPU clusters sitting around for training, which includes engineering issues, multiple seeds, and it was about two million dollars to rent the cluster and deal with all the headaches of training a model. So these models are pretty... like, a lot of people could get one to 10 million dollars to train a model, but the recurring cost of serving millions of users is really billions of dollars of compute. You can look at a thousand-GPU rental that you can pay 100 grand a day for. And these companies could have millions of GPUs. Like, you can look at how much these things cost just to sit around. So that's kind of a big thing, and then it's like, if scaling is actually giving you a better model, is it gonna be financially worth it? And I think we'll slowly push it out as AI solves more compelling tasks, like Claude Opus 4.5 making Claude Code just work for things. I launched this project called the ATOM Project, which is American Truly Open Models, in July, and that was a truly vibe-coded website, and, like, I have a job where I make plots and stuff. And then I came back to refresh it in the last few weeks, and Claude Opus 4.5, versus whatever model I was using at the time, just crushed all the issues that it had from building in June and July. And, like, it might be a bigger model. There are a lot of things that go into this, but there's still progress coming. - So what you're speaking to is the nuance of the y-axis of the scaling laws—the way it's experienced versus on a benchmark; the actual intelligence might be different. But still, your intuition about pre-training: if you scale the size of compute, will the models get better? Not whether it's financially viable, but just from the law aspect of it, do you think the models will get smarter? - Yeah. And this sometimes comes off as almost like disillusionment when people at the leadership of AI companies say this, but they're like, "It's held for 13 orders of magnitude of compute, why would it ever end?" So I think fundamentally it is pretty unlikely to stop; it's just that eventually we're not even gonna be able to test the bigger scales because of all the problems that come with more compute. I think there's a lot of talk about how 2026 is a year when very large Blackwell compute clusters, like gigawatt-scale facilities at hyperscalers, are coming online. These were all contracts for power and data centers that were signed and sought out in 2022 and 2023. So before or right after ChatGPT. 
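As a rough back-of-the-envelope on the rental and serving figures mentioned above, here is a short sketch using only the round numbers quoted in the conversation, not any real vendor's pricing:

```python
# Back-of-the-envelope GPU rental math using the round numbers from the conversation
# ($100k/day for a 1,000-GPU cluster); real prices vary widely by vendor and contract.
gpus = 1_000
cost_per_day = 100_000                                   # USD, quoted figure for the whole cluster
cost_per_gpu_hour = cost_per_day / gpus / 24
print(f"~${cost_per_gpu_hour:.2f} per GPU-hour")         # ~$4.17

# A ~$2M training budget (the OLMo 3 cluster figure) buys roughly this many cluster-days:
print(f"~{2_000_000 / cost_per_day:.0f} cluster-days on 1,000 GPUs")   # ~20 days

# Serving at hyperscaler scale: the same rate applied to 1,000,000 GPUs for a year
yearly = cost_per_day / gpus * 1_000_000 * 365
print(f"~${yearly / 1e9:.1f}B per year for 1M GPUs")     # ~$36.5B, i.e. "billions of dollars of compute"
```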
These bigger clusters to train the models took a two-to-three-year lead time to build, and there's obviously immense interest in building even more data centers than that. So that is the crux of what people are saying: these new clusters are coming. The labs are gonna have more compute for training. They're going to utilize this, but it's not a given. I've seen so much progress that I expect it, and I expect a little bit bigger models, and I expect... I would say it's more like we'll see a $2,000 subscription this year. We've seen $200 subscriptions. That could 10X again, and these are the kinds of things that could come, and they're all downstream of this bigger model that offers just a little bit more cutting edge. - So, you know, it's reported that xAI is gonna hit that one-gigawatt scale in early '26, and a full two gigawatts by year end. How do you think they'll utilize that in the context of scaling laws? Is a lot of that inference? Is a lot of that training? - It ends up being all of the above. I think that all of your decisions when you're training a model come back to pre-training. So if you're going to scale RL on a model, you still need to decide on an architecture that enables this. We were talking about other architectures and using different types of attention, or mixture of experts models. The sparse nature of MoE models makes it much more efficient to do generation, which becomes a big part of post-training, and you need to have your architecture ready so that you can actually scale up this compute. I still think most of the compute is going in at pre-training. Because you can still make a model better, you still want to go and revisit this. You still want the best base model you can get. And in a few years that'll saturate, and the RL compute will just go longer. - Are there people who disagree with you and say pre-training is dead? That it's all about scaling inference, scaling post-training, scaling context, continual learning, scaling data, synthetic data? - People vibe that way and describe it in that way, but I think that's not the practice that is happening. - It's just the general vibe, people saying this thing is dead. - The excitement is elsewhere. The low-hanging fruit in RL is elsewhere. For example, we released our model in November. Every company has deadlines; our deadline was November 20th, and for that, our RL run was five days, which compared to 2024 is a very long time to just be doing post-training on a model of 30 billion parameters. It's not a big model. And then in December, we had another release, where we let the RL run for another three and a half weeks, and the model got notably better, so we released it. And that's a long time to just allocate to something that is going to be your peak release for the year. So there are these types of decisions when training a model where you just... you can't leave it forever. You have to keep pulling in the improvements from researchers. So you redo pre-training, you'll do this post-training for a month, but then you need to give it to your users. You need to do safety testing. So I think there's a lot in place that reinforces this cycle of updating the models. Things improve. You get a new compute cluster that lets you do something more stably or faster. You hear a lot about Blackwell having rollout issues, for example; at AI2, most of the models we're pre-training are on 1,000 to 2,000 GPUs. 
But when pre-training on 10,000 or 100,000 GPUs, you hit very different failures. GPUs break in weird ways, and on a 100,000 GPU run, you're pretty much guaranteed to have one GPU that is down. Your training code must handle that redundancy, which is a very different problem. Whereas what we're doing, like playing with post-training on a cluster, or what people learning ML are doing, is very different; what they're battling to train these biggest models is just mass distributed scale. - But that's somewhat different. That's a systems problem. - It's a systems problem in order to enable scaling laws, especially at pre-training. You need all these GPUs at once. When we shift to RL, it actually lends itself to heterogeneous compute because you have many copies of the model. To do a primer for language model reinforcement learning, what you're doing is having two sets of GPUs. One you can call the actor, and one you call the learner. The learner is where your actual reinforcement learning updates happen. These are traditionally policy gradient algorithms. Proximal Policy Optimization, PPO, and Group Relative Policy Optimization, GRPO, are the two popular classes. And on the other side you have actors which are generating completions, and these completions are what you're going to grade. Reinforcement learning is all about optimizing reward. In practice, you can have a lot of different actors in different parts of the world doing different types of problems, and then you send it back to this highly networked compute cluster to do the actual learning where you take the gradients. You need to have a tightly meshed network to do different types of parallelism and spread out your model for efficient training. Every different type of training and serving has these considerations to scale. We talked about pre-training and RL, and then inference-time scaling: how do you serve a model that's thinking for an hour to 100 million users? I don't know about that, but I know that's a hard problem. In order to give people this intelligence, there's all these systems problems, and we need more compute and more stable compute to do it. - But you're bullish on all of these kinds of scaling is what I'm hearing. On the inference, on the reasoning, even on the pre-training? - Yeah, so that's a big can of worms here, but there are basically two knobs where you can get gains: training and inference scaling. In a world where we had, let's say, infinite compute resources, you want to do all of them. So you have training, you have inference scaling, and training is like a hierarchy: it's pre-training, mid-training, post-training. Changing the model size, adding more training data, training a bigger model gives you more knowledge in the model. Then, let's say, you have a better base model. Back in the day, or still, we call it a foundation model, and it unlocks... But you don't, let's say, have the model be able to solve your most complex tasks during pre-training or after pre-training. You still have these other unlock phases where you have mid-training or, for example, post-training with RL that unlocks capabilities that the model has in terms of knowledge from the pre-training. And I think, sure, if you do more pre-training, you get a better base model that you can unlock later. But like Nathan said, it just becomes too expensive. We don't have infinite compute, so you have to decide, do I want to spend that compute more on making the model larger? It's like a trade-off.
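To make the actor/learner split above concrete, here is a minimal, purely structural sketch. The generate and grade functions are stubs, the policy is a plain dict, and none of this reflects any lab's actual infrastructure; the point is only that actors produce graded completions asynchronously while the learner consumes them in batches to take gradient-style updates.

```python
# Toy actor/learner loop for language-model RL, in the spirit of the
# description above. The "policy" is a stand-in object; generate() and
# grade() are stubs for real model inference and verifiable-reward checks.
import random
from queue import Queue
from threading import Thread

def generate(policy, prompt):
    # Stub: an actor samples a completion from the current policy.
    return f"completion-for-{prompt}-v{policy['version']}"

def grade(prompt, completion):
    # Stub: a verifier scores the completion (e.g. 1.0 if the answer is right).
    return random.choice([0.0, 1.0])

def actor(policy, prompts, rollouts: Queue):
    # Actors can run on separate, loosely connected machines: they only
    # need the latest policy weights and a place to send graded rollouts.
    for prompt in prompts:
        completion = generate(policy, prompt)
        rollouts.put((prompt, completion, grade(prompt, completion)))

def learner(policy, rollouts: Queue, num_updates, batch_size):
    # The learner sits on the tightly networked cluster: it batches rollouts
    # and takes the actual policy-gradient-style updates.
    for step in range(num_updates):
        batch = [rollouts.get() for _ in range(batch_size)]
        mean_reward = sum(r for _, _, r in batch) / len(batch)
        policy["version"] += 1  # stand-in for a gradient update
        print(f"update {step}: mean reward {mean_reward:.2f}")

policy = {"version": 0}
rollouts = Queue()
prompts = [f"problem-{i}" for i in range(32)]
Thread(target=actor, args=(policy, prompts, rollouts)).start()
learner(policy, rollouts, num_updates=4, batch_size=8)
```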
In an ideal world, you want to do all of them. And I think in that sense, scaling is still pretty much alive. You would still get a better model, but like we saw with Claude Opus 4.5, it's just not worth it. Because you can unlock more performance with other techniques at the current moment, especially if you look at inference scaling. That's one of the biggest gains this year with o1, where it took a smaller model further than pre-training a larger model like Claude Opus 4.5. So I wouldn't say pre-training scaling is dead, it's just that there are other more attractive ways to scale right now. But at some point, you will still want to make some progress on the pre-training. The thing also to consider is where you want to spend your money. If you spend it more on the pre-training, it's like a fixed cost. You train the model, and then it has this capability forever. You can always use it. With inference scaling, you don't spend money during training, you spend money later per query, and then it's also like math. How long is my model gonna be on the market if I replace it in half a year? Maybe it's not worth spending $5 million, $10 million, $100 million on training it longer. Maybe I will just do more inference scaling and get performance there. It maybe costs me $2 million in terms of user queries. It becomes a question of how many users you have and doing the math, and I think that's also where it's interesting, the position ChatGPT is in. I think they have a lot of users, so they need to go a bit cheaper, which is why they have that GPT-5 model that is a bit smaller. Other companies... let's say your customers have other trade-offs. For example, there was also the Math Olympiad or some of these math problems where they had a proprietary model, and I'm pretty sure it's just a model that has been fine-tuned a little bit more, but most of it was inference scaling to achieve peak performance in certain tasks. You don't need that all the time. But yeah, long story short, I do think all of these, pre-training, mid-training, post-training, inference scaling, are still things you want to do. It's just finding, at the moment, in this year, the right ratio that gives you the best bang for the buck, basically. - I think this might be a good place to define pre-training, mid-training, and post-training. - So, pre-training is the classic training, one next-token prediction at a time. You have a big corpus of data. And Nathan probably also has very interesting insights there because of OLMo 3. A big portion of the paper focuses on the right data mix. So, pre-training is essentially just, you know, training with a cross-entropy loss on next-token prediction over a vast corpus of internet data, books, papers and so forth. It has changed a little bit over the years in the sense that people used to throw in everything they could. Now, it's not just raw data. It's also synthetic data where people, let's say, rephrase certain things. So synthetic data doesn't necessarily mean purely AI-made data. It's also taking something from an article, a Wikipedia article, and then rephrasing it as a Q&A, summarizing it, rewording it, and making better data that way. Because I think of it also like with humans. If someone, let's say, reads a book compared to a messy—no offense, but like—Reddit post or something like that, I do think you learn— - There's going to be a post about this, Sebastian. - Some Reddit data is very coveted and excellent for training.
You just have to filter it. - And I think that's the idea. If someone took that and rephrased it in a, let's say, more concise and structured way, I think it's higher quality data that gets the LLM there faster. You get the same LLM out of it at the end, but it trains faster because if the grammar and the punctuation are correct, it already learns the correct way, versus getting information from a messy source and then learning later how to correct that. So, I think that is how pre-training evolved and why scaling still works. It's not just about the amount of data, it's also the tricks to make that data better for you, in a sense. And then mid-training is... I mean, it used to be called pre-training. I think it's called mid-training because it was awkward to have pre-training and post-training but nothing in the middle, right? It sounds a bit weird. You have pre-training and post-training, but what's the actual training? So, mid-training is usually similar to pre-training, but it's a bit more specialized. It's the same algorithm, but what you do is you focus, for example, on long-context documents. The reason you don't do that during pre-training is because you don't have that many long-context documents, so you have a specific phase for it. And one problem of LLMs is still that it's a neural network. It has the problem of catastrophic forgetting. So, you teach it something, it forgets other things. And you wanna... I mean, it's not 100% forgetting, but it's like "no free lunch." It's the same with humans. If you ask me some math I learned 10 years ago, I would have to look at it again. - Nathan was actually saying that he's consuming so much content that there's a catastrophic forgetting issue. - Yeah, I'm trying to learn so much about AI, and it's like I was learning about pre-training parallelism. I'm like, "I lost something and I don't know what it was." - I don't want to anthropomorphize LLMs, but it's the same kind of thing in how humans learn. I mean, quantity is not always better because you have to be selective. And mid-training is being selective in terms of quality content at the end. So the last thing the LLM has seen is the quality stuff. And then post-training is all the fine-tuning: supervised fine-tuning, DPO, reinforcement learning with verifiable rewards (RLVR), reinforcement learning from human feedback, and so forth. So the refinement stages. And it's also interesting, it's a cost thing. You spend a lot of money on pre-training right now. RL a bit less. With RL, you don't really teach it knowledge. It's more like unlocking the knowledge; it's more like skill learning, like how to solve problems with the knowledge that it has from pre-training. There were actually three papers this year, or last year, 2025, on RL for pre-training. But I don't think anyone does that in production. - Toy examples for now. - Toy examples, right? But to generalize, RL post-training is more like the skill unlock, where pre-training is like soaking up the knowledge. - A few things that could be helpful. A lot of people think of synthetic data as being bad for training the models. You mentioned DeepSeek; they released an OCR model—OCR being Optical Character Recognition. A lot of labs did it. Ai2 had one, Meta had multiple. And the reason each of these labs has these is because there are vast amounts of PDFs and other digital documents on the web that aren't in formats that are encoded with text easily.
So you use these OCR models, DeepSeek OCR or what we called ours, to extract trillions of tokens of candidate data for pre-training. Pre-training dataset size is measured in trillions of tokens. Smaller models from researchers can be something like five to 10 trillion. Qwen is documented as going up to 50 trillion, and there are rumors that these closed labs can go to 100 trillion tokens. Getting this potential data to put in—they have a very big funnel, and the data you actually train on is a small percentage of this. This character recognition data would be described as synthetic data for pre-training in a lab. And then there's also the fact that ChatGPT now gives wonderful answers, and you can train on those best answers, and that's synthetic data. It's very different than early ChatGPT, with lots of hallucinated data, which is when people's views on synthetic data became grounded. - One interesting question is, if I recall correctly, OLMo 3 was trained with less data than some other open-weight models, maybe even OLMo 2. But you still got better performance, and that might be one example of how the data helped. - It's mostly down to data quality. I think if we had more compute, we would train for longer. I think we'd ultimately see that as something we would want to do. And especially with big models, you need more compute, because we talked about having more parameters and we talked about knowledge. Essentially, there's a ratio where big models can absorb more from data, and then you get more benefit out of this. The logarithmic graph in your mind is that a small model will level off sooner if you're measuring tons of tokens, and bigger models need more. But mostly, we aren't training models that big right now at AI2, and getting the highest quality data we can is the natural starting point. - Is there something to be said about the topic of data quality? Is there some low-hanging fruit there still where the quality could be improved? - It's like turning the crank. Historically, in the open, there's been a canonical best pre-training dataset that has moved around between who has the most recent one or the best recent effort. Like AI2's Dolma was very early with the first OLMo, and Hugging Face had FineWeb. And there's the DCLM project, which stands for DataComp for Language Models; there's been DataComp for other machine learning projects, and they had a very strong dataset. And a lot of it is that the internet is becoming fairly closed off, so we have Common Crawl, which is hundreds of trillions of tokens, and you filter it. It looks like scientific work where you're training classifiers and making decisions based on how you prune down this dataset into the highest quality stuff and the stuff that suits your tasks. Previously, language models were tested a lot more on knowledge and conversational things, but now they're expected to do math and code. To train a reasoning model, you need to remix your whole dataset. And there are a lot of wonderful scientific methods here where you can take your gigantic dataset, sample really small subsets from different sources, such as GitHub, Stack Exchange, Reddit, Wikipedia, train small models on each of these mixes, and measure their performance on your evaluations. You can just do basic linear regression, and it's like, "Here's your optimal dataset." But if your evaluations change, your dataset changes a lot.
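To make the mixing procedure above concrete, here is a minimal sketch under toy assumptions: a handful of small-scale runs, each with a different mix of sources and an eval score, a basic linear regression from mix proportions to score, and a grid search for the predicted-best mix. The source names and numbers are invented for illustration; real pipelines use many more sources and runs, plus careful extrapolation to larger scales.

```python
# Toy version of regression-based data-mix selection. Each row of `mixes`
# is the fraction of training tokens drawn from each source in a small-scale
# run; `scores` are the eval results of the small models trained on them.
# All values are invented for illustration.
import itertools
import numpy as np

sources = ["web", "code", "math", "wiki"]
mixes = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.30, 0.20, 0.10],
    [0.50, 0.20, 0.10, 0.20],
    [0.30, 0.40, 0.20, 0.10],
    [0.60, 0.10, 0.20, 0.10],
])
scores = np.array([0.41, 0.49, 0.44, 0.47, 0.45])  # e.g. average benchmark accuracy

# Fit score ~ w . mix + b with ordinary least squares.
X = np.hstack([mixes, np.ones((len(mixes), 1))])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

# Score every candidate mix on a coarse grid and keep the predicted best.
grid = [m for m in itertools.product(np.arange(0.0, 1.05, 0.1), repeat=len(sources))
        if abs(sum(m) - 1.0) < 1e-9]
best = max(grid, key=lambda m: np.dot(list(m) + [1.0], coef))
print(dict(zip(sources, np.round(best, 2))))
```

If the evaluation suite changes, the scores change, and so does the "optimal" mix the regression hands back, which is the point made above.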
So a lot of OLMo 3 was new sources for reasoning, to be better at math and code, and then you do this mixing procedure and it gives you the answer. I think that's happened at labs this year; there are new hot things, whether it's coding environments or web navigation, and you need to bring in new data and change your whole pre-training so that your post-training can work better. And that's the constant evolution and the redetermining of what they care about for their models. - Are there fun anecdotes of what sources of data are particularly high quality that we wouldn't expect? You mentioned Reddit sometimes can be a source. - Reddit was very useful. I think PDFs are definitely one. - Oh, especially arXiv. - Yeah, so AI2 has run Semantic Scholar for a long time, which you could say is a competitor to Google Scholar with a lot more features. And to do this, AI2 has found and scraped a lot of PDFs for openly accessible papers that might not be behind the closed walled garden of a certain publisher. So, truly open scientific PDFs. And if you sit on all of these and you process them, you can get value out of it. And I think a lot of that style of work the frontier labs did much earlier. You just need to have a pretty skilled researcher that understands how things change models; they bring it in, clean it, and it's a lot of labor. When frontier labs scale researchers, a lot more goes into data. If you join a frontier lab and you want to have impact, the best way to do it is just to find new data that's better. And then, the fancy, glamorous algorithmic things like figuring out how to make o1 are like the sexiest thought for a scientist. It's like, "Oh, I figured out how to scale RL." There's a group that did that, but most of the contribution is like— - On the dataset. - ..."I'm gonna make the data better," or, "I'm gonna make the infrastructure better so everyone on my team can run experiments 5% faster." - At the same time, I think it's also one of the most closely guarded secrets, what your training data is, for legal reasons. And so there's also, I think, a lot of work that goes into hiding what your training data was, essentially. Like training the model to not give away the sources because you have legal reasons. - The other thing, to be complete, is that some people are trying to train on only licensed data, whereas Common Crawl is a scrape of the whole internet. So if I host multiple websites, I'm happy to have them train language models, but I'm not explicitly licensing what governs it. And therefore, Common Crawl is largely unlicensed, which means that your consent really hasn't been provided for how to use the data. There's another idea where you can train language models only on data that has been licensed explicitly, so that the kind of governing contract is provided, and I'm not sure if Apertus is the copyright thing or the license thing. I know that the reason they did it was for an EU compliance thing, where they wanted to make sure that their model fit one of those checks. - On that note, there's also a distinction in licensing. Some people just purchase the license. Let's say they buy an Amazon Kindle book, or a Manning book, and then use that in training. That is a gray zone 'cause you paid for the content and you might want to train on it. But then there are also restrictions where even that shouldn't be allowed. And so that is where it gets a bit fuzzy. And yeah, I think that is right now still a hot topic.
Big companies like OpenAI have approached private companies for their proprietary data, and private companies are becoming more and more, let's say, protective of their data because they know, "Okay, this is going to be my moat in a few years." And I do think that's the interesting question, where if LLMs become more commoditized, and as more people learn about LLMs, there will be a lot more people able to train LLMs. Of course, there are infrastructure challenges. But if you think of big industries like the pharmaceutical, law, and finance industries, I do think they, at some point, will hire people from the frontier labs to build their in-house models on their proprietary data, which will then be, again, another unlock with pre-training that is currently not there. Because even if you wanted to, you can't get that data. You can't get access to clinical trials most of the time and these types of things. So, I do think scaling, in that sense, might still be pretty much alive if you also look at domain-specific applications, because right now, in this year, we are still just looking at general-purpose LLMs on ChatGPT, Anthropic and so forth. They are just general purpose; they're not even, I think, scratching the surface of what an LLM can do if it is really specifically trained and designed for a specific task. - I think on the data thing—this is one of the things that happened in 2025, and we totally forget it—is Anthropic lost in court and owed $1.5 billion to authors. Anthropic, I think, bought thousands of books and scanned them and was cleared legally for that because they bought the books, and that is kind of going through the system. And then on the other side, they also torrented some books, and I think this torrenting was the path where the court said that they were culpable to pay these billions of dollars to authors, which is just such a mind-boggling lawsuit that kind of just came and went. That is so much money... from the VC ecosystem. - These are court cases that will define the future of human civilization, because it's clear that data drives a lot of this, and there's this very complicated human tension of... I mean, you can empathize. You're both authors. And there's some degree to which, I mean, you put your heart and soul and your sweat and tears into the writing that you do. It feels a little bit like theft for somebody to train on your data without giving you credit. - And there are, like Nathan said, also two layers to it. Someone might buy the book and then train on it, which could be argued fair or not fair, but then there are the straight-up companies who use pirated books, where the author isn't even compensated. That is, I think, where people got a bit angry about it specifically, I would say. - Yeah, but there has to be some kind of compensation scheme. This is moving towards something like what Spotify streaming did originally for music. You know, what does that compensation look like? You have to define those kinds of models. You have to think through all of that. One other thing I think people are generally curious about, I'd love to get your thoughts, as LLMs are used more and more. If you look at even arXiv, or GitHub, more and more of the data is generated by LLMs. What do you do in that kind of world? How big of a problem is that? - The largest problem is the infrastructure and systems, but from an AI point of view, it's kind of inevitable.
- So it's basically LLM-generated data that's curated by humans essentially, right? - Yes, and I think that a lot of open source contributors are legitimately burning out. If you have a popular open source repo, somebody's like, "Oh, I want to do open source AI. It's good for my career," and they just vibe-code something and throw it in. You might get more of this than I do. - I have a... yeah, I actually have a case study here. I have a repository called MLxtend that I developed as a student around 10 years ago, and it is still a reasonably popular library for certain algorithms, I think especially the frequent pattern mining stuff. And there were recently two or three people who submitted a lot of PRs in a very short amount of time. I do think LLMs have been involved in submitting these PRs. For me, as the maintainer, there are two things. First, I'm a bit overwhelmed. I don't have time to read through it because, especially as an older library, that is not a priority for me. At the same time, I kind of also appreciate it, because I think something people forget is it's not just using the LLM. There's still a human layer that verifies something, and that is in a sense also how data is labeled, right? One of the most expensive things is getting labeled data for the RL from human feedback phase. And this is kind of like that, where it goes through phases, and then you actually get higher quality data out of it. And so I don't mind it in a sense. It can feel overwhelming, but I do think there is also value in it. - It feels like there's a fundamental difference between raw LLM-generated data and LLM-generated data with a human in the loop that does some kind of verification, even if that verification is a small percent of the lines of code. - I think this goes with anything where people also sometimes think, "Oh, yeah. I can just use an LLM to learn about XYZ," which is true. You can, but there might be a person who is an expert who might have used an LLM to write specific code. There is kind of like this human work that went into it to make it nice, throwing out the not-so-nice parts to kind of pre-digest it for you, and that saves you time. I think that's the value-add, where you have someone filtering things or even using the LLMs correctly. This is still labor that you get for free. For example, if you write a Substack article, I could maybe ask an LLM to give me opinions on that topic instead, but I wouldn't even know what to ask. And I think there is still value in reading that article compared to me going to the LLM, because you are the expert. You select what knowledge is actually spot on and should be included, and you give me this very... this executive summary. And this is a huge value-add because now I don't have to waste three to five hours to go through this myself, maybe get some incorrect information and so on. And so I think that's also where the future still is for writers, even though there are LLMs that can kind of save you time. - It's kinda fascinating to watch. I'm sure you guys do this, but for me, I look at the difference between a summary and the original content. Even if it's a page-long summary of page-long content, it's interesting to see how the LLM-based summary takes the edge off. What is the signal it removes from the thing? - The voice is what I talk about a lot. - Voice? Well, voice... I would love to hear what you mean by voice, but sometimes there are literally insights. By removing an insight, you're changing the meaning of the thing.
So I'm continuously disappointed by how bad LLMs are at really getting to the core insights, which is what a great summary does. Yet even if I have these extremely elaborate prompts where I'm really trying to dig for the insights, it's still not quite there, which... I mean, that's a whole deep philosophical question about what is human knowledge and wisdom and what does it mean to be insightful and so on. But when you talk about the voice, what do you mean? - When I write, I think a lot of what I'm trying to do is take what you think as a researcher, which is very raw. A researcher is trying to encapsulate an idea at the frontier of their understanding, and they're trying to put what is a feeling into words. I try to do this in my writing, which makes it come across as raw but also high-information, in a way where some people will get it and some won't. And that's the nature of research. And language models don't do this well. They're all trained with Reinforcement Learning from Human Feedback, which takes feedback from many people and averages how the model behaves from this. And I think it's going to be hard for a model to be very incisive when there's that sort of filter. This is a wonderful fundamental problem for researchers in RLHF. It provides so much utility in making the models better, but the problem formulation is kind of... there's this knot in it that you can't get past. These language models don't have this prior, this deep expression they're trying to get at. I don't think it's impossible. There are stories of models that really shock people. Like, I think of... I would love to have tried Bing Sydney. Did that have more voice? Because it would so often go off the rails, which is obviously scary—it famously told a reporter to leave his wife—and a crazy model to potentially put into general adoption. But that's kind of the trade-off; is this RLHF process in some ways adding limitations? - That's a terrifying place to be as one of these frontier labs and companies, because millions of people are using them. - There was a lot of backlash last year with GPT-4o getting removed, and I've personally never used the model, but I've talked to people at OpenAI where they get emails from users that might be detecting subtle differences in the deployments in the middle of the night. And they email them like, "My friend is different." And they find these employees' emails and send them things because they are so attached to this set of model weights and configuration that is deployed to the users. We see this with TikTok. You open it... I don't use TikTok, but supposedly in five minutes the algorithm gets you. It's locked in. And those aren't even language models doing recommendations. I think there are ways you can do this. Within five minutes of chatting with it, the model just gets you. And that is something that people aren't really ready for. Like, don't give that to kids. At least until we know what's happening. - But there's also going to be this mechanism... What's going to happen with these LLMs as they're used more and more... Unfortunately, the nature of the human condition is such that people commit suicide. And so what journalists will do is they will report extensively on the people who commit suicide. And they would very likely link it to the LLMs because they have that data about the conversations. If you're really struggling in your life, if you're depressed, if you're thinking about suicide, you're going to probably talk to LLMs about it.
And so what journalists will do is say, "Well, the suicide was committed because of the LLM." And that's going to lead to the companies, because of legal issues and so on, more and more taking the edge off of the LLM. So it's going to be as generic as possible. It's so difficult to operate in this space because you don't want an LLM to cause harm to humans at that level, but also this is the nature of the human experience, is to have a rich conversation, a fulfilling conversation, one that challenges you from which you grow. You need that edge. And that's something extremely difficult for AI researchers on the RLHF front to actually have to solve, because you're dealing with the human condition. - A lot of researchers at these companies are so well-motivated, and definitely Anthropic and OpenAI culturally want to do good for the world. And it's such a—I'm like, "Ooh, I don't wanna work on this," because on the one hand, a lot of people see AI as a health ally, as somebody they can talk to about their health confidentially, but then it bleeds all the way into this, like talking about mental health, where it's heartbreaking that this will be the thing where somebody goes over the edge, but other people might be saved. And I'm like, "I don't..." As a researcher, it's like, I don't want to train image generation models and release them openly because I don't want to enable somebody to have a tool on their laptop that can harm other people. I don't have the infrastructure in my company to do that safely. But there's a lot of areas like this where it just needs people that will approach it with complexity and conviction. It's just such a hard problem. - But also, we as a society, as users of these technologies, need to make sure that we're having the complicated conversation about it versus just fearmongering— that Big Tech is causing harm to humans or stealing your data. It's more complicated than that. And you're right. There's a very large number of people inside these companies, many of whom I know, who deeply care about helping people. They are considering the full human experience of people from across the world, not just Silicon Valley. People across the United States and the world, what their needs are. It's really difficult to design this one system that is able to help all these different kinds of people across different age groups, cultures, and mental states. - I wish that the timing of AI was different relative to the relationship of Big Tech to the average person. Big Tech's reputation was so low, and with how AI is so expensive, it's inevitably going to be a Big Tech thing. It takes so many resources, and people say the US is, "betting the economy on AI" with this build-out. To have these be intertwined at the same time makes for such a hard communication environment. It would be good for me to go talk to more people in the world who hate Big Tech and see AI as a continuation of this. - And one of the things you recommend, one of the antidotes that you talk about, is to find agency in this system. As opposed to sitting back in a powerless way and consuming the AI slop as it rapidly takes over the internet. Find agency by using AI to build stuff, build apps, build... One, that actually helps you build intuition, but two, it's empowering because you can understand how it works, what the weaknesses are. It gives your voice power to say, "This is a bad use of the technology, and this is a good use." 
And you're more plugged into the system then, so you can understand it better and you can steer it better as a consumer. - I think that's a good point you brought up about agency. Instead of ignoring it and saying, "Okay, I'm not going to use it," I think it's probably long-term healthier to say, "Okay, it's out there. I can't put it back. How do I make best use of it, and how does it help me to up-level myself?" The one thing I worry about here, though, is, if you just fully use it for something you love to do, the thing you love to do is no longer there. And that could potentially, I feel, lead to burnout. For example, if I use an LLM to do all my coding for me, now there's no coding. I'm just managing something that is coding for me. Two years later, let's say, if I just do that eight hours a day, having something code for me, do I feel fulfilled still? Is this hurting me in terms of being excited about my job, excited about what I'm doing? Am I still proud to build something? - On that topic of enjoyment, it's quite interesting. We should just throw this in there, that there's this recent survey of 791 professional developers, meaning 10-plus years of experience. - That's a long time. As a junior developer? - Yeah, in this day and age. So, there are also many findings that are surprising. They break it down by junior and senior developers. But, I mean, it just shows that both junior and senior developers use AI-generated code in code they ship. So this is not just for fun or intermediate learning things. This is code they ship. 25%—like, most of them use around 50% or more. And what's interesting is, for the category where over 50% of the code you ship is AI-generated, senior developers are much more likely to fall into it. But you don't want AI to take away the thing you love. I think these results I'm about to mention speak to my experience. Together, about 80% of people find it either somewhat more enjoyable or significantly more enjoyable to use AI as part of the work. - I think it depends on the task. From my personal usage, for example, I have a website where I sometimes tweak things. I personally don't enjoy this. So in that sense, if the AI can help me to implement something on my website, I'm all for it. It's great. But then, at the same time, when I solve a complex problem—well, if there's a bug, and I hunt this bug, and I find it, it's the best feeling in the world. You feel great. But now, if you don't even think about the bug, you just go directly to the LLM, well, you never have this kind of feeling, right? But then there could be the middle ground where, well, you try yourself, you can't find it, you use the LLM, and then you don't get frustrated because it helps you and you move on to something that you enjoy. And so, looking at these statistics, I think what is not factored in is that it's averaging over all the different scenarios. We don't know if it's for the core task or if it's for something mundane that people would not have enjoyed otherwise. So, in a sense, AI is really great for doing mundane things that take a lot of work. For example, my wife the other day—she has a podcast for book discussions, a book club, and she was transferring the show notes from Spotify to YouTube, and then the links somehow broke. And in some episodes, because there are so many books, she had like 100 links, and it would have been really painful to go in there and fix each link manually. So I suggested, "Hey, let's try ChatGPT."
We copied the text into ChatGPT, and it fixed them. Instead of two hours going from link to link fixing that, it made that type of work much more seamless. I think everyone has a use case where AI is useful for something that would be really boring, really mundane. - For me personally, since we're talking about coding, and you mentioned debugging... the source of enjoyment for me, more on the Cursor side than Claude Code, is that I have a friend, I have a pair programmer. It's less lonely. You made debugging sound like this great joy. No, I would say debugging is like a drink of water after you've been going through a desert for days. You skip the whole desert part where you're suffering. Sometimes it's nice to have a friend who can't really find the bug, but can give you some intuition about the code, and together you're going through the desert and finding that drink of water. At least for me, maybe it speaks to the loneliness of the programming experience. That is a source of joy. - It's maybe also related to delayed gratification. I'm a person who, even as a kid, I liked the idea of Christmas presents—having them, getting them—better than actually receiving the presents. I would look forward to the day I get the presents, but then it's over and I'm disappointed. And maybe it's the same with food. I think food tastes better when you're really hungry. You're right, with debugging, it is not always great. It's often frustrating, but then if you can solve it, then it's great. But there's a sweet Goldilocks zone; if it's too hard, then it's just wasting your time. But I think another challenge is how will people learn? We looked at the chart and saw that more senior developers are shipping more AI-generated code than the junior ones. It's very interesting, because intuitively you would think it's the junior developers because they don't know how to do the thing yet, and so they use AI to do that thing. It could mean the AI is not good enough yet to solve that task, but it could also mean experts are more effective at using it. They know how to use it better, review the code, and then they trust the code more. One issue for society in the future will be: how do you become an expert if you never try to do the thing yourself? One way I always learned is by trying things myself. If you look at math textbooks and the solutions, you learn something, but you learn actually better if you try first. Then you appreciate the solution differently because you know how to put it into your mental framework. If LLMs are here all the time, would you actually go to the length of struggling? Would you be willing to struggle? Struggle is not nice, right? But if you use the LLM to do everything, at some point you will never really take the next step, and then you will maybe not get that unlock that you would get as an expert using an LLM. So, I think there's like a Goldilocks sweet spot where maybe the trick here is you make dedicated offline time where you study two hours a day, and the rest of the day use LLMs. But I think it's important also for people to still invest in themselves, in my opinion, to not just LLM everything. - Yeah, as a civilization, we each individually have to find that Goldilocks zone. And in the programming context as developers. Now, we've had this fascinating conversation that started with pre-training and mid-training. Let's get to post-training. A lot of fun stuff in post-training. So, what are some of the interesting ideas in post-training? 
- The biggest one from 2025 is this reinforcement learning with verifiable rewards. You can scale up the training there, which means doing a lot of this kind of iterative generate-grade loop, and that lets the models learn interesting behaviors on the tool use and software side. This could be searching, or running commands on their own and seeing the outputs, and then that training also enables this inference-time scaling very nicely. And it just turned out that this paradigm was very nicely linked, where this kind of RL training enables inference-time scaling. But inference-time scaling could have been found in different ways. So, it was kind of this perfect storm where the models changed a lot, and the way that they're trained is a major factor in doing so. And this has changed how people approach post-training dramatically. - Can you describe RLVR, popularized by DeepSeek R1? Can you describe how it works? - Yeah. Fun fact: I was on the team that came up with the term RLVR, which is from our Tulu 3 work before DeepSeek. We don't take a lot of credit for being the people who popularized scaling RL, but the fun that academics get, as an aside, is the ability to name and influence the discourse, because the closed labs can only say so much. One of the things you can do as an academic is, you might not have the compute to train the model, but you can frame things in a way that ends up with the community coming together around this RLVR term, which is very fun. And then DeepSeek are the people that did the training breakthrough, which is, they scaled the reinforcement learning. You have the model generate answers and then grade the completion on whether it was right, and then that accuracy is your reward for reinforcement learning. So reinforcement learning is classically an agent that acts in an environment, and the environment gives it a state and a reward back, and you try to maximize this reward. In the case of language models, the reward is normally accuracy on a set of verifiable tasks, whether it's math problems or coding tasks. And it starts to get blurry with things like factual domains, which are also, in some ways, verifiable, or constraints on your instruction, like "respond only with words that start with A." All of these things are verifiable in some way, and the core idea is you find a lot more of these problems that are verifiable and you let the model try them many times while taking these RL steps, these RL gradient updates. The infrastructure evolved from reinforcement learning from human feedback, where in that era the score they were trying to optimize was a learned reward model of human preferences. So you kind of changed the problem domains, and that let the optimization go on to much bigger scales, which kickstarted a major change in what the models can do and how people use them. - What kind of domains is RLVR amenable to? - Math and code are the famous ones, and then there's a lot of work on what are called rubrics, which is related to a term people might have heard: LLM-as-a-judge. I'll have a set of problems in my dataset, and for each problem, I will have an LLM and ask it, "What would a good answer to this problem look like?" And then you could try the problem over and over again and assign a score based on this rubric.
That's not necessarily verifiable like the math and code domains, but this rubrics idea and other scientific problems that might be a little bit more vague are where the attention is, where they're trying to push this set of methods into these kind of more open-ended domains so the models can learn a lot more. - I think that's called reinforcement learning with AI feedback, right? - That's the older term for it, coined in Anthropic's Constitutional AI paper. A lot of these things come in cycles. - Also, just one step back for RLVR. I think the interesting thing here is that you ask the LLM a, let's say, math question, and then you know the correct answer, and you let the LLM, as you said, figure it out. How it does it—you don't constrain it much. There are some constraints like "use the same language, don't switch between Spanish and English." But let's say you're pretty much hands-off. You only give the question and the answer, and then the LLM has the task to arrive at the right answer, but the beautiful thing here is what happens in practice: the LLM will do a step-by-step description, like a student or a mathematician would derive the solution. It will use those steps, and that helps the model to improve its own accuracy. And then, like you said, the inference scaling. Inference scaling loosely means spending more compute during inference, and here the inference scaling is that the model will use more tokens. In the DeepSeek R1 paper, they showed that the longer they train the model, the longer the responses are. They grow over time. They use more tokens, so it becomes more expensive. It becomes expensive for simple tasks, but these explanations help accuracy. There are also papers showing that what the model explains does not necessarily have to be correct, or may be unrelated to the answer, but for some reason the explaining still helps the model. And I think it's also—again, I don't want to anthropomorphize these LLMs, but it's kind of like how we humans operate. If there's a complex math problem in a math class, you usually have a note paper and you do it step by step. You cross out things. And the model also self-corrects, and that was, I think, the aha moment in the DeepSeek R1 paper. They called it the aha moment because the model itself recognized it made a mistake and then said, "Ah, I did something wrong, let me try again." And I think that's just so cool that this falls out of just giving it the correct answer and having it figure out how to do it, that it kind of does in a sense what a human would do. Although LLMs don't think like humans, it's kind of like an interesting coincidence. And the other nice side effect is it's often great for us humans to see these steps. It builds trust, but also we learn and can double-check things. - There's a lot in here. There's been a lot of debate this year on whether, for language models like these, the aha moments are kind of fake, because in pre-training you have essentially seen the whole internet, so you have definitely seen people explaining their work, even verbally, like a transcript of a math lecture: "You try this, oh, I messed this up." And what RLVR is very good at doing is amplifying these behaviors because they're very useful in enabling the model to think longer and to check its work. And I agree that it is very beautiful that this training kind of...
The model learns to amplify this in a way that is so useful for the final answers being better. - I can also give you a hands-on example. I was training the Qwen 3 base model with RLVR on MATH-500. The base model had an accuracy of about 15%. In just 50 steps, like a few minutes with RLVR, the model went from 15% to 50% accuracy. And the model... You can't tell me it's learning anything fundamentally about math in... - The Qwen example is weird because there have been two papers this year, one of which I was on, about data contamination in Qwen, specifically that in this special mid-training phase, which we should take a minute on, they train on a lot of problems that are almost identical to MATH. - Exactly. And so you can see that basically the RL is not teaching the model any new knowledge about math. You can't do that in 50 steps. So the knowledge is already there, in the pre-training; you're just unlocking it. - I still disagree with the premise, because there are a lot of weird complexities that you can't prove. One of the things that points to weirdness is if you take the Qwen 3 so-called base model... You could Google "math dataset, Hugging Face," take a problem, and put it into Qwen 3 base. All these math problems have words, so it'd be like "Alice has five apples and takes one... and gives three to whoever," and there are these word problems. With these Qwen-based models, why people are suspicious of them is that if you change the numbers but keep the words, Qwen will produce, without tools, a very high-precision decimal representation of the answer, which means that at some point it was shown problems that were almost identical to the test set, where tools were used to get a very high-precision answer, but a language model without tools would never actually produce this. So it's kind of been this big debate in the research community: how much of these reinforcement learning papers that are training on Qwen and measuring specifically on this math benchmark, where there have been multiple papers talking about contamination, can you believe? And I think this is what caused the reputation of RLVR being about formatting, because you can get these gains so quickly, therefore it must already be in the model. But there's a lot of complexity here that we... It's not really controlled experimentation, so we don't really know. - But if it weren't true, I would say distillation wouldn't work, right? I mean, distillation can work to some extent, but the thing is, that is, I think, the biggest problem: it's hard to research this contamination because we don't know what's in the data. Unless you have a new dataset, it is really impossible. And it's the same with, you mentioned, the MATH dataset, where you have a question and then an answer and an explanation is given, but then also even something simpler like MMLU, which is a multiple-choice benchmark. If you just change the format slightly, like, I don't know, if you use a dot instead of a parenthesis or something like that, the model accuracy will vastly differ. - I think that could be a model issue rather than a general issue. - It's not even malicious by the developers of the LLM, like, "Hey, we want to cheat at that benchmark." It has seen something at some point. I think the only fair way to evaluate an LLM is to have a new benchmark from after the cutoff date when the LLM was deployed.
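A minimal sketch of the RLVR loop discussed above, assuming the simplest possible verifier, exact matching on a final answer, and the group-relative advantage used by GRPO (mentioned earlier as one of the two popular policy-gradient classes). The sample_completion function is a stub for real model generation, and the numbers are illustrative only; this is not DeepSeek's or Ai2's actual code.

```python
# Toy RLVR step: sample a group of completions per problem, grade each with a
# verifiable reward (exact match on the final answer), and compute
# group-relative advantages as in GRPO. sample_completion() is a stub for
# real model generation; the numbers are illustrative only.
import random
import statistics

def sample_completion(prompt):
    # Stub: a real system would decode a chain of thought plus a final answer.
    return {"reasoning": "...", "final_answer": random.choice(["42", "41", "42"])}

def verifiable_reward(completion, reference_answer):
    # The "verifier": 1.0 if the final answer matches the reference, else 0.0.
    return 1.0 if completion["final_answer"].strip() == reference_answer else 0.0

def grpo_advantages(rewards):
    # GRPO-style: normalize each reward against the group's mean and std.
    # If every completion in the group gets the same reward, there is no
    # signal (std is zero), which is why labs keep hunting for harder problems.
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

problem = {"prompt": "What is 6 * 7?", "answer": "42"}
group = [sample_completion(problem["prompt"]) for _ in range(8)]
rewards = [verifiable_reward(c, problem["answer"]) for c in group]
advantages = grpo_advantages(rewards)
print(rewards)
print([round(a, 2) for a in advantages])
# A real trainer would weight each completion's log-probabilities by its
# advantage (with clipping and a KL penalty) and take a gradient step.
```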
- Can we lay out what would be the recipe of all the things that go into post-training? And you mentioned RLVR was a really exciting, effective thing. Maybe we should elaborate. RLHF still has a really important role to play. What kind of other ideas are there on post-training? - I think you can kind of take this in order. I think you could view it as what made o1, which is this first reasoning model, possible, or what will the latest model be? You're going to have similar interventions for these, where you start with mid-training, and the thing that is rumored to enable o1 and similar models is really careful data curation, where you're providing a broad set of what are called reasoning traces, which is just the model generating words in a forward process that is reflecting, like breaking down a problem into intermediate steps and trying to solve them. So at mid-training, you need to have data that is similar to this to make it so that when you move into post-training, primarily with these verifiable rewards, it can learn. And then what is happening today is you're figuring out which problems to give the model, how long you can train it for, and how much inference you can enable the model to use when solving these verifiable problems. So as models get better, certain problems are no longer useful... The model will solve them 100% of the time, and therefore there's very little signal in this. If we look at the GRPO equation, it is famous for this, because essentially the reward given to the agent is based on how good a given action—an action is a completion—is relative to the other answers to that same problem. So if all the completions to a problem get the same reward, there's no signal in these types of algorithms. So what they're doing is they're finding harder problems, which is why you hear about things like scientific domains, where it's so hard to get anything right, or, if you have a lab or something, it just generates so many tokens, or much harder software problems. So the frontier models are all pushing into these harder domains where they can train on more problems and the model will learn more skills at once. The RLHF link to this is that RLHF has been and still is kind of like the finishing touch on the models, where it makes the models more useful by improving the organization or style or tone. There are different things that resonate with different audiences, like some people like a really quirky model and RLHF could be good at enabling that personality, and some people hate this markdown bulleted list thing that the models do, but it's actually really good for quickly parsing information. In RLHF, this human feedback stage is really great for putting this into the model at the end of the day. It's what made ChatGPT so magical for people. And that use has actually remained fairly stable. This formatting can also help the models get better at math problems, for example. So the border between style and formatting, and the method that you use to answer a problem, are actually all very closely linked when you're training these models, which is why RLHF can still make a model better at math, but these verifiable domains are a much more direct process for doing this because it makes more sense with the problem formulation, which is why it all ends up coming together. But to summarize: mid-training is giving the model the skills it needs to then learn.
RL with verifiable rewards is letting the model try a lot of times, so put a lot of compute into trial-and-error learning across hard problems. And then RLHF would be like finishing the model, making it easy to use and kind of just rounding the model out. - Can you comment on the amount of compute required for RLVR? - It's only gone up and up. I think Ilya Sutskever was famous for saying they use a similar amount of compute for pre-training and post-training. Back to the scaling discussion, they involve very different hardware for scaling. Pre-training is very compute-bound, which is like this FLOPs discussion, which is just how many matrix multiplications can you get through at once. And because with RL you're generating these answers, you're trying the model in real-world environments, it ends up being much more memory-bound because you're generating long sequences and the attention mechanisms have this behavior where you get a quadratic increase in memory as you're getting to longer sequences. So the compute becomes very different. In pre-training we would talk about a model—if we go back to like the Biden administration executive order, it's like 10 to the 25th FLOPs to train a model. If you're using FLOPs in post-training, it's a lot weirder because the reality is just like: how many hours are you allocating? How many GPUs for? And I think in terms of time, the RL compute is getting much closer because you just can't put it all into one system. Pre-training is so computationally dense where all the GPUs are talking to each other and it's extremely efficient, whereas RL has all these moving parts and can take a long time to generate a sequence of 100,000 tokens. If you think about GPT-5.2 Pro taking an hour, it's like, what if your training run has a sample for an hour and you have to make sure that's handled efficiently? So I think in GPU hours or just wall-clock hours, the RL runs are probably approaching the same number of days as pre-training, but they probably aren't using as many GPUs at the same time. There are rules of thumb where in labs you don't want your pre-training runs to last more than a month because they fail catastrophically. And if you are planning a huge cluster to be held for two months and then it fails on day 50, the opportunity costs are just so big. So people don't want to put all their eggs in one basket. GPT-4 was the ultimate YOLO run, and nobody ever wanted to do it before, where it took three months to train and everybody was shocked that it worked. I think people are a little bit more cautious and incremental now. - So RLVR is more, let's say, unlimited in how much you can train or still get benefit, where RLHF, because it's a preference tuning, you reach a certain point where it doesn't really make sense to spend more RL budget on that. So just a step back with preference tuning: there are multiple people that can give multiple, let's say, explanations for the same thing and they can both be correct, but at some point you learn a certain style and it doesn't make sense to iterate on it. My favorite example is: if relatives ask me what laptop they should buy, I give them an explanation or ask, "What is your use case?" They, for example, prioritize battery life and storage. Other people like us, for example, we would prioritize RAM and compute. Both answers are correct, but different people require different answers. With preference tuning, you're trying to average somehow. 
You are asking the data labelers to give you not the right, but the preferred answer, and then you train on that. But at some point you learn that average preferred answer. And there's no reason to keep training longer on it because it's just a style, whereas with RLVR, you let the model solve more and more complex, difficult problems. So I think it makes more sense to allocate more budget long-term to RLVR. Also, right now we are in an RLVR 1.0 phase where it's still that simple thing where we have a question and an answer, but we don't do anything with the stuff in between. There were multiple research papers, also by Google for example, on process reward models that also give scores for the explanation—how correct is the explanation. And I think that will be the next thing, let's say RLVR 2.0 for this year: focusing on what's in between question and answer, like how to leverage that information, the explanation, to get better accuracy. So that's one angle. And there was a DeepSeek Math-V2 paper where they also had interesting inference scaling, where they developed models that grade themselves with a separate model. And I think that will be one aspect. And the other, like Nathan mentioned, will be RLVR branching into other domains. - The place where people are excited is value functions, which is pretty similar. Process reward models assign how good something is at each intermediate step in a reasoning process, whereas value functions apply value to every token the language model generates. Both of these have been largely unproven in the language modeling and reasoning model era. People are more optimistic about value functions for whatever reason now. I think process reward models were tried a lot more in this pre-o1, pre-reasoning model era, and a lot of people had a lot of headaches with them. So I think a lot of it is human nature... Value models have a very deep history in reinforcement learning. Training value models is one of the things core to deep reinforcement learning existing. So right now people are excited about trying value models, but there's very little proof. And there are negative examples in trying to scale up process reward models. These things don't always hold in the future. We came to this discussion by talking about scaling. The simple way to summarize what you're saying is you don't want to do too much RLHF, where the signal doesn't scale. People have worked on RLHF for language models for years, especially with intense interest after ChatGPT. And the first release of a reasoning model trained with RLVR, OpenAI's o1, had a scaling plot where if you increase training compute logarithmically, you get a linear increase in evaluation scores. This has been reproduced multiple times. DeepSeek had a plot like this. But there's no scaling law for RLHF where if you log-increase the compute, you get performance. In fact, the seminal scaling paper for RLHF is "Scaling Laws for Reward Model Overoptimization." So that's a big line to draw with RLVR and the methods we have now. In the future, they will follow this scaling paradigm, where you can let the best runs run for an extra 10x and you get performance, but you can't do this with RLHF. And that is just going to be field-defining in how people approach them. While I'm a shill for people to academically do RLHF, to do the best RLHF you might not need the extra 10 or 100x of compute, but to do the best RLVR you do.
I think there's a seminal paper from a Meta internship. It's called something like "The Art of Scaling Reinforcement Learning with Language Models." What they describe as a framework is ScaleRL. Their incremental experiment was like 10,000 V100 hours, which is like thousands or tens of thousands of dollars per experiment. They do a lot of them, and this cost is not accessible to the average academic, which is a hard equilibrium when you're trying to figure out how to learn from each community. - I was wondering if we could take a bit of a tangent and talk about education and learning. If you're someone listening to this who's a smart person interested in programming and AI, I presume building something from scratch is a good beginning. So can you take me through what you would recommend people do? - I would personally start, like you said, implementing a simple model from scratch that you can run on your computer. The goal is not, when you build a model from scratch, to have something for everyday use. It's not going to be your personal assistant replacing an existing open-weight model or ChatGPT. It's to see what exactly goes into the LLM, what comes out, and how the pre-training works on your own computer, preferably. Then you learn about pre-training, supervised fine-tuning, and the attention mechanism. You get a solid understanding of how things work, but at some point you reach a limit, because small models can only do so much. The problem with learning about LLMs at scale is that it's exponentially more complex to make a larger model, because the model isn't just larger—you have to shard your parameters across multiple GPUs. Even for the KV cache, there are multiple ways to implement it. One is just to understand how it works: you grow the cache step by step by, let's say, concatenating lists, but that wouldn't be optimal on GPUs. You would pre-allocate a tensor and then fill it in. But that adds another 20 or 30 lines of code. And for each thing, you add so much code. The goal with the book is basically to understand how the LLM works. It's not going to be a production-level LLM, but once you have that, you can understand the production-level LLM. - So you're trying to always build an LLM that's going to fit on one GPU? - Yes. Most of them do. I have some bonus materials on some MoE models. One or two of them may require multiple GPUs, but the goal is to have it on one GPU. And the beautiful thing is, you can self-verify. It's almost like RLVR. When you code these from scratch, you can take an existing model from the Hugging Face Transformers library. The library is great, but if you want to learn about LLMs, it's not the best place to start because the code is so complex to fit so many use cases. Because people use it in production, it has to be really sophisticated, really intertwined, and hard to read. It's not linear. - It started as a fine-tuning library, and then it grew to be the standard representation of every model architecture. Hugging Face is the default place to get a model, and Transformers is the software. It enables it so people can easily load a model and do something basic with it. - And all frontier labs that have open-weight models have a Transformers version of it, like from DeepSeek to gpt-oss-120b. That's the canonical weight format you can load. But even Transformers, the library, is not used in production. People use SGLang or vLLM, and it adds another layer of complexity.
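As a concrete illustration of the KV-cache detail mentioned above, here is a minimal PyTorch sketch, assuming a single attention head with fixed dimensions: one version grows the cache by concatenating at every decoding step, the other pre-allocates a tensor up to a maximum length and fills it in. It's the pedagogical version of the trade-off, not production code.

```python
import torch

class ConcatKVCache:
    """Easy to understand: grow the key/value cache by concatenation at every step."""
    def __init__(self):
        self.k = None  # shape (seq_len, head_dim) once populated
        self.v = None

    def append(self, k_new, v_new):
        self.k = k_new if self.k is None else torch.cat([self.k, k_new], dim=0)
        self.v = v_new if self.v is None else torch.cat([self.v, v_new], dim=0)
        return self.k, self.v

class PreallocatedKVCache:
    """Closer to what runs well on a GPU: allocate the full buffer once, then fill it step by step."""
    def __init__(self, max_len: int, head_dim: int):
        self.k = torch.zeros(max_len, head_dim)
        self.v = torch.zeros(max_len, head_dim)
        self.pos = 0

    def append(self, k_new, v_new):
        n = k_new.shape[0]
        self.k[self.pos:self.pos + n] = k_new
        self.v[self.pos:self.pos + n] = v_new
        self.pos += n
        return self.k[:self.pos], self.v[:self.pos]
```

The second version avoids re-allocating and copying memory on every generated token, which is exactly the kind of extra 20 or 30 lines per component that the from-scratch approach deliberately keeps to a minimum.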
- We should say that the Transformers library has like 400 models. - So it's the one library that tries to implement a lot of LLMs, and so you have a huge codebase, basically. It's huge. It's like, I don't know, maybe millions— - That's crazy. - hundreds of thousands of lines of code. Understanding the part you want to understand is finding the needle in the haystack. But what's beautiful is you have a working implementation, so you can work backwards. What I would recommend doing, or what I also do, is if I want to understand, for example, how OLMo is implemented, I would look at the weights in the model hub, the config file, and then you can see, "Oh, they used so many layers. They use, let's say, Grouped-Query Attention or Multi-Head Attention in that case." Then you see all the components in a human-readable, 100-line config file. And then you start, let's say, with your GPT-2 model and add these things. The cool thing here is you can then load the pretrained weights and see if they work in your model. You want to match the same output that you get with the Transformers implementation, and then you can use that, basically as a verifiable reward to make your architecture correct. Sometimes it takes me a day. With OLMo 3, the challenge was RoPE for the position embeddings. They had a YaRN extension and there was some custom scaling there, and I couldn't quite match these things. In this struggle, you kind of understand things. At the end, you know you have it correct because you can unit test it. You can check against the reference implementation. I think that's one of the best ways to learn, really. To basically reverse-engineer something. - I think that is something everyone interested in getting into AI today should do. That's why I liked your book. I came to language models from the RL and robotics field. I had never taken the time to just learn all the fundamentals. This transformer architecture is so fundamental, just as deep learning was in the past, and people need to do this. I think where a lot of people get overwhelmed is, "How do I apply this to have impact or find a career path?" Because language models make this fundamental stuff so accessible, and people with motivation will learn it. Then it's like, "How do I get cycles on goal to contribute to research?" I'm actually fairly optimistic because the field moves so fast that a lot of times the best people don't fully solve a problem because there's a bigger problem to solve that's very low-hanging fruit, so they move on. I think that a lot of what I was trying to do in this RLHF book is take post-training techniques and describe how people think about them influencing the model and what people are doing. Then it's remarkable how many things I just think people stop studying or don't pursue. I think people trying to go narrow after doing the fundamentals is good, and then reading the relevant papers and being engaged in the ecosystem. It's like you actually... The proximity that random people online have to the leading researchers is remarkable—the anonymous accounts in ML on X are very popular, and no one knows who all these people are. It could just be random people that study this stuff deeply, especially with the AI tools. To just be like, "I don't understand this, keep digging into it," is a very useful thing. But there's a lot of research areas that maybe have three papers that you need to read, and then one of the authors will probably email you back.
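A hedged sketch of the reverse-engineering loop described above, using the Hugging Face Hub and Transformers APIs; the model ID, the config fields printed with a `getattr` fallback, and the tolerance are illustrative choices, and the from-scratch model is left as a commented-out placeholder.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B"  # illustrative; any open-weight model with a standard config works

# Step 1: read the human-readable config to see the architectural choices.
config = AutoConfig.from_pretrained(model_id)
print(config.num_hidden_layers, config.hidden_size)
print(getattr(config, "num_key_value_heads", None))  # set when the model uses grouped-query attention

# Step 2: compare your from-scratch implementation against the reference outputs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
reference = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    ref_logits = reference(**inputs).logits

# my_model = MyFromScratchModel.load_weights(model_id)        # your own implementation (hypothetical)
# my_logits = my_model(inputs["input_ids"])
# assert torch.allclose(my_logits, ref_logits, atol=1e-4)     # the "verifiable reward" for your architecture
```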
But you have to put in a lot of effort into these emails to understand the field. I think for a newcomer it would easily be weeks of work to feel like they can truly grasp what is a very narrow area, but I think going narrow after you have the fundamentals will be very useful to people. For example, I've become very interested in character training, which is how you make the model funny or sarcastic or serious, and what you do to the data to do this. A student at Oxford reached out to me and was like, "Hey, I'm interested in this," and I advised him. And that paper now exists. There's like two or three people in the world that were very interested in this. He's a PhD student, which gives him an advantage, but for me, that was a topic I was waiting for someone to be like, "Hey, I have time to spend cycles on this." I'm sure there's a lot more very narrow things where you're just like, "It doesn't make sense that there was no answer to this." I think it's just there's so much information coming that people are like, "I can't grab onto any of these," but if you just stick in an area, I think there's a lot of interesting things to learn. - Yeah, I think you can't try to do it all because it would be very overwhelming and you would burn out. For me, for example, I haven't kept up with computer vision in a long time; I just focused on LLMs. But coming back to your book, I think this is a really great book and a really good bang for the buck because if you want to learn about RLHF, I wouldn't go out there and read RLHF papers because you would be spending two years— - Some of them contradict. I've just edited the book, and there's no chapter where I didn't have to be like, "X papers say one thing and Y papers say another, and we'll see what comes out to be true." - Just to go through the table of contents, what are some ideas we might have missed in the bigger picture of post-training? First of all, you did the problem setup, training overview, what are preferences, preference data, and the optimization tools: reward modeling, regularization, instruction tuning, rejection sampling, and reinforcement learning. Then Constitutional AI and AI feedback, reasoning and inference-time scaling, tool use and function calling, synthetic data and distillation, evaluation, and then an open questions section: over-optimization, style and information, and then product, UX, character, and post-training. What are some ideas worth mentioning that connect both the educational and the research components? You mentioned character training, which is pretty interesting. - Character training is interesting because there's so little on it. We talked about how people engage with these models. We feel good using them because they're positive, but that can go too far; it can be too positive. And it's like, essentially, it's: How do you change your data and decision-making to make it exactly what you want? And like, OpenAI has this thing called a model spec, which is essentially their internal guideline for what they want the model to do, and they publish this to developers. So, essentially, you can know what is a failure of OpenAI's training—where they have the intentions and they haven't met them yet— versus what is something that they actually wanted to do and that you don't like. And that transparency is very nice, but all the methods for curating these documents and how easy it is to follow them is not very well known.
I think the way the book is designed is that the RL chapter is obviously what people want because everybody hears about it with RLVR, and it's the same algorithms and the same math, but you can use it in very different domains. So I think the core of RLHF is like how messy preferences are. It's essentially a rehash of a paper I wrote years ago, but this is essentially the chapter that'll tell you why RLHF is never ever fully solvable because, the way that even RL is set up, it assumes that preferences can be quantified and that multiple preferences can be reduced to single values. And I think it relates in the economics literature to the Von Neumann-Morgenstern utility theorem, and that is the chapter where all of that philosophical, economic, and psychological context tells you what gets compressed into doing RLHF. So it's like you have all of this and then later in the book it's like: You use this RL math to make the number go up. And I think that's why it'll be very rewarding for people to do research on, because quantifying preferences is a problem humans have designed in order to make preferences studyable. But there's kind of fundamental debates, like, an example is in a language model response you have different things you care about, like accuracy or style. And when you're collecting the data, they all get compressed into: "I like this more than another." And that is happening, and there's a lot of research in other areas that goes into how you should actually do this. I think social choice theory is the subfield of economics around how you should aggregate preferences. And I went to a workshop that published a white paper on: "How can you think about using social choice theory for RLHF?" So I mostly would want people that get excited about the math to come and find things where they could stumble into this broader context. I think there's a fun thing: I just keep a list of all the tech reports of reasoning models I like. So in Chapter 14, where there's a short summary of RLVR, there's just a gigantic table where I list every single reasoning model that I like. I think in education, a lot of it needs to be like, at this point, what I like, because the language models are so good at the math. For example, the famous paper, Direct Preference Optimization, is a much simpler way of solving the problem than RL. The derivations in the appendix skip steps of math. And for this book, I redid the derivations and I'm like, "What the heck is this log trick that they use to change the math?" But doing it with language models, they're like, "This is the log trick." And I'm like, "I don't know if I like this, that the math is so commoditized." I think some of the struggle in reading this appendix and following the math is good for learning. - Yeah, we're returning to this often on the topic of education. You both have brought up the word "struggle" quite a bit. So there is value. If you're not struggling as part of this process, you're not fully following the proper process for learning, I suppose. - Some providers are working on models for education designed to not give—actually, I haven't used them, but I'd guess they're designed to not give all the information at once, and make people work for it. Training models to do this would be a wonderful contribution. Where, like all of the stuff in the book, you had to reevaluate every decision for it. It's a great example.
There's a chance we work on it at Ai2, which I thought would be so fun. - It makes sense. I did something like that the other day for video games. Sometimes as a pastime I play video games—video games with puzzles, like Zelda and Metroid. And there's this new game where I really got stuck, and I was okay with it, but I didn't want to struggle for two days, so I used an LLM. But then you say, "Hey, please don't add spoilers. Just, you know, I'm here and there. What do I have to do next?" You can do the same thing for math where you say, "Okay, I'm stuck at this point. Don't give me the full solution, but what is something I could try?" Where you carefully probe it. But the problem here is I think it requires discipline. Many people enjoy math, but there are also a lot of people who need to do it for their homework, and then it's like a shortcut. We could develop an educational LLM, but other LLMs are still there, and there's still a temptation to use the other LLMs. - I think many people in college understand this for the stuff they're passionate about: they're self-aware and they understand it shouldn't be easy. I think we just have to develop a good taste—we talk about research taste—a kind of school taste about stuff that you should be struggling on and stuff you shouldn't be. It's tricky, because sometimes you don't have good long-term vision about what would actually be useful to you in your career. But you have to develop that taste, yeah. - I was talking to my fiancée or friends about this: there's this brief 10-year window where all of the homework and all the exams could be digital. Before that, everybody had to do all the exams in blue books because there was no other way. And now after AI, everyone's going to need to be in blue books and oral exams because everyone could cheat so easily. It's like this brief generation that had a different education system where everything could be digital, but you still couldn't cheat. And now it's just going back. It's just very funny. - You mentioned character training. Just zooming out on a more general topic, for that project how much compute was required? And in general, to contribute as a researcher, are there places where not too much compute is required where you can actually contribute as an individual researcher? - For the character training thing, I think this research is built on fine-tuning about 7 billion parameter models with LoRA, which is essentially only fine-tuning a small subset of the weights of the model. I don't know exactly how many GPU hours that would take. - But it's doable. - Not doable for every academic. The situation for some academics is so dire that the only work you can do is doing inference where you have closed models or open models and you get completions from them and you can look at them and understand the models. And that's very well-suited to evaluation, where you want to be the best at creating representative problems that the models fail on or show certain abilities, which I think you can break through with. I think that the top-end goal for a researcher working on evaluation, if you want to have career momentum, is that frontier labs pick up your evaluation. You don't need to have every project do this. But if you go from a small university with no compute and find something that Claude struggles with, and then the next Claude model has it in the blog post, there's your career rocket ship.
I think that's hard, but if you want to scope the maximum possible impact with minimum compute, it's something like that, which is to get very narrow, and it takes learning where the models are going. So you need to build a tool that tests where Claude 4.5 will fail. If I'm going to start a research project, I need to think where the models in eight months are going to be struggling. - But what about developing totally novel ideas? - This is a trade-off. I think that if you're doing a PhD, you could also be like, "It's too risky to work in language models. I'm going way longer term," which is: what is the thing that's going to define language model development in 10 years? I end up being a person that's pretty practical. I mean, I went into my PhD where it was like, "I got into Berkeley. Worst case, I get a master's, and then I go work in tech." I'm very practical about it, so I'm like: the life afforded to people who work at these AI companies, the amount of... OpenAI's average compensation is over a million dollars in stock a year per employee. For any normal person in the US, to get into this AI lab is transformative for your life. So I'm pretty practical about it. There's still a lot of upward mobility working in language models if you're focused. And look at these jobs. But from a research perspective, the transformative impact, the academic awards... to be the next Yann LeCun comes from not caring about language model development very much. - It's a big financial sacrifice in that case. - So I work with some awesome students, and they're like, "Should I go work at an AI lab?" And I'm like, "You're getting a PhD at a top school. Are you gonna leave to go to a lab?" I don't know. If you go work at a top lab, I don't blame you. Don't go work at some random startup that might go to zero. But if you're going to OpenAI, I'm like, "It could be worth leaving a PhD for." - Let's more rigorously think through this. So where would you recommend people go to make a research contribution? So the options are academia: get a PhD. Spend five years publishing. Compute resources are constrained. There are research labs that are more focused on open-weight models, and working there. Or closed frontier research labs. So OpenAI, Anthropic, xAI, and so on. - The two gradients are: the more closed, the more money you tend to get, but you also get less credit. In terms of building a portfolio of things that you've done, it's very clear what you have done as an academic. Versus if you are going to trade this fairly reasonable progression for being a cog in the machine, which could also be very fun. So I think it's very different career paths. But the opportunity cost for being a researcher is very high because PhD students are paid essentially nothing. So it ends up rewarding people that have a fairly stable safety net, and they realize that they can operate in the long term, wanting to do very interesting work and get a very interesting job. So it is a privileged position to be like, "I'm gonna see out my PhD and figure it out after because I want to do this." At the same time, the academic ecosystem is getting bombarded by funding getting cut and stuff. So there's just so many different trade-offs where I understand plenty of people that are like, "I don't enjoy it. I can't deal with this funding search. My grant got cut for no reason by the government," or, "I don't know what's gonna happen."
So I think there's a lot of uncertainty and trade-offs that, in my opinion, favor just taking the well-paying job with meaningful impact. It's not like you're getting paid to sit around at OpenAI. You're building the cutting edge of things that are changing millions of people's relationship to tech. - But publication-wise, they're being more secretive, increasingly so. So you're publishing less and less. And so you are having a positive impact at scale, but you're a cog in the machine. - I think it honestly hasn't changed that much. I have been in academia. I'm not in academia anymore, but I wouldn't want to miss my time in academia. But what I wanted to say before I get to that is that I think it hasn't changed that much. I was working in computational biology, using AI or machine learning methods with collaborators, and a lot of people went from academia directly to Google. And I think it's the same. Back then, professors were sad that their students went into industry because they couldn't carry on their legacy. I think it's the same. It hasn't changed that much. The only thing that has changed is the scale. Cool stuff was always developed in industry that was closed. You couldn't talk about it. And I think the difference now is your preference. Do you like to publish your work, or are you more in a closed lab? That's one difference. The compensation, of course, is another, but it's always been like that. It depends on where you feel comfortable. And nothing is forever. Right now, there's a third option, which is launching a startup. A lot of people are doing that. It's a very risky move, but it can be a high-risk, high-reward situation, whereas joining an industry lab is pretty safe. You also have upward mobility. I think once you've been at an industry lab, it's easier to find future jobs. But then again, how much do you enjoy the team and working on proprietary things versus how much you like publishing work? I mean, publishing is stressful. Acceptance rates at conferences can be arbitrary and very frustrating, but it's high reward if you have a paper published. You feel good because your name is on there. It's a big accomplishment. - I feel like my friends who are professors seem happier than those who work at a frontier lab, to be honest. There's a grounding there. The frontier labs definitely do this 9-9-6, which is shorthand for working all the time. - Can you describe 9-9-6? It's a culture invented, I believe, in China and adopted in Silicon Valley. What is 9-9-6? It's 9:00 AM to 9:00 PM, - Six days a week. - six days a week. What is that, 72 hours? Okay. So, is this basically the standard in AI companies in Silicon Valley? This kind of grind mindset. - Yeah, I mean, maybe not exactly like that, but I think there is a trend towards it. And it's interesting. I think it almost flipped because when I was in academia, I felt like that. As a professor, you write grants, you teach, and you do research. It's like three jobs in one, and it's more than a full-time job if you want to be successful. And I feel like now, like Nathan just said, the professors, in comparison to a lab, I think they have less pressure or workload than at a frontier lab because— - I think they work a lot. They're just so fulfilled by working with students and having a constant runway of mentorship and a mission that is very people-oriented. I think in an era when things are moving very fast and are very chaotic, it's very rewarding to people. - Yeah, and I think at a startup, it's this pressure.
It's like you have to make it. And it's really important that people put in the time, but it is really hard because you have to deliver constantly, and I've been at a startup. I had a good time, but I don't know if I could do it forever. It's an interesting pace and it's exactly like we talked about in the beginning. These models are leapfrogging each other, and they are just constantly trying to take the next step compared to their competitors. It's just ruthless right now. - I think this leapfrogging nature and having multiple players is actually an underrated driver of language modeling progress where competition is so deeply ingrained in people, and these companies have intentionally created very strong cultures. Like, Anthropic is known to be so culturally, like, deeply committed and organized. I mean, we hear so little from them, and everybody at Anthropic seems very aligned. And it's like being in a culture that is super tight and having this competitive dynamic is a thing that's gonna make you work hard and create things that are better. But that comes at the cost of human capital, which is like you can only do this for so long, and people are definitely burning out. I wrote a post on burnout as I've tread in and out of this myself, especially trying to be a manager, full-mode training. It's a crazy job doing this. The book Apple in China by Patrick McGee, he talked about how hard the Apple engineers worked to set up the supply chains in China, and he was like, they had "saving marriage" programs, and he told in a podcast, he was like, "People died from this level of working hard." So I think it's just like it's a perfect environment for creating progress based on human expense, and there's gonna be a lot of... the human expense is the 996 that we started this with, which is like— ... people do really grind. - I also read this book. I think they had a code word for if someone had to go home to spend time with their family to save the marriage, and it's crazy. Then the colleagues said, "Okay, this is like red alert for this situation. We have to let that person go home this weekend." But at the same time, I don't think they were forced to work. They were so passionate about the product, I guess, that you get into that mindset. And I had that sometimes as an academic, but also as an independent person, I have that sometimes. I overwork, and it's unhealthy. I had back issues, I had neck issues, because I did not take the breaks that I maybe should have taken. But no one forced me to; it's because I wanted to work, because it's exciting stuff. - That's what OpenAI and Anthropic are like. They want to do this work. - Yeah, but there's also a feeling of fervor that's building, especially in Silicon Valley, aligned with the scaling laws idea, where there's this hype where the world will be transformed in a scale of weeks and you want to be at the center of it. And then, you know, I have this great fortune of having conversations with a wide variety of human beings, and from there I get to see all these bubbles and echo chambers across the world. It's fascinating to see how we humans form them. And I think it's fair to say that Silicon Valley is a kind of echo chamber, a kind of silo and bubble. I think bubbles are actually really useful and effective. It's not necessarily a negative thing because you could be ultra-productive. 
It could be the Steve Jobs reality distortion field, because you just convince each other that breakthroughs are imminent, and by convincing each other of that, you make the breakthroughs imminent. - Byrne Hobart wrote a book classifying bubbles. One of them is financial bubbles, which is like speculation, which is bad, and the other one is for build-outs, because it pushes people to build these things. And I do think AI is in this, but I worry about it transitioning to a financial bubble, which is— - Yeah, but also in the space of ideas, that bubble—you are doing a reality distortion field, and that means you are deviating from reality. And if you go too far from reality while also working, you know, 996, you might miss some fundamental aspects of the human experience, including beyond Silicon Valley. This is a common problem in Silicon Valley: it's a very specific geographic area. You might not understand the Midwest perspective, the full experience of all the other humans in the United States and across the world, and you speak a certain way to each other, you convince each other of a certain thing, and that can get you into real trouble. Whether AI is a big success and becomes a powerful technology or it's not, in either trajectory you can get yourself into trouble. So you have to consider all of that. Here you are, a young person trying to decide what you want to do with your life. - The thing that is... I don't even really understand this, but the SF AI memes have gotten to the point where "permanent underclass" was one of them, which was the idea that the last six months of 2025 was the only time to build durable value in an AI startup or model. Otherwise, all the value will be captured by existing companies and you will therefore be poor, which... that's an example of the SF thing that goes so far. I still think, for young people, being able to tap into it—if you're really passionate about wanting to have an impact in AI, being physically in SF is the most likely place where you're going to do this. But it has trade-offs. - I think SF is an incredible place, but there is a bit of a bubble. And if you go into that bubble, which is extremely valuable, just get out also. Read history books, read literature, visit other places in the world. Twitter and Substack are not the entire world. - I would say, one of the people I worked with is moving to SF, and it's like, I need to get him a copy of Season of the Witch, which is a history of SF from 1960 to 1985, which goes through the hippie revolution, like all the gays taking over the city and that culture emerging, and then the HIV/AIDS crisis and other things. And it's just like, that is so recent, and so much turmoil and hurt, but also love in SF. And it's like, no one knows about this. It's a great book, Season of the Witch. I recommend it. A bunch of my SF friends who get out recommended it to me. And I think that's just like living there... I lived there and I didn't appreciate this context, and it's just so recent. - Yeah. Okay, let's... We talked a lot about a lot of things. Certainly about the things that were exciting last year. But this year, one of the things you guys mentioned that's exciting is the scaling of text diffusion models, and just a different exploration of text diffusion. Can you talk about what that is and what possibility it holds? So, different kinds of approaches than the current LLMs?
- Yeah, so we talked a lot about the transformer architecture and the autoregressive transformer architecture specifically, like GPT. And it doesn't mean no one else is working on anything else. So, people are always on the, let's say, lookout for the next big thing. Because I think it would be almost stupid not to. Because sure, right now, the transformer architecture is the thing, and it works best, and there's, right now, nothing else out there. But, you know, it's always a good idea to not put all your eggs into one basket. So, people are developing other alternatives to the autoregressive transformer. One of them would be, for example, text diffusion models. And listeners may know diffusion models from image generation, like Stable Diffusion popularized it. There was a paper on generating images. Back then, people used GANs, Generative Adversarial Networks. And then there was this diffusion process where you iteratively denoise an image, and that resulted in really good quality images over time. Stable Diffusion came out of a company, and other companies built their own diffusion models. And then people are now like, "Okay, can we try this also for text?" It doesn't, you know, make intuitive sense at first, because it feels like, okay, it's not something continuous like a pixel that we can differentiate. It's discrete text, so how do we implement that denoising process? It's kind of similar to the BERT models by Google. Like, when you go back to the original transformer, there were the encoder and the decoder. The decoder is what we are using right now in GPT and so forth. The encoder is more like a parallel technique where you have multiple tokens that you fill in in parallel. GPT models, they do autoregressive generation, completing the sentence one token at a time. And in BERT models, you have a sentence that has gaps. You mask them out, and then one iteration is filling in these gaps. Text diffusion is kind of like that, where you are starting with some random text, and then you are filling in the missing parts or refining them iteratively over multiple iterations. And the cool thing here is that this can do multiple tokens at the same time. It's the promise of making it more efficient. Now, the trade-off is, of course, how good is the quality? It might be faster, and now you have this dimension of the denoising process. The more steps you do, the better the text becomes. And people... I mean, you can scale in different ways. They try to see if that is maybe a valid alternative to the autoregressive model in terms of giving you the same quality for less compute. Right now, there are papers that suggest if you want to get the same quality, you have to crank up the denoising steps, and then you end up spending the same compute you would spend on an autoregressive model. The other downside is, while being parallel sounds appealing, some tasks are not parallel. Like reasoning tasks or tool use, maybe where you have to ask a code interpreter to give you an intermediate result. That is tricky with diffusion models. So, there are some hybrids, but the main idea is: how can we parallelize it? It's an interesting avenue. I think right now, there are mostly research models out there, like LLaDA and some other ones. I saw some by startups, some deployed models. There is no big diffusion model at scale yet, like on the Gemini or ChatGPT level.
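To make the BERT-style infilling picture concrete, here is a toy Python sketch of iterative parallel denoising over discrete tokens, a loose illustration rather than any production text-diffusion system: start from a fully masked sequence and, at each step, commit the positions the model is most confident about. The `model` callable and the step schedule are illustrative assumptions.

```python
import torch

def toy_diffusion_decode(model, mask_id: int, seq_len: int, num_steps: int = 8):
    """Iterative parallel infilling, loosely in the spirit of masked text diffusion.
    `model` is assumed to map a (1, seq_len) token tensor to (1, seq_len, vocab_size) logits."""
    tokens = torch.full((1, seq_len), mask_id)            # start fully masked
    per_step = max(1, seq_len // num_steps)               # positions to commit per denoising step
    for _ in range(num_steps):
        still_masked = tokens == mask_id
        if not still_masked.any():
            break
        logits = model(tokens)                            # predict every position in parallel
        probs, preds = logits.softmax(dim=-1).max(dim=-1)
        # only consider positions that are still masked, then keep the most confident ones
        conf = torch.where(still_masked, probs, torch.full_like(probs, -1.0))
        k = min(per_step, int(still_masked.sum()))
        top = conf.topk(k, dim=-1).indices
        tokens[0, top[0]] = preds[0, top[0]]
    return tokens
```

Each pass fills several tokens at once, which is where the speed appeal comes from; the quality-versus-steps trade-off mentioned above is the `num_steps` knob.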
But there was an announcement page by Google where they said they are launching Gemini Diffusion, and they put it into context of their Gemini Nano 2 model, and they said basically: for the same quality on most benchmarks, we can generate things much faster. You mentioned what's next. I don't think the text diffusion model is going to replace autoregressive LLMs, but it will be something maybe for quick, cheap, at-scale tasks. Maybe the free tier in the future will be something like that. - I think there are examples where it's already being used. To paint an example of why this is better: for example, when GPT-5 is taking 30 minutes to respond, it's generating one token at a time. And this diffusion idea is essentially to generate all of those tokens and the completion in one batch, which is why it could be way faster. And I think it could be suited for... the startups I'm hearing about are code startups where you have a code base, and you have somebody that's effectively "vibe coding," and they say, "Make this change." And a code diff is essentially a huge reply from the model, but it doesn't have to have that much external context, and you can get it really fast by using these diffusion models. One example I've heard is that they use text diffusion to generate really long diffs, because doing it with an autoregressive model would take minutes, and that time for a user-facing product causes a lot of churn. Every second, you lose a lot of users. So, I think it's going to be this thing where it's going to grow and have some applications, but I actually thought that different types of models were going to be used for different things much sooner than they have been, so I've kind of gotten that trade-off wrong. I think the tool-use point is the one that's stopping them from being most general purpose because, for Claude Code and ChatGPT search, the autoregressive chain is interrupted with some external tool, and I don't know how to do that with the diffusion setup. - So what's the future of tool use this year and then in the coming years? Do you think there's going to be a lot of developments there, and how that's integrated into the entire stack? - I do think right now, it's mostly on the proprietary LLM side, but I think we will see more of that in the open-source tooling. And I think it is a huge unlock because then you can really outsource certain tasks from just memorization to actual— you know, instead of having the LLM memorize what is 23 plus 5, just use a calculator. - So do you think that can help solve hallucination? - Not solve it, but reduce it. So the LLM still needs to know when to ask for a tool call. And the second one is, well, it doesn't mean the internet is always correct. You can do a web search, but let's say I asked who won the World Cup in, let's say, 1998; it still needs to find the right website and get the right information. It can still go to the incorrect website and give me incorrect information. So I don't think it will fully solve that, but it is improving it in that sense. And another cool paper from earlier this year—I think it was December 31st, so it's not technically 2026, but close—is the recursive language model paper. That's a cool idea to kind of take this even a bit further. Just to explain, Nathan, you also mentioned earlier, it's harder to do cool research in academia because of the compute budget.
If I recall correctly, they did everything with GPT-5, so they didn't even use local models, but the idea is, let's say you have a long-context task; instead of having the LLM solve all of it in one shot or even in a chain, you break it down into sub-tasks. You have the LLM decide what is a good sub-task, and then recursively call an LLM to solve that. And I think something like that, adding tools—you know, each one maybe you have a huge Q&A task, so each one goes to the web and gathers information, and then you pull it together at the end and stitch it back together. I think there's going to be a lot of unlock using things like that where you don't necessarily improve the LLM itself; you improve how the LLM is used and what the LLM can use. One downside right now with tool use is you have to give the LLM permission to use tools. And that will take some trust, especially if you want to unlock things like having an LLM answer emails for you—or not even answer, but just sort them for you or select them for you or something like that. I don't know if I would today give an LLM access to my emails, right? I mean, this is a huge risk. - I think there's a cool... one last point on the tool use thing. I think that you hinted at this, and we've both come at this in our own ways, is that the open versus closed models use tools in very different ways, where open models, people go to Hugging Face and download the model, and then the person's going to be like, "What tool do I want?" I don't know, Exa is my preferred search provider, but somebody else might care for a different search startup. Where you release a model, it needs to be useful for multiple tools, for multiple use cases, which is really hard because you're making a general reasoning engine model, which is actually what gpt-oss-120b is good for. But on the closed models, you're deeply integrating the specific tool into your experience, and I think that open models will struggle to replicate some of the things that I like to do with closed models, which will be like, you can reference a mix of public and private information. And something that I keep trying every three to six months, I try Claude Code on the web, which is just prompting a model to make an update to some GitHub repository that I have. And it's just like that set of secure cloud environments is just so nice for just sending it off to do this thing and then come back to me, and these will probably help define some of the local open and closed niches. But I think initially, because there was such a rush to get tool use working, the open models were on the back foot, which is kind of inevitable. I think there's so much research, so many resources in these frontier labs, but it will be fun when the open models solve this because it's going to necessitate a bit more flexible and potentially interesting model that might work with this recursive idea to be an orchestrator and a tool use model, so hopefully the necessity drives some interesting innovation there. - So, continual learning—this is a longstanding topic, important problem. I think that increases in importance as the cost of training the models goes up. So can you explain what continual learning is and how important it might be this year and in the coming years to make progress? - This relates a lot to this kind of SF zeitgeist of, what is AGI, which is Artificial General Intelligence, and what is ASI, Artificial Superintelligence, and what are the language models that we have today capable of doing? 
I think the language models can solve a lot of tasks, but a key milestone for the AI community is essentially when AI could replace any remote worker, taking in information and solving digital tasks and doing them. And the limitation that's highlighted by people is that a language model will not learn from feedback the same way that an employee does. So if you hire an editor, the editor will mess up, but you will tell them. And if you hired a good editor, they don't do it again. But language models don't have this ability to modify themselves and learn very quickly. So the idea is, if we are going to actually get to something that is a true, general adaptable intelligence that can go into any remote work scenario, it needs to be able to learn quickly from feedback and do on-the-job learning. I'm personally more bullish on just providing the language models with very good context. You said, maybe offline, that you can write extensive documents for models where you say, "I have all this information. Here are all the blog posts I've ever written. I like this type of writing. My voice is based on this." But many people don't provide this to models, and the models weren't designed to take this amount of context previously. Agentic models are just starting. So it's this kind of trade-off: do we need to update the weights of this model with this continual learning thing to make them learn fast? Or the counterargument is we just need to provide them with more context and information, and they will have the appearance of learning fast by having a lot of context and being smart. - So we should mention the terminology here. Continual learning refers to changing the weights continuously so that the model adapts and adjusts based on the new incoming information, doing so continually, rapidly, and frequently. And then the thing you mentioned on the other side of it generally will be referred to as in-context learning. As you learn stuff, there's a huge context window. You can just keep loading it with extra information every time you prompt the system. I think both can legitimately be seen as learning. It's just a different place where you're doing the learning. - I think, to be honest with you, continual learning — updating weights — we already have that in different flavors. If you think about how... I think the distinction here is: do you do that on a personalized custom model for each person, or do you do it on a global model scale? I think we have that already, going from GPT-5 to 5.1 and 5.2. It's maybe not immediate, but it is a curated update, a quick curated update where there was feedback about things they couldn't do, feedback by the community. They updated the weights, next model, and so forth. So it is a flavor of that. Another even finer-grained example is like RLVR; you run it, it updates. The problem is you can't just do that for each person, because it would be too expensive to update the weights for everyone individually, and I think that's the bottleneck. Unless you get... Even at OpenAI scale, building the data centers, it would be too expensive. I think that is only feasible once you have something on the device where the cost is on the consumer. Like what Apple tried to do with the Apple Foundation models, putting them on the phone, where they learn from experience. - A bit of a related topic, but this kind of, maybe anthropomorphized term: memory. What are different ideas for the mechanism of how to add memory to these systems, as we're increasingly seeing?
Personalized memory especially? - Right now, it's mostly basically stuffing things into the context and then just recalling that. But again, I think it's expensive because you have to—you can cache it, but still you spend tokens on that. And the second one is you can only do so much. I think it's more like a preference or style. I mean, a lot of people do that when they solve math problems. You set it up so you can add previous knowledge and stuff, but you also give it certain preference prompts: "do what I preferred last time," or something like that. But it doesn't unlock new capabilities. So for that, one thing people still use is LoRA adapters. These are basically two smaller weight matrices that, instead of updating the whole weight matrix, you kind of have in parallel as an overlay, like the delta. But yeah, you can do that to some extent, but then again, it is economics. There were also papers on this, for example, "LoRA Learns Less and Forgets Less." It's like, there's no free lunch. If you want to learn more, you need to use more weights, but it gets more expensive. And then again, if you learn more, you forget more, and you have to find that Goldilocks zone basically. - We haven't really mentioned it much, but implied in this discussion is context length also. Is there a lot of innovation that's possible there? - I think the colloquially accepted thing is that it's a compute and data problem where you can... and sometimes small architecture things like attention variants. We talked about hybrid attention models, which is essentially if you have what looks like a state space model within your transformer. And those are better suited because you have to spend less compute to model the furthest along token. I think that those aren't free because they have to be accompanied by a lot of compute or the right data. How many sequences of 100,000 tokens do you have in the world, and where do you get these? It just ends up being pretty expensive to scale them. We've gotten pretty quickly to a million tokens of input context length. I would expect it to keep increasing and get to 2 million or 5 million this year, but I don't expect it to go to 100 million. That would be like a true breakthrough, and I think those breakthroughs are possible. I think of the continual learning thing as a research problem where there could be a breakthrough that just makes transformers work way better at this and it's cheap. These things could happen with so much scientific attention. But turning the crank, it'll be consistent increases over time. - Looking at the extremes, I think there's, again, no free lunch. So, the one extreme to make it cheap: you have, let's say, an RNN that has a single state where you save everything from the previous stuff. It's like a specific fixed-size thing, so you never really grow the memory because you are stuffing everything into one state, but then the longer the context gets, the more information you forget because you can't compress everything into one state. Then on the other hand, you have the transformers, which try to remember every token, which is great sometimes if you want to look up specific information, but very expensive because you have the KV cache that grows, the dot product that grows. But then, like you said, the Mamba layers—I mean, they kind of have the same problem. Like an RNN, you try to compress everything into one state; you're a bit more selective there. But then I think it's like this Goldilocks zone again.
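A minimal sketch of the LoRA adapter idea mentioned just above: the base weight matrix stays frozen, and two small trainable matrices act as a low-rank delta on top of it. The rank and scaling values here are illustrative defaults, not a recommendation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: y = base(x) + B(A(x)) * scale."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                               # freeze the original weights
        self.A = nn.Linear(base.in_features, rank, bias=False)    # down-projection
        self.B = nn.Linear(rank, base.out_features, bias=False)   # up-projection
        nn.init.zeros_(self.B.weight)                             # start as a no-op update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.B(self.A(x)) * self.scale

# Usage: wrap an existing projection, then train only A and B.
# layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
```

Only A and B receive gradients, which is why this is cheap enough to fine-tune a 7-billion-parameter model on modest hardware, and also why it "learns less but forgets less."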
With Nemotron 3, they found a good ratio of how many attention layers you need for the global information, where everything is accessible, compared to having these compressed states. And I think that's how we will scale more—by finding better, let's say, ratios in the Goldilocks zone, between making it cheap enough to run, but then also powerful enough to be useful. And one more plug here, the Recursive Language Model paper, that is one of the papers that tries to kind of address the long context thing. So what they found is essentially that instead of stuffing everything into this long context, if you break it up into multiple smaller tasks—so you save memory by having multiple smaller calls—you can actually get better accuracy than having the LLM try everything all at once. I mean, it's a new paradigm. We will see, you know, there might be other flavors of that. So I think with that, we will still make improvements on long context, but then also, like Nathan said, I think the problem is for pre-training itself, we don't have as many long context documents as other documents. So it's harder to study basically how LLMs behave and stuff like that on that level. - There are some rules of thumb where essentially you pre-train a language model, like OLMo, we pre-trained at like 8K context length and then extended to 32K with training. And there are some rules of thumb where, if you're essentially doubling the training context length, it takes like 2X the compute, and then you can normally 2 to 4X the context length again. So I think a lot of it ends up being kind of compute bound at pre-training, which is in this... Like we talked about, everyone talks about this big increase in compute for the top labs this year, and that should reflect in some longer context windows. But I think on the post-training side, there are some more interesting things. As we have agents, the agents are gonna manage this context on their own, where now, people that use Claude Code a lot dread the compaction, which is when Claude takes its entire full 100,000 tokens of work and compacts it into a bulleted list. But what the next models will do—and I'm sure people are already working on this—is essentially the model can control when it compacts and how. So you can essentially train your RL algorithm where compaction is an action that shortens the history, and then the problem formulation will be, "I want to keep the maximum evaluation scores that I have gotten while the model compacts its history to the minimum length." Because then you have the minimum amount of tokens that you need to do this kind of compounding autoregressive prediction. So there are actually pretty nice problem setups in this, where the... Like these agentic models learn to use their context in a different way than just plowing forward. - One interesting recent example would be DeepSeek-V3.2, where they had a sparse attention mechanism where they have essentially a very efficient, small, lightweight indexer. And instead of attending to all tokens, it selects: "What tokens do I actually need?" I mean, it almost comes back to the original idea of attention where you are selective, but attention is always on, you have maybe zero weight on some of them, but you use them all. But they are even more like, "Let's just mask that out or not even do that." And even with sliding window attention in OLMo, that is also kind of that idea. You have a rolling window where you keep it fixed, because you don't need everything.
Occasionally, in some layers you might, but it's wasteful. But right now, I think, if you use everything, you're on the safe side—it gives you the best bang for the buck because you never miss information. And I think this year will be more about figuring out, like you said, how to be smarter about that. Right now, people want to have the next state-of-the-art, and the state-of-the-art happens to be the brute-force, expensive thing. And then once you have that, as you said, keep that accuracy, but let's see how we can do that cheaper now, with tricks. - Yeah. All this scaling thing. The reason we get the Claude 4.5 Sonnet model first is because you can train it faster and you're not hitting these compute walls as soon. They can just try a lot more things and get the model faster, even though the bigger model is actually better. - I think we should say that there's a lot of exciting stuff going on in the AI space. My mind has recently been really focused on robotics. Today, we almost entirely didn't talk about robotics. There's a lot of stuff on image generation, video generation. I think it's fair to say that the most exciting research work in terms of the amount, intensity, and fervor is in the LLM space, which is why I think it's justified for us to focus on the LLMs that we're discussing. But it'd be nice to bring in certain things that might be useful. For example, world models— there's growing excitement on that. Do you think there will be any use in this coming year for world models in the LLM space? - Yes, I do think so. Also with LLMs, what's interesting here is that if we unlock more LLM capabilities, it also automatically unlocks all the other fields because it makes progress faster. A lot of researchers and engineers use LLMs for coding. So even if they work on robotics, if you optimize these LLMs that help with coding, it pays off. But then, yes, world models are interesting. It's basically where you have the model run a simulation of the world in a sense, like a little toy thing of the real thing, which can, again, unlock capabilities regarding data the LLM is not aware of. It can simulate things. And I think LLMs happen to work well by pre-training and doing next-token prediction. But we could do this even more sophisticatedly in a sense. I think there was a paper by Meta, the Code World Model paper, where they basically apply the concept of world models to LLMs, where instead of just having next-token prediction and verifiable rewards checking the answer correctness, they also make sure the intermediate variables are correct. You know, it's kind of like the model is learning basically a code environment in a sense. And I think this makes a lot of sense. It's just expensive to do, but it is making things more sophisticated, like modeling the whole thing, not just the result. And so it can add more value. I remember when I was a grad student, there was a competition called CASP, I think, where they do protein structure prediction. They predict the structure of a protein that has not been solved yet at that point. So in a sense, this is actually great, and I think we need something like that for LLMs also, where you do the benchmark, but no one has seen the solutions. You hand in the results, but no one knows the solution yet. And then after the fact, someone reveals it. But AlphaFold, when it came out, crushed this benchmark. I mean, there were also multiple iterations, but I remember the first one.
I'm not an expert in that subject, but the first one explicitly modeled the physical interactions of the... You know, the physics of the molecule. Also the angles, impossible angles. And then in the next version, I think they got rid of this and just went with brute force, scaling it up. And I think with LLMs, we are currently in this brute force scaling because it just happens to work. But I do think at some point it might make sense to bring back this thing. And I think with world models, that is where that might actually be quite cool. I mean, yeah. And of course, also for robotics, which is completely unrelated to LLMs. - Yeah. And robotics is very explicit. So there's the problem of locomotion or manipulation. Locomotion is much more solved, especially in the learning domain. But there's a lot of value, just like with the initial protein folding systems, bringing in the traditional model-based methods. So you don't... it's unlikely that you can just learn the manipulation or the whole-body loco-manipulation problem end to end. That's the dream. But then you realize when you look at the magic of the human hand and the complexity of the real world, it's really hard to learn this all the way through, the way I guess AlphaFold 2 didn't. - I'm excited about the robotic learning space. I think it's collectively getting supercharged by all the excitement and investment in language models generally, where the infrastructure for training transformers, which is a general modeling thing, is becoming world-class industrial tooling, where wherever there was a limitation for robotics, it's just way better. There's way more compute. And then on top of that, they take these language models as kind of central units where you can do interesting explorative work around something that already works. And then I see it emerging as, kind of like we talked about, Hugging Face Transformers and the Hugging Face Hub. I think when I was at Hugging Face, I was trying to get this to happen, but it was too early. It's like these open robotic models on Hugging Face, and having people be able to contribute data and fine-tune them. I think we're much closer now that the investment in robotics and self-driving cars is related and it enables this, where once you get to the point where you can have this sort of ecosystem where somebody can download a robotics model and maybe fine-tune it to their robot or share datasets across the world. There's some work in this area like RT-X, I think it was a few years ago, where people are starting to do that. But once they have this ecosystem, it'll look very different. And then this whole post-ChatGPT boom is putting more resources into that, which I think is a very good area for doing research. - This is also resulting in much better, more accurate, and more realistic simulators being built, closing the sim-to-real gap in the robotic space. But, you know, you mentioned a lot of excitement in the robotics space and a lot of investment. The downside of that, which happens in hype cycles—I personally believe, and most robotics people believe, that robotics is not going to be solved at the timescale that's being implicitly or explicitly promised. And so what happens when there's all these robotics companies that spring up and then they don't have a product that works? Then there's going to be this kind of crash of excitement, which is nerve-wracking.
Hopefully something else will come in and keep swooping in so that the continued development of some of these ideas keeps going. - I think it's also related to the continual learning issue, essentially, where the real world is so complex. With LLMs, you don't really need to have something learn for the user, because there are a lot of things everyone has to do. Everyone maybe wants to, I don't know, fix their grammar in their email or code or something like that. It's more constrained, so you can kind of prepare the model for that. But preparing the robot for the real world is harder. I mean, you have the robotic foundation models, and you can learn certain things like grasping things. But then again, everyone's house is different. It's so different, and that is, I think, where the robot would have to learn on the job, essentially. And that, I guess, is the bottleneck right now: how to customize it on the fly, essentially. - I don't think I can possibly overstate the importance of the thing that doesn't get talked about almost at all by robotics folks or anyone, which is safety. All the interesting complexities we talk about learning, all the failure modes and failure cases, everything we've been talking about with LLMs—sometimes they fail in interesting ways. All of that is fun and games in the LLM space. In the robotic space, in people's homes, across millions of minutes and billions of interactions, you're really almost never allowed to fail. When you have embodied systems that are put out there in the real world, you just have to solve so many problems you never thought you'd have to solve when just thinking about the general robot learning problem. - I'm so bearish on in-home learned robots for consumer purchase. I'm very bullish on self-driving cars, and I'm very bullish for robotic automation, e.g., Amazon distribution, where Amazon has built whole new distribution centers designed for robots first rather than humans. There's a lot of excitement in AI circles about AI enabling automation and mass-scale manufacturing, and I do think that the path to robots doing that is more reasonable, where it's a thing that is designed and optimized to do a repetitive task that a human could conceivably do but doesn't want to. And I'm much more optimistic about that path, but it's also going to take a lot longer than people probably predict. I think the leap from AI singularity to we can now scale up mass manufacturing in the US because we have a massive AI advantage is one that is troubled by a lot of political and other challenging problems. - Let's talk about timelines, specifically timelines to AGI or ASI. Is it fair, as a starting point, to say that nobody really agrees on the definitions of AGI and ASI? - I kind of think there's a lot of disagreement, but I've been getting pushback where a lot of people kind of say the same thing, which is a thing that could reproduce most digital economic work. So the remote worker is a fairly reasonable example. And I think OpenAI's definition is somewhat related to that, which is an AI that can do a lot of economically valuable tasks—which I don't really love as a definition, but I think it could be a grounding point, because language models today, while immensely powerful, are not this remote worker drop-in.
And there are things that could be done by an AI that are way harder than remote work, which are like finding an unexpected scientific discovery that you couldn't even posit, which would be an example of something that somebody says is an artificial superintelligence problem. Or taking in all medical records and finding linkages across certain illnesses that people didn't know about, or figuring out that some common drug can treat some niche cancer. They would say that that is a superintelligence thing. So these are kind of natural tiers. My problem with it is that it becomes deeply entwined with the quest for meaning of AI and these religious aspects to it. So there are different directions you can take it. - And I don't even know if the remote worker is a good definition because what exactly is that? I actually... I mean, I don't know if you like the originally titled AI 2027 report. They focus more on code and research taste, so the target there is the superhuman coder. So they have several milestone systems: the superhuman coder, the superhuman AI researcher, then the superintelligent AI researcher, and then full ASI, artificial superintelligence. But after you develop the superhuman coder, everything else follows quickly. There, the task is to have fully autonomous, automated coding. So any kind of coding you need to do in order to perform research is fully automated. And from there, humans would be doing AI research together with that system, and they will quickly be able to develop a system that can actually do the research for you. That's the idea. And initially their prediction was 2027, 2028, and now they've pushed it back by three to four years, to 2031 as the mean prediction. Probably my prediction is even beyond 2031, but at least you can, in a concrete way, think about how difficult it is to fully automate programming. - Yeah, I disagree with some of their presumptions and dynamics on how it would play out, but I think they did good work in the scenario-defining milestones that are concrete and tell a useful story, which is why the reach for this AI 2027 document transcended Silicon Valley. It's because they told a good story and they did a lot of rigorous work to do this. I think the camp that I fall into is that AI is so-called "jagged," which means it will be excellent at some things and really bad at some things. I think that when they're close to this automated software engineer, what it will be good at is things like traditional ML systems and frontend, which the models are excellent at; but distributed ML the models are actually quite bad at, because there's so little training data on doing large-scale distributed learning. This is something we already see, and I think this will just get amplified. And then it's kind of messier in these trade-offs, like how you think AI research works and so on. - So you think basically a superhuman coder is almost unachievable, because of the jagged nature of the thing, you're just always going to have gaps in capabilities? - I think it's assigning completeness to something where the models are already superhuman at some types of code. I think that will continue. And people are creative, so they'll utilize these incredible abilities to fill in the weaknesses of the models and move really fast. There'll always be this dance for a long time between the humans enabling the thing that the model can't do. And the best AI researchers are the ones that can enable this superpower. And I think those lines lead to what we already see.
I think with something like Claude Code, for building a website, you can stand up a beautiful website in a few hours, or do data analysis. These tools are going to keep getting better, and we'll pick up some new coding skills along the way. Linking to what's happening in big tech, this AI 2027 report leans into the singularity idea, whereas I think research is messy, social, and largely in the data in ways that AI models can't process. But what we do have today is really powerful, and these tech companies are all collectively buying into this with tens of billions of dollars of investment. So we are going to get some much better version of ChatGPT, a much better version of Claude Code than we already have. I think it's just hard to predict where that is going, but the bright clarity of that future is why some of the most powerful people in the world are putting so much money into this. And I think it's just kind of small differences between, like, we don't actually know what a better version of ChatGPT is, but also, can it automate AI research? I would say probably not, at least in this timeframe. Big tech is going to spend $100 billion much faster than we get an automated AI researcher that enables an AI research singularity. - So your prediction would be, if this is even a useful milestone, more than 10 years out? - I would say less than that on the software side, but I think longer than that on things like research. - Well, let's just for fun try to imagine a world where all software writing is fully automated. Can you imagine that world? - By the end of this year, the amount of software that'll be automated will be so high. But it'll be things like trying to train a model with RL where you need to have multiple bunches of GPUs communicating with each other. That'll still be hard, but it'll be much easier. - One way to think about this, the full automation of programming, is just thinking of the lines of useful code written, and the ratio of that to the number of humans in the loop. So presumably there'll be, for a long time, humans in the loop of software writing. It'll just be fewer and fewer relative to the amount of code written. Right? And the superhuman coder—I think the presumption there is it goes to zero, the number of humans in the loop. What does that world look like when the number of humans in the loop is in the hundreds, not in the hundreds of thousands? - I think software engineering will be driven more toward system design and the goals of outcomes, where I do think writing the software itself is largely going to be automated. I think this has been happening over the last few weeks, where people have gone from a month ago saying, "Oh yeah, agents are kind of slop," which is a famous Karpathy quote, to what is a little bit of a meme—the industrialization of software, when anyone can just create software at their fingertips. I do think we are closer to that side of things, and it takes direction and understanding how systems work to extract the best from the language models. I think it's hard to accept the gravity of how much is going to change with software development and how many more people can do things without ever looking at the code. - I think what's interesting is to think about whether these systems will be independent—completely independent in the sense that, while I have no doubt that LLMs will kind of at some point solve coding in a sense, like calculators solved calculating, right? So at some point, humans developed a tool where you never need a human to calculate that number. You just type it in, and it's an algorithm.
You can do it in that sense. And I think that's the same probably for coding. But the question isn't... I think what will happen is, you will just say, "Build that website." It will make a really good website, and then you maybe refine it. But will it do things independently where... Will you still be having humans asking the AI to do something? Like will there be a person to say, "Build that website?" Or will there be AI that just builds websites or something? - I think talking about building websites is— - Too simple. - The problem with websites and the problem with the web, you know, HTML and all that kind of stuff, is that it's very resilient to just... slop. It will show you slop; it's good at showing slop. I would rather think of safety-critical systems, like asking AI to end-to-end generate something that manages logistics, or manages cars, a fleet of cars—all that kind of stuff. So it end-to-end generates that for you. - I think a more intermediate example is take something like Slack or Microsoft Word. I think if organizations allow it, AI could very easily implement features end-to-end and do a fairly good job for things that you want to try. You want to add a new tab in Slack that you want to use, and I think AI will be able to do that pretty well. - Actually, that's a really great example. How far away are we from that? - Like this year. - See, I don't know. I don't know. - I guess I don't know how bad production codebases are, but I think that within... on the order of a few years, a lot of people are going to be pushed to be more of a designer and product manager, where you have multiple of these agents that can try things for you and they might take one to two days to implement a feature or attempt to fix a bug. And you have these dashboards, and I think Slack is actually a good dashboard, where your agents will talk to you and you'll then give feedback. But things like, if I make a website, "Do you want a passable logo?" I think these cohesive design things, the style, and deciding on what to add next time are going to be very hard for models. - I just... Okay. I hang out with a lot of programmers and some of them are a little bit on the skeptical side in general. That's just their vibe. I just think there's a lot of complexity involved in adding features to complex systems. Like, if you look at the browser, Chrome. If I wanted to add a feature, if I wanted to have tabs on the left side as opposed to up top. Interface-wise, right? I think we're not... This is not a next-year thing. - One of the Claude releases this year, one of their tests was: we give it a piece of software and leave Claude to run to recreate it entirely, and it could already almost rebuild Slack from scratch, just given the parameters of the software and left in a sandbox environment to do that. - So the "from scratch" part, I almost like better. - So it might be that smaller and newer companies are advantaged, and they're like, "We don't have the bloat and complexity, and therefore this feature exists." - And I think this gets to the point you mentioned, that some people you talk to are skeptical. I think that's not because the LLM can't do X, Y, Z. It's because people don't want it to do it this way. - Some of that could be a skill issue on the human side. We have to be honest with ourselves. And some of that could be an underspecification issue. So, programming, it's like you're just assuming... This is like an issue with communication in relationships and friendships.
You're assuming the LLM is supposed to read your mind. This is where spec-driven design is really important: using natural language to specify what you want. - If you talk to people at the labs, they use these in their training and production code. Claude Code is built with Claude Code, and they all use these things extensively. Dario talks about how much of Claude's code... It's like these people are slightly ahead in terms of the capabilities they have and what they probably spend on inference. They could spend 10 to 100x as much as we're spending on a lowly $100 or $200 a month plan. They truly let it rip. And I think that, with the pace of progress that we have, it seems like... a year ago we didn't have Claude Code and we didn't really have reasoning models. The difference between sitting here today and what we can do with these models is significant, and there's a lot of low-hanging fruit to improve them. The failure modes are pretty dumb. Like, "Claude, you tried to use a CLI command I don't have installed 14 times, and then I sent you the command to run." That, from a modeling perspective, is pretty fixable. So I don't know. - I agree with you. I've been becoming more and more bullish in general. Speaking to what you're articulating, I think it is a human skill issue. Anthropic is leading the way, along with other companies, in understanding how to best use the models for programming; therefore, they're effectively using them. There are a lot of programmers on the outskirts who don't... I mean, there's not a really good guide on how to use them. People are trying to figure it out, but... - It might be very expensive. The entry point might be $2,000 a month, which is only for tech companies and rich people. That could be it. - But it might be worth it. If the final result is a working software system, it might be worth it. By the way, it's funny how we converged from the discussion of the timeline to AGI to something more pragmatic and useful. Is there anything concrete, interesting, useful, and profound to be said about the timeline to AGI and ASI? Or are these discussions a bit too detached from the day to day? - There are interesting bets. A lot of people are trying to do RLVR—Reinforcement Learning with Verifiable Rewards—in real scientific domains, where startups with hundreds of millions of funding have wet labs where they're having language models propose hypotheses that are tested in the real world. I would say that they're early, but with the pace of progress, it's like... maybe they're early by six months and they make it because they were there first, or maybe they're early by eight years; you don't know. That type of moonshot to branch this momentum into other sciences would be very transformative, if AlphaFold moments happen in all sorts of other scientific domains because a startup solved this. I think there are startups—maybe Harmonic is one—where they're going all in on language models plus Lean for math. You had another guest where you talked about this recently, and we don't know exactly what's going to fall out of spending $100 million on that model. Most of them will fail, but a couple might be big breakthroughs that are very different than ChatGPT or Claude Code type software experiences. A tool that's only good for a PhD mathematician but makes them 100X effective... - I agree. I think this will happen in a lot of domains, especially domains that have a lot of resources, like finance, legal, and pharmaceutical companies. But then again, is it really AGI?
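As a brief aside on what RLVR means mechanically, here is a minimal sketch of the loop being described. All names are hypothetical stand-ins: in a real system the generator is an LLM policy and the verifier is something like a unit-test runner, a Lean proof checker, or a wet-lab assay.

```python
# Minimal, illustrative RLVR loop. The generator and verifier below are
# hypothetical stand-ins: real systems sample from an LLM policy and score
# with a domain verifier (tests pass, proof compiles, assay succeeds).
import random

def generate_candidates(prompt: str, n: int) -> list[str]:
    # Stand-in for sampling n completions from the current policy model.
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def verify(candidate: str) -> float:
    # Stand-in for a verifiable reward: 1.0 if the candidate checks out, else 0.0.
    return 1.0 if random.random() < 0.1 else 0.0

def rlvr_step(prompt: str, n_samples: int = 16) -> list[tuple[str, float]]:
    """One iteration: sample candidates, score each with the verifier, and
    return the rewarded ones that a policy-gradient update (e.g., GRPO or PPO)
    would then reinforce."""
    scored = [(c, verify(c)) for c in generate_candidates(prompt, n_samples)]
    return [pair for pair in scored if pair[1] > 0.0]

print(rlvr_step("propose a synthesis route for compound X"))
```

The hard part in the scientific setting is that the verify step is a slow, expensive real-world experiment rather than a cheap program check, which is why those wet-lab startups may be early.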
Because we are now specializing it again. Is it really that much different from back in the day when we had specialized algorithms? It's just the same thing, way more sophisticated, but I don't know, is there a threshold for AGI? I think the real cool thing here is that we have foundation models we can specialize. That's like the breakthrough. Right now, I think we are not there yet because, first, it's too expensive, but also, ChatGPT doesn't just give you their model to customize. I think once that's true... And I can imagine this as a business model, where OpenAI says at some point, "Hey, Bank of America, for $100 million we will do your custom model," something like that. I think that will be the huge economic value-add. The other thing, though, is also... Companies, I mean, what is the differentiating factor? If everyone uses the same LLM, if everyone uses ChatGPT, they will all do the same thing. Well, if everyone is moving in lockstep, but companies want to have a competitive advantage, there is no way around using some of their private data and specializing. It's gonna be interesting. - Seeing the pace of progress, it does feel like things are coming. I don't think the AGI and ASI thresholds are particularly useful. - I think the real question, and this relates to the remote worker thing, is: when are we going to see a big, obvious leap in economic impact? Because currently there's not been an obvious leap in the economic impact of LLMs, for example. And that's, you know, aside from AGI or ASI, all that stuff, there's a real question of, "When are we going to see a GDP jump?" - Yeah, what is the GDP made up of? A lot of it is financial services, so I don't know what this is. - Right, GDP is a... - It's just hard for me to think about the GDP bump, but I would say that software development becomes valuable in a different way, when you no longer have to look at the code. When Claude Code will make you a small business, which is essentially: Claude can set up your website, your bank account, your email, and whatever else. And you just have to express what you're trying to put into the world. That's not just an enterprise market, but it is hard. I don't know how you get people to try doing that. I guess if ChatGPT can do it—people are trying ChatGPT. - I think it boils down to the scientific question of, "How hard is tool use to solve?" Because a lot of the stuff you're implying, the remote work stuff, is tool use. It's like... computer use, like how you have an LLM that goes out there, this agentic system, and does something in the world, and only screws up 1% of the time. - Computer use... - Or less. - ...is a good example of what labs care about and where we haven't seen a lot of progress. We saw multiple demos in 2025 of, like, Claude can use your computer, or OpenAI had Operator, and they all suck. They're investing money in this, and I think that'll be a good example. Whereas actually, taking over the whole screen just seems a lot harder than having an API that they can call on the back end. For some of that, you have to set up a different environment for them all to work in. They're not working on your MacBook; they are individually interfacing with Google and Amazon and Slack, and they handle all these things in a very different way than humans do. So some of this might be structural blockers. - Also, specification-wise, I think the problem is, for arbitrary tasks, well, you still have to specify what you want your LLM to do.
And how do you do that? What is the environment? How do you specify? You can say what the end goal is, but if it can't solve the end goal... with LLMs, if you ask it for text, it can always clarify or do sub-steps. How do you put that information into a system that, let's say, books a travel trip for you? You can say, "You screwed up my credit card information," but even to get it to that point, how do you, as a user, guide the model before it can even attempt that? I think the interface is really hard. - Yeah, it has to learn a lot about you specifically. And this goes to continual learning, about the general mistakes that are made throughout, and then the mistakes that are made with you specifically. - All the AI interfaces are getting set up to ask humans for input. Claude Code, which we talked about a lot, asks for feedback and questions. If it doesn't have enough specification on your plan or your desired goal, it starts to ask questions: "Would you rather?" We talked about Memory, which saves across chats. Its first implementation is kind of odd, where it'll mention my dog's name or something in a chat. I'm like, "You don't need to be subtle about this. I don't care." But things that are emerging: ChatGPT has the Pulse feature, which is like a curated couple of paragraphs with links to something to look at, and people talk about how models are going to ask you questions. Which I think is a very... It's probably going to work. The language model knows you had a doctor appointment and asks, "Hey, how are you feeling after that?" Which again goes into the territory where humans are very susceptible to this, and there's a lot of social change to come. But also, they're experimenting with having the models engage. Some people like this Pulse feature, which processes your chats and automatically searches for information and puts it in the app. So there are a lot of things coming. - I used that feature before, and I always feel bad because it does that every day, and I rarely check it out. It's like, how much compute is burned on something I don't even look at, you know? It's kind of like, "Oh..." - There's also a lot of idle compute in the world, so don't feel too bad. - Okay. Do you think new ideas might be needed? Is it possible that the path to AGI, however we define that, to solve computer use more generally, to solve biology and chemistry and physics, sort of the Dario Amodei definition of AGI, requires totally new ideas? Non-LLM, non-RL ideas? What might they look like? We're going into philosophy land a bit. - For something like a singularity to happen, I would say yes. The new ideas could be architectures or training algorithms, fundamental deep learning things. But by their nature, they're pretty hard to predict. I think we will get very far even without those advances. We might get the software solution, but it might stop at software and not do computer use without more innovation. So I think that a lot of progress will be coming, but if you zoom out, there are still ideas coming in the next 30 years that are gonna look like, "that was a major scientific innovation that enabled the next chapter of this." And I don't know if it comes in one year or in 15 years. - Yeah. I wonder if the bitter lesson holds true for the next 100 years, what that looks like.
- If scaling laws are fundamental in deep learning, I think the bitter lesson will always apply, which is that compute will become more abundant, but even with abundant compute, the ones that have a steeper scaling-law slope or a better offset—like, this is a 2D plot of performance and compute—even if there's more compute available, the ones that get 100x out of it will win. - It might be something like literally computer clusters orbiting Earth with solar panels. - The problem with that is heat dissipation. You get all the radiation from the sun and don't have any air to dissipate heat. But there is a lot of space to put clusters, and there's a lot of solar energy there, so you could figure out the heat dissipation; there probably could be the engineering will to solve the heat problem. So there could be. - Is it possible, and we should say that it definitely is possible, that we're basically going to be plateauing this year? Not in terms of the system capabilities, but in terms of what they actually mean for human civilization. So on the coding front, really nice websites will be built. Very nice auto-complete. Very nice way to understand code bases and maybe help debug, but really just a very nice helper on the coding front. It can help research mathematicians do some math. It can help you with shopping. It's a nice helper. It's Clippy on steroids. What else? It may be a good education tool and all that kind of stuff, but computer use turns out extremely difficult to solve. So I'm trying to frame the cynical case in all these domains where there's not a really huge economic impact, but also realize how costly it is to train these systems at every level, both the pre-training and the inference, how costly the inference is, the reasoning, all of that. Like, is that possible? And how likely is that, do you think? - When you look at the models, there are so many obvious things to improve, and it takes a long time to train these models and to do this art. It'll take us, with the ideas that we already have, multiple years to actually saturate whatever benchmark or performance we are searching for. It might serve very narrow niches; the average user among ChatGPT's 800 million might not get a lot of benefit out of this, but it is going to serve different populations by getting better at different things. - But I think what everybody's chasing now is a general system that's useful to everybody. So, okay, so if that's not... That can plateau, right? - I think that dream is actually kind of dying. As you talked about with the specialized models where it's like... And multimodal is often... Video generation is a totally different thing. - "That dream is kind of dying" is a big statement, because I don't know if it's dying. If you ask the actual frontier lab people, they... I mean, they're still chasing it, right? - I do think they are still rushing to get the next model out, which will be much better than the... "Much" is a relative term, but it will be better than the previous one. And I can't see them slowing down. I just think the gains will be made or felt more through not only scaling the model, but now... I feel like there's a lot of tech debt. It's like, "Well, let's just put the better model in there." Better model, better model. And now people are like, "Okay, let's also at the same time improve everything around it too." Like the engineering of the context and inference scaling. The big labs will still keep doing that.
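To make the slope-versus-offset picture from a moment ago concrete, here is a toy sketch with made-up numbers; it is not fitted to any real model family, just an illustration of the shape of the argument.

```python
# Toy illustration of the "slope vs. offset" point (made-up constants): two
# methods whose loss follows L(C) = E + A * C**(-alpha), where C is compute.
# The method with the steeper exponent wins once compute is abundant, even
# though it starts out worse at small budgets.

def loss(C: float, E: float, A: float, alpha: float) -> float:
    return E + A * C ** (-alpha)

for C in [1e3, 1e6, 1e9]:
    shallow = loss(C, E=0.5, A=10.0, alpha=0.05)  # better at small compute
    steep   = loss(C, E=0.5, A=30.0, alpha=0.15)  # worse start, steeper slope
    print(f"C={C:.0e}  shallow={shallow:.3f}  steep={steep:.3f}")
```

With these made-up constants the shallow method is ahead at small compute, but the steeper curve pulls ahead once compute becomes abundant, which is the "gets 100x out of the compute" dynamic being described.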
And now the smaller labs will also catch up, because now they are hiring more. There will be more people, and LLMs also make them more productive. It's kind of like a circle... It's like amplification. I think what we can expect is amplification, but not like a change of any... not like a paradigm change. I don't think that is true, but everything will be just amplified and amplified, and I can see that continuing for a long time, you know? - Yeah. I guess my statement that the dream is dying depends on exactly what you think it's gonna be doing. Like, Claude Code is a general model that can do a lot of things, but it's not necessarily... It depends a lot on integrations. I bet Claude Code could do a fairly good job of doing your email, and the hardest part is figuring out how to give information to it and how to get it to be able to send your emails. But that's just kind of like... I think it goes back to what the "one model to rule everything" ethos is, which is just like a thing in the cloud that handles your entire digital life and is way smarter than everybody. It's like it's operating in a... So it's an interesting leap of faith to go from "Claude Code becomes that," which in some ways is... There are some avenues for that, but I do think that the rhetoric of the industry is a little bit different. - I think the immediate thing we will feel next as a normal person using LLMs will probably be related to something trivial, like making figures. Right now, LLMs are terrible at making figures. Is it because we are getting served the cheap models with much less inference compute than behind the scenes? Maybe some. Like, there are some ways to get better figures, but if you ask today, "Draw a flowchart of X, Y, Z," it's terrible most of the time. And it is a very simple task for a human. I think it's almost easier sometimes to draw something than to write something. - Yeah, the multimodal understanding does feel like something that is odd... that it's not better solved. - I think there's one obvious thing we're not saying, that we're not realizing, that's a gigantic thing that's hard to measure, which is making all of human knowledge accessible to the entire world. One thing that is hard to articulate is the huge difference between Google Search and an LLM. I feel like I can basically ask an LLM anything and get an answer, and it's hallucinating less and less. And that means understanding my own life, figuring out a career trajectory, solving the problems all around me, learning about anything through human history. I feel like nobody's really talking about that, because they just immediately take it for granted that this is awesome. That's why everybody's using it: because you get answers for stuff. Think about the impact across time. This is not just in the United States; it's all across the world. Kids throughout the world being able to learn these ideas... the impact that has across time is probably... That's the real impact. Talk about GDP; it won't be like a leap. It'll be... that's how we get to Mars, that's how we build these things, that's how we have a million new OpenAIs and all the innovation from there. It's this quiet force that permeates everything: human knowledge. - I agree with you. In a sense, it makes knowledge more accessible, but it also depends on what the topic is. For something like math, you can ask it questions and it answers, but if you want to learn a topic from scratch, the sweet spot is still elsewhere.
There are really good math textbooks laid out linearly, and that is a proven strategy to learn a topic. It makes sense, if you start from zero, to use information-dense text to soak it up, but then you use the LLM to make infinite exercises. Like, if you have problems in a certain area, or you're uncertain about certain things, you ask it to generate example problems, you solve them, and if you need more background knowledge, you ask it to generate that. But then... it won't give you anything, let's say, that is not in the textbook. It's just packaging it differently, if that makes sense. But then there are things I feel like where it also adds value in a more timely sense, where there is no good alternative besides a human doing it on the fly. For example, if you're planning to go to Disneyland and you try to figure out which tickets to buy for which park when, well, there is no textbook on that. There is no information-dense resource. There's only the sparse internet, and then there is a lot of value in the LLM. You just ask it. You give it your constraints: I'm traveling on these days, I want to go there and there. Please figure out what I need, when and from where, what it costs and stuff like that, and it is a very customized, on-the-fly package. And this is like one of a thousand examples of personalization. Personalization is essentially pulling information from the sparse internet, the non-information-dense thing, where there's no better version that exists. It just doesn't exist. You make it almost from scratch. - And if it does exist, it's full of, speaking of Disney World, what would you call it? Ad slop. It's impossible. Take any city in the world, what are the top 10 things to do? An LLM is just way better to ask than anything on the internet. - Well, for now, that's because they're subsidized and they're gonna be paid for by ads. - Oh my goodness. - It's coming. - No. No. I mean, I'm hoping there's a very clear indication of what's an ad and what's not an ad in that context. - That's something I mentioned a few years ago. If, I don't know, you are looking for a new running shoe, well, is it a coincidence that Nike maybe comes up first? Maybe, maybe not. But I think there are clear laws. You have to be clear about that. I think that's what everyone fears. It's the subtle message in there. But that also brings us to the topic of ads, where I thought this was going to be a thing in 2025, just because they're still not making money in other ways right now, having ad spots in there... but the thing is, they couldn't, because there are alternatives without ads and people would just flock to the other products. It's also just crazy how, yeah, how they're one-upping each other, spending so much money to just get the users. - I think so. Like, some Instagram ads—I don't use Instagram, but I understand the appeal of paying a platform to find users who will genuinely like your product, and that is the best case of things like Instagram ads. But there are also plenty of cases where advertising is very awful for incentives, and I think that a world where the power of AI can integrate with that positive view of, "I am a person and I have a small business and I want to make the best, I don't know, damn steak knives in the world, and I want to sell them to somebody who needs them."
And if AI can make that sort of advertising thing work even better, that's very good for the world, especially with digital infrastructure, because that's how the modern web has been built. But that's not to say that addicting feeds, so that you can show people more content, are a good thing. So I think that's even what OpenAI would say: they want to find a way to get the monetization upside of ads while still giving their users agency. And I personally would think that Google is probably going to be better at figuring out how to do this, because they already have ad supply, and if they figure out how to turn this demand in their Gemini app into useful ads, then they can turn it on. And somebody will figure it out. I don't know if it's this year, but there will be experiments with it. - I do think what holds companies back right now is really just that the competition is not doing it. It's more like a reputation thing. I think people are just afraid right now of ruining or losing their reputation, losing users, because it would make headlines if someone launched these ads. But... - Unless they were great, but the first ads won't be great, because it's a hard problem that we don't know how to solve. - Yeah, I think also the first version of that will likely be something like on X, like the timeline where you have a promoted post sometimes in between. It'll be something where it will say "promoted" or something small, and then there will be an image. I think right now the problem is: who makes the first move? - If we go 10 years out, the proposition for ads is that you will make so much money on ads by having so many users that you can use this to fund better R&D and make better models, which is why YouTube is dominating the market... Netflix is scared of YouTube. They have the ads, they make—I pay $28 a month for Premium. They make at least $28 a month off of me and many other people. And they're just creating such a dominant position in video. So I think that's the proposition: that ads can give you a sustained advantage in what you're spending per user. But there's so much money in it right now that somebody starting that flywheel is scary, because it's a long-term bet. - Do you think there'll be some crazy big moves this year business-wise? Like Google or Apple acquiring Anthropic or something like this? - Dario will never sell, but we are starting to see some types of consolidation, with Groq for $20 billion and Scale AI for almost $30 billion and countless other deals like this that are structured in a way that is detrimental to the Silicon Valley ecosystem, which is this licensing deal where not everybody gets brought along, rather than a full acquisition that benefits the rank-and-file employee by getting their stock vested. That's a big issue for the culture to address, because the startup ecosystem is the lifeblood where, if you join a startup, even if it's not successful, it might get acquired at a cheap premium and you'll get paid out for this equity. These licensing deals are taking the top talent a lot of the time. The Groq deal with NVIDIA is rumored to be better for the employees, but it is still this antitrust-avoiding thing. But I think that this trend of consolidation will continue.
Me and many smart people I respect have been expecting consolidation to happen sooner, but it seems like some of these things are starting to turn. At the same time, companies are raising ridiculous amounts of money for reasons where I'm like, "I don't know why you're taking that money." So it's mixed this year, but some consolidation pressure is starting. - What kind of surprising consolidation will we see? You say Anthropic is a "never." I mean, Groq is a big one. Groq with a Q, by the way. - Yeah. There's just a lot of startups and a very high premium on AI startups. So there could be a lot of that kind of stuff, yeah. - $10 billion range acquisitions, which is really big for a startup that was maybe founded a year ago. I think Manus.ai, this company based in Singapore, was founded eight months ago and then had a $2 billion exit. I think there will be some other multi-billion dollar acquisitions, like Perplexity. - Like Perplexity, right? - Yeah, people rumor them to Apple. I think there's a lot of pressure and liquidity in AI. There's pressure on big companies to have outcomes, and I would guess that a big acquisition gives people leeway to then tell the next chapter of that story. - I guess Cursor—we've been talking about code—if somebody acquires Cursor... - They're in such a good position by having so much user data. And we talked about continual learning. They had one of the most interesting sentences in a blog post, which is that they had their new Composer model, which was a fine-tune of one of these large Mixture of Experts models from China. You can know that by asking it, or because the model sometimes responds in Chinese, which none of the American models do. And they had a blog post where they're like, "We're updating the model weights every 90 minutes based on real-world feedback from people using it." Which is like the closest thing to real-world RL happening on a model, and it's just mentioned in one of their blog posts... - That's incredible. - ...which is super cool. - And by the way, I should say I use Composer a lot because one of the benefits it has is that it's fast. - I need to try it 'cause everybody says this. - And there'll be some IPOs potentially. You think Anthropic, OpenAI, xAI? - They can all raise so much money so easily that they don't feel a need to. So long as fundraising is easy, they're not going to IPO, because public markets apply pressure. I think we're seeing in China that the ecosystem's a little different, with both MiniMax and Z.ai filing IPO paperwork, and it will be interesting to see how the Chinese market reacts. I actually would guess that it's going to be similarly hypey to the US, so long as all this is going on, and not based on the reality that they're both losing a ton of money. I wish more of the gigantic American AI startups were public, because it would be very interesting to see how they're spending money and have more insight. And also just to give people access to investing in these, because I think they're some of the most formidable companies—they're the companies of the era. And the tradition is now for so many of the big startups in the US to not go public. It's like we're still waiting for the Stripe IPO, and Databricks definitely didn't go public either. They raised like a Series G or something. And I just feel like it's kind of a weird equilibrium for the market where it's like, I would like to see these companies go public and evolve in the way that a company can.
- Do you think 10 years from now some of the frontier model companies are still around? Anthropic, OpenAI? - I definitely don't see it as a winner-takes-all unless there truly is some algorithmic secret that one of them finds that lets this flywheel take off. Because the development path is so similar for all of them. Google and OpenAI have all the same products, and then Anthropic's more focused, but when you talk to people it sounds like they're solving a lot of the same problems. So I think... there are offerings that'll spread out. There's a lot of... it's a very big cake being made that people are going to take money out of. - I don't want to trivialize it, but OpenAI and Anthropic are primarily LLM service providers. And some of the other companies, like Google and xAI, linked to X, do other stuff too. And so it's very possible, if AI becomes more commodified, that the companies just providing LLMs will die. - I think the advantage they have is a lot of users, and I think they will just pivot. Like Anthropic, I think, pivoted. I don't think they originally planned to work on code, but they found, "Okay, this is a nice niche, and now we are comfortable and we push on this niche." I can see the same thing... Let's say hypothetically, I'm not sure if it will be true, but let's say Google takes all the market share of the general chatbot. Maybe OpenAI will then focus on some other sub-topic. They have too many users to go away in the foreseeable future. - I think Google is always ready to say, "Hold my beer," with AI models. - I think the question is if the companies can support the valuations. I see the AI companies being looked at in some ways like AWS, Azure, and GCP are: all competing in the same space and all very successful businesses. There's a chance that the API market is so unprofitable that they go up and down the stack to products and hardware. They have so much cash that they can build power plants and data centers, which is a durable advantage now. But there's also a reasonable outcome where these APIs are so valuable and so flexible for developers that they become something like AWS. But AWS and Azure are also going to have these APIs, so having five or six people competing in the API market is hard. So maybe that's why they get squeezed out. - You mentioned "RIP Llama." Is there a path to winning for Meta? - I think nobody knows. They're moving a lot, so they're signing licensing deals with Black Forest Labs, which is an image generation company, or Midjourney. So I think in some ways, on the product and consumer-facing AI front, it's too early to tell. I think they have some people who are excellent and very motivated being close to Zuckerberg. So I think there's still a story to unfold there. Llama is a bit different, where Llama was the most focused expression of the organization. And I don't see Llama being supported to that extent. I think it was a very successful brand for them. So they still might participate in the open ecosystem or continue the Llama brand into a different service, because people know what Llama is. - You think there's a Llama 5? - Not an open-weight one. - It's interesting. I think Llama was the pioneering open-weight model. With Llama 1, 2, and 3, there was a lot of love. But then, hypothesizing or speculating, I think the leaders at Meta, the upper executives, got very excited about Llama because they saw how popular it was in the community.
And then I think the problem was trying to, let's say, monetize the open—or not monetize the open source, but use it to make a bigger splash. It felt almost forced, like developing these very big Llama 4 models to be on top of the benchmarks. But I don't think the goal of Llama models is to be on top of the benchmarks, beating, let's say, ChatGPT or other models. I think the goal was to have a model that people can use, trust, modify, and understand. So that includes having smaller models. They don't have to be the best models. And what happened was, these models were, of course... the benchmarks suggested that they were better than they were, because they had specific models trained on preferences so that they performed well on benchmarks. That's kind of, like, this overfitting thing to force it to be the best. But then at the same time, they didn't do the small models that people could use. And I think that no one could run these big models then. And then there was kind of a weird thing. I think it's just because people got too excited about headlines, pushing the frontier. I think that's it. - And too much on the benchmarking side. - It's too much work. - I think it imploded under internal political fighting and misaligned incentives. The researchers want to build the best models, but there's a layer of organization and management that is trying to demonstrate that they do these things. And then there are rumors about how, for example, some horrible technical decision was made. It just seems like it got so bad that it all just crashed out. - Yeah, but we should also give huge props to Mark Zuckerberg. I think it comes from Mark, actually, from Mark Zuckerberg, from the top of the leadership, saying open source is important. The fact that that leadership exists means there could be a Llama 5, where they learn the lessons from benchmarking and say, "We're going to be like GPT-OSS and provide a really awesome library of open source models." - What people say is that there's a debate between Mark and Alexandr Wang, who is very bright, but much more against open source. And to the extent that he has a lot of influence over the AI org, it seems much less likely, because it seems like Mark brought him in for a fresh leadership eye in directing AI. And if being open or closed is no longer the defining nature of the model, I don't expect that to be a defining argument between Mark and Alex. They're both very bright, but I just have a hard time understanding all of it, because Mark wrote this piece in July of 2024, which was probably the best blog post at the time, making the case for open source AI. And then July 2025 came around and it was, "We're reevaluating our relationship with open source." So it's just kind of... - But I think also the problem... Not the problem, but I think, well, we may have been a bit too harsh, and that caused some of that. Because, I mean, we as open source developers or the community... Even though the model was maybe not what everyone hoped for, it got a lot of backlash. And I think that was unfortunate, because I can see that as a company, they were hoping for positive headlines. And instead of just getting no headlines or positive headlines, they got negative headlines in turn. And then it kind of reflected badly on the company.
I think that is also something where it's maybe a spite reaction, almost like, "Okay, we tried to do something nice, we tried to give you something cool, like an open source model, and now you are kind of being negative about us and the company." So in that sense, it looks like, "Well, maybe then we'll change our mind." I guess. I don't know. - Yeah, that's where the dynamics of discourse on X can lead us, as a community, astray. Because sometimes it feels random. People pick the thing they like and don't like. I mean, you can see the same thing with Grok 4.1 and Grok Code Fast 1. I don't think, vibe-wise, people love it publicly. But a lot of people use it. So if you look at Reddit and X, the programming community doesn't really give it praise, but they use it. And the same thing with probably Llama. I don't understand the dynamics of either positive hype or negative hype. I don't understand it. - I mean, one of the stories of 2025 is the gap Llama left in the US being filled, which is the rise of these Chinese open-weight models, to the point where that was the single issue I've spent a lot of energy on lately, trying to do policy work to get the US to invest in this. - So just tell me the story of ATOM. - The ATOM Project started as me calling it the American DeepSeek Project, which doesn't really work for DC audiences, but it's the story of the most impactful thing I can do with my career, which is that these Chinese open-weight models are cultivating a lot of power, and there is a lot of demand for building on these open models, especially in enterprises in the US that are very cagey about Chinese models. - The ATOM Project, American Truly Open Models, is a US-based initiative to build and host high-quality, genuinely open-weight AI models and supporting infrastructure, explicitly aimed at competing with and catching up to China's rapidly advancing open-source AI ecosystem. - I think the one-sentence summary would be that... or two sentences. One is a proposition that open models are going to be an engine for AI research, because that is what people start with; therefore, it's important to own them. And the second one is, therefore, the US should be building the best models so that the best research happens in the US, and those US companies take the value from being the home of where AI research is happening. And without more investment in open models—we have plots on the website where it's like, "Qwen, Qwen, Qwen, Qwen"—it's all these models that are excellent from these Chinese companies that are cultivating influence internationally. I think the US is spending way more on AI, and the ability to create open models that are a generation behind the cutting edge of the closed labs costs roughly $100 million, which is a lot of money, but not a lot of money to these companies. Therefore, we need a centralizing force of people who want to do this. And I think we got engagement from people pretty much across the full stack, whether it's policy... - So there has been support from the administration? - I don't think anyone technically in government has signed it publicly, but I know people that have worked in AI policy, in both the Biden and Trump administrations, who are very supportive of promoting open-source models in the US. I think, for example, AI2 got a grant from the NSF for $100 million over four years, which is the biggest CS grant the NSF has ever awarded, and it's for AI2 to attempt this. It's a starting point.
But the best thing happens when there are multiple organizations building models, because they can cross-pollinate ideas and build this ecosystem. I don't think it works if it's just Llama releasing models, because Llama could go away. The same thing applies for AI2; I can't be the only one building models. It becomes a lot of time spent on talking to people, whether in policy... I know NVIDIA is very excited about this. I think Jensen Huang has been talking about the urgency for this, and they've done a lot more in 2025, where the Nemotron models are more of a focus. They've started releasing some data along with NVIDIA's open models, and very few companies do this, especially of NVIDIA's size, so there are signs of progress. We hear about Reflection AI, where they say their two billion dollar fundraise is dedicated to building US open models, and I feel like their announcement tweet reads like a blog post, right? I think that cultural tide is starting to turn. In July, four or five DeepSeek-caliber Chinese open-weight models were released, and zero from the US. That's the moment where I realized, like, "Oh, I guess I have to spend energy on this because nobody else is gonna do it." So it takes a lot of people contributing together, and I'm not saying the ATOM Project is the one thing that's moving the ecosystem, but it's people like me doing this sort of thing to get the word out. - Do you like the 2025 America's AI Action Plan? That includes open source stuff. The White House AI Action Plan includes a dedicated section titled "Encourage Open-Source and Open-Weight AI," defining such models and arguing they have unique value for innovation and startups. - Yeah. I mean, the AI Action Plan is a plan, but largely, I think it's maybe the most coherent policy document that has come out of the administration, and I hope that it largely succeeds. I know people that have worked on the AI Action Plan and the challenges of taking policy and making it real. I have no idea how to do this as an AI researcher, but largely a lot of things in that were very real, and there's a huge build-out of AI in the country. There are a lot of issues that people are hearing about, from water use to whatever, and we should be able to build things in this country, but also, we need to not ruin places in our country in the process of building it, and it's worthwhile to spend energy on. I think that's a role the federal government plays. They set the agenda. And with AI, setting the agenda that open-weight models should be a first consideration is a large part of what they can do, and then people think about it. - Also, for education and talent for these companies, it's very important, because otherwise, if there are only closed models, how do you get the next generation of people contributing at some point? Because otherwise, you will only be able to learn after you've joined a company. But at that point, how do you hire talented people? How do you identify talented people? I think open source is essential for a lot of things, but also even just for educating the population and training the next generation of researchers. It's the way, or the only way. - The way that I could've gotten this to go more viral was to tell a story of Chinese AI integrating with an authoritarian state, being ASI and taking over the world, and therefore we need our own American models.
But it's very intentional why I talk about innovation and science in the US, because I think it's both more realistic as an outcome, but also it's a world that I would like to manifest. - I would say, though, that even any open-weight model, I do think, is a valuable model. - Yeah. And my argument is that we should be in a leading position. But I think it's worth saying it simply because there are still voices in the AI ecosystem that say we should consider banning the release of open models due to safety risks. And I think it's worth adding that, effectively, that's impossible without making the US have its own great firewall, which is also known to not work that well, because the cost for training these models, whether it's one to a hundred million dollars, is attainable for a huge number of people in the world that want to have influence, so these models will be trained all over the world. And we want the models... especially when, like, I mean, there are safety concerns, but we want this information and these tools to flow freely across the world and into the US so that people can use them and learn from them. Stopping that would be such a restructuring of our internet that it seems impossible. - Do you think maybe in that case the big open-weight models from China are actually a good thing in a sense, like, for the US companies? Because maybe the US companies you mentioned earlier are usually one generation behind in terms of what they release open source versus what they are using? For example, gpt-oss might not be the cutting-edge model. Gemma 3 might not be, but they do that because they know this is safe to release. But then when they see, these companies see, for example, there is DeepSeek-V3.2, which is really awesome, and it gets used and there is no backlash, there is no security risk, that could then, again, encourage them to release better models. Maybe that, in a sense, is a very positive thing. - A hundred percent. These Chinese companies have set things into motion that I think would potentially not have happened if they were not all releasing models. So, like, I'm almost sure that those discussions have been had by leadership. - Is there a possible future where the dominant AI models in the world are all open source? - It depends on the trajectory of progress that you predict. If you think saturation in progress is coming within a few years, so essentially within the time where financial support is still very good, then open models will be so optimized and so much cheaper to run that they'll win out. This goes back to open source ideas, where so many more people will be putting money into optimizing the serving of these open-weight common architectures that they will become standards, and then you could have chips dedicated to them, and it'll be way cheaper than the offerings from these closed companies that are custom. - We should say that one of the things the AI 2027 report kind of predicts, from a narrative perspective, is that there will be a lot of centralization. As the AI systems get smarter and smarter, national security concerns will arise, and you'll centralize the labs, and they'll become super secretive, and there'll be this whole race, from a military perspective, between China and the US. And so all of these fun conversations we're having about LLMs... the generals and the soldiers will come into the room and be like, "All right. We're now in the Manhattan Project stage of this whole thing."
- In 2025, '26, '27, I don't think something like that is even remotely possible. You can make the same argument for computers, right? You can say, "Computers are capable and we don't want the general public to get them." Or chips, even AI chips, but you see how Huawei makes chips now. It took a few years, but... I don't think there is a way you can contain knowledge like that. I think in this day and age it is impossible, like the internet. I don't think this is a possibility. - On the Manhattan Project thing, I think that a Manhattan Project-like thing for open models would be pretty reasonable, because it wouldn't cost that much. But I think that will come. It seems like, culturally, the companies are changing. But I agree with Sebastian on all of that. I don't see it happening nor being helpful. - Yeah. The motivating force behind the Manhattan Project was civilizational risk. It's harder to motivate that for open-source models. - There's no civilizational risk. - On the hardware side, we mentioned NVIDIA a bunch of times. Do you think Jensen and NVIDIA will keep winning? - I think they have to iterate and manufacture a lot. They do innovate, but I think there's always the chance that someone does something fundamentally different and gets very lucky. But the problem is adoption. The moat of NVIDIA is probably not just the GPU. It's more like the CUDA ecosystem, and that has evolved over two decades. Even back when I was a grad student, I was in a lab doing biophysical simulations, molecular dynamics, and we had a Tesla GPU back then just for the computations. That was about 15 years ago now. They built this up over a long time, and that's the moat, I think. It's not the chip itself, although they have the money to iterate, build, and scale. But then it's really about compatibility. If you're at that scale, why would you go with something risky where there are only a few chips they can make per year? You go with the big one. But then I do think with LLMs now, it will be easier to design something like CUDA. It took 15 years because it was hard, but now that we have LLMs, we can maybe replicate CUDA. - And I wonder if there will be a separation of training and inference compute as we stabilize, and more compute is needed for inference. - That's supposed to be the point of the Groq acquisition. And that's part of what Vera Rubin is: they have a new chip with no high-bandwidth memory, or very little of it, which is one of the most expensive pieces. It's designed for pre-fill, which is the part of inference where you essentially do a lot of matrix multiplications. And then you only need the memory when you're doing this autoregressive generation, and you have the KV cache swaps. So they have this new GPU that's designed for that specific use case, and then the cost of ownership per FLOP or whatever is actually way lower. But I think that NVIDIA's fate still lies in the diffusion of AI. Their biggest clients are still these hyperscale companies. Like, Google obviously can make TPUs. Amazon is making Trainium. Microsoft will try to do its own things. And so long as the pace of AI progress is high, NVIDIA's platform is the most flexible and people will want that. But if there's stagnation, then there's more time to create bespoke chips. - It's interesting that NVIDIA is quite active in trying to develop all kinds of different products.
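To make the prefill/decode distinction described above concrete, here is a minimal NumPy sketch with a single attention head and made-up dimensions, not tied to any particular model or chip: prefill is one large matrix multiplication over the whole prompt that also writes the KV cache, while each decode step does only small multiplications but has to re-read the entire, growing cache. That asymmetry is why a prefill-oriented part can get away with cheaper, lower-bandwidth memory while decode hardware cannot.

```python
# Illustrative sketch only: single attention head, random weights, toy sizes.
import numpy as np

d_model = 64
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model) for _ in range(3))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def prefill(prompt_embeddings):
    """Process the whole prompt at once: big (T x d) matmuls, then cache K and V."""
    Q = prompt_embeddings @ W_q          # (T, d)
    K = prompt_embeddings @ W_k          # (T, d)
    V = prompt_embeddings @ W_v          # (T, d)
    scores = Q @ K.T / np.sqrt(d_model)  # (T, T) -- compute-heavy part
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf               # causal mask
    out = softmax(scores) @ V            # (T, d)
    return out, (K, V)                   # (K, V) is the KV cache

def decode_step(new_embedding, kv_cache):
    """Generate one token: tiny matmuls, but the whole cache must be read again."""
    K, V = kv_cache
    q = new_embedding @ W_q              # (d,)
    K = np.vstack([K, new_embedding @ W_k])  # cache grows by one row per token
    V = np.vstack([V, new_embedding @ W_v])
    scores = K @ q / np.sqrt(d_model)    # (T+1,) -- dominated by reading K, V
    out = softmax(scores) @ V            # (d,)
    return out, (K, V)

prompt = rng.standard_normal((512, d_model))   # 512 prompt tokens
_, cache = prefill(prompt)                     # compute-bound, cache written once
x = rng.standard_normal(d_model)
for _ in range(16):                            # each step re-reads the full cache
    x, cache = decode_step(x, cache)
print("cached keys:", cache[0].shape)          # (528, 64)
```

In a real serving stack the decode output would feed the rest of the transformer layer and token sampling; the sketch only illustrates the compute-bound versus memory-bandwidth-bound split between the two phases.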
- They try to create areas of commercial value that will use a lot of GPUs. - Mm-hmm. But they keep innovating and they're doing a lot of incredible research, so... - Everyone says the company is super oriented around Jensen and how operationally plugged in he is. And it sounds so unlike many other big companies that I've heard about. And so long as that's the culture, I think we can expect progress to keep happening. It's like he's still in the Steve Jobs era of Apple. So long as that is how it operates, I'm pretty optimistic for their situation, because it is their top-order problem, and I don't know if making these chips for the whole ecosystem is the top goal of all these other companies. They'll do a good job, but it might not be as good of a job. - Since you mentioned Jensen, I've been reading a lot about history and about singular figures in history. What do you guys think about the single man/woman view of history? How important are individuals for steering the direction of history in the tech sector? So, you know, what's NVIDIA without Jensen? You mentioned Steve Jobs. What's Apple without Steve Jobs? What's xAI without Elon or DeepMind without Demis? - People make things happen earlier and faster. Scientifically, many great scientists credit being in the right place at the right time and still making the innovation, where eventually someone else would still have had the idea. So I think that, in that way, Jensen is helping manifest this GPU revolution much faster and much more focused than it would happen without having a person there. And this is making the whole AI build-out faster. But I do still think that eventually something like ChatGPT would have happened and a build-out like this would have happened, but it probably would not have been as fast. I think that's the sort of flavor that applies here. - These individual people are placing bets on something. Some get lucky, some don't. But if you don't have these people at the helm, it would be more diffused. It's almost like investing in an ETF versus individual stocks. Individual stocks might go up or down more heavily than an ETF, which is more balanced. It will eventually go up over time. We'll get there. But it's just like, you know, the focus, I think, is the thing. Passion and focus. - Isn't there a real case to be made that without Jensen, there's not a reinvigoration of the deep learning revolution? - It could've been 20 years later, is what I would say. Or another AI winter could have come if GPUs weren't around. - That could change history completely, because you could think of all the other technologies that could've come in the meantime, and the focus of human civilization would get... Silicon Valley would be captured by different hype. - But I do think there's certainly an aspect where the GPU trajectory was all planned. On the other hand, it's also a lot of lucky coincidences or good intuition. Like the investment into, let's say, biophysical simulations. I mean, I think it started with video games, and then GPUs just happened to be good at linear algebra because video games require a lot of linear algebra. And then you have the biophysical simulations. But still, I don't think the master plan was AI. I think it happened to be Alex Krizhevsky: someone took these GPUs and said, "Hey, let's try to train a neural network on that." It happened to work really well, and I think it only happened because you could purchase those GPUs.
- Gaming would've created a demand for faster processors even if NVIDIA had gone out of business in the early days. That's what I would think. I think that the GPUs would've been different, but I think GPUs would still exist at the time of AlexNet and at the time of the Transformer. It's just hard to know whether it would have been one company this successful or multiple smaller companies with worse chips. But I don't think that's a 100-year delay. It might be a decade delay. - Well, it could be a one, two, three, four, five-decade delay. I just can't see Intel or AMD doing what NVIDIA did. - I don't think it would be a company that exists. I think it would be a different company that would rise. - Like Silicon Graphics or something. - So yeah, some company that has died would have done it. - But just looking at it, it seems like these singular figures, these leaders, have a huge impact on the trajectory of the world. Obviously, there are incredible teams behind them. But, you know, having that kind of very singular, almost dogmatic focus is necessary to make progress. - Yeah, I mean, even with GPT, it wouldn't exist if there wasn't a person, Ilya, who pushed for this scaling, right? - Yeah, Dario Amodei was also deeply involved in that. If you read some of the histories from OpenAI, it seems wild thinking about how early these people were like, "We need to hook up 10,000 GPUs and take all of OpenAI's compute and train one model." There were a lot of people who didn't want to do that. - Which is an insane thing to believe. To believe in scaling before scaling has any indication that it's going to materialize. Again, singular figures. Speaking of which, 100 years from now, this is presumably post-singularity, whatever singularity is. When historians look back at our time now, what technological breakthroughs would they really emphasize as the breakthroughs that led to the singularity? So far we have Turing to today, 80 years. - I think it would still be computing, like the umbrella term "computing." I don't necessarily think that in 100 or 200 years it would be AI. It could still very well be computers. We are now taking better advantage of them, but the fact of computing remains. - It's basically a Moore's Law discussion. Even the details of CUDA and GPUs won't be remembered, nor will all this software turmoil. It'll just be, obviously, compute. - I generally agree, but can the connectivity of the internet and compute be merged into one? Or is it both of them? - I think the internet will probably be remembered as related to communication. It could be a phone, the internet, or satellites. Compute is more like the scaling aspect of it. - It's possible that the internet is completely forgotten, that the internet is wrapped into phone networks, like communication networks. This is just another manifestation of that, and the real breakthrough comes from increased compute, or Moore's Law, broadly defined. - Well, I think the connection of people is very fundamental to it. It's like, you can talk to anyone. You want to find the best person in the world for something, they are somewhere in the world. And being able to have that flow of information—the AIs will also rely on this. I've been fixating on this since I said the dream of the one central model was dead. The thing that is evolving is people having many agents for different tasks. People already started doing this with different clouds. It's described as many AGIs in the data center, where each one manages different things and they talk to each other.
And that is reliant on networking and the free flow of information, on top of compute. But networking, especially with GPUs, is such a part of the scaling of compute. The GPUs and the data centers need to talk to each other. - Will anything about neural networks be remembered? Like, do you think there's something very specific and singular to the fact that it's neural networks that's seen as the breakthrough, like a stroke of genius, that you're basically replicating, in a very crude way, the human mind? The structure of the human brain, the human mind? - I think without the human mind, we probably wouldn't have neural networks, because it was just an inspiration for them. But on the other hand, I think it's just so, so different, it's digital versus biological, that I do think it will probably be grouped more broadly as an algorithm. - That's massively parallelizable... - On this particular kind of compute? - It could have been something like genetic computing, genetic algorithms, just parallelized. It just happens that this is more efficient and works better. - And it very well could be that the LLM, the neural networks, the way we architect them now, is just a small component of the system that leads to the singularity. - If you think of it in 100 years, I think society can be changed more with more compute and intelligence because of autonomy. But looking at this, what are the things from the Industrial Revolution that we remember? We remember the engine, which is probably the equivalent of the computer in this. But there's a lot of other physical transformations that people are aware of, like the cotton gin and all these things, these machines that are still known: air conditioning, refrigerators. Some of these things from AI will still be known. The word "transformer" could still be known. I would guess that deep learning is definitely still known, but the transformer might be evolved away from in 100 years with AGI researchers everywhere. But I think deep learning is likely to be a term that is remembered. - And I wonder what the air conditioning and refrigeration of the future is that AI brings. If we travel forward 100 years from now, if we could transport there right now, what do you think is different? How do you think the world looks different? First of all, do you think there are humans? Do you think there are robots everywhere walking around? - I do think specialized robots, for sure, for certain tasks. - Humanoid form? - Maybe half-humanoid. We'll see. I think for certain things, yes, there will be humanoid robots, because the environment is just amenable to that form. But for other tasks, a different form might make more sense. What's harder to imagine is how we interact with devices and what humans do with them. Well, I mean, I'm pretty sure it will probably not be the cellphone or the laptop. Will it be implants? - I mean, it has to be brain-computer interfaces, right? I mean, 100 years from now, given the progress we're seeing now, there has to be... unless there's legitimately a complete alteration of how we interact with reality. - On the other hand, cars are older than 100 years, right? And it's still the same interface. We haven't replaced cars with something else. We just made them better, but it's still a steering wheel, still wheels, you know? - I think we'll still carry around a physical brick of compute because people want some ability to have a private...
Like, you might not engage with it as much as a phone, but having private information that is yours, as an interface between you and the rest of the internet, I think that will still exist. It might not look like an iPhone and it might be used a lot less, but I still expect people to carry things around. - Why do you think the smartphone is the embodiment of privacy? There's a camera on it. There's— - Private for you, like encrypted messages, encrypted photos... you know, what your life is. I guess it's a question of how optimistic you are about brain-machine interfaces. Is all that just going to be stored in the cloud? Your whole calendar? It's hard to think about brain-machine interfaces presenting something like a calendar to you, with all the information that we can process visually. It's hard to just think about knowing, without looking, your email inbox. Like, you signal to a computer and then you just know your email inbox. Is that something that the human brain can handle being piped into it non-visually? I don't know exactly how those transformations happen. Humans aren't changing in 100 years. I think agency and community are things that people actually want. - A local community, yeah. - People you are close to, being able to do things with them, and being able to ascribe meaning to your life. In 100 years, I don't think that human biology is changing away from those on a time scale that we can discuss. And I think that UBI does not solve agency. I do expect mass wealth, and I hope that it spreads so that the average life looks very different in 100 years. But that's still a lot that has to happen. If you think about countries that are early in their development process of getting access to computing and the internet, building all the infrastructure and having policy that shares one nation's wealth with another... I think it's an optimistic view to see all that happening in 100 years, while they are still independent entities and not just absorbed into some international order by force. - But there could be just better, more elaborate, more effective social support systems that help alleviate some levels of basic suffering in the world. You know, with the transformation of society where a lot of jobs are lost in the short term, I think we have to really remember that each individual job that's lost is a human being who's suffering. When jobs are lost at that scale, it's a real tragedy. You can make all kinds of arguments about economics or how it's all going to be okay: it's good for the GDP, there are going to be new jobs created. But fundamentally, at the individual level, for that human being, that's real suffering. That's a real personal sort of tragedy. And we have to not forget that as the technologies are being developed. And also, my hope with all the AI slop we're seeing is that there will be a greater and greater premium on the fundamental aspects of the human experience that are in person. The things that we all... like seeing each other, talking together in person. - The next few years are definitely going to see an increased value on physical goods and events, and even more pressure on slop. So it'll be... the slop is only starting. The next few years will bring more and more diverse versions of slop. - They would be drowning in slop. Is that what— - So I'm hoping that society drowns in slop enough to snap out of it and be like, "We can't deal with it. It just doesn't matter."
And then the physical has such a higher premium on it. - Even, like, classic examples. I honestly think this is true, and I think we will get tired of it. We are already kind of tired of it. I mean, even art. I don't think art will go away. You have paintings, physical paintings. There's more value, not just monetary value, but just more appreciation for the actual painting than for a photocopy of that painting. It could be a perfect digital reproduction, but there is something when you go to a museum and you look at that art and you see the real thing and you just think, "Okay. A human." It's like a craft. You have an appreciation for that. And I think the same is true for writing, for talking, for any type of experience. I do unfortunately think it will be like a dichotomy, like a fork, where some things will be automated. Like, you know, there are not as many paintings as there used to be, you know, 200 years ago. There are more photographs, more photocopies. But at the same time, it won't go away. There will be value in that. I think the difference will just be, you know, what the proportion of that is. But personally, I have a hard time reading things where I can obviously see it's AI generated. I'm sorry. There might be really good information there, but I'm just like, "Nah, not for me." - I think eventually they'll fool you, and it'll be on platforms that give ways of verifying or building trust. So you will trust that Lex is not AI generated, having been here. So then you have trust in this channel. But it's harder for new people who don't have that trust. - Well, that will get interesting, because I think fundamentally it's a solvable problem by having trust in certain outlets that they won't do it, but it's all going to be trust-based. There will be systems to authenticate, "Okay, this is real. This is not real." There will be some telltale signs where you can obviously tell this is AI generated and this is not. But some will be so good that it's hard to tell, and then you have to trust. And well, that will get interesting and a bit problematic. - The extreme case of this is to watermark all human content. So all photos that we take on our own have some watermark until they are edited or something like this. And software can manage communications with the device manufacturer to maintain a record of human editing, which is the opposite of the discussion of trying to watermark AI images. And then you can make a Google image that has a watermark and use a different Google tool to remove it. - Yep. It's going to be an arms race, basically. - And we've been mostly focusing on the positive aspects of AI. All the capabilities that we've been talking about can be used to destabilize human civilization, with even just relatively dumb AI applied at scale, and then further, superintelligent AI systems. Of course, there's the sort of doomer take that's important to consider as we develop these technologies. What gives you hope about the future of human civilization, given everything we've been talking about? Are we going to be okay? - I think we will. I'm definitely a worrier, both about AI and non-AI things. But humans do tend to find a way. I think that's what humans are built for: to have community and find a way to figure out problems. That's what has gotten us to this point. And I think that the AI opportunity and related technologies are really big. And I think there are big social and political problems in helping everybody understand that.
And I think that's what we're staring at a lot right now: the world is a scary place, and AI is a very uncertain thing. And it takes a lot of work that is not necessarily building things. It's telling people and understanding people, which the people building AI are historically not motivated to do or don't want to do. But it is something that is probably doable. It just will take longer than people want. And we have to go through that long period of hard, fraught AI discussions if we want to have the lasting benefits. - Yeah. Through that process, I'm especially excited that we get a chance to better understand ourselves, at the individual level as humans and at the civilization level, and answer some of the big mysteries, like what is this whole consciousness thing going on here? It seems to be truly special. Like, there's a real miracle in our mind. And AI puts a mirror to ourselves, and we get to answer some of the big questions about, like, what is this whole thing going on here? - Well, one thing about that: what I do think makes us very different from AI, and why I don't worry about AI taking over, is, like you said, consciousness. We humans, we decide what we want to do. With AI, in its current implementation, and I can't see it changing, you have to tell it what to do. And so you still have the agency. It doesn't take the agency from you, because it becomes a tool. You can think of it as a tool. You tell it what to do. It will be more automatic than previous tools. It's certainly more powerful than a hammer, it can figure things out, but it's still you in charge, right? So the AI is not in charge, you're in charge. You tell the AI what to do and it's doing it for you. - So in the post-singularity, post-apocalyptic war between humans and machines, you're saying humans are worth fighting for? - 100%. I mean, this is... The movie Terminator, they made that in the '80s, essentially. And I do think, well, the only thing I can see going wrong is, of course, if things are explicitly programmed to do something harmful. - I actually think that in a Terminator type of setup, humans win. I think we're too clever. It's hard to explain how we figure it out, but we do. And we'll probably be using local LLMs, open-source LLMs, to help fight the machines. I apologize for the ridiculousness. Like I said, Nathan already knows I've been a big fan of his for a long time. Been a big fan of yours, Sebastian, for a long time, so it's an honor to finally meet you. Thank you for everything you put out into the world. Thank you for the excellent books you're writing. Thank you for teaching us. And thank you for talking today. This was fun. - Thank you for inviting us here and having this human connection, which is actually... - Extremely valuable human connection. Thanks for listening to this conversation with Sebastian Raschka and Nathan Lambert. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now let me leave you with some words from Albert Einstein: "It is not that I'm so smart, but I stay with the questions much longer." Thank you for listening, and hope to see you next time.