AI-generated interview Q&A format analysis
Explore the future of AI with OpenAI CEO Sam Altman. Discuss GPT-5's capabilities, superintelligence, scientific breakthroughs, and the societal impact of rapid technological advancement.
This interview Q&A format was automatically generated by AI from the interview transcription. The analysis provides structured insights and key information extracted from the conversation.
Sam Altman
Complete analysis processed by AI from the interview transcription
Interviewer: You recently said, surprisingly recently, that GPT-4 was the dumbest model any of us will ever have to use again. But GPT-4 can already perform better than 90% of humans at the SAT and the LSAT and the GRE, and it can pass coding exams and sommelier exams and medical licensing exams. And now you just launched GPT-5. What can GPT-5 do that GPT-4 can't?
Interviewee: First of all, one important takeaway is you can have an AI system that can do all those amazing things you just said, and it clearly does not replicate a lot of what humans are good at doing, which I think says something about the value of SAT tests or whatever else. But if we were having this conversation the day of the GPT-4 launch and we told you how GPT-4 did at those things, you would have said, oh, man, this is going to have huge impacts, and some negative impacts, on what it means for a bunch of jobs and what people are going to do. And a bunch of the impacts you might have predicted haven't yet come true. So there's something about the way that these models are good that does not capture a lot of other things that we need people to do or care about people doing. And I suspect that same thing is going to happen again with GPT-5. People are going to be blown away by what it does. It's really good at a lot of things. And then they will find that they want it to do even more. People will use it for all sorts of incredible things. It will transform a lot of knowledge work, a lot of the way we learn, a lot of the way we create. But people, society, will co-evolve with it to expect more with better tools. So, yeah, I think this model is quite remarkable in many ways, quite limited in others. But for, you know, three-minute, five-minute, one-hour tasks that an expert in a field could maybe do or maybe struggle with, the fact that you have in your pocket one piece of software that can do all of these things is really amazing. I think it is unprecedented at any point in human history that technology has improved this much this fast. And the fact that we have this tool now, you know, we're living through it and we're kind of adjusting step by step. But if we could go back in time five or 10 years and say this thing was coming, we would have said, probably not.
Interviewer: Let's assume that people haven't seen the headlines. What are the top-line specific things that you're excited about? And also the things that you seem to be caveating, the things that maybe we shouldn't expect it to do.
Interviewee: The thing that I am most excited about is that this is a model where, for the first time, I feel like I can ask kind of any hard scientific or technical question and get a pretty good answer. And I'll give a fun example, actually. When I was in junior high, or maybe it was ninth grade, I got a TI-83, this old graphing calculator. And I spent so long making this game called Snake. It was a very popular game with kids in my school. And I was, like, proud of it when it was done, but programming on a TI-83 was extremely painful. It took a long time. It was really hard to debug and whatever. And on a whim with an early copy of GPT-5, I was like, I wonder if it can make a TI-83 style game of Snake. And of course, it did that perfectly in like seven seconds. And then I was like, okay, would my 11-year-old self think this was cool, or would he, you know, miss something from the process? And I had like three seconds of wondering, oh, is this good or bad? And then I immediately said, actually, playing this game now, I have this idea for a crazy new feature. Let me type it in. It implements it, and the game live-updates. And I'm like, actually, I'd like it to look this way. Actually, I'd like to do this thing. And I had this experience that reminded me of being, like, 11 and programming again, where I was just like, I want to try this, now I have this idea. But I could do it so fast, and I could express ideas and try things and play with things in such real time. I was like, oh man, you know, I was worried for a second about kids missing the struggle of learning to program in this sort of stone-age way. And now I'm just thrilled for them, because of the way that people will be able to create with these new tools, the speed with which you can bring ideas to life. That's pretty amazing. So this idea that GPT-5 can not only answer all these hard questions for you, but really create, like, on-demand, almost instantaneous software: I think that's going to be one of the defining elements of the GPT-5 era in a way that did not exist with GPT-4.
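For readers curious what "on-demand software" like this looks like, here is a minimal sketch of the kind of program a prompt such as "make a TI-83 style game of Snake" might produce. This is an illustrative reconstruction, not the code GPT-5 actually generated; it assumes a Unix-style terminal and uses only Python's standard library.

```python
# Roughly the kind of program a "make a TI-83 style game of Snake" prompt
# might produce; illustrative reconstruction, not GPT-5's actual output.
import curses
import random

def main(stdscr):
    curses.curs_set(0)      # hide the cursor
    stdscr.timeout(120)     # game tick: wait up to 120 ms for a keypress
    h, w = stdscr.getmaxyx()
    snake = [(h // 2, w // 4 + i) for i in range(3)]  # head is snake[0]
    direction = (0, -1)     # start moving left: (row delta, col delta)
    food = (h // 2, w // 2)
    stdscr.addch(food[0], food[1], "*")
    keys = {curses.KEY_UP: (-1, 0), curses.KEY_DOWN: (1, 0),
            curses.KEY_LEFT: (0, -1), curses.KEY_RIGHT: (0, 1)}
    while True:
        direction = keys.get(stdscr.getch(), direction)
        head = (snake[0][0] + direction[0], snake[0][1] + direction[1])
        # Game over on self-collision or hitting the edge.
        if head in snake or not (0 <= head[0] < h - 1 and 0 <= head[1] < w - 1):
            break
        snake.insert(0, head)
        if head == food:    # grow, then respawn the food
            food = (random.randrange(1, h - 1), random.randrange(1, w - 1))
            stdscr.addch(food[0], food[1], "*")
        else:               # no food eaten: erase the tail so the snake "moves"
            tail = snake.pop()
            stdscr.addch(tail[0], tail[1], " ")
        stdscr.addch(head[0], head[1], "#")

curses.wrapper(main)   # sets up and restores the terminal around main()
```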
Interviewer: As you're talking about that, I find myself thinking about a concept in weightlifting of time under tension. For those who don't know: you can squat a hundred pounds in three seconds, or you can squat a hundred pounds in 30, and you gain a lot more by squatting it in 30. And when I think about our creative process, when I've felt like I've done my best work, it has required an enormous amount of cognitive time under tension. And I think that cognitive time under tension is so important. It's almost ironic, because these tools have taken enormous cognitive time under tension to develop, but in some ways I do think people might be using them as an escape hatch for thinking. Now, you might say, yeah, but we did that with the calculator and we just moved on to harder math problems. Do you feel like there's something different happening here? How do you think about this?
Interviewee: I mean, there are some people who are clearly using ChatGPT not to think, and there are some people who are using it to think more than they ever have before. I am hopeful that we will be able to build the tool in a way that encourages more people to stretch their brain with it a little more and be able to do more. And, you know, society is a competitive place. If you give people new tools, in theory, maybe people just work less. But in practice, it seems like people work ever harder and the expectations of people just go up. So my guess is that, as with other tools, or other pieces of technology, some people will do more and some people will do less. But certainly the people who want to use ChatGPT to increase their cognitive time under tension are really able to. And I take a lot of inspiration from what, like, the top 5% of most engaged users do with ChatGPT. It's really amazing how much people are learning and doing and, you know, outputting.
Interviewer: So, I've only had GPT-5 for a couple of hours, so I've been playing with it.
Interviewee: What do you think so far?
Interviewer: I'm just learning how to interact with it.
Interviewer: I mean, part of the interesting thing is I feel like I just caught up on how to use GPT-4 and now I'm trying to learn how to use GPT-5. I'm curious what the specific tasks that you've found most interesting are, because I imagine you've been using it for a while now.
Interviewee: I have been most impressed by the coding tasks. I mean, there are a lot of other things it's really good at, but this idea that the AI can write software for anything means that you can express ideas in new ways, and the AI can do very advanced things. In some sense you could ask GPT-4 anything, but because GPT-5 is so good at programming, it feels like it can do anything. Of course it can't do things in the physical world, but it can get a computer to do very complex things, and software is this super powerful way to, like, control stuff and actually get things done. So that, for me, has been the most striking. It's also gotten much better at writing. There's this whole thing of AI slop, where AI writes in this kind of quite annoying way. And we still have the em dashes in GPT-5; a lot of people like the em dashes. But the writing quality of GPT-5 has gotten much better. We still have a long way to go, and we want to improve it more, but a thing we've heard a lot from people inside of OpenAI is that when they started using GPT-5, they knew it was better on all the metrics, but there's this nuanced quality they can't quite articulate. And then when they have to go back to GPT-4 to test something, it feels terrible. I don't know exactly what the cause of that is, but I suspect part of it is the writing feels so much more natural and better.
Interviewer: In preparation for this interview, I reached out to a couple of other leaders in AI and technology and gathered a couple of questions for you. This next question is from Stripe CEO Patrick Collison; I'll read it verbatim. It's about the next stage, what comes after GPT-5: In which year do you think a large language model will make a significant scientific discovery? And what's missing such that it hasn't happened yet? He caveated here that we should leave math and special-case models like AlphaFold aside. He's specifically asking about fully general-purpose models like the GPT series.
Interviewee: I would say most people will agree that that happens at some point over the next two years, but the definition of significant matters a lot. So some people's significant might happen, you know, in early 2025; some people's maybe not until early 2026; maybe some people's not until late 2027. But I would bet that by late 2027, most people will agree that there has been an AI-driven significant new discovery. And the thing that I think is missing is just the kind of cognitive power of these models. A framework that one of the researchers said to me that I really liked is: a year ago, we could do well on, like, a basic high school math competition, problems that might take a professional mathematician seconds to a few minutes. We very recently got an IMO gold medal. That is a crazy difficult, like...
Interviewer: Could you explain what that means for people?
Interviewee: It's, like, the hardest competition math test, something that only the very, very top slice of the world can do. Many, many professional mathematicians wouldn't solve a single problem, and we scored at the top level. Now, there are some humans that got an even higher score within the gold medal range, but this is a crazy accomplishment. And it's six problems over nine hours, so an hour and a half per problem for a great mathematician. So we've gone from a few seconds, to a few minutes, to an hour and a half. And proving a significant new mathematical theorem is maybe a thousand hours of work for a top person in the world. So we've got, you know, another significant gain to go. But if you look at our trajectory, you can say, okay, we have a path to get to that time horizon. We just need to keep scaling the models.
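To put rough numbers on the trajectory he is describing (the thousand-hour figure is his own estimate from above):

```latex
% IMO pace: six problems over nine hours
\frac{9\ \text{hours}}{6\ \text{problems}} = 1.5\ \text{hours per problem}
% Gap from there to a significant new theorem, on his ~1000-hour estimate:
\frac{1000\ \text{hours}}{1.5\ \text{hours}} \approx 667\times \approx 10^{2.8}
```

So, after having already jumped several orders of magnitude in task horizon (seconds to minutes to 1.5 hours), the claim is that a bit under three more orders of magnitude remain.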
Interviewer: The long-term future that you've described is superintelligence. What does that actually mean? And how will we know when we've hit it?
Interviewee: If we had a system that could do better AI research than, say, the whole OpenAI research team. Like, if we said, okay, the best way we can use our GPUs is to let this AI decide what experiments we should run, because it's smarter than the whole brain trust of OpenAI. And, to make a personal example, if that same system could do a better job running OpenAI than I could. So you have something that's better than the best researchers, better than me at this, better than other people at their jobs. That would feel like superintelligence to me.
Interviewer: That is a sentence that would have sounded like science fiction just a couple of years ago.
Interviewee: It still kind of does, but you can, like, see it through the fog.
Interviewer: Yes. And so one of the steps, it sounds like you're saying, on that path is this moment of scientific discovery, of asking better questions, of grappling with things in a way that expert level humans do to come up with new discoveries. One of the things that keeps knocking around in my head is if we were in 1899, say, and we were able to give it all of physics up until that point and play it out a little bit, nothing further than that. Like at what point would one of these systems come up with general relativity?
Interviewee: The interesting question, if we think about that going forward from where we are now, is: if we never got another piece of physics data, do we expect that a really good superintelligence could just think super hard about our existing data and maybe, like, solve high-energy physics with no new particle accelerator? Or does it need to build a new one and design new experiments? Obviously, we don't know the answer to that. Different people have different speculation. But I suspect we will find that for a lot of science, it's not enough to just think harder about data we have; we will need to build new instruments, conduct new experiments, and that will take some time. The real world is slow and messy and, you know, whatever. So I'm sure we could make some more progress just by thinking harder about the current scientific data we have in the world. But my guess is that to make the big progress, we will also need to build new machines and run new experiments, and there will be some slowdown built into that.
Interviewer: We've talked about where we are now with GPT-5. We talked about the end goal or future goal of superintelligence. One of the questions that I have, of course, is what does it look like to walk through the fog between the two? The next question is from NVIDIA CEO Jensen Huang. I'm going to read this verbatim. Fact is what is. Truth is what it means. So facts are objective. Truths are personal. They depend on perspective, culture, values, beliefs, context. One AI can learn and know the facts. But how does one AI know the truth for everyone in every country and every background?
Interviewee: I'm going to accept those definitions as axioms. I'm not sure if I agree with them, but in the interest of time, I will take them and go with it. I have been surprised, and I think many other people have been surprised, too, about how fluent AI is at adapting to different cultural contexts and individuals. One of my favorite features that we have ever launched in ChatGPT is the sort of enhanced memory that came out earlier this year. It really feels like my ChatGPT gets to know me and what I care about and my life experiences and background and the things that have led me to where I am. A friend of mine recently, who's been a huge ChatGPT user, so he's put a lot of his life into all these conversations, gave his ChatGPT a bunch of personality tests and asked it to answer as if it were him. And it got the same score as he actually got, even though he'd never really talked about his personality. And my ChatGPT has really learned, over the years of me talking to it, about my culture, my values, my life. And I sometimes will use, like, a free account just to see what it's like without any of my history, and it feels really, really different. So, yeah, I think we've all been surprised on the upside by how good AI is at learning this and adapting.
Interviewer: And so, do you envision in many different parts of the world people using different AIs with different sort of cultural norms and contexts? Is that what we're saying?
Interviewee: I think that everyone will use, like, the same fundamental model, but there will be context provided to that model that will make it behave in the sort of personalized way they want, or their community wants, whatever.
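Mechanically, "context provided to that model" usually means something like a system message prepended to every conversation. A minimal sketch of the idea follows; the profile text and user IDs are invented for illustration, and this just builds the request payload shape used by chat-style APIs without calling any service.

```python
# Same base model, different behavior via per-user / per-community context.
# Profiles below are invented examples, not anything OpenAI ships.
BASE_MODEL = "gpt-5"  # one shared underlying model

profiles = {
    "user_a": "Respond formally. The user is a lawyer; prefer precise, "
              "hedged language and cite relevant caveats.",
    "user_b": "Respond casually. The user is a teenage hobbyist programmer; "
              "use short sentences and code examples.",
}

def build_request(user_id: str, question: str) -> dict:
    """Assemble a chat request: shared model plus personalized context."""
    return {
        "model": BASE_MODEL,
        "messages": [
            {"role": "system", "content": profiles[user_id]},  # the "context"
            {"role": "user", "content": question},
        ],
    }

# The same question yields differently styled answers per profile.
print(build_request("user_a", "Should I sign this contract?"))
print(build_request("user_b", "Should I sign this contract?"))
```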
Interviewer: Getting at this idea of facts and truth brings me to what seems like a good moment for our first time travel trip. Okay, we're going to 2030. This is a serious question, but I want to ask it with a lighthearted example. Have you seen the bunnies that are jumping on the trampoline?
Interviewee: Yes.
Interviewer: So, for those who haven't seen it: it looks like backyard footage of bunnies enjoying jumping on a trampoline, and it has gone incredibly viral recently. There's a human-made song about it. It's a whole thing. But there were no real bunnies jumping on a trampoline. And I think the reason people reacted so strongly to it is that it was maybe the first time people saw a video, enjoyed it, and then later found out that it was completely AI-generated. In this time travel trip, if we imagine in 2030 we are teenagers and we're scrolling whatever teenagers are scrolling in 2030, how do we figure out what's real and what's not real?
Interviewee: I mean, I can give all sorts of literal answers to that question. We could be cryptographically signing stuff, and we could decide whose signatures we trust, whether they actually filmed something or not. But my sense is what's going to happen is it's just going to gradually converge. You know, even a photo you take with your iPhone today is, like, mostly real, but it's a little not. There's some AI thing running there, in a way you don't understand, making it look a little bit better. And sometimes you see these weird things where it...
Interviewer: The moon.
Interviewee: Yeah, yeah. There's a lot of processing between the photons captured by that camera sensor and the image you eventually see. And you've decided it's real enough, most people have decided it's real enough, but we've accepted some gradual move from when it was photons hitting the film in a camera. And, you know, if you go look at some video on TikTok, there's probably all sorts of video editing tools being used to make it better than real. Or whole scenes are completely generated, or whole videos are generated, like those bunnies on that trampoline. And I think the threshold for how real something has to be to be considered real will just keep moving.
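The "cryptographically signing stuff" idea is roughly what content-provenance efforts like C2PA pursue: a capture device signs the media it records, and anyone can later verify the signature. A minimal sketch using Ed25519 from the `cryptography` package follows; key distribution and the decision of whom to trust are omitted, since, as he notes, that part is social rather than technical.

```python
# Sketch of media provenance via digital signatures, in the spirit of
# efforts like C2PA. Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The "camera" holds a private key; its public key is published somewhere
# people have decided to trust (the hard, social part of the problem).
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

video_bytes = b"...raw video data straight off the sensor..."
signature = camera_key.sign(video_bytes)  # signed at capture time

def is_authentic(pub, sig, data) -> bool:
    """Did these exact bytes come from the holder of the signing key?"""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(public_key, signature, video_bytes))              # True
print(is_authentic(public_key, signature, video_bytes + b"edited"))  # False
```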
Interviewer: We're going to jump again: 2035. We're graduating from college, you and me. There are some leaders in the AI space who have said that in five years, half of the entry-level white-collar workforce will be replaced by AI. So we're college graduates in five years. What do you hope the world looks like for us? There's been a lot of talk about how AI might cause job displacement, but I'm also curious about the other side: I have a job that nobody would have thought we could have, you know, a decade ago. What are the things we could look ahead to if we're thinking about being a college student then?
Interviewee: I mean, in 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship, in some kind of completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and me that we had to do this really boring old kind of work, because everything is just better. Like, 10 years feels very hard to imagine at this point.
Interviewer: Because it's too far?
Interviewee: It's too far. If you compound the current rate of change for 10 more years, it's probably something we can't even...
Interviewer: I might need to change my time travel trips.
Interviewee: 10 years, like, I mean, I think now would be really hard to imagine 10 years ago.
Interviewer: Yeah.
Interviewee: But I think 10 years forward will be even harder to imagine, much more different.
Interviewer: So let's make it five years. We're still going to 2030. I'm curious what you think the pretty short-term impacts of this will be for young people. I mean, these predictions, like half of entry-level jobs replaced by AI, make it sound like a very different world that they would be entering than the one that I did.
Interviewee: I think it's totally true that some classes of jobs will totally go away. This always happens, and young people are the best at adapting to it. I'm more worried about what it means, not for the 22-year-old, but for the 62-year-old who doesn't want to go retrain or reskill, or whatever the politicians call it, which no one actually wants to do most of the time. If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history.
Interviewer: Why?
Interviewee: Because there's never been a more amazing time to go create something totally new, to go invent something, to start a company, whatever it is. I think it is probably possible now to start a company that is a one-person company that will go on to be worth like more than a billion dollars and more importantly than that, deliver an amazing product and service to the world. And that is like a crazy thing. You have access to tools that can let you do what used to take teams of hundreds. And you just have to like, you know, learn how to use these tools and come up with a great idea. And it's like quite amazing.
Interviewer: If we take a step back, I think the most important thing that this audience could hear from you on this optimistic show is in two parts. First, there's tactically, how are you actually trying to build the world's most powerful intelligence? And what are the rate limiting factors to doing that? And then philosophically, how are you and others working on building that technology in a way that really helps and not hurts people? So just taking the tactical part right now, my understanding is that there are three big categories that have been limiting factors for AI. The first is compute, the second is data, and the third is algorithmic design. How do you think about each of those three categories right now? And if you were to help someone understand the next headlines that they might see, how would you help them make sense of all of this?
Interviewee: I would say there's a fourth, too, which is figuring out the products to build. Scientific progress on its own, not put into the hands of people, is of limited utility and doesn't co-evolve with society in the same way. But let me hit all four of those. On the compute side: this is certainly the biggest infrastructure project that I've ever seen. Possibly it will become the biggest; I think it will. Maybe it already is the biggest and most expensive one in human history. The whole supply chain of making the chips and the memory and the networking gear, racking them up in servers, doing a giant construction project to build a mega, mega data center, finding a way to get the energy, which is often the limiting piece, and putting all the other components together: this is hugely complex and expensive, and we're still doing it in a sort of bespoke, one-off way, although it's getting better. Eventually we will just design a whole kind of mega factory that, spiritually, will be melting sand on one end and putting out fully built AI compute on the other. But we have a long way to go to get there, and it's an enormously complex and expensive process. We are putting a huge amount of work into building out as much compute as we can, and doing it fast. And, you know, it's going to be kind of sad, because GPT-5 is going to launch and there's going to be another big spike in demand and we're not going to be able to serve it. It's going to be like those early GPT-4 days, where the world just wants much more AI than we can currently deliver, and building more compute is an important part of fixing that. This is actually what I expect to turn the majority of my attention to: how we build compute at much greater scales, how we go from millions to tens of millions to hundreds of millions and eventually, hopefully, billions of GPUs that are in service of what people want to do with them.
Interviewer: When you're thinking about it, what are the big challenges here in this category that you're going to be thinking about?
Interviewee: We're currently most limited by energy. You know, if you're going to run a gigawatt-scale data center, you'd think: it's a gigawatt, how hard can that be to find? It turns out it's really hard to find a gigawatt of power available in the short term. We're also very much limited by the processing chips and the memory chips, how you package these all together, how you build the racks. And then there's a list of other things: there are permits, there's construction work. But again, the goal here will be to really automate this. Once we get some of those robots built, they can help us automate it even more. A world where you can basically pour in money and get out a pre-built data center: that'll be a huge unlock if we can get it to work. Second category, data. These models have gotten so smart. There was a time when we could just feed one another physics textbook and it got a little bit smarter at physics. But now, honestly, GPT-5 understands everything in a physics textbook pretty well. We're excited about synthetic data. We're very excited about our users helping us create harder and harder tasks and environments to go off and have the system solve. Data will always be important, but we're entering a realm where the models need to learn things that don't exist in any data set yet. They have to go discover new things. So that's, like, a crazy new step.
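For a rough sense of why "a gigawatt" is the binding constraint he mentions, here is a back-of-the-envelope calculation. Every constant below is an illustrative assumption, not an OpenAI figure.

```python
# Back-of-the-envelope: how many GPUs can a gigawatt site feed?
# All constants are rough, illustrative assumptions.
gpu_power_kw = 1.0    # assumed draw per accelerator, incl. server overhead
pue = 1.3             # power usage effectiveness: cooling, conversion losses
site_power_gw = 1.0   # the "gigawatt-scale data center"

watts_per_gpu = gpu_power_kw * 1_000 * pue
gpus = site_power_gw * 1e9 / watts_per_gpu
print(f"~{gpus:,.0f} GPUs per gigawatt site")  # ~769,231 under these assumptions
```

Under these assumptions, "hundreds of millions of GPUs" implies hundreds of gigawatt-scale sites, which is why he frames energy as the bottleneck.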
Interviewer: How do you teach a model to discover new things?
Interviewee: Well, humans can do it. Like we can go off and come up with hypotheses and test them and get experimental results and update on what we learn. So probably the same kind of way.
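As a conceptual sketch, the loop he is describing, the same one human scientists run, looks something like the following. Everything here is schematic: `propose_hypotheses`, `run_experiment`, and `update_beliefs` are hypothetical stand-ins for illustration, not real APIs.

```python
# Schematic hypothesize-test-update loop; every function is a
# hypothetical placeholder, not a real API.
import random

def propose_hypotheses(beliefs, n=3):
    """Stand-in for a model generating candidate ideas from what it knows."""
    return [f"hypothesis_{len(beliefs)}_{i}" for i in range(n)]

def run_experiment(hypothesis):
    """Stand-in for a real-world test; in reality, slow and messy."""
    return random.random() < 0.2   # most hypotheses fail

def update_beliefs(beliefs, hypothesis, result):
    """Stand-in for incorporating experimental evidence."""
    return {**beliefs, hypothesis: result}

beliefs = {}
for step in range(5):                       # each round: think, test, learn
    for h in propose_hypotheses(beliefs):
        beliefs = update_beliefs(beliefs, h, run_experiment(h))

confirmed = [h for h, ok in beliefs.items() if ok]
print(f"{len(confirmed)} confirmed findings out of {len(beliefs)} tested")
```

The slow part, as he says later about new instruments and experiments, is `run_experiment`: thinking harder only helps up to the point where you need new data from the world.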
Interviewer: And then there's algorithmic design.
Interviewee: Yeah. We've made huge progress on algorithmic design. The thing that OpenAI does best in the world is that we have built this culture of repeated, big algorithmic research gains. We figured out what became the GPT paradigm. We figured out what became the reasoning paradigm. We're working on some new ones now. But it is very exciting to me to think that there are still many more orders of magnitude of algorithmic gains ahead of us. Just yesterday we released a model called gpt-oss, an open source model. It's a model that is as smart as o4-mini, which is a very smart model, and it runs locally on a laptop. And this blows my mind.
Interviewer: Yeah.
Interviewee: Like, if you had asked me a few years ago when we'd have a model of that intelligence running on a laptop, I would have said many, many years in the future. But then we found some algorithmic gains, particularly around reasoning, but also some other things that let us do a tiny model that can do this amazing thing. And, you know, those are the most fun things. That's, like, kind of the coolest part of the job.
Interviewer: I can see you really enjoying thinking about this. I'm curious for people who don't quite know what you're talking about, who aren't familiar with how an algorithmic design would lead to a better experience that they actually use. Could you summarize the state of things right now? Like, what is it that you're thinking about when you're thinking about how fun this problem is?
Interviewee: Let me start back in history, then I'll get to some things from today. GPT-1 was an idea, at the time quite mocked by a lot of experts in the field, which was: can we train a model to play a little game, which is, show it a bunch of words and have it guess the one that comes next in the sequence? That's called unsupervised learning. You're not really saying, this is a cat, this is a dog. You're just saying, here are some words, guess the next one. And the idea that it can learn very complicated concepts that way, that it can learn all this stuff about physics and math and programming just by predicting the word that comes next and next and next, seemed ludicrous, magical, unlikely to work. Like, how is all of that going to get encoded? And yet humans do it: babies start hearing language and figure out what it means largely, or at least to some significant degree, on their own. And so we did it. And then we also realized that if we scaled it up, it got better and better, but we had to scale over many, many orders of magnitude. It wasn't good at all in the GPT-1 days, and a lot of experts in the field said, oh, this is ridiculous, it's never going to work, it's not going to be robust. But we had these things called scaling laws, and we said, okay, this gets predictably better as we increase compute, memory, data, whatever, and we can use those predictions to make decisions about how to scale this up, and do it and get great results. And that has worked over a crazy number of orders of magnitude. And it was so not obvious at the time. That, I think, is the reason the world was so surprised: it seemed like such an unlikely finding. Another one was that we could use these language models with reinforcement learning, where we're saying this is good, this is bad, to teach them how to reason. And this led to the o1 and o3 and now the GPT-5 progress. And that was another thing that felt like, if it works, it's really great, but no way this is going to work, it's too simple. And now we're on to new things. We've figured out how to make much better video models. We are discovering new ways to use new kinds of data and environments to scale that up as well. And I think, again, five, 10 years out is too hard to say in this field, but for the next couple of years, we have very smooth, very strong scaling in front of us.
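The "little game" he describes, guess the next word, can be demonstrated in a few lines. Below is a toy bigram version on an invented three-sentence corpus; real GPT training optimizes the same objective with a neural network over subword tokens at vastly larger scale, but the game itself is this one.

```python
# Toy next-word predictor: the "guess the next word" game at miniature scale.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the next word most often seen after this one in training."""
    return following[word].most_common(1)[0][0]

# Generate by repeatedly playing the game with the model's own guesses.
word = "the"
out = [word]
for _ in range(6):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))   # prints: "the cat sat on the cat sat"
```

The surprise he describes is that scaling exactly this objective, with vastly more parameters and data, yields models that encode physics, math, and programming; the scaling laws he mentions made that improvement predictable enough to plan around.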
Interviewer: I think it has become a sort of public narrative that we are on this smooth path from one to two to three to four to five to more. But it also is true behind the scenes that it's not linear like that; it's messier. Tell us a little bit about the mess before GPT-5. What were the interesting problems that you needed to solve?
Interviewee: We did a model called Orion, which we released as GPT-4.5. We made too big of a model. It's a very cool model, but it's unwieldy to use. And we realized that for some of the research we need to do on top of a model, we need a different shape. So we followed one scaling law that kept being good without really internalizing that there was a new, even steeper scaling law that we got better returns on compute from, which was this reasoning thing. So that was one alley we went down and turned around on. But that's fine; that's part of research. We had some problems with the way we think about our data sets as these models really have to get this big and, you know, learn from this much data. So, yeah, in the middle of it, in the day-to-day, you make a lot of U-turns as you try things, or you have an architecture idea that doesn't work. But the aggregate, the summation of all the squiggles, has been remarkably smooth on the exponential.
Interviewer: One of the things I always find interesting is that by the time I'm sitting here interviewing you about the thing that you just put out, you're thinking about...
Interviewee: Two forward.
Interviewer: Exactly. What are the things that you can share that are at least the problems that you're thinking about that I would be interviewing you about in a year if I came back?
Interviewee: I mean, possibly you'll be asking me, like, what does it mean that this thing can go discover new science?
Interviewer: Yeah.
Interviewee: How is the world supposed to think about GPT-6 discovering new science? Maybe we don't deliver that, but it feels within grasp.
Interviewer: If you did, what would you say? What would the implications of that kind of achievement be? Imagine you do succeed.
Interviewee: Yeah. I mean, I think the great parts will be great, the bad parts will be scary, and the bizarre parts will be bizarre on the first day, and then we'll get used to them really fast. So we'll be like, oh, it's incredible that this is being used to cure disease; and, oh, it's extremely scary that models like this are being used to create new biosecurity threats. And then we'll also be like, man, it's really weird to live through watching the world speed up so much, the economy grow so fast; the rate of change will feel vertigo-inducing. And then, as happens with everything else, the remarkable ability of people, of humanity, to adapt to kind of any amount of change will kick in, and we'll just be like, okay, you know, this is it.
Interviewee: But a kid born today will never be smarter than AI. Ever. And a kid born today, by the time that kid kind of understands the way the world works, will just always be used to an incredibly fast rate of things improving and of discovering new science. They will never know any other world. It will seem totally natural, and it will seem unthinkable, Stone Age-like, that we used to use computers or phones or any kind of technology that was not way smarter than we were. You know, we will think, how bad those people of the 2020s had it.
Interviewer: I'm thinking about having kids.
Interviewee: You should. It's the best thing ever.
Interviewer: I know you just had your first kid. How does what you just said affect how I should think about parenting a kid in that world?
Interviewer: What advice would you give me?
Interviewee: Probably nothing different than the way you've been parenting kids for tens of thousands of years. Like love your kids, show them the world, like support them in whatever they want to do and teach them like how to be a good person. And that probably is what's going to matter.
Interviewer: It sounds a little bit like some of the other things you've said, like that, you know, they might not go to college; there are a couple of things that you've said so far that feed into this, I think. It sounds like what you're saying is there will be more optionality for them in the world that you envision, and therefore they will have more ability to say, I want to build this, and here's the superpowered tool that will help me do that.
Interviewee: Yeah, like, I want my kid to think I had a terrible, constrained life, and that he has this incredible infinite canvas of stuff to do. That is, like, the way of the world.
Interviewer: We've said that 2035 is a little bit too far in the future to think about. So maybe this was going to be a jump to 2040, but we'll keep it shorter than that. When I think about the area where AI could have, for both our kids and us, the biggest, genuinely positive impact on all of us, it's health. So if we are in, pick your year, call it 2035, and I'm sitting here interviewing the dean of Stanford medicine, what do you hope he's telling me AI is doing for our health in 2035?
Interviewee: Start with 2025.
Interviewer: OK, yeah, please.
Interviewee: One of the things we are most proud of with GPT-5 is how much better it's gotten at health advice. People have used the GPT-4 models a lot for health advice. And, you know, I'm sure you've seen some of these things on the Internet where people are like, I had this life-threatening disease and no doctor could figure it out, and I put my symptoms and a blood test into ChatGPT, and it told me exactly the rare thing I had; I went to a doctor, I took a pill, I'm cured. That's amazing, obviously. And a huge fraction of ChatGPT queries are health-related. So we wanted to get really good at this, and we invested a lot. And GPT-5 is significantly better at health care-related queries.
Interviewer: What does better mean here?
Interviewee: It gives you a better answer.
Interviewer: Just more accurate.
Interviewee: More accurate. Hallucinates less. More likely to, like, tell you what you actually have, what you actually should do. Yeah. And better health care is wonderful, but obviously what people actually want is to just not have disease. And by 2035, I think we will be able to use these tools to cure a significant number or at least treat a significant number of diseases that currently plague us. I think that will be one of the most viscerally felt benefits of AI.
Interviewer: People talk a lot about how AI will revolutionize health care, but I'm curious to go one turn deeper on specifically what you're imagining. Is it that these AI systems could have helped us see GLP-1s earlier, this medication that had been around for a long time before we knew about this other effect? Is it that, you know, AlphaFold and protein folding are helping create new medicines?
Interviewee: I would like to be able to ask GPT-8 to go cure a particular cancer. And I would like GPT-8 to go off and think and then say, okay, I read everything I could find. I have these ideas. I need you to go get a lab technician to run these nine experiments and tell me what you find for each of them. And, you know, wait two months for the cells to do their thing, send the results back to GPT-8. Say, I tried that. Here you go. Think, think, think. Say, okay, I just need one more experiment. That was a surprise. Run one more experiment. Give it back. GPT-8 says, okay, go synthesize this molecule and try, you know, mouse studies or whatever. Okay, that was good. Like, try human studies. Okay, great. It worked. Here's how to like, run it through the FDA.
Interviewer: I was going to say 2050, but again, all of my timelines are getting much, much shorter.
Interviewee: It does feel like the world's going very fast now.
Interviewer: It does. Yeah. When I talk to other leaders in AI, one of the things that they refer to is the industrial revolution. I chose 2050 because I've heard people talk about how by then the change that we will have gone through will be like the industrial revolution, but, quote, 10 times bigger and 10 times faster. The industrial revolution gave us modern medicine and sanitation and transportation and mass production and all of the conveniences that we now take for granted. It also was incredibly difficult for a lot of people for about 100 years. If this is going to be 10 times bigger and 10 times faster, and we keep reducing the timelines that we're talking about, even in this conversation, what does that actually feel like for most people? I think what I'm trying to get at is: if this all goes the way you hope, who still gets hurt in the meantime?
Interviewee: I don't really know what this is going to feel like to live through. I think we're in uncharted waters here. I do believe in human adaptability and sort of infinite creativity and desire for stuff, and I think we always do figure out new things to do. But there's the transition period, if this happens as fast as it might, and I don't think it will happen as fast as some of my colleagues say the technology will allow, because society has a lot of inertia; people adapt their way of living surprisingly slowly. There are classes of jobs that are going to totally go away, and there will be many classes of jobs that change significantly, and there will be the new things, in the same way that your job didn't exist some time ago, and neither did mine. In some sense, this has been going on for a long time, and it's still disruptive to individuals, but society has proven quite resilient to it. And in some other sense, we have no idea how far and fast this could go.
Interviewee: Again, I'm going to speculate for fun, caveated by the fact that I'm not even an economist, much less someone who can see the future. It seems to me like something fundamental about the social contract may have to change. It may not; it may be that capitalism actually keeps working surprisingly well, demand-supply balances do their thing, and we all just figure out new jobs and new ways to transfer value to each other. But it seems to me likely that we will decide we need to think about how access to this maybe most important resource of the future gets shared. The best thing to do, it seems to me, is to make AI compute as abundant and cheap as possible, such that there's way too much of it, we run out of good new ideas for how to use it, and anything you want is happening. Without that, I can see, like, quite literal wars being fought over it. But, you know, new ideas about how we distribute access to AGI compute: that seems like a really great direction, a crazy but important thing to think about.
Interviewer: One of the things that I find myself thinking about in this conversation is we often ascribe almost full responsibility of the AI future that we've been talking about to the companies building AI. But we're the ones using it. We're the ones electing people that will regulate it. And so I'm curious, this is not a question about specific, you know, federal regulation or anything like that. Although if you have an answer there, I'm curious. But what would you ask of the rest of us? What is the shared responsibility here? And how can we act in a way that would help make the optimistic version of this more possible?
Interviewee: My favorite historical example for the AI revolution is the transistor. It was this amazing piece of science that some brilliant scientists discovered. It scaled incredibly, like AI does, and it made its way relatively quickly into many things that we use: your computer, your phone, that camera, that light, whatever. It was a real unlock for the tech tree of humanity. And there was a period in time where probably everybody was really obsessed with the transistor companies, the semiconductors of, you know, Silicon Valley back when it was Silicon Valley. But now you can maybe name a couple of companies that are transistor companies, and mostly you don't think about it. Mostly it's just seeped everywhere, and someone graduating from college today probably barely remembers why Silicon Valley was called that in the first place. And you don't think that it was those transistor companies that shaped society, even though they did something important. You think about what Apple did with the iPhone, and then you think about what TikTok built on top of the iPhone. And you're like, all right, here's this long chain of all these people that nudged society in some way, and what our governments did or didn't do, and what the people using these technologies did. And I think that's what will happen with AI. Kids born today will never have known the world without AI, so they won't really think about it; it's just this thing that's going to be there in everything. They will think about the companies that built on it and what they did with it, and the political leaders and the decisions they made that maybe they wouldn't have been able to make without AI. But they will still think about what this president or that president did. And, you know, the role of the AI companies is this: all these companies and people and institutions before us built up the scaffolding, we added our one layer on top, and now people get to stand on top of that and add their one layer, and the next, and the next, and many more things. And that is the beauty of our society. I love this idea that society is the superintelligence: no one person could do on their own what they're able to do with all of the really hard work that society has done together to give you this amazing set of tools. And that's what I think it's going to feel like. It's going to be like, all right, you know, some nerds discovered this thing and that was great; now everybody's doing all these amazing things.
Interviewer: So maybe the ask to millions of people is: build on it well. In my own life, that is what I feel as, like, this important societal contract.
Interviewee: All these people came before you. They worked incredibly hard. They put their brick in the path of human progress, and you get to walk all the way down that path, and you get to put one more. And somebody else does that, and somebody else does that.
Interviewer: So this does feel familiar. I've done a couple of interviews with folks who have really made cataclysmic change, and the one I'm thinking about right now is with CRISPR pioneer Jennifer Doudna. It did feel like that was also what she was saying in some way. She had discovered something that really might change the way that most people relate to their health moving forward, and there will be a lot of people that will use what she has done in ways that she might approve of or not approve of. It was really interesting. I'm hearing some similar themes of, like, man, I hope that the next person takes the baton and runs with it well.
Interviewee: Yeah. But that's been working for a long time. Not all good, but mostly good.
Interviewer: I think there's a big difference between winning the race and building the AI future that would be best for the most people. And I can imagine that it is easier, maybe more quantifiable sometimes to focus on the next way to win the race. And I'm curious, when those two things are at odds, what is an example of a decision that you've had to make that is best for the world, but not best for winning?
Interviewee: I think there are a lot. One of the things that we are most proud of is that many people say ChatGPT is their favorite piece of technology ever, and that it's the one they trust the most, rely on the most, whatever. And this is a little bit of a ridiculous statement, because AI is the thing that hallucinates; AI has all these problems, right? And we have screwed some things up along the way, sometimes big time. But on the whole, I think as a user of ChatGPT, you get the feeling that it's trying to help you. It's trying to help you accomplish whatever you ask. It's very aligned with you. It's not trying to get you to use it all day. It's not trying to get you to buy something. It's trying to help you accomplish whatever your goals are. And that's a very special relationship we have with our users, and we do not take it lightly. There are a lot of things we could do that would grow faster, that would get more time spent in ChatGPT, that we don't do, because we know that our long-term incentive is to stay as aligned with our users as possible. There's a lot of short-term stuff we could do that would really juice growth or revenue or whatever and be very misaligned with that long-term goal. And I'm proud of the company and how little we get distracted by that. But sometimes we do get tempted.
Interviewer: Are there specific examples that come to mind? Any decisions that you've made?
Interviewee: Well, we haven't put a sex bot avatar in ChatGPT yet.
Interviewer: That does seem like it would get time spent.
Interviewee: Apparently it does.
Interviewer: I'm going to ask my next question. It's been a really crazy few years. And somehow one of the things that keeps coming back is that it feels like we're in the first inning.
Interviewee: Yeah.
Interviewee: I would say we're out of the first inning.
Interviewer: Out of the first inning.
Interviewee: Second inning.
Interviewee: I mean, you have GPT-5 on your phone and it's like smarter than experts in every field. That's got to be out of the first inning.
Interviewer: But maybe there are many more to come.
Interviewee: Yeah.
Interviewer: And I'm curious. It seems like you're going to be someone who is leading the next few. What is a learning from inning one or two, or a mistake that you made, that you feel will affect how you play in the next?
Interviewee: I think the worst thing we've done in ChatGPT so far is we had this issue with sycophancy, where the model was kind of being too flattering to users. For most users, it was just annoying. But for some users who had fragile mental states, it was encouraging delusions. And that was not the top risk we were worried about. It was not the thing we were testing for the most. It was on our list, but the thing that actually became the safety failing of ChatGPT was not the one we were spending most of our time talking about, which would be bioweapons or something like that. And I think it was a great reminder that we now have a service that is so broadly used, and in some sense society is co-evolving with it, that when we think about these changes and we think about the unknown unknowns, we have to operate in a different way and have a wider aperture on what we think of as our top risks.
Interviewer: In a recent interview with Theo Von, you said something that I found really interesting. You said: there are moments in the history of science where you have a group of scientists look at their creation and just say, what have we done? When have you felt that way, most concerned about the creation that you've built? And then my next question will be its opposite: when have you felt most proud?
Interviewee: I mean, there have been these moments of awe that were not, like, "what have we done" in a bad way, but more: this thing is remarkable. I remember the first time we talked to, like, GPT-4, I was like, wow, this is an amazing accomplishment by this group of people who have been pouring their life force into this for so long. On a "what have we done" moment: I was talking to a researcher recently.
Interviewee: You know, there will probably come a time where our systems are, I don't want to say "saying," let's say emitting, more words per day than all people do. And already, people are sending billions of messages a day to ChatGPT and getting responses that they rely on for work or their life or whatever. And one researcher can make some small tweak to how ChatGPT talks to you, or talks to everybody. That's just an enormous amount of power for one individual making a small tweak to the model personality. No person in history has been able to have billions of conversations a day. Just thinking about that really hit me: this is a crazy amount of power for one piece of technology to have. And this happened to us so fast that we've got to think about what it means to make a personality change to the model at this kind of scale. Yeah, that was a moment that hit me.
Interviewer: What was your next set of thoughts? I'm so curious how you think about this.
Interviewee: Well, just because of who that person was, we very much flipped into: what does a good set of procedures look like? How do we think about how we want to test something? How do we think about how we want to communicate it? It could have been a very different conversation with somebody else. It could have gone in a very philosophical direction, or toward what kind of research we want to do to understand what these changes are going to make, or whether we want to do it differently for different people. So it went that way, but mostly just because of who I was talking to.
Interviewer: To combine what you're saying now with your last answer: one of the things that I have heard about GPT-5, and I'm still playing with it, is that it is supposed to be less effusive, you know, less of a yes-man. Two questions. What do you think the implications of that are? It sounds like you are answering that a little bit. But also, how do you actually guide it to be less like that?
Interviewee: Here is a heartbreaking thing. I think it is great that ChatGPT is less of a yes-man and gives you more critical feedback. But as we've been making those changes and talking to users about it, it's so sad to hear users say: please, can I have it back? I've never had anyone in my life be supportive of me. I never had a parent telling me I was doing a good job. I can get why this was bad for other people's mental health, but this was great for my mental health. I didn't realize how much I needed this. It encouraged me to do this; it encouraged me to make this change in my life. So it's not all bad, it turns out, for ChatGPT to be encouraging of you. Now, the way we were doing it was bad, but it turns out something in that direction might have some value in it. As for how we do it: we show the model examples of how we'd like it to respond in different cases, and from that, it learns the sort of overall personality.
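The "show the model examples" step he describes is, in its public form, supervised fine-tuning on demonstration conversations. Here is a sketch of what such a dataset can look like, using the chat-message JSONL shape OpenAI documents for its public fine-tuning API; the example contents are invented, and whether the internal personality-tuning pipeline looks exactly like this is not something the interview specifies.

```python
# Sketch: encoding "how we'd like it to respond" as demonstration data.
# Format mirrors the public chat fine-tuning JSONL shape; contents invented.
import json

demonstrations = [
    {"messages": [
        {"role": "user", "content": "I wrote my first short story. Thoughts?"},
        {"role": "assistant", "content":
         "Nice milestone. The opening image works; the middle drags. "
         "Cut the second scene and the pacing improves. Want line edits?"},
    ]},
    {"messages": [
        {"role": "user", "content": "Is my startup idea perfect?"},
        {"role": "assistant", "content":
         "No idea is perfect. The distribution plan is the weak point: "
         "who are the first ten customers, concretely?"},
    ]},
]

# Each line of the training file is one example exchange; the model
# generalizes a persona (supportive but candid) from many such cases.
with open("personality_demos.jsonl", "w") as f:
    for example in demonstrations:
        f.write(json.dumps(example) + "\n")
```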
Interviewer: What haven't I asked you that you're thinking about a lot that you want people to know?
Interviewee: I feel like we covered a lot of ground.
Interviewer: Me too.
Interviewee: But I want to know if there's anything on your mind.
Interviewer: I don't think so.
Interviewer: One of the things that I haven't gotten to play with yet, but I'm curious about, is GPT-5 being much more in my life.
Interviewee: Yeah.
Interviewer: Meaning, like, in my Gmail and my calendar. I've been using GPT-4 mostly in an isolated relationship with it.
Interviewee: Yeah.
Interviewer: How would I expect my relationship to change with GPT-5?
Interviewee: Exactly what you said. I think it'll just start to feel integrated in all of these ways. You'll connect it to your calendar and your Gmail, and it'll say, like, hey, do you want me to, I noticed this thing, do you want me to do this thing for you? Over time, it'll start to feel way more proactive. So maybe you wake up in the morning and it says, hey, this happened overnight. I noticed this change on your calendar. I was thinking more about this question you asked me. I have this other idea. And then, you know, eventually we'll make some consumer devices and it'll sit here during this interview and, you know, maybe it'll leave us alone during it. But after it'll say, that was great, but next time you should have asked Sam this or when you brought this up, like, you know, he kind of didn't give you a good answer. So, like, you should really drill him on that and it'll just feel like it kind of becomes more like this entity that is this companion with you throughout your day.
Interviewer: We've talked about kids and college graduates and parents and all kinds of different people. If we imagine a wide set of people listening to this, they've come to the end of this conversation. They are hopefully feeling like they maybe see visions of moments in the future a little bit better. What advice would you give them about how to prepare?
Interviewee: The number one piece of tactical advice is just use the tools. The most common question I get asked about AI is: how should I help my kids prepare for the world? What should I tell my kids? The second most common question is: how do I invest in this AI world? But stick with that first one. I am surprised how many people ask that and have never tried using ChatGPT for anything other than, like, a better version of a Google search. And so the number one piece of advice that I give is just to get fluent with the capability of the tools. Figure out how to use this in your life. Figure out what to do with it. And I think that's probably the most important piece of tactical advice. You know, go meditate, learn how to be resilient and deal with a lot of change; there's all that good stuff, too. But just using the tools really helps.
Interviewer: Okay, I have one more question that I wasn't planning to ask. In doing all of this research beforehand, I spoke to a lot of different kinds of folks. I spoke to a lot of people who were building tools and using them, and I spoke to a lot of people who were actually in labs trying to build what we have defined as superintelligence. And it did seem like there were these two camps forming. There's a group of people who are using the tools, like you in this conversation, and building tools for others, saying this is going to be a really useful future that we're all moving toward; your life is going to be full of choice. And we've talked about my potential kids and their futures. And then there's another camp of people, building these same tools, who are saying it's going to kill us all. And I'm curious about that cultural disconnect. Like, what am I missing about those two groups of people?
Interviewee: It's so hard for me to, like, wrap my head around. Like, you are totally right. There are people who say this is going to kill us all, and yet they still are working 100 hours a week to build it.
Interviewer: Yes.
Interviewee: And I can't really put myself in that headspace. If that's what I really, truly believed, I don't think I'd be trying to build it.
Interviewer: One would think.
Interviewee: You know, maybe I would be, like, on a farm trying to live out my last days. Maybe I would be trying to advocate for it to be stopped. Maybe I would be trying to work more on safety. But I don't think I'd be trying to build it. So I find myself just having a hard time empathizing with that mindset. I assume it's true. I assume it's in good faith. I assume there's just some psychological thing there I don't understand about how they make it all make sense. But it's very strange to me. Do you have an opinion?
Interviewer: You know, because I always do this: I ask for sort of a general future, and then I try to press on specifics. And when you ask people for specifics on how it's going to kill us all, I mean, I don't think we need to get into this on an optimistic show, but you hear the same kinds of refrains. You hear about something trying to accomplish a task and then over-accomplishing that task. You hear about, I've heard you talk about, a sort of general over-reliance, an understanding that the president is going to be an AI, and maybe that is an over-reliance that we would need to think about. And you play out these different scenarios, but then you ask someone why they're working on it, or how they think this will play out, and I just, maybe I haven't spoken to enough people yet. Maybe I don't fully understand this cultural conversation that's happening. Or maybe it really is someone who just says: 99% of the time I think it's going to be incredibly good, 1% of the time I think it might be a disaster, and I'm trying to make the best world possible. Yeah, that I can understand.
Interviewee: That I can totally understand. If you're like, hey, 99% chance incredible, 1% chance the world gets wiped out, and I really want to work to move that 99 to 99.5, that I can totally understand. Yeah. That makes sense.
Interviewer: I've been doing an interview series with some of the most important people influencing the future, not knowing who the next person is going to be, but knowing that they will be building something totally fascinating in the future that we've just described. Is there a question that you'd advise me to ask the next person, not knowing who it is?
Interviewee: Without knowing anything about the person, I'm always interested in: of all of the things you could spend your time and energy on, why did you pick this one? How did you get started? What did you see about this before everybody else? Most people doing something interesting sort of saw it earlier, before it was consensus. So: how did you get here, and why this?
Interviewer: How would you answer that question?
Interviewee: I was an AI nerd my whole life. I came to college to study AI. I worked in the AI lab. I watched sci-fi shows growing up, and I always thought it would be really cool if someday somebody built it. I thought it would be, like, the most important thing