Sam Altman
Explore AI's impact on productivity, hardware, and the future. Sam Altman shares insights on delegation, innovation, and the evolving role of AI in business and society.
Interviewer: Now, in the last two months or so, there have been so many deals involving OpenAI, and I'm not even talking about the globetrotting. A lot of them are local, or there are new product features such as Pulse. Presumably, you were productive to begin with. How is it that you managed to up your productivity to get all this done?
Interviewee: I don't think there's one single takeaway, other than that people almost never allocate their time as well as they think they do, and as you have more demands and more opportunities, you find ways to keep getting more efficient. But we've been able to hire and promote great people, and I delegate a lot to them and get them to take stuff on; that's the only sustainable way I know how to do it. I do try to make sure that, as what we need to do comes into focus, and as you mentioned there's a lot of infrastructure that needs to be built out right now, I understand what the core thing for us to do is. In some sense it has simplified; it's very clear what we need to do, and that's been helpful. I guess another thing that's happened is that more of the world wants to work with us, so deals are quicker to negotiate.
Interviewer: You're doing much more with hardware or matters that are hardware adjacent. How is hiring or delegating to a good hardware person different from hiring good AI people, which is what you started off doing?
Interviewee: You mean like consumer devices or chips?
Interviewer: Both.
Interviewee: One thing that's different is that cycle times are much longer, the capital is more intense, and the cost of screwing up is higher, so I like to spend more time getting to know the people before saying, okay, you just go do this, and I'll trust that it'll work out. But otherwise the theory is kind of the same. You try to find good, effective, fast-moving people, get clear on what the goal is, and just let them go do it.
Interviewer: Like, I visited NVIDIA earlier this year. They were great. They were great to me. They're super smart. But it just felt so different from walking the floor of OpenAI. And I'm sure you've been to NVIDIA. Like, people read Twitter less. At least on the surface, they're less weird. Like, what is that intangible difference in the hardware world that one has to grasp to do well in it?
Interviewee: Look, I don't know if this is going to turn out to be a good sign or a bad sign, but our chip team feels more like the OpenAI research team than a chip company. I think it might work out phenomenally well.
Interviewer: There's this fellow on Twitter. His name is Roon. He's become quite well known. What is it that makes Roon special to you?
Interviewee: He's a very lateral thinker. You can start down one path and he'll sort of jump somewhere completely else, staying on the same trajectory but in some very different context. That's unusual. And he's great at phrasing observations in an interesting, useful way, which makes him fun and quite useful to talk to. He brings together a lot of skills that don't often exist in one person's head.
Interviewer: Someone put an essay online saying that in all the time they worked at OpenAI, they hardly ever sent or received an email; so many things were done over Slack. Why is that? What's your model of why email is bad and Slack is good for OpenAI?
Interviewee: I'll agree email is bad. I don't know if Slack is good; I suspect it's not. I think email is very bad, so the threshold to make something better than email is not high, and I think Slack is better than email. We have a lot of things going on at the same time, as you observed, and we have to do things extremely quickly; it's definitely a very fast-moving organization. So there are positives about Slack, but I also kind of dread the first hour of the morning and the last hour before I go to bed, where I'm just dealing with this explosion of Slack. And I think it does create a lot of fake work. I suspect there is something new to build that is going to replace a lot of the current office productivity suite, whatever you think of: docs, slides, email, Slack. It will be the AI-driven version of all of those things. Not where you tack AI on, where you accidentally click the wrong place and it tries to write a whole document for you or summarize some thread, but the actual version where you are trusting your AI agent and my AI agent to work most stuff out and escalate to us when necessary. I think a good solution is probably finally within reach for someone to build.
Interviewer: How far are you from having that internally? Maybe not a product for the whole world, tested in every way, but something you would use within OpenAI.
Interviewee: Very far, but I suspect just because we haven't made any effort to try, not because the models are that far away.
Interviewer: But since talent, time, human capital is so valuable within your company, why shouldn't that be a priority?
Interviewee: Probably we should do it, but people get stuck in their own ways of doing things, and a lot of stuff is going very well right now, so there's a lot of activation energy required for a big new thing.
Interviewer: What is it about GPT-6 that makes it special to you?
Interviewee: I think GPT-5... so if GPT-3 was the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It's very tiny things, but here and there someone's posting, oh, it figured this thing out, or, oh, it came up with this new idea, or, oh, it was a useful collaborator on this paper. And there is a chance that GPT-6 will be, for science, a leap like GPT-3 to 4 was for Turing-test-like stuff, where five has these tiny glimmers and six can really do it.
Interviewer: So let's say I run a science lab and I know GPT-6 is coming. What should I be doing now to prepare for that?
Interviewee: It's always a very hard question. Even if you know this thing is coming, even if I had it now, what exactly would I do the next morning? I guess the first thing you would do is just type in the current research questions you're struggling with, and maybe it'll say, here's an idea, or run this experiment, or go do this other thing.
Interviewer: But if I'm thinking about restructuring an entire organization to put GPT-6 or 7 or whatever at the center of it, what should I be doing organizationally, rather than just having all my top people use it as an add-on to their current stock of knowledge?
Interviewee: I've thought about this more in the context of companies than scientists, just because I understand that better, and I think it's a very important question. Right now I have met some orgs that are really saying, okay, we're going to adopt AI and let AI do this. I'm very interested in this because, like, shame on me if OpenAI is not the first big company run by an AI CEO, right?
Interviewer: But just parts of it first? The whole thing, that's very ambitious. Maybe the finance department, whatever.
Interviewee: Well, but eventually it should get to the whole thing. So we can use this and then try to work backwards from it. I find it a very interesting thought experiment: what would have to happen for an AI CEO to be able to do a much, much better job of running OpenAI than me? That will clearly happen someday, but how can we accelerate it? What's in the way of it? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. And I assume someone running a science lab should try to think the same way, and they'll come to different conclusions.
Interviewer: How far off do you think it is that, say, one division of OpenAI is 85% run by AIs? Any single division? Not a tiny one, any significant division, mostly run by AIs.
Interviewee: Some small single-digit number of years. Not very far.
Interviewer: And when do you think you can say, okay, Mr. AI CEO, you take over?
Interviewee: CEO is tricky because the public role of a CEO, as you know, becomes more and more important. And if I can pretend to be a politician, which is not my natural strength, an AI can do it too. But let's say I stay involved for the public-facing part, and the AI is actually making the good decisions, figuring out what to do. I think you'll have billion-dollar companies run by two or three people with AIs in, I don't know, two and a half years. I used to think one year, but maybe I've pushed it off a bit. I'm not more pessimistic about the AI; maybe I'm more pessimistic about the humans.
Interviewer: And you're hiring a lot of smart people. Do you ask yourself what the markers are of how AI-resistant this very smart person will be? Do you have an implicit mental test for that? Or do you just hire smart people and hope it's all going to work out later?
Interviewee: No, I do ask questions about that.
Interviewer: But people will just lie, right? They know they're talking to OpenAI. What do you actually look for in them?
Interviewee: A big one is how they use AI today. The people who are still like, oh yeah, I use it for better Google search and nothing else: that's not necessarily a disqualifier, but it's a yellow flag. And people who are seriously considering what their day-to-day is going to look like in three years, that's a green flag. A lot of people aren't. They're like, oh yeah, probably it's going to be really smart.
Interviewer: Do you worry that the future holds the same for AI companies where the feds are your insurer? And how do you plan for that? Again, even if AI is pretty safe, as with nuclear power, people are nervous Nellies. How will you insure everything?
Interviewee: At some level, when something gets sufficiently huge, whether or not it's on paper, the federal government is kind of the insurer of last resort, as we've seen in various financial crises and insurance companies screwing things up. So given the magnitude of what I expect the economic impact of AI to look like, I do think the government ends up as the insurer of last resort. But I think I mean that in a different way than you mean it, and I don't expect them to actually be writing the policies the way they maybe do for nuclear.
Interviewer: And there's a big difference between the government being the insurer of last resort and the insurer of first resort. Last resort is inevitable, but I'm worried they'll become the insurer of first resort, and that I don't want.
Interviewee: I don't want that either. I don't think that's what will happen.
Interviewer: What we're seeing with Intel, lithium, rare earths is the government becoming an equity holder. Again, not of last resort, but of second or third resort. And I don't mean this as a comment about the Trump administration; I think this is something we might be seeing in any case, or see in the future after Trump is gone. But how do you plan for OpenAI knowing that's now a thing on the table in the American economy?
Interviewee: I put almost no probability mass on the world where no one has any meaning post-AGI because the AI is doing everything. I think we're really great at finding new things to do, new games to play, new ways to be useful to each other, to compete, to get fulfilled, whatever. But I do put significant probability on the social contract having to change significantly. I don't know what that will look like. Can I see the government getting more involved there, and thus having strong opinions about AI companies? I can totally see that. But we don't live our lives that way; we just try to work with capitalism as it currently exists. And I believe that should be done by the companies and not the government, although we'll partner with the government and try to be a good collaborator. I don't want them writing our insurance policies.
Interviewer: So soon enough? How will that work?
Interviewee: To zoom out even before the answer: one of the unusual things we noticed a while ago, when this was a worse problem, is that ChatGPT would consistently be reported as a user's most trusted technology product from a big tech company. We don't really think of ourselves as a big tech company, but I guess we sort of are now. And that's very odd on the surface, right? Because AI is the thing that hallucinates, AI is the thing with all the errors, and back then that was much more of a problem. So there's a question of why. Ads on a Google search are dependent on Google doing badly: if it were giving you the best answer, there'd be no reason ever to buy an ad above it. So you feel that thing is not quite aligned with you. ChatGPT, maybe it gives you the best answer, maybe it doesn't, but you're paying it, or hopefully all are paying it, and it's at least trying to give you the best answer. And that has led to people having a deep and pretty trusting relationship with ChatGPT. You ask ChatGPT for the best hotel, not Google or something else. If ChatGPT were accepting payment to put a worse hotel above a better hotel, that's probably catastrophic for your relationship with ChatGPT. On the other hand, if ChatGPT shows you its guess at the best hotel, whatever that is, and then, if you book it with one click, takes the same cut it would take from any other hotel, with nothing influencing the ranking, just some transaction fee, I think that's probably okay. That's the spirit of what we're trying to do with our recent commerce thing, and we'll do that for travel at some point.
Interviewer: I'm not worried about the payola issue, but let me tell you my worry. There may be a tight cap on the commission you can charge, because we're now in a world, say, where there are agents. Someone finds the best hotel through GPT-7 or whatever, and then they just talk to their computer or their pendant and go to some stupider service. But the stupider service is an agent that books very cheaply, and they only really have to pay OpenAI a commission equal to what the stupider service would charge.
Interviewee: One thing I believe in general, related to this, is that margins are going to go dramatically down on most goods and services, including things like hotel bookings. I'm happy about that. I think there are a lot of taxes like this that just suck for the economy, and getting those down should be great all around. But I think most companies, like OpenAI, will make more money at a lower margin.
Interviewer: But do you worry about the discrepancy between the fixed upfront cost of making yours the smartest model and the very cheap cost facing a competing agent, which only has to book the thing for someone? How, in essence, do you use the commissions to pay for making the model smarter?
Interviewee: I think the way to monetize the world's smartest model is certainly not hotel booking.
Interviewer: But you want to do it nonetheless.
Interviewee: I mean, I want to discover new science and figure out a way to monetize that, which you can only do with the smartest model. There is a question, which many people have asked, of whether OpenAI should do ChatGPT at all. Why don't you just go build AGI? Why don't you go discover a cure for every disease, nuclear fusion, cheap rockets, the whole thing, and just license that technology? And it is not an unfair question, because I believe that is the stuff we will do that will be most important and make the most money eventually. But my most likely story about how this works, how the world gets dramatically better, is that we put a really great superintelligence in the hands of everybody. We make it super easy to use, it's nicely integrated, we make you beautiful devices, we connect all your services, it gets to know you over your life, it does all this stuff for you. And we invest in infrastructure and chips and energy and the whole thing to make it super abundant and super cheap. And then you all figure out how the world gets way better. Maybe some people will only ever book hotels and not do anything else, but a lot of people will figure out they can do more and more stuff and create new companies and ideas and art and whatever. So maybe ChatGPT and hotel booking and whatever else is not the best way we can make money. In fact, I'm certain it's not. I do think it's a very important thing to do for the world, and I'm happy for OpenAI to do some things that are not the economically maximizing thing.
Interviewer: To the extent you end up building your own chips, what's the hardest part of that?
Interviewee: Man, that's a hard thing all around. There's no, there's no easy part of that.
Interviewer: Yeah. No easy part of that. Well, Jonathan Ross said it's just keeping up with what is new.
Interviewee: People talk a lot about the recursive self-improvement loop for AI research, where AI can help researchers, maybe today by writing code faster, eventually by doing automated research. That is well understood and very much discussed. Relatively little discussed are the hardware implications of it: robots that can build other robots, data centers that can build other data centers, chips that can design their own next generation. So there are many hard parts, but maybe a lot of them can get much easier. Maybe chip design will turn out to be a very good problem for previous generations of chips to work on.
Interviewer: You know, the stupidest question possible: why don't we just make more GPUs?
Interviewee: Because we need to make more electrons.
Interviewer: But what's stopping that? What's the ultimate binding constraint?
Interviewee: We're working on it really hard. I mean, this is, you know.
Interviewer: But if you could have more of one thing to have more compute, what would the one thing be?
Interviewee: Electrons.
Interviewer: Electrons. Just energy. And what's the most likely short-term solution for that? Not a full solution, but an easing of the constraint.
Interviewee: Short-term, natural gas.
Interviewer: In the American South, or wherever. And long-term?
Interviewee: Long-term it will be dominated, I believe, by fusion and by solar. I don't know in what ratio, but I would say those are the two winners.
Interviewer: Now, I love Pulse. Why don't I hear more about Pulse? Or do you think there is a lot of chatter out there?
Interviewee: People love Pulse, but it is only available to our Pro users right now, which is not that many, and we're also not giving users much of it per day. We will change both of those things, and I suspect that when we roll it out to Plus, you will hear about it a lot more. But people do love it. It gets great reviews.
Interviewer: And what do you use Pulse for?
Interviewee: There are kind of only two things in my life right now: my family and work. And clearly this is what I talk to ChatGPT about, because I get a lot of stuff about that. I get the odd thing, like a new hypercar came out, or here's a great hiking trail, but it's mostly those two things. And it's great for both of them.
Interviewer: I'd just like to do a brief interlude on your broader view of the world and just see how I should think about how you think. So, people in California, they have a lot of views, like, on their own health. Some of which, to me, sound nutty. What do you think is your nuttiest view about your own health? That you're going to live forever? That, you know, the seed oils are bad? Or what is it? Or do you not have any?
Interviewee: I mean, when I was less busy, I was more disciplined about health-related stuff. I didn't have crazy views, but I kind of ate healthy, I didn't drink that much, I worked out a lot, and I tried a few things here and there. I once ended up in a hospital for trying semaglutide before it was cool, that kind of stuff. But I now do basically nothing. I just live family life, I eat junk food, I don't exercise enough. It's a pretty bad situation. I'm feeling bullied into taking this more seriously again.
Interviewer: But why eat junk food? It doesn't taste good compared to, say, good sushi, and you could afford good sushi.
Interviewee: Sometimes, late at night, you just really want that chocolate chip cookie at 11:30. Or at least I do.
Interviewer: Yeah. Do you think there's any kind of alien life on the moons of Saturn? Because I do. That's one of my nutty views.
Interviewee: I have no opinion on the matter.
Interviewer: No opinion on the matter. Yeah, that's a way of passing the test. What do you think about UAPs? Do you think there's a chance?
Interviewee: I think something's going on there.
Interviewer: You think something's going on there?
Interviewee: I have an opinion that there is something that I would like an explanation for. I kind of doubt it's little green men, I extremely doubt it's little green men, but I think someone's got something.
Interviewer: And how many conspiracy theories do you believe in? Because I believe in close to zero, at least in the United States. They may be true for Pakistani military coups, but I think mostly they're just false.
Interviewee: A true conspiracy theory, not just an unpopular belief?
Interviewer: Correct.
Interviewee: You know, I have one of those X-Files "I Want to Believe" shirts. I still have one from when I was in high school. I want to believe in conspiracies. I'm predisposed to believe in conspiracy theories, and I believe in either zero or very few.
Interviewer: Yeah, I'm the opposite of that. I don't want to believe, and I believe in very few. Like maybe the White Sox fixed the World Series way back when.
Interviewee: Yeah, stuff like that.
Interviewer: I don't quite count that. A true massive global government cover-up requires a level of competence I just rarely ascribe to people. Now, some number of years ago, before even GPT-4, I asked you: if you were directing a fund of money to revitalize St. Louis, which is where you grew up, how would you invest the money? It's quite a different world now from when I asked you last time. If I ask you again to revitalize St. Louis, how would you spend the money? Say it's a billion dollars, which is not actually transformational, but it's enough to be some real money. A billion dollars, and you're willing to go spend personal time on it. You have free time; the universe grants you free time, so you don't take time away from anything else you're doing. You're in charge.
Interviewee: This is not a deeply incisive answer, because I think it's not a generally replicable thing, but unique to me and what I could do: I think I would try to go start a Y Combinator-like thing in St. Louis and get a ton of startup founders focused on AI to move there and start a bunch of companies.
Interviewer: That's a pretty similar answer to last time.
Interviewee: I didn't remember what I said last time, so that's a good sign.
Interviewer: You said the same thing, but you didn't mention AI. But AI to me seems quite clustered where we are in the Bay Area. Is trying to get AI into St. Louis the right way to do that? Isn't that in a way working at cross purposes?
Interviewee: I mean, this is why I said it'd be a unique-to-me thing. I think I could do it. Maybe that's hopelessly naive.
Interviewer: Yeah. Should it be legal to just release an AI agent into the wild, unowned, untraceable? Do we need some other AI agent to go out there and take it down? Or is there a minimum capitalization? How do you think about that problem?
Interviewee: I think it's a question of thresholds. I don't think you'd advocate that most systems should have any oversight or regulation or legal requirements or whatever. But if we have an agent that is capable, with serious probability, of massively self-replicating over the internet and, you know, sweeping all the money out of bank accounts or whatever, you would then say, okay, maybe that one needs some oversight. So I think it's a question of where you draw the threshold.
Interviewer: But say it's hiring the cloud computing from a semi-rogue nation, so you can't just turn it off. What actually should we do, or will we be able to do? Just try to ring-fence it somehow, identify it, surveil it, put sanctions on the country that's sponsoring it? Or what do we do about people who do that today? Well, there are a lot of cyberattacks that come from North Korea, and I think we can't do that much about them, right?
Interviewee: I don't know what the right answer is yet, but my naive take is that we should try to solve this problem urgently for people using rogue internet resources generally, and AI will just be a worse version of that problem.
Interviewer: But we'll have better defense also. Now, if I think about social media and AI, here's one thing I've noticed in my own work. I'm so keen to read the answers to my own queries to GPT-5, but when people send me the answers to their queries, I'm bored. I don't blame them; I know it's super useful for them. But that makes me a little skeptical about blending social media and AI. Am I missing something, or would you try to talk me out of that somehow?
Interviewee: No, I've had the same experience. I don't want to read your ChatGPT queries.
Interviewer: Yeah, but they're great for me.
Interviewee: I'm sure. And I'm sure you don't want to read mine, but they're great for me. So ChatGPT, I think, is very much a single-player experience. I don't think that means there's not some interesting new kind of social product to build; in fact, I'm pretty sure there is. But I don't think it's the share-your-ChatGPT-queries thing.
Interviewer: And videos? Any sense of what that was doing?
Interviewee: Well, people clearly love making their own, but they also like watching other people's AI-generated videos. But I think none of this stuff is the really interesting kind of thing you can imagine when you and I and everybody else have really great personal AI agents that can do stuff on our behalf. There are probably entirely new social dynamics to think about.
Interviewer: And the physical form of ChatGPT, on my screen or on my smartphone: is that more or less going to stay the same, just better? Or, 13 years from now, will it physically be an entirely different beast? Because I can talk to it now, and it does video. Is it just a better version, or does it somehow morph?
Interviewee: We are going to try to make you a new kind of computer with a completely new kind of interface, one that is meant for AI, which I think wants something completely different from the computing paradigm we've been using for the last 50 years and are currently stuck in. AI is a crazy change to the possibility space, and a lot of the basic assumptions of how you use a computer, the fact that you should even be opening an operating system or a window or sending a query at all, are now called into question. I realize that the track record of people saying they're going to invent a new kind of computer is very bad. But if there's one person you should bet on to do it, I think Jony Ive is a credible, maybe the best, bet you could take. So we'll see if it works. I'm very excited to try.
Interviewer: But say five years out, there's a so-called normie person. They're not a specialist. They want to learn how to use AI much better. What will they actually do that will give them a high return to acquiring that skill?
Interviewee: To learn how to use AI specifically?
Interviewer: Yeah. Not to program, not the inner guts, just actually in their job.
Interviewee: I'm smiling because I remember when I was a kid and Google came out, I had a job teaching older people how to use Google. And I just couldn't wrap my head around it; I was like, you type the thing in and it does this. A thing that I'm hopeful about for AI is that I think one of the reasons ChatGPT has grown so fast is that it is so easy to learn how to use it and get real value out of it.
Interviewer: So we don't need startups to teach people how to use AI?
Interviewee: To teach people? Yeah.
Interviewer: There is such a startup, or what's the institution? My school will teach me? That's hard to believe.
Interviewee: You know, 10% of the world will use ChatGPT this week, and it didn't exist three years ago. I suspect a year from now, maybe 30% of people will use ChatGPT that week. People, once they start using it, do find more and more sophisticated things to use it for. This is not a top-of-mind problem for me. I believe in human creativity and the adoption of new things over some number of years.
Interviewer: Let's say your kids are old enough that they're grown and can go out on their own in that future world, which is not so far off. Do you think you'll still be reading books, or will you just be interacting with your AI?
Interviewee: Books have survived a lot of other technological change, so I think there is something deep about the format of a book that has persisted. It's very Lindy, or whatever the current word for that is. But I suspect there will be a new kind of way to interact with a cluster of ideas that is better than a book for most things. So I don't think books will be gone, but I would bet they become a smaller percentage of how people learn or interact with a new idea.
Interviewer: And what's the cultural habit you have that you think will change the most? Like, oh, I won't watch movies anymore, I'll create my own, or whatever it is for you. AI will obliterate what you did when you were 23.
Interviewee: This is kind of boring, but I think the way I work, where I'm doing emails and calls and meetings, writing documents, dealing with Slack, that I expect to change hugely, and it has become a real cultural habit, a rhythm of my workday at this point. Spending time with my family, spending time in nature, eating food, my interactions with my friends: that stuff I expect to change almost not at all, at least not for a very long time.
Interviewer: You think San Francisco will remain the center for AI? Putting aside China issues, I just mean for, you know, the so-called West.
Interviewee: Yeah, I think that's the default. It's the default.
Interviewer: And you think the city is just absolutely making a comeback? It looks much nicer to me, seems nicer. Am I deluded?
Interviewee: I love the city. I have always... I mean, I love the whole Bay Area, and I particularly love the city. So I don't think I'm a fair person to ask, because I so want it to be making a comeback and to remain the place. I think so. I hope so. But, you know, I'm very biased.
Interviewer: AI will improve many things very quickly, but what's the time horizon for making rent or home prices cheaper? That seems like a tough one. Not the fault of AI, but land is land, and there are a lot of legal restrictions.
Interviewee: Yeah, I was going to push back on the "land is land" part; there are a lot of other problems there that I don't think AI can solve anytime soon. I mean, there could be these very strange second-order effects where home prices get much cheaper, but sadly, I don't think AI has a direct attack on solving it anytime soon.
Interviewer: Food prices should be down, but, you know, in the short run energy might be a bit more expensive. How long does it take for food prices to go down?
Interviewee: If they're not down in a decade, I'd be very disappointed.
Interviewer: If we think of healthcare, my sense is we're going to spend a lot more on healthcare. We'll get a lot for it, because there'll be new inventions, but a lot of the world will feel more expensive, because rent won't be cheaper, food... I'm not sure about healthcare. You'll live to age 98, but you'll have to spend a lot more; you'll just be alive longer while you're spending, right? So are people just going to think of AI as this very expensive thing, or will it be thought of as a very cheap thing that makes life more affordable?
Interviewee: I would bet we spend less on healthcare. I bet there are a lot of diseases that we can just cure, or come up with a very cheap treatment for, where right now we have nothing but expensive chronic stuff that doesn't even work that well. So I would bet healthcare gets cheaper.
Interviewer: Through pharmaceuticals, devices?
Interviewee: Through pharmaceuticals and devices and even the delivery of actual healthcare services. Housing is the one that looks super hard to me. And there will be other categories of things that we want to get more expensive, and of course those will be status goods or whatever. But I would take the healthcare-goes-down bet.
Interviewer: But with the blizzard of new ideas coming, you know, patent law, copyright law, those are based on earlier technologies and earlier models of how the world would work. Do we need to re-examine or change those radically for an AI-drenched world? Or can we just keep what we have and modify it a bit?
Interviewee: I really have no idea.
Interviewer: I'm a big free speech advocate, but I can imagine the world saying, well, with all this AI-driven content, we need to re-examine the First Amendment. Do you have a view on that?
Interviewee: Without thinking much: I put out a tweet recently about how we're going to be allowing more freedom of expression in ChatGPT.
Interviewer: This is the famous erotica tweet.
Interviewee: Yeah. It's funny what people get upset about. It is funny what animates people.
Interviewer: Because all you're saying is you're not going to stop people, right?
Interviewee: Well, for a long time we didn't. I mean, that's not totally fair; we're going to allow more than we did in the past. But a very important principle to me is that we treat our adult users like adults, and that people have a very high degree of privacy with their AI, which we need legal change for, and also that people have very broad bounds on how they're able to use it. To me, this should be one of the easiest things to agree on for most people in the tech industry, or even most people in the U.S. government. I kind of dashed this tweet off and closed my computer, and it didn't even hit my mind that it was going to be a firestorm. We made a decision over the summer, which I also think was a fair one, that because there were new problems, and particularly because we wanted to protect teenage users, we were going to heavily restrict ChatGPT, which is always a very unpopular thing to do. And along with rolling out age gating and some of these mental health mitigations, we were going to bring back, and in some cases increase, freedom of use for adults. I was like, yeah, I'll tell people that's coming, because the first model update is shipping soon. But this should be a non-issue. And boy, did I get that one wrong. So clearly, I think maybe it's just that people don't believe in freedom of expression as much as they say they do.
Interviewee: That was kind of my only takeaway. Everyone thinks, okay, my own free expression, I can handle it, I need it, my ideas are all right. But yours...
Interviewer: And for greater privacy rights, is it subpoena power that needs to be changed or something else in addition?
Interviewee: Subpoena power. I believe we should apply at least as much protection when you talk to your AI doctor or AI lawyer as when you talk to your human doctor or your human lawyer.
Interviewer: And right now we don't have that.
Interviewee: Correct.
Interviewer: Do you think there's enough trust in America today for people to trust the AI companies the way we sort of trust doctors, lawyers, and therapists?
Interviewee: By revealed preference, yes.
Interviewer: By how many people talk to it?
Interviewee: Yes.
Interviewer: LLM psychosis. Everyone on Twitter today is saying it's a thing. How much of a thing is it?
Interviewee: I mean, a very tiny thing, but not a zero thing, which is why we pissed off the whole user base, or most of the user base, by putting a bunch of restrictions in place. That "treat adult users like adults" includes an asterisk: treat adults of sound mind like adults. Society decides that we treat adults who are having a psychiatric crisis differently than other adults. It is one of these things that you learn as you go. When we saw that you could put ChatGPT into role-playing mode, or have it pretend it's writing a book, and have it encourage someone in delusional thoughts: 99-point-some-big-number percent of adults are totally fine with that, but for some tiny percentage of people, just as if they talked to another person who encourages delusions, it's bad. So we made a bunch of changes, which are in conflict with the freedom-of-expression policy. And now that we have those mental health mitigations in place, we'll again allow some of that stuff in the creative mode, role-playing mode, writing mode, whatever, of ChatGPT. The thing I worry about is not that there will be a few basis points of people who are close to losing their grip on reality and we could trigger a psychotic break; we can get that right. The thing I worry about more is... it's funny, the things that stick in your mind. Someone said to me once: never, ever let yourself believe that propaganda doesn't work on you; they just haven't found the right thing for you yet. Again, I have no doubt that we can address the clear cases of people near a psychotic break. But for all the talk about AI safety, I would divide most AI thinkers into two camps: the bad guy uses AI to cause a lot of harm, or the AI itself is misaligned, wakes up, and intentionally takes over the world. There's a third category that gets very little talk and that I think is much scarier and more interesting: the AI models accidentally take over the world. Not that they induce psychosis in you, but if you have the whole world talking to this one model, then, with no intentionality, just as it learns from the world in this continually co-evolving process, it subtly convinces you of something. No intention; it just does it. It's learned that somehow. That's not as theatrical as chatbot psychosis, obviously, but I do think about it a lot.
Interviewer: Maybe I'm not good enough, but as a professor, I find people pretty hard to persuade, actually. I worry about this less than many of my AI-related friends do.
Interviewee: I hope you're right.
Interviewer: Yeah. Last question, on matters where you can speak publicly. At the margin, if you could call in an expert to help you resolve a question of substance in your mind, what would that question be?
Interviewee: I have an answer to this ready to go, but only because I got asked it before; maybe I'll tell the story after. There will come, and this is spiritual, not literal, a moment where the superintelligence is built. It is safety-tested. It is ready to go. We'll still be able to supervise it, but it's going to do vastly incredible things. It's going to be self-improving. It's going to launch the probes to the stars, whatever. And you get the opportunity to type in the prompt before you say okay. The question is: what should you type in?
Interviewer: And do you have a tentative answer now?
Interviewee: No, I don't. The reason I had that ready to go in mind is that someone was going to see the Dalai Lama and said, I'll ask any question about AI you want. And I was like, what a great opportunity. So I thought really hard about it, and that was my question.
Interviewer: Sam Altman, thank you very much.
Interviewee: Thank you.