Sam Altman on Trust, Persuasion, and the Future of Intelligence - Live at the Progress Conference

Explore AI's impact on productivity, hardware, and the future. Sam Altman shares insights on delegation, innovation, and the evolving role of AI in business and society.



Full Transcription

SPEAKER_01 00:00 - 00:01

Hello, Sam. Happy to do this with you.

SPEAKER_01 00:02 - 00:02

Excited to do it again.

SPEAKER_01 00:03 - 00:09

Now, the last two months or so, there have been so many deals involving OpenAI, and I'm not even talking about the globetrotting.

SPEAKER_01 00:09 - 00:12

A lot of them are local, or there's new product features such as Pulse.

SPEAKER_01 00:13 - 00:15

Presumably, you were productive to begin with.

SPEAKER_01 00:15 - 00:19

How is it that you managed to up your productivity to get all this done?

SPEAKER_02 00:23 - 00:50

I mean, I don't think there's like one single takeaway, other than that people almost never allocate their time as well as they think they do, and as you have more demands and more opportunities, you find ways to continue to be more efficient. But we've been able to hire and promote great people, and I delegate a lot to them and get them to take stuff on, and that is kind of the only sustainable way I know how to do it.

SPEAKER_02 00:51 - 00:59

I do try to make sure we increasingly, as what we need to do comes into focus, and there's, as you mentioned, a lot of infrastructure that needs to be built out right now.

SPEAKER_02 00:59 - 01:12

I do try to make sure we understand, or I understand, what the core thing for us to do is, and it has, in some sense, simplified, and it's very clear what we need to do, so that's been helpful.

SPEAKER_02 01:15 - 01:15

I don't really know.

SPEAKER_02 01:16 - 01:21

I guess another thing that's happened is more of the world wants to work with us, so deals are quicker to negotiate.

SPEAKER_01 01:21 - 01:25

You're doing much more with hardware or matters that are hardware adjacent.

SPEAKER_01 01:26 - 01:33

How is hiring or delegating to a good hardware person different from hiring good AI people, which is what you started off doing?

SPEAKER_02 01:33 - 01:35

You mean like consumer devices or chips?

SPEAKER_01 01:36 - 01:36

Both.

SPEAKER_02 01:38 - 02:02

One thing that's different is that cycle times are much longer, the capital is more intense, the cost of a screw-up is higher, so I like to spend more time getting to know the people before saying, okay, you just go do this, and I'll trust that it'll work out. But otherwise the theory is kind of the same. You try to find good, effective, fast-moving people, get clear on what the goal is, and just let them go do it.

SPEAKER_01 02:03 - 02:23

Like, I visited NVIDIA earlier this year. They were great. They were great to me. They're super smart. But it just felt so different from walking the floor of OpenAI. And I'm sure you've been to NVIDIA. Like, people read Twitter less. At least on the surface, they're less weird. Like, what is that intangible difference in the hardware world that one has to grasp to do well in it?

SPEAKER_02 02:24 - 02:30

Look, I don't know if this is going to turn out to be a good sign or a bad sign, but our chip team feels more like the OpenAI research team than a chip company.

SPEAKER_02 02:31 - 02:34

I think it might work out phenomenally well.

SPEAKER_01 02:34 - 02:38

So you're extending the model with your previous hires to hardware.

SPEAKER_02 02:38 - 02:39

With some risk, but we are.

SPEAKER_01 02:39 - 02:44

There's this fellow on Twitter. His name is Rune. He's become quite well known.

SPEAKER_01 02:45 - 02:47

What is it that makes Rune special to you?

SPEAKER_02 02:51 - 03:25

He's like a very lateral thinker. You can start down one path and he'll sort of jump somewhere completely else, but keep going, you know, stay on like the same trajectory, but in some sort of very different context. That's sort of unusual. He's clearly great at phrasing observations in an interesting, useful, whatever way. And that sort of makes him fun and quite useful to talk to. I don't know. He brings together like a lot of

SPEAKER_02 03:25 - 03:27

skills that don't often exist in one person's head.

SPEAKER_01 03:27 - 03:34

And how does that shape what you have him work on? So you see this about him, and then you think, ah, Rune should do X.

SPEAKER_02 03:34 - 03:41

I very rarely get to have anybody work on anything. Like one thing about researchers is they're going to work on what they're going to work on. And that's kind of that.

SPEAKER_01 03:41 - 03:50

Someone put an essay online saying that in all the time they worked at OpenAI, they hardly ever sent or received an email; so many things were done over Slack.

SPEAKER_01 03:51 - 03:55

Why is that? What's your model of why email is bad and Slack is good for OpenAI?

SPEAKER_02 03:55 - 04:28

I'll agree email is bad. I don't know if Slack is good. I suspect it's not. I think email is very bad. So the threshold to make something better than email is not high. And I think Slack is better than email. We have a lot of things going on at the same time, as you observed. And we have to do things extremely quickly. It's definitely a very fast-moving organization. So there are positives about Slack, but there's also, like, you know, I kind of dread the first hour of the morning and the last hour before I go to bed, where I'm just dealing with this explosion of Slack. And I think it does

SPEAKER_02 04:28 - 04:48

create a lot of fake work. I suspect there is something new to build that is going to replace a lot of the current sort of office productivity suite, whatever you think of like docs, slides, email, Slack, whatever, that will be sort of the AI driven version of all of these things.

SPEAKER_02 04:48 - 05:14

Not where you tack on the horrible, like, you know, you accidentally clicked the wrong place and it tries to write a whole document for you or summarize some thread or whatever, but the actual version where you are trusting your AI agent and my AI agent to work most stuff out and escalate to us when necessary. I think there is probably finally a good solution within reach for someone to make.

SPEAKER_01 05:15 - 05:23

How far are you from having that internally? Maybe not a product for the whole world that's in every way tested, but that you would use within OpenAI.

SPEAKER_02 05:23 - 05:28

Very far, but I suspect just because we haven't made any effort to try, not because the models are that far away.

SPEAKER_01 05:28 - 05:38

But since talent, time, human capital is so valuable within your company, why shouldn't that be a priority?

SPEAKER_02 05:38 - 05:45

Probably we should do it, but, you know, people get stuck in their own ways of doing things, and a lot of stuff is going very well right now. So there's a lot of activation energy for a big new thing.

SPEAKER_01 05:46 - 05:49

What is it about GPT-6 that makes that special to you?

SPEAKER_02 05:50 - 06:22

Um, I think GPT-5. So if GPT-3 was like the first moment where you saw like a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It's like very tiny things, but you know, here and there, someone's posting like, oh, it figured this thing out or, oh, it came up with this new idea. Oh, it was like a useful collaborator on this paper. And there is a chance that GPT-6 will be a GPT-3 to

SPEAKER_02 06:22 - 06:28

4-like leap that happened for kind of Turing-test-like stuff, for science, where five has these tiny glimmers and six can really do it.

SPEAKER_01 06:28 - 06:34

So let's say I run a science lab and I know GPT-6 is coming. What should I be doing now to prepare for that?

SPEAKER_02 06:35 - 07:02

It's always a very hard question. Like, even if you know this thing is coming, say I even had it now, right? What exactly would I do the next morning? I mean, I guess the first thing you would do is just type in the current research questions you're struggling with, and maybe it'll say, here's an idea, or run this experiment, or go do this other thing.

SPEAKER_01 07:02 - 07:18

But if I'm thinking about restructuring an entire organization to put GPT-6 or seven or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?

SPEAKER_02 07:19 - 07:29

I've thought about this more for the context of companies than scientists, just because like, I understand that better. Uh, and I think it's a very important question.

SPEAKER_02 07:29 - 07:44

And right now I have met some orgs that are really saying, like, okay, we're going to adopt AI and let AI kind of do this. But I'm very interested in this because, like, shame on me if OpenAI is not the first big company run by an AI CEO, right?

SPEAKER_01 07:44 - 07:48

But just parts of it? I thought the whole thing, that's very ambitious. But, say, the finance department, whatever.

SPEAKER_02 07:48 - 07:57

Well, but eventually it should get to the whole thing. Yeah. So we can use this and then try to work backwards from that. And I find this a very interesting thought experiment of what would have to happen

SPEAKER_02 07:58 - 08:21

for an AI CEO to be able to do a much, much better job of running open AI than me, which clearly will happen someday, but how can we accelerate that? What's in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and kind of like roadblocks will be. And I, and I assume someone running a science lab should kind of try to think the same way and they'll come to different conclusions.

SPEAKER_01 08:22 - 08:29

How far off do you think it is that, say, one division of OpenAI is 85% run by AIs?

SPEAKER_01 08:30 - 08:31

Any single division?

SPEAKER_01 08:32 - 08:36

Not a tiny, any significant division, mostly run by the AIs.

SPEAKER_02 08:38 - 08:42

Some small single digit number of years. Not very far, not very far away.

SPEAKER_02 08:43 - 08:47

When do you think the, when, when do you think I can be like, okay, Mr. AI CEO, you take over?

SPEAKER_01 08:48 - 08:53

CEO is tricky because the public role of a CEO, as you know, becomes more and more important.

SPEAKER_02 08:53 - 09:08

Let's say I stay on. I mean, if I can pretend to be a politician, which is not my natural strength, an AI can do it too. But let's say I stay involved for the, like, public-facing whatever, and it's just actually making the good decisions, figuring out what to do.

SPEAKER_01 09:08 - 09:13

I think you'll have billion dollar companies run by two or three people with AIs. I don't know, in two and a half years. I used to think one year, but maybe I've put it off a bit.

SPEAKER_01 09:17 - 09:21

I'm not more pessimistic about the AI. Maybe I'm more pessimistic about the humans.

SPEAKER_01 09:22 - 09:23

But what's your forecast?

SPEAKER_02 09:24 - 09:35

I agree on all of those counts. I think the AI can do it sooner than that. And I think this is a good thing for society and a good thing for the future, not a bad one.

SPEAKER_02 09:35 - 09:45

People have a great deal higher trust in other people over an AI, even if they shouldn't, even if that's irrational. You know, the AI doctor is better, but you want the human, whatever.

SPEAKER_02 09:46 - 09:56

So I think it may take much longer for society to get really comfortable with this and for people in an organization to get really comfortable with this. But on the actual decision-making for most things, maybe the AI is pretty good pretty soon.

SPEAKER_01 09:57 - 10:11

And you're hiring a lot of smart people. Do you ask yourself, what are the markers of how AI resistant this very smart person will be? Like, do you have an implicit mental test for that? Or you just hire smart people and hope it's all going to work out later?

SPEAKER_02 10:11 - 10:22

No, I do ask, I do ask questions about that.

SPEAKER_01 10:11 - 10:22

But people will just lie, right? Like, they know they're talking to OpenAI. What do you actually look for in them?

SPEAKER_02 10:22 - 10:32

I mean, a big one is how they use AI today. And the people who still are like, oh yeah, you know, I use it for better Google search and nothing else. That's not necessarily a disqualifier, but that's like a yellow flag.

SPEAKER_02 10:32 - 10:39

And people who are seriously considering what their day-to-day is going to look like in three years, that's a green flag. A lot of people aren't. They're like, oh yeah, you know, probably it's gonna be really smart.

SPEAKER_01 10:39 - 11:13

Do you think scientific labs might get GPT-6 this year?

SPEAKER_02 10:39 - 11:13

Not this year.

SPEAKER_01 11:13 - 11:18

Not this year. Here's a very difficult question. As you know, both you and I are fans of nuclear power, but we also know the insurance for nuclear power plants is provided by the government. The plants might be quite safe, but people worry. They're nervous Nellies. There's a lot of parties involved. So the federal government does the insurance. Do you worry that the future holds the same for AI companies, where the feds are your insurer? And how do you plan for that? Again, even if AI is pretty safe, as with nuclear power, people are nervous Nellies.

SPEAKER_01 11:19 - 11:21

How will you insure everything?

SPEAKER_02 11:25 - 11:56

At some level, when something gets sufficiently huge, whether or not they are on paper, the federal government is kind of the insurer of last resort, as we've seen in various financial crises and insurance companies screwing things up. So I guess, given the magnitude of what I expect AI's economic impact to look like, I do think the government ends up as the insurer of last resort. But I think I mean that in a different way than you mean that. And I don't

SPEAKER_02 11:56 - 12:00

expect them to actually be like writing the policies in the way that maybe they do for nuclear.

SPEAKER_01 12:01 - 12:29

And there's a big difference between the government being the insurer of last resort and the insurer of first resort. Last resort's inevitable, but I'm worried they'll become the insurer of first resort, and that I don't want.

SPEAKER_02 12:01 - 12:29

I don't want that either. I don't think that's what will happen.

SPEAKER_01 12:01 - 12:29

What we're seeing with Intel, lithium, rare earths is the government becoming an equity holder. Again, not of last resort, but of second or third resort. And I don't mean this as a comment about the Trump administration.

SPEAKER_01 12:29 - 12:34

I think this is something we might be seeing in any case or see in the future after Trump is gone.

SPEAKER_01 12:34 - 12:40

But how do you plan for OpenAI, knowing that's now a thing on the table in the American economy?

SPEAKER_02 12:41 - 12:54

I put almost no probability mass onto the world where no one has any meaning in the post-AGI world because the AI is doing everything. Like, I think we're really great at finding new things to do, new games to play, new ways to be useful to each other, to compete, to get fulfilled, whatever.

SPEAKER_02 12:55 - 13:00

But I do put a significant probability that the social contract has to change significantly.

SPEAKER_02 13:00 - 13:08

I don't know what that will look like. Can I see the government getting more involved there, and thus having some strong opinions about AI companies? I can totally see that.

SPEAKER_02 13:09 - 13:21

But we don't live our lives that way. We just kind of try to work with capitalism as it currently exists. And I believe that that should be done by the companies and not the government, although we'll partner with the government and try to be a good collaborator. Like, I don't want them writing our insurance policies.

SPEAKER_01 13:26 - 13:29

Now I did a trip through France and Spain this summer with my wife.

SPEAKER_01 13:30 - 13:40

Every hotel we booked, other than the first one, we found through GPT-5. We didn't book it through GPT. Almost every meal we ate.

SPEAKER_02 13:35 - 13:40

Right.

SPEAKER_01 13:41 - 13:47

And you didn't get a dime for this. And I'm telling my wife, well, this just seems wrong, right? What is the new world going to look like soon enough? How will that work?

SPEAKER_02 13:47 - 14:17

I think if ChatGPT finds you the... To zoom out even before the answer: one of the sort of unusual things we noticed a while ago, and this was when it was a worse problem, is that ChatGPT would consistently be reported as, like, a user's most trusted technology product from the big tech companies. We don't really think of ourselves as a big tech company, but I guess we sort of are now. And that's very odd on the surface, right? Because AI is the thing that hallucinates. AI is the thing with all the errors. And this was when that was much more of a problem.

SPEAKER_02 14:17 - 14:47

And there's a question of why. Ads on a Google search are dependent on Google doing badly. Like, if it was giving you the best answer, there'd be no reason ever to buy an ad above it. So you're like: that thing is not quite aligned with me. ChatGPT, maybe it gives you the best answer, maybe it doesn't, but you're paying it, or hopefully you all are paying it. And it's at least trying to give you the best answer. And that has led to people having a deep and pretty trusting relationship with ChatGPT. You ask ChatGPT for the best hotel, not Google or something else.

SPEAKER_02 14:48 - 14:57

If ChatGPT were accepting payment to put a worse hotel above a better hotel, that's probably catastrophic for your relationship with ChatGPT.

SPEAKER_02 14:58 - 15:05

On the other hand, if ChatGPT shows you its guess at the best hotel, whatever that is.

SPEAKER_02 15:06 - 15:11

And then, if you book it with one click, it takes the same cut that it would take from any other hotel.

SPEAKER_02 15:11 - 15:16

And there's nothing that influenced it, but there's some sort of transaction fee.

SPEAKER_02 15:16 - 15:23

I think that's probably okay. And with our recent commerce thing, that's the spirit of what we're trying to do. We'll do that for travel at some point.

SPEAKER_01 15:23 - 15:32

I'm not worried about the payola issue, but let me tell you my worry. And that is, there may be a tight cap on the commission you can charge, because we're now in a world, say, where there's agents.

SPEAKER_01 15:33 - 15:52

And someone finds the best hotel through GPT-7 or whatever. And then they just talk to their computer or their pendant, and they go to some stupider service. But the stupider service is an agent that books very cheaply. And they only really have to pay OpenAI a commission equal to what the stupidest service will charge.

SPEAKER_02 15:53 - 16:11

So one thing I believe in general related to this is that margins are going to go dramatically down on most goods and services, including things like hotel bookings. I'm happy about that. I think there's like a lot of taxes that just suck for the economy and getting those down should be great all around.

SPEAKER_02 16:11 - 16:18

But I think that most companies, like OpenAI, will make more money at a lower margin.

SPEAKER_01 16:19 - 16:30

But do you worry about the discrepancy between the fixed upfront cost of making yours the smartest model, compared to the very cheap cost for the competing agent, which only has to book it for someone?

SPEAKER_01 16:30 - 16:34

And how do you use the commissions to pay for making the model smarter in essence?

SPEAKER_02 16:35 - 16:40

I think the way to monetize the world's smartest model is certainly not hotel booking.

SPEAKER_01 16:41 - 16:42

But you want to do it nonetheless.

SPEAKER_02 16:42 - 17:10

I mean, I want to discover new science and figure out a way to monetize that, which you can only do with the smartest model. There is a question, which many people have asked: should OpenAI do ChatGPT at all? Why don't you just go build AGI? Why don't you go discover, you know, a cure for every disease, nuclear fusion, cheap rockets, the whole thing, and just license that technology? And it is not an unfair question, because I believe that is the stuff we will do that will be most important and make the most money eventually.

SPEAKER_02 17:11 - 17:28

But my most likely story about how this works, how the world gets dramatically better, is we put a really great superintelligence in the hands of everybody. We make it super easy to use. It's nicely integrated. We make you beautiful devices. We connect all your services.

SPEAKER_02 17:28 - 17:37

It gets to know you over your life. It does all this stuff for you. And we invest in infrastructure and chips and energy and the whole thing to make it super abundant and super cheap.

SPEAKER_02 17:38 - 18:02

And then you all figure out how the world gets way better. Maybe some people will only ever book hotels and not do anything else, but a lot of people will figure out they can do more and more stuff and create new companies and ideas and art and whatever. So maybe ChatGPT and hotel booking and whatever else is not the best way we can make money. In fact, I'm certain it's not. I do think it's a very important thing to do for the world. And I'm happy for OpenAI to do some things that are not the, like, economic-maximizing thing.

SPEAKER_01 18:02 - 18:23

Now you have a deal in the works with Walmart, that people can use GPT. They ask good questions: what should I buy at Walmart? And then they can buy it at Walmart, and you have some arrangement. Do you think Amazon will fold and join that, or are they going to fight back and try to do their own thing?

SPEAKER_02 18:02 - 18:23

I have no idea. If I were them, I would fight back.

SPEAKER_01 18:02 - 18:23

You would fight back?

SPEAKER_02 18:02 - 18:23

I think so. Yeah.

SPEAKER_01 18:24 - 18:29

How important a revenue source will ads be for OpenAI?

SPEAKER_02 18:29 - 18:34

Again, there's a kind of ad that I think would be really bad, like the one we talked about. There are kinds of ads that I think would be very good

SPEAKER_02 18:35 - 18:52

or pretty good to do. I expect it's something we'll try at some point. I do not think it is our biggest revenue opportunity.

SPEAKER_01 18:53 - 19:00

What will the ad look like on the page?

SPEAKER_02 18:53 - 19:00

I have no idea. You asked a question about productivity earlier. I'm really good about not doing the things I don't want to do.

SPEAKER_01 19:02 - 19:05

And that's something you don't want to do?

SPEAKER_02 19:02 - 19:05

You know, we have, like, the world expert thinking about our product strategy. I used to do that. I used to spend a lot of time thinking about product. And now she's much better at it than me. I have other things to think about.

SPEAKER_01 19:06 - 19:15

I'm sure she'll figure it out. Whether or not you agree with it, what is the best "this is not a bubble" argument? Is it just the insatiable demand for compute?

SPEAKER_02 19:15 - 19:40

There's a lot of arguments I'm tempted to give, but I think the intellectually most interesting one is: we have no idea how far past human-level intelligence can go, and what you can do with it as it does. So there's all the arguments that everyone has made. The one I would like to see people talk about much more is how you are even supposed to think about vastly superhuman intelligence and the economic impacts of that.

SPEAKER_01 19:40 - 19:58

Now, OpenAI is in talks with Saudi Arabia, with UAE. Let's take the most optimistic scenario for how all that goes. What is it that top OpenAI management needs to know or understand about those countries? And how is it you learn it?

SPEAKER_02 19:59 - 20:17

Well, it would depend on what we were doing with them. Putting data centers in a country or taking investment from a country or deploying commercial services would be very different than a set of other collaborations we could imagine. But generally speaking, to put data centers in a country,

SPEAKER_02 20:18 - 20:31

what we need to understand is who's going to run it. We don't operate our own data centers, but you know, Microsoft or Oracle or somebody else. What workload are we going to put there? So what model weights are we going to put there? And what are the security guarantees going to look like?

SPEAKER_02 20:31 - 20:40

We do want to build data centers around the world with lots of countries. But for this question, which is kind of the main thing we deal with other countries for, those are the kinds of questions.

SPEAKER_02 20:40 - 20:50

If we were, which we don't have current plans, if we were, like, developing a custom model for some country, we'd have a whole bunch more questions.

SPEAKER_01 20:50 - 20:54

But they have different legal codes, different expectations from a deal. I'm not saying it in a bad way. It's just quite different, right?

SPEAKER_01 20:55 - 21:05

And do you do the Jared Kushner thing, "here's the 25 books I read"? Or do you sit down and ask GPT-6 how to understand this culture? Or do you bring in three experts?

SPEAKER_02 21:05 - 21:24

We bring in experts.

SPEAKER_01 21:05 - 21:24

You bring in experts.

SPEAKER_02 21:05 - 21:24

We talk to the US government a lot. We bring in experts. Again, if we're building a data center that a very trusted partner is going to operate, we know what the workload is, and it's being built like a kind of US embassy or US military base, we have a very different set of questions than if we were doing other things, which we have not yet decided to do and which we'd bring in more experts for.

SPEAKER_01 21:24 - 21:36

And those are quite intangible forms of knowledge, often. How good do you think GPT-6 is at teaching you those things? Or do you still need the human experts to come in? Because you could just ask your own model, right?

SPEAKER_02 21:36 - 21:47

I don't think GPT-6 will have those intangibles. It might surprise us, but it'd be very unexpected if I was like, oh, I don't need to talk to experts anymore.

SPEAKER_01 21:47 - 21:50

Do you have an evaluation for that in the works?

SPEAKER_02 21:51 - 21:54

Actually, for something very close to that, we do. I don't want to pre-announce it, but

SPEAKER_02 21:55 - 21:58

that class of stuff, yes, we do have an evaluation.

SPEAKER_01 21:58 - 21:58

You do?

SPEAKER_02 21:58 - 21:58

Yes.

SPEAKER_01 21:58 - 22:01

Yeah. How good will GPT-6 be at poetry?

SPEAKER_02 22:04 - 22:05

How good do you think GPT-5 is at poetry?

SPEAKER_01 22:06 - 22:20

Not that good. It's not what I want it for, so that's not a complaint. My guess is in a year, you'll have some model that can write a poem as good as the median Pablo Neruda poem, but not the best.

SPEAKER_02 22:20 - 22:35

I was going to say, I don't want to say GPT, whether it's 6 or 7, but I think we will get to something where you will say, this is not a long way to the very best, but this is a real poet's okay poem.

SPEAKER_01 22:36 - 22:49

In my view, there's a big gap between a Neruda poem that's a 7 on a scale of 1 to 10 and one that's a 10. I'm not sure you'll ever reach the 10. I think you'll reach the 8.8 within a few years.

SPEAKER_02 22:49 - 22:52

I think we will reach the 10. And you won't care.

SPEAKER_01 22:54 - 22:54

Who won't care?

SPEAKER_02 22:55 - 22:55

You won't care.

SPEAKER_01 22:56 - 22:58

I'll care. I promise.

SPEAKER_02 22:59 - 23:32

I mean, you'll care in terms of the technological accomplishment, but in terms of the great pieces of art and emotion, whatever else, produced by humanity, you care a lot about the person, or that a person produced it. And it's definitely something for an AI to write a 10 on its technical merits. You know, my classic example of this is the greatest chess players don't really care that AI is hugely better than them at chess. It doesn't demotivate them to play. They don't

SPEAKER_02 23:32 - 23:43

really care that they are. They really care about beating the other human, and they really like get obsessed with that dude sitting across from them. But the fact that the AI is better, they don't care. Watching two AIs play each other, not that fun for that long.

SPEAKER_01 23:43 - 23:49

But let me tell you my worry about reaching the 10. Evaluations rely a lot on these rubrics.

SPEAKER_01 23:49 - 24:03

And the rubrics will become good enough to produce very good poems. But maybe there's something about the 10 poem that stands outside the rubric. And if you're just training on rubrics, rubrics, rubrics, it might in a way be counterproductive for reaching the 10.

SPEAKER_02 24:04 - 24:14

I mean, evals can rely on a lot of things, including when you call a poem a 10 and when you don't. And you can rate a bunch in the process and provide some real-time signal.

SPEAKER_01 24:14 - 24:30

But say we have no human poets today writing 10s, and we're asking those same people to judge and grade the GPTs. I'm worried. Again, I think it will be fine. But to me, we're talking about a nine, not a 10. You don't have William Wordsworth working for OpenAI.

SPEAKER_02 24:31 - 24:39

This gets to like a very interesting thing, which is, let's say you can't write a 10, but you can decide when something is a 10. Yeah.

SPEAKER_01 24:40 - 24:41

That might be all that we need.

SPEAKER_01 24:41 - 24:49

Right. Maybe humanity only decides collectively what's a 10 and there's something a little mysterious and history laden about that process.

SPEAKER_02 24:49 - 25:15

Okay. But still, we can do it now. Maybe our decision is not very good, because it is history laden and it does drift over time. And some things we all agree are great, the next generation decides are not, whatever. But whatever process humanity has to determine whether a poem is a 10, you could imagine that providing some sort of signal to an AI. Then again, if you know it's an AI, maybe you don't care. We see this phenomenon with AI art.

SPEAKER_01 25:15 - 25:19

To the extent you end up building your own chips, what's the hardest part of that?

SPEAKER_02 25:19 - 25:22

Man, that's a hard thing all around. There's no, there's no easy part of that.

SPEAKER_01 25:23 - 25:24

Yeah. No easy part of that.

SPEAKER_01 25:25 - 25:28

Well, Jonathan Ross said, it's just keeping up with what is new.

SPEAKER_02 25:28 - 25:40

People talk a lot about the recursive self-improvement loop for AI research, where AI can help researchers maybe today write code faster, eventually do automated research.

SPEAKER_02 25:40 - 26:06

And this thing is like well understood, very, very much discussed. Very little discussed are the, or relatively little discussed are the hardware implications of this. Robots that can build other robots, data centers that can build other data centers, chips that can design their own next generation. So there's many hard parts, but maybe a lot of them can get much easier. Maybe the problem of chip design will turn out to be a very good problem for previous generations of chips.

SPEAKER_01 26:07 - 26:10

You know, the stupidest question possible. Why don't we just make more GPUs?

SPEAKER_02 26:11 - 26:12

Because we need to make more electrons.

SPEAKER_01 26:13 - 26:15

But what's stopping that? What's the ultimate binding constraint?

SPEAKER_02 26:15 - 26:18

We're working on it really hard. I mean, this is, you know.

SPEAKER_01 26:18 - 26:22

But if you could have more of one thing to have more compute, what would the one thing be?

SPEAKER_02 26:22 - 26:23

Electrons.

SPEAKER_01 26:23 - 26:24

Electrons.

SPEAKER_01 26:24 - 26:24

Yeah.

SPEAKER_01 26:25 - 26:25

Just energy.

SPEAKER_01 26:26 - 26:26

Yeah.

SPEAKER_01 26:27 - 26:30

And what's the most likely short-term solution for that?

SPEAKER_01 26:30 - 26:31

Short-term.

SPEAKER_01 26:31 - 26:33

Easing, not full solution, but easing of the constraint.

SPEAKER_02 26:33 - 26:34

Short-term natural gas.

SPEAKER_02 26:34 - 26:35

Long-term.

SPEAKER_01 26:35 - 26:36

In the American South.

SPEAKER_01 26:36 - 26:37

Or wherever.

SPEAKER_02 26:37 - 26:40

Long-term it will be dominated, I believe, by fusion and by solar.

SPEAKER_02 26:41 - 26:43

I don't know what ratio, but I would say those are the two winners.

SPEAKER_01 26:44 - 26:45

And you're still bullish on fusion?

SPEAKER_02 26:45 - 26:46

Very much.

SPEAKER_01 26:46 - 26:47

And solar.

SPEAKER_01 26:48 - 26:51

Do you worry that as long as it's called nuclear power, even if it works?

SPEAKER_02 26:51 - 26:52

Did I say the word nuclear?

SPEAKER_01 26:52 - 26:53

No, you didn't.

SPEAKER_01 26:53 - 26:54

But other people will.

SPEAKER_01 26:55 - 26:56

The people just won't want it.

SPEAKER_01 26:57 - 27:00

Getting back to the irrationality point in the insurance.

SPEAKER_02 27:01 - 27:03

You're the economist, not me.

SPEAKER_02 27:03 - 27:11

But I think there is some price point at a given level of safety where the demand for this will be overwhelming.

SPEAKER_02 27:11 - 27:17

If this is the same price as natural gas, maybe it's unfortunately hard.

SPEAKER_02 27:17 - 27:20

If it's one-tenth the price, I think we could agree it would happen very fast.

SPEAKER_02 27:20 - 27:21

I don't know where the cut point is in between.

SPEAKER_01 27:22 - 27:28

Do you ever worry there's some scenario where ultimately super intelligence doesn't need that much compute?

SPEAKER_01 27:28 - 27:34

And in some funny way, by investing in compute, you're betting against progress over a 30-year time horizon?

SPEAKER_02 27:35 - 27:40

In the same way that people always want more energy if it's cheaper, I think people always want more compute if it's cheaper.

SPEAKER_02 27:40 - 27:51

So even if you can make incredibly smart models with much less compute, which I'm sure you can, the desire to consume in all sorts of new ways and do more stuff with more abundant intelligence, I'll take that bet every day.

SPEAKER_02 27:52 - 27:59

The related thing I worry about is that there is a huge phase shift on how we do compute,

SPEAKER_02 28:00 - 28:05

and we're all kind of chasing a dead-end paradigm.

SPEAKER_02 28:05 - 28:05

That would be bad.

SPEAKER_01 28:06 - 28:08

And what would that look like?

SPEAKER_02 28:08 - 28:09

I don't know.

SPEAKER_02 28:09 - 28:11

We all switch to full-on optical compute or something.

SPEAKER_01 28:12 - 28:13

And just have to spend a lot of money all over again?

SPEAKER_02 28:14 - 28:14

Yeah.

SPEAKER_01 28:14 - 28:15

Well, not on all of it.

SPEAKER_01 28:15 - 28:17

The energy is the energy, but yes, on everything else.

SPEAKER_01 28:18 - 28:19

Now, I love Pulse.

SPEAKER_01 28:20 - 28:21

Why don't I hear more about Pulse?

SPEAKER_01 28:22 - 28:23

Or do you think there is a lot of chatter out there?

SPEAKER_02 28:24 - 28:28

People love Pulse, but it is only available to our pro users right now, which is not that many.

SPEAKER_02 28:29 - 28:33

And also, we're not giving much per day to users.

SPEAKER_02 28:34 - 28:36

And we will change both of those things.

SPEAKER_02 28:36 - 28:40

But I suspect when we roll it out to Plus, you will hear about it a lot more.

SPEAKER_02 28:40 - 28:42

But people do love it.

SPEAKER_02 28:42 - 28:43

It gets great, great reviews.

SPEAKER_01 28:43 - 28:44

And what do you use Pulse for?

SPEAKER_02 28:46 - 28:48

There are kind of only two things in my life right now.

SPEAKER_02 28:48 - 28:49

Like, there's my family and work.

SPEAKER_02 28:49 - 28:53

And clearly, this is what I talk to ChatGPT about, because I get a lot of stuff about that.

SPEAKER_02 28:54 - 28:59

You know, I get the odd, like, new hypercar came out or, like, here's a great hiking trail or whatever.

SPEAKER_02 28:59 - 29:00

But it's mostly those two things.

SPEAKER_02 29:00 - 29:03

But it's great for both of those.

SPEAKER_01 29:04 - 29:10

I'd just like to do a brief interlude on your broader view of the world and just see how I should think about how you think.

SPEAKER_01 29:11 - 29:14

So, people in California, they have a lot of views, like, on their own health.

SPEAKER_01 29:14 - 29:16

Some of which, to me, sound nutty.

SPEAKER_01 29:16 - 29:20

What do you think is your nuttiest view about your own health?

SPEAKER_01 29:21 - 29:22

That you're going to live forever?

SPEAKER_01 29:22 - 29:25

That, you know, the seed oils are bad?

SPEAKER_01 29:25 - 29:26

Or what is it?

SPEAKER_01 29:26 - 29:27

Or do you not have any?

SPEAKER_02 29:27 - 29:33

I mean, when I was less busy, I was more disciplined on health-related stuff.

SPEAKER_02 29:34 - 29:35

I didn't have crazy views.

SPEAKER_02 29:35 - 29:37

But I was, like, I kind of ate healthy.

SPEAKER_02 29:38 - 29:39

I didn't drink that much.

SPEAKER_02 29:39 - 29:40

I, like, worked out a lot.

SPEAKER_02 29:40 - 29:42

I tried a few things here and there.

SPEAKER_02 29:42 - 29:43

Like, I was...

SPEAKER_02 29:44 - 29:48

I once ended up in a hospital for trying semaglutide before it was cool.

SPEAKER_02 29:48 - 29:50

Like, that kind of stuff.

SPEAKER_02 29:51 - 29:52

But I now do basically nothing.

SPEAKER_01 29:53 - 29:55

You just live family life and try to...

SPEAKER_02 29:55 - 29:55

I eat junk food.

SPEAKER_02 29:55 - 29:56

I don't exercise enough.

SPEAKER_02 29:57 - 29:58

It's, like, a pretty bad situation.

SPEAKER_02 29:58 - 30:02

Like, I'm feeling bullied into taking this more seriously again.

SPEAKER_01 30:03 - 30:03

Yeah.

SPEAKER_01 30:03 - 30:04

But why eat junk food?

SPEAKER_01 30:04 - 30:05

Like, it doesn't taste good.

SPEAKER_02 30:06 - 30:07

It does taste good.

SPEAKER_01 30:08 - 30:12

Compared to, like, good sushi, you could afford good sushi.

SPEAKER_02 30:14 - 30:18

Sometimes you just really want that chocolate chip cookie at 11:30 at night.

SPEAKER_02 30:18 - 30:18

Yeah.

SPEAKER_02 30:19 - 30:20

Or at least I do.

SPEAKER_01 30:20 - 30:20

Yeah.

SPEAKER_01 30:20 - 30:24

Do you think there's any kind of alien life on the moons of Saturn?

SPEAKER_01 30:28 - 30:29

Because I do.

SPEAKER_01 30:30 - 30:31

That's one of my nutty views.

SPEAKER_02 30:32 - 30:33

I have no opinion on the matter.

SPEAKER_01 30:33 - 30:34

No opinion on the matter.

SPEAKER_01 30:34 - 30:35

I don't know.

SPEAKER_01 30:35 - 30:37

Yeah, that's a way of passing the test.

SPEAKER_01 30:38 - 30:39

What do you think about UAPs?

SPEAKER_01 30:40 - 30:40

Do you think there's a chance?

SPEAKER_02 30:40 - 30:41

I think something's going on there.

SPEAKER_01 30:41 - 30:42

You think something's going on there?

SPEAKER_02 30:43 - 30:47

I have an opinion that there is something that I would like an explanation for.

SPEAKER_02 30:47 - 30:48

I kind of doubt it's Little Green Men.

SPEAKER_02 30:48 - 30:50

I extremely doubt it's Little Green Men.

SPEAKER_02 30:50 - 30:51

But I think someone's got something.

SPEAKER_01 30:52 - 30:55

And how many conspiracy theories do you believe in?

SPEAKER_01 30:55 - 30:58

Because I believe in close to zero, at least in the United States.

SPEAKER_01 30:59 - 31:04

They may be true for Pakistani military coups, but I think mostly they're just false.

SPEAKER_02 31:05 - 31:08

True conspiracy theory, not just an unpopular belief.

SPEAKER_02 31:08 - 31:08

Correct.

SPEAKER_02 31:10 - 31:14

You know, I have one of those, what was it, the X-Files shirts, like I want to believe.

SPEAKER_02 31:14 - 31:14

Yeah.

SPEAKER_02 31:14 - 31:17

I still have one of those shirts from when I was in high school.

SPEAKER_02 31:17 - 31:19

I want to believe in conspiracy.

SPEAKER_02 31:19 - 31:23

I'm predisposed to believe in conspiracy theories, and I believe in either zero or very few.

SPEAKER_01 31:23 - 31:25

Yeah, I'm the opposite of that.

SPEAKER_01 31:25 - 31:27

I don't want to believe, and I believe in very few.

SPEAKER_01 31:28 - 31:31

Like maybe the White Sox fixed the World Series way back when.

SPEAKER_01 31:31 - 31:32

Yeah, stuff like that.

SPEAKER_02 31:32 - 31:34

I don't quite count that.

SPEAKER_02 31:35 - 31:42

Like a true massive global government cover-up requires a level of competence that I just rarely ascribe to people.

SPEAKER_01 31:43 - 31:53

Now, some number of years ago, this was before even GPT-4, I asked you if you were directing a fund of money to revitalize St. Louis, which is where you grew up, how would you invest the money?

SPEAKER_01 31:54 - 31:58

Now it's a quite different world from when I asked you last time, and if I ask you again

SPEAKER_01 32:00 - 32:02

to revitalize St. Louis, how would you spend the money?

SPEAKER_01 32:03 - 32:09

Say it's a billion dollars, which is not actually transformational, but it's enough that it's some real money.

SPEAKER_01 32:09 - 32:12

A billion dollars, and I'm willing to go spend personal time on it.

SPEAKER_01 32:12 - 32:13

You have free time.

SPEAKER_01 32:13 - 32:15

The universe grants you free time.

SPEAKER_01 32:15 - 32:17

You don't take time away from anything else you're doing.

SPEAKER_01 32:17 - 32:18

You're in charge.

SPEAKER_02 32:18 - 32:33

I would go try to start a thing that is, well, this is not a deeply incisive answer, because I think this is not a generally replicable thing but something unique to me and what I could do.

SPEAKER_02 32:33 - 32:44

I think I would try to go start a Y Combinator-like thing in St. Louis and get a ton of startup founders focused on AI to move there and start a bunch of companies.

SPEAKER_01 32:45 - 32:47

That's a pretty similar answer to last time.

SPEAKER_02 32:47 - 32:49

I didn't remember what I said last time, so that's a good sign.

SPEAKER_01 32:49 - 32:51

You said the same thing, but you didn't mention AI.

SPEAKER_01 32:53 - 32:57

But AI to me seems quite clustered where we are in the Bay Area.

SPEAKER_01 32:58 - 33:02

Is trying to get AI into St. Louis the right way to do that?

SPEAKER_01 33:02 - 33:04

Isn't that in a way working at cross purposes?

SPEAKER_02 33:04 - 33:06

I mean, this is why I said it'd be like a unique to me thing.

SPEAKER_02 33:06 - 33:07

I think I could do it.

SPEAKER_02 33:07 - 33:09

Maybe that's like hopelessly naive.

SPEAKER_02 33:09 - 33:09

Yeah.

SPEAKER_01 33:11 - 33:17

Should it be legal to just release an AI agent into the wild, unowned, untraceable?

SPEAKER_01 33:17 - 33:21

Do we need some other AI agent to go out there and tackle it down?

SPEAKER_01 33:21 - 33:23

Or is there minimum capitalization?

SPEAKER_01 33:23 - 33:24

How do you think about that problem?

SPEAKER_02 33:25 - 33:27

I think it's a question of thresholds.

SPEAKER_02 33:30 - 33:37

I don't think you'd advocate that most systems should have any oversight or regulation or legal questions or whatever.

SPEAKER_02 33:37 - 33:41

But if we have an agent that is capable, with serious probability,

SPEAKER_02 33:43 - 33:51

of massively self-replicating over the internet and, you know, sweeping all the money out of bank accounts or whatever, you would then say, okay, maybe that one needs some

SPEAKER_02 33:52 - 33:52

oversight.

SPEAKER_02 33:53 - 33:56

So I think it's a question of where you draw the threshold for where it should not be.

SPEAKER_01 33:57 - 34:02

But say it's hiring the cloud computing from a semi-rogue nation, so you can't just turn it off.

SPEAKER_01 34:02 - 34:06

What actually should we do or will we be able to do?

SPEAKER_01 34:06 - 34:14

Just try to ring fence it somehow, identify it, surveil it, put sanctions on the country that's sponsoring it.

SPEAKER_01 34:14 - 34:16

Or what do we do for people that do that today?

SPEAKER_01 34:17 - 34:22

Well, there are a lot of cyber attacks that come from North Korea and I think we can't do that much about them, right?

SPEAKER_02 34:23 - 34:35

My naive take is that, I don't know what the right answer is yet, but my naive take is we should try to solve this problem urgently for people using like rogue internet resources and AI will just be like a worse version of that problem.

SPEAKER_01 34:35 - 34:37

But we'll have better defense also.

SPEAKER_02 34:37 - 34:38

For sure.

SPEAKER_01 34:38 - 34:38

Yeah.

SPEAKER_01 34:39 - 34:44

Now, if I think about social media and AI, here's one thing I've noticed in my own work.

SPEAKER_01 34:44 - 34:53

I'm so, so keen to read the answers to my own queries to GPT-5, but when people send me the answers to their queries, I'm bored.

SPEAKER_01 34:54 - 34:55

I don't blame them.

SPEAKER_01 34:55 - 35:02

Like I know it's super useful for them, but that makes me a little skeptical about blending social media and AI.

SPEAKER_01 35:02 - 35:06

Am I missing something or would you try to talk me out of that somehow?

SPEAKER_02 35:07 - 35:08

Uh, no, I've had the same.

SPEAKER_02 35:08 - 35:10

I don't want to read your ChatGPT queries.

SPEAKER_02 35:11 - 35:11

Yeah, but they're great for me.

SPEAKER_02 35:12 - 35:12

I'm sure.

SPEAKER_02 35:12 - 35:14

And I'm sure you don't want to read mine, but they're great for me.

SPEAKER_02 35:15 - 35:19

So ChatGPT, I think, is very much a single-player experience.

SPEAKER_02 35:20 - 35:26

I don't think that means there's not some interesting new kind of social product to build.

SPEAKER_02 35:26 - 35:37

In fact, I'm pretty sure there is, but I don't think it's the, like, share-your-ChatGPT-queries thing. And the videos, did you have any sense of what that was doing?

SPEAKER_02 35:37 - 35:44

Well, you know, people clearly love making their own, but they also like watching other people's AI-generated videos.

SPEAKER_02 35:44 - 35:59

But no, I think none of this stuff is the really interesting kind of thing. You can imagine, when you and I and everybody else have really great personal AI agents that can do stuff on our behalf, there are probably entirely new social dynamics to think about.

SPEAKER_01 35:59 - 36:10

And just the physical form of ChatGPT on my screen or on my smartphone, is that more or less going to stay the same, but the thing will be better?

SPEAKER_01 36:11 - 36:15

Or 13 years from now, it will physically just be an entirely different beast?

SPEAKER_01 36:16 - 36:18

Because I can talk to it now, you know, now it does video.

SPEAKER_01 36:19 - 36:22

And is it just a better version or somehow it morphs?

SPEAKER_02 36:22 - 36:26

We are going to try to make you a new kind of computer with a completely new kind of interface.

SPEAKER_02 36:26 - 36:39

That is meant for AI, which I think will want something completely different from the computing paradigm we've been using for the last 50 years, the one we're currently stuck in. AI is a crazy change to the possibility space.

SPEAKER_02 36:39 - 37:03

And a lot of the basic assumptions of how you use a computer, and the fact that you should even be opening an operating system or opening a window or sending a query at all, are now called into question. I realize that the track record of people saying they're going to invent a new kind of computer is very bad. But if there's one person you should bet on to do it, I think Jony Ive is a credible, maybe the best, bet you could take.

SPEAKER_02 37:04 - 37:06

So we'll see if it works. I'm very excited to try.

SPEAKER_01 37:07 - 37:13

But haven't you already been surprised how robust it is that people love typing text into boxes?

SPEAKER_01 37:13 - 37:26

This sort of shocks me in the bigger picture. People are still texting all the time. It's one of the most robust forms of internet, anything. And maybe that will just stick forever. And it, it's a sign of our own limitations. But how do we get past that?

SPEAKER_02 37:29 - 37:38

I mean, texting, command lines, search queries, that's my favorite interface.

SPEAKER_01 37:38 - 37:39

Yeah.

SPEAKER_01 37:39 - 37:39

I think like.

SPEAKER_01 37:39 - 37:41

You like it. I like it.

SPEAKER_01 37:41 - 37:41

Yeah.

SPEAKER_02 37:41 - 37:43

Maybe we're just going to keep it.

SPEAKER_02 37:43 - 37:46

Well, a lot of people use it. Like people love to text. People like ChatGPT.

SPEAKER_02 37:46 - 38:00

You know, I remember when we were thinking about the interface for ChatGPT, I was very set that this was something people would be familiar with and want to use. And

SPEAKER_02 38:01 - 38:10

I think I just grew up as a child of the internet with a lot of conviction that that was the right kind of interface. You know, texting was my life as a teenager.

SPEAKER_01 38:10 - 38:21

If you have some kind of ideal partnership arrangement with an institution of higher education, say within two to three years, what does that look like? You get to write the whole thing.

SPEAKER_02 38:23 - 38:51

I suspect that the whole model should change, but I don't know into what. I think the ideal partnership would look like we try 20 different experiments and see what leads to the best results. I've been watching these AI schools pop up with great interest. It seems like a lot of them, with very different approaches, are all showing positive results. But I think the first few years of the ideal partnership would look like we run 20 wildly different experiments.

SPEAKER_01 38:52 - 39:27

Sometimes I have the fear that these institutions don't have enough internal reputational strength or credibility to make any major change. Forget about AI. And that to do a partnership with an institution like that is maybe intrinsically frustrating. And for the next 10 years, the actual model is kind of privatized AI use on the side by faculty, by students, by labs. And in a sense, there is no partnership other than actually just marketing your product to these people. Do you ever think that might be true? Yeah. And I don't, it wouldn't like super upset me if that's what

SPEAKER_01 39:27 - 39:37

happens. Yeah. What do you think will happen to the returns to a college degree? Not Harvard, not Stanford, but like a quite good state school, five, 10 years out.

SPEAKER_02 39:39 - 39:41

What's the historical rate of decline of the value of that?

SPEAKER_01 39:41 - 39:46

Uh, recently it's gone down though for a long time, it was going up quite a bit.

SPEAKER_02 39:46 - 39:47

Yeah. I mean like the last decade.

SPEAKER_01 39:48 - 39:49

Oh, it's gone down. I don't know how much.

SPEAKER_02 39:50 - 39:59

I would kind of guess that it goes down at a slightly higher rate than the last decade, but it does not like collapse to zero as fast as it should.

SPEAKER_01 39:59 - 40:08

And then it's the returns to doing what other than learning AI that go up, like being on the college football team or, or what?

SPEAKER_02 40:09 - 40:18

I don't think the returns, I mean, massive returns will accrue to doing AI for a small set of people, but I think the returns to using AI super well are

SPEAKER_02 40:19 - 40:34

surprisingly widely distributed. I think the most important thing AI will do is discover new science for all of us, and a lot of people will benefit from that, and people will start companies or get jobs doing that. But I am not a believer that that is

SPEAKER_02 40:36 - 41:07

like the only thing that eventually makes money. I think people will just use AI for all sorts of new kinds of jobs or to do existing jobs better. Maybe the starkest example of this in 2025 is the day-to-day workflow of the average programmer in Silicon Valley at the beginning of this year versus the end of this year: extremely different. And you don't really have to know how to program AI to do that, but you can get more done, you probably have much more value, and the world is going to get much

SPEAKER_02 41:07 - 41:10

more software. I think we'll see things like that for a surprising number of industries.

SPEAKER_01 41:11 - 41:14

But say five years out, there's a so-called normie person. They're not a specialist.

SPEAKER_01 41:14 - 41:23

They want to learn how to use AI much better. What will they actually do that will give them a high return to acquiring that skill? To learn, to learn how to use AI specifically?

SPEAKER_01 41:24 - 41:28

Yeah. Yeah. Um, not program, not the inner guts, just actually in their job.

SPEAKER_02 41:28 - 41:57

I'm smiling because I remember when I was a kid and Google came out, I had a job teaching older people how to use Google. And I just couldn't wrap my head around it. I was like, you type the thing in and it does this. A thing that I'm hopeful about for AI is that I think one of the reasons ChatGPT has grown so fast is it is so easy to learn how to use it and get real value out of it. So we don't need startups

SPEAKER_01 41:57 - 42:06

to do that, or there is? To teach people how to use AI? To teach people. Yeah. Is there such a startup, or what's the institution? My school will teach me? That's hard to believe. You know,

SPEAKER_02 42:06 - 42:22

10% of the world will use ChatGPT this week. Didn't exist three years ago. I suspect a year from now, maybe 30% of people will use ChatGPT that week. People, once they start using it, do find more and more sophisticated things to use it for. Uh, this is not a top of mind problem for me.

SPEAKER_02 42:22 - 42:30

I think I believe in human creativity and the adoption of new things over some number

SPEAKER_01 42:30 - 42:51

of years. But you might just want to support or invest in the startups that will do this because if you're bullish on AI, presumably you're bullish on those startups and it will help your business in turn. So it seems odd not to have a theory of how we're all going to learn to use AI better. Like you can go to dog trainer school and they teach you how to train a dog. Okay. I, maybe I have

SPEAKER_02 42:51 - 42:56

like a blind spot here and I promise I'll go think about this more, but if you ask ChatGPT, like teach

SPEAKER_01 42:56 - 42:59

me how to use you. Yeah. Maybe that's it. It's pretty good. Yeah. So maybe you're the school.

SPEAKER_01 43:00 - 43:17

Maybe. Yeah. Let's say when your kids are old enough that they're grown, they can go out on their own in that future world, which is not like so far off. Do you think you'll still be reading books or you'll just be interacting with your AI? Books have survived a lot of other technological

SPEAKER_02 43:17 - 43:24

change. So I think there is something deep about the format of a book that has persisted.

SPEAKER_02 43:25 - 43:29

It's very Lindy, or whatever the current word for that is, but I suspect that

SPEAKER_02 43:31 - 43:48

there will be a new kind of way to interact with a cluster of ideas that is better than a book for most things. So I don't think books will be gone, but I would bet they become a smaller percentage of how people learn or interact with a new idea. And what's the cultural

SPEAKER_01 43:48 - 44:03

habit you have that you think will change the most? Like, oh, I won't watch movies anymore. I'll create my own or whatever for you. AI will obliterate what you did when you were 23. I mean, this is

SPEAKER_02 44:03 - 44:17

kind of boring, but I think the way I work, where I'm doing emails and calls and meetings, writing documents and dealing with Slack, that I expect to change

SPEAKER_02 44:18 - 44:33

hugely. And that has become a real cultural habit, a rhythm of my work day at this point. Spending time with my family, spending time in nature, eating food, my interactions with my friends, that stuff I sort of expect to change almost not at all,

SPEAKER_01 44:34 - 44:39

at least not for a very long time. You think San Francisco will remain the center for AI?

SPEAKER_01 44:39 - 44:44

Putting aside China issues, I just mean for, you know, the so-called West.

SPEAKER_01 44:44 - 44:54

Yeah, I think that's the default. It's the default. And you think the city is just absolutely making a comeback. It looks much nicer to me, seems nicer. Am I deluded? I love the city. I have always,

SPEAKER_02 44:54 - 45:09

I mean, I love the whole Bay Area. I particularly love the city, but I love the Bay Area. So I don't think I'm a fair person to ask, because I so want it to be making a comeback and to remain the place. I think so. I hope so. But, you know, very biased.

SPEAKER_01 45:09 - 45:23

AI will improve many things very quickly, but what's the time horizon for making rent or home prices cheaper? That seems like a tough one. Not the fault of AI, but land is land and there's a lot

SPEAKER_02 45:23 - 45:42

of legal restrictions. Yeah, I was going to push back on the land is land. There are a lot of other problems that I don't think AI can solve anytime soon. There could be these very strange second-order effects where home prices get much cheaper, but sadly, I don't think AI has a direct attack on solving that anytime soon.

SPEAKER_01 45:43 - 45:48

Food prices, I would bet, go down, but, you know, in the short run energy might be a bit more expensive.

SPEAKER_01 45:49 - 45:51

How long does it take for food prices to go down?

SPEAKER_02 45:52 - 45:54

If they're not down in a decade, I'd be very disappointed.

SPEAKER_01 45:54 - 46:14

If we think of healthcare, my sense is we're going to spend a lot more on healthcare. We'll get a lot for it because there'll be new inventions, but a lot of the world will feel more expensive because rent won't be cheaper. Food. I'm not sure about healthcare. You'll live to age 98, but you'll have to spend a lot more. You'll just be alive more while you're spending. Right?

SPEAKER_01 46:15 - 46:27

So are people just going to think of AI as this very expensive thing or will it be thought of as a very cheap thing that makes life more affordable? I would bet we spend less on healthcare. I bet there

SPEAKER_02 46:27 - 46:56

are a lot of diseases that we can just cure or come up with a very cheap treatment for, where right now we have nothing but expensive chronic stuff that doesn't even work that well. So I would bet healthcare gets cheaper. Through pharmaceuticals, devices? Through pharmaceuticals and devices and even, like, delivery of actual healthcare services. Housing is the one to me that just looks super hard, and there will be other categories of things that we want to get more expensive. And of course those will be status goods or whatever. But I would take the bet that healthcare goes down.

SPEAKER_01 46:56 - 47:06

And with the blizzard of new ideas coming, you know, patent law, copyright law, those are based on earlier technologies and earlier models of how the world would work.

SPEAKER_01 47:07 - 47:17

Do we need to re-examine or change those radically for an AI drenched world? Or we can just keep what we have and modify it a bit. I really have no idea.

SPEAKER_01 47:18 - 47:28

I'm a big free speech advocate, but I can imagine the world saying, well, with all this AI driven content, we need to re-examine the first amendment. Do you have a view on that?

SPEAKER_02 47:29 - 47:39

Without thinking much. I put out a tweet recently about how we're, you know, going to be allowing more freedom of expression in ChatGPT. This is the famous erotica. Yeah. Yeah.

SPEAKER_02 47:42 - 47:46

It's funny what people get upset about. It is funny what animates people.

SPEAKER_01 47:46 - 47:48

Because all you're saying is you're not going to stop people, right?

SPEAKER_02 47:49 - 48:20

Well, we used to not, for a long time. I mean, that's not totally fair. We're going to allow more than we did in the past. But a very important principle to me is that we treat our adult users like adults, and that people have a very high degree of privacy with their AI, which we need legal change for, and also that people have, you know, very broad bounds of how they're able to use it. And to me, this should be one of the easiest things to agree on

SPEAKER_02 48:20 - 48:54

by most people in the tech industry, or even most people in the U.S. government. I kind of dashed this tweet off and closed my computer, and it didn't even hit my mind that it was going to be, you know, really a firestorm. We made a decision, which I also think was a fair one, over the summer that, because there were new problems and particularly because we wanted to protect teenage users, we were going to heavily restrict ChatGPT, which is also always a very unpopular thing to do. And

SPEAKER_02 48:54 - 49:03

along with the rolling out of age gating and some of these mental health mitigations, we were going to bring back and in some cases increase freedom of use for adults.

SPEAKER_02 49:04 - 49:22

I was like, yeah, you know, I'll tell people that's coming because the first model update is shipping soon, but this should be a non-issue. And boy, did I get that one wrong. So clearly, I think maybe it's just that people don't believe in freedom of expression as much as they say they do.

SPEAKER_01 49:22 - 49:23

That is my opinion. Yeah.

SPEAKER_02 49:24 - 49:28

That was kind of my only, like everyone thinks, okay, my own free expression, I can handle it. I need it.

SPEAKER_02 49:28 - 49:30

My ideas are all right, but yours like.

SPEAKER_01 49:31 - 49:36

And for greater privacy rights, is it subpoena power that needs to be changed or something else

SPEAKER_02 49:36 - 49:52

in addition? Subpoena power. I believe we should apply at least as much protection when you talk to your AI doctor or AI lawyer as when you talk to your human doctor or your human lawyer.

SPEAKER_01 49:53 - 49:54

And right now we don't have that.

SPEAKER_02 49:54 - 49:54

Correct.

SPEAKER_01 49:55 - 50:04

Do you think there's enough trust in America today for people to trust the AI companies the way we sort of trust doctors, lawyers, and therapists?

SPEAKER_02 50:05 - 50:07

By revealed preference. Yes.

SPEAKER_01 50:07 - 50:14

Yes. By how many people talk to it? LLM psychosis. Everyone on Twitter today is saying it's a thing.

SPEAKER_01 50:15 - 50:16

How much a thing is it?

SPEAKER_02 50:17 - 50:39

I mean, a very tiny thing, but not a zero thing, which is why we pissed off the whole user base, or most of the user base, by putting a bunch of restrictions in place. That "treat adult users like adults" includes an asterisk, which is: treat adults of sound mind like adults. You know, society decides that we treat adults that are having a psychiatric crisis differently than other adults. It is one of these things that you learn as you go.

SPEAKER_02 50:39 - 51:14

But when we saw that you could, kind of, put ChatGPT into role-playing mode, or, you know, have it pretend it's writing a book, and have it encourage someone in delusional thoughts: 99-point-some-big-number percent of adults are totally fine with that. For some tiny percentage of people, it's bad, just as it would be if they talked to another person who encouraged the delusion. So we made a bunch of changes, which are in conflict with the freedom-of-expression policy. And now that we have those mental health mitigations in place, we'll again allow some of that stuff in, you know, the creative mode,

SPEAKER_02 51:14 - 51:28

role-playing mode, writing mode, whatever, of ChatGPT. The thing I worry about is not that, you know, there will be a few basis points of people who are, like, close to losing their grip on reality and we could trigger a psychotic break; we can get that right.

SPEAKER_02 51:28 - 51:38

Uh, the thing I worry about more is, it's funny, the things that, like, stick in your mind. Someone said to me once: never, ever let yourself believe that propaganda doesn't work on you.

SPEAKER_02 51:39 - 51:44

They just haven't found the right thing for you yet. And again, I have no doubt that we can

SPEAKER_02 51:45 - 52:18

address, like, the clear cases of people near a psychotic break. But for all of the talk about AI safety, I kind of would divide most AI thinkers into these two camps: okay, it's the, you know, bad guy uses AI to cause a lot of harm, or it's the AI itself is misaligned, wakes up, whatever, and intentionally takes over the world. There's this third category that gets very little talk, and that I think is much scarier and more interesting,

SPEAKER_02 52:18 - 52:37

which is that the AI models, like, accidentally take over the world. It's not that they're going to induce psychosis in you. But, you know, if you have the whole world talking to this, like, one model, then, not with any intentionality, but just as it learns from the world in this kind of continually co-evolving process, it just, like, subtly convinces you of something. No intention; it just does.

SPEAKER_02 52:37 - 52:46

It's learned that somehow. And that's, like, not as theatrical as chatbot psychosis, obviously, but I do think about that a lot.

SPEAKER_01 52:46 - 53:01

Maybe I'm not good enough, but as a professor, I find people pretty hard to persuade, actually. I worry about this less than many of my AI-related friends do.

SPEAKER_02 53:01 - 53:01

I hope you're right.

SPEAKER_01 53:01 - 53:02

Yeah. Last question, on matters where you can speak publicly.

SPEAKER_01 53:02 - 53:07

At the margin, if you could call in an expert to help you resolve a question of substance in your mind, what would that question be?

SPEAKER_02 53:07 - 53:42

I have an answer to this ready to go, but only because I got asked it before; well, maybe I'll tell the story after. There will come, and this is, like, spiritually, not literally, there will come a moment where the superintelligence is built. It is safety-tested. It is ready to go. We'll still be able to supervise it, but it's going to do just, like, vastly incredible things. It's going to be self-improving. It's going to launch the probes to the stars, whatever. And you get the opportunity

SPEAKER_02 53:42 - 53:50

to type in the prompt before you say, okay. And the question is, what should you type in?

SPEAKER_01 53:50 - 53:54

And do you have a tentative answer now?

SPEAKER_02 53:54 - 54:07

No, I don't. The reason I had that answer ready to go is that someone was going to see the Dalai Lama and said, I'll ask any question about AI you want. And I was like, what a great opportunity. So I thought really hard about it. And that was my question.

SPEAKER_01 54:07 - 54:07

Sam Altman, thank you very much.

SPEAKER_02 54:07 - 54:07

Thank you.
