Sam Altman Shows Me GPT 5... And What's Next

Explore the future of AI with OpenAI CEO Sam Altman, discussing GPT-5's capabilities, superintelligence, scientific breakthroughs, and the societal impact of rapid technological advancement.


Category

Sam Altman

Tags

AI, OpenAI, GPT-5, Superintelligence, Future of Technology, Artificial Intelligence, Sam Altman, Tech Trends

Full Transcription

SPEAKER_03 00:00 - 00:04

This is like a crazy amount of power for one piece of technology, and it's happened to us so fast.

SPEAKER_03 00:04 - 00:05

You just launched GPT-5.

SPEAKER_03 00:06 - 00:07

A kid born today will never be smarter than AI.

SPEAKER_02 00:07 - 00:10

How do we figure out what's real and what's not real?

SPEAKER_03 00:10 - 00:12

We haven't put a sex bot avatar in GPT yet.

SPEAKER_02 00:13 - 00:15

Superintelligence. What does that actually mean?

SPEAKER_03 00:16 - 00:17

This thing is remarkable.

SPEAKER_02 00:19 - 00:23

I'm about to interview Sam Altman, the CEO of OpenAI.

SPEAKER_01 00:23 - 00:24

OpenAI is...

SPEAKER_01 00:25 - 00:26

Reshaping industries.

SPEAKER_01 00:26 - 00:28

Dude's a straight-up tech lord, let's be honest.

SPEAKER_02 00:28 - 00:34

Right now, they're trying to build a super intelligence that could far exceed humans in almost every field.

SPEAKER_02 00:35 - 00:38

And they've just released their most powerful model yet.

SPEAKER_02 00:38 - 00:41

Just a couple years ago, that would have sounded like science fiction.

SPEAKER_02 00:41 - 00:42

Not anymore.

SPEAKER_02 00:42 - 00:43

In fact, they're not alone.

SPEAKER_02 00:44 - 00:48

We're in the middle of the highest stakes global race any of us have ever seen.

SPEAKER_02 00:48 - 00:53

Hundreds of billions of dollars and an unbelievable amount of human work.

SPEAKER_02 00:53 - 00:55

This is a profound moment.

SPEAKER_02 00:55 - 00:58

Most people never live through a technological shift like this.

SPEAKER_02 00:58 - 01:01

And it's happening all around you and me right now.

SPEAKER_02 01:01 - 01:08

So, in this episode, I want to try to time travel with Sam Altman into the future that he's trying to build.

SPEAKER_02 01:08 - 01:14

To see what it looks like so that you and I can really understand what's coming.

SPEAKER_02 01:14 - 01:16

Welcome to Huge Conversations.

SPEAKER_03 01:17 - 01:25

Thank you.

SPEAKER_02 01:25 - 01:25

Thanks for doing this.

SPEAKER_03 01:25 - 01:26

Absolutely.

SPEAKER_02 01:26 - 01:28

So, before we dive in, I'd love to tell you my goal here.

UNKNOWN 01:29 - 01:29

Okay.

SPEAKER_02 01:29 - 01:35

I'm not going to ask you about valuation or AI talent wars or fundraising or anything like that.

SPEAKER_02 01:35 - 01:37

I think that's all very well covered elsewhere.

SPEAKER_03 01:37 - 01:38

It does seem like it.

SPEAKER_02 01:38 - 01:44

Our big goal on this show is to cover how we can use science and tech to make the future better.

SPEAKER_02 01:44 - 01:51

And the reason that we do all of that is because we really believe that if people see those better futures, they can then help build them.

SPEAKER_02 01:52 - 02:02

So, my goal here is to try my best to time travel with you into different moments in the future that you're trying to build and see what it looks like.

SPEAKER_03 02:02 - 02:03

Awesome.

SPEAKER_02 02:03 - 02:05

Starting with what you just announced.

SPEAKER_02 02:06 - 02:13

You recently said, surprisingly recently, that GPT-4 was the dumbest model any of us will ever have to use again.

SPEAKER_02 02:13 - 02:26

But GPT-4 can already perform better than 90% of humans at the SAT and the LSAT and the GRE, and it can pass coding exams and sommelier exams and medical licensing.

SPEAKER_02 02:26 - 02:29

And now you just launched GPT-5.

SPEAKER_02 02:29 - 02:32

What can GPT-5 do that GPT-4 can't?

SPEAKER_03 02:32 - 02:37

First of all, one important takeaway is you can have an AI system that can do all those amazing things you just said.

SPEAKER_03 02:37 - 02:45

And it clearly does not replicate a lot of what humans are good at doing, which I think says something about the value of SAT tests or whatever else.

SPEAKER_03 02:45 - 03:01

But I think if we were having this conversation the day of the GPT-4 launch and we told you how GPT-4 did at those things, you would have said, oh, man, this is going to have huge impacts, and some negative impacts, on what it means for a bunch of jobs or what people are going to do.

SPEAKER_03 03:01 - 03:06

And, you know, there are a bunch of positive impacts that you might have predicted that haven't yet come true.

SPEAKER_03 03:07 - 03:18

And so there's something about the way that these models are good that does not capture a lot of other things that we need people to do or care about people doing.

SPEAKER_03 03:18 - 03:22

And I suspect that same thing is going to happen again with GPT-5.

SPEAKER_03 03:22 - 03:25

People are going to be blown away by what it does.

SPEAKER_03 03:26 - 03:27

It's really good at a lot of things.

SPEAKER_03 03:27 - 03:31

And then they will find that they want it to do even more.

SPEAKER_03 03:32 - 03:34

People will use it for all sorts of incredible things.

SPEAKER_03 03:34 - 03:41

It will transform a lot of knowledge work, a lot of the way we learn, a lot of the way we create.

SPEAKER_03 03:42 - 03:48

But we, people, society, will co-evolve with it to expect more with better tools.

SPEAKER_03 03:49 - 03:54

So, yeah, like I think this model is quite remarkable in many ways, quite limited in others.

SPEAKER_03 03:54 - 04:14

But for, you know, three-minute, five-minute, one-hour tasks that an expert in a field could maybe do or maybe struggle with, the fact that you have in your pocket one piece of software that can do all of these things is really amazing.

SPEAKER_03 04:14 - 04:23

I think this is, like, unprecedented at any point in human history, that technology has improved this much this fast.

SPEAKER_03 04:23 - 04:28

And the fact that we have this tool now, you know, we're like living through it and we're kind of adjusting step by step.

SPEAKER_03 04:28 - 04:34

But if we could go back in time five or 10 years and say this thing was coming, we would be like, probably not.

SPEAKER_02 04:35 - 04:37

Let's assume that people haven't seen the headlines.

SPEAKER_02 04:38 - 04:41

What are the top line specific things that you're excited about?

SPEAKER_02 04:41 - 04:45

And also the things that you seem to be caveating, the things that maybe we shouldn't expect it to do.

SPEAKER_03 04:48 - 05:02

The thing that I am most excited about is this is a model for the first time where I feel like I can ask kind of any hard scientific or technical question and get a pretty good answer.

SPEAKER_03 05:03 - 05:05

And I'll give a fun example, actually.

SPEAKER_03 05:06 - 05:13

When I was in junior high, or maybe it was ninth grade, I got a TI-83, this old graphing calculator.

SPEAKER_03 05:13 - 05:18

And I spent so long making this game called Snake.

SPEAKER_03 05:19 - 05:21

It was a very popular game with kids in my school.

SPEAKER_03 05:21 - 05:27

And I was, like, probably proud when it was done, but programming on a TI-83 was extremely painful.

SPEAKER_03 05:27 - 05:28

It took a long time.

SPEAKER_03 05:28 - 05:30

It was really hard to like debug and whatever.

SPEAKER_03 05:31 - 05:37

And on a whim with an early copy of GPT-5, I was like, I wonder if it can make a TI-83 style game of Snake.

SPEAKER_03 05:38 - 05:40

And of course, it did that perfectly in like seven seconds.

SPEAKER_03 05:40 - 05:48

And then I was like, okay, would my, like, 11-year-old self think this was cool, or, you know, am I missing something from the process?

SPEAKER_03 05:48 - 05:52

And I had like three seconds of wondering like, oh, is this good or bad?

SPEAKER_03 05:52 - 05:57

And then I immediately said, actually, now I'm missing this game.

SPEAKER_03 05:57 - 05:59

I have this idea for a crazy new feature.

SPEAKER_03 05:59 - 05:59

Let me type it in.

SPEAKER_03 05:59 - 06:00

It implements it.

SPEAKER_03 06:00 - 06:02

And it just, the game live updates.

SPEAKER_03 06:02 - 06:04

And I'm like, actually, I'd like it to look this way.

SPEAKER_03 06:04 - 06:05

Actually, I'd like to do this thing.

SPEAKER_03 06:05 - 06:14

And I had this experience that reminded me of being, like, 11 and programming again, where I was just like, I want to try this.

SPEAKER_03 06:14 - 06:15

Now I have this idea.

SPEAKER_03 06:15 - 06:22

But I could do it so fast and I could like express ideas and try things and play with things in such real time.

SPEAKER_03 06:22 - 06:29

I was like, oh man, you know, I was worried for a second about kids like missing the struggle of learning to program in this sort of stone age way.

SPEAKER_03 06:29 - 06:41

And now I'm just thrilled for them, because of the way that people will be able to create with these new tools, the speed with which you can bring ideas to life. That's pretty amazing.

SPEAKER_03 06:41 - 06:50

So this idea that GPT-5 can just not only like answer all these hard questions for you, but really create like on-demand, almost instantaneous software.

SPEAKER_03 06:51 - 06:57

That's, I think that's going to be one of the defining elements of the GPT-5 era in a way that did not exist with GPT-4.
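
A minimal sketch of the fast describe-and-rebuild loop he's describing, using the OpenAI Python SDK; the model name, prompts, and hot-reload step are illustrative assumptions, not what he actually ran.

```python
# Sketch of the "type an idea, the game live-updates" loop described above.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

history = [{"role": "system",
            "content": "You write complete, runnable Python programs. "
                       "Reply with only the full updated source code."}]
code = ""

for request in [
    "Write a TI-83-style Snake game in Python using the curses library.",
    "Make the snake speed up every time it eats food.",
    "Add a two-player mode on the same keyboard.",
]:
    history.append({"role": "user", "content": request})
    resp = client.chat.completions.create(model="gpt-5",  # model name assumed
                                          messages=history)
    code = resp.choices[0].message.content
    history.append({"role": "assistant", "content": code})
    # A real tool would hot-reload `code` here, so the game "live updates"
    # after each request, which is the loop described in the anecdote.
```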

SPEAKER_02 06:57 - 07:03

As you're talking about that, I find myself thinking about a concept in weightlifting of time under tension.

SPEAKER_02 07:04 - 07:09

And for those who don't know, it's, you can squat a hundred pounds in three seconds or you can squat a hundred pounds in 30.

SPEAKER_02 07:09 - 07:11

You gain a lot more by squatting it in 30.

SPEAKER_02 07:12 - 07:21

And when I think about our creative process and when I've felt most, like I've done my best work, it has required an enormous amount of cognitive time under tension.

SPEAKER_02 07:22 - 07:25

And I think that cognitive time under tension is so important.

SPEAKER_02 07:26 - 07:33

And it's ironic almost because these tools have taken enormous cognitive time under tension to develop.

SPEAKER_02 07:33 - 07:42

But in some ways, I do think people might say that people are using them as an escape hatch for thinking, maybe.

SPEAKER_02 07:42 - 07:48

Now, you might say, yeah, but we did that with the calculator and we just moved on to harder math problems.

SPEAKER_02 07:48 - 07:52

Do you feel like there's something different happening here?

SPEAKER_02 07:52 - 07:53

How do you think about this?

SPEAKER_03 07:54 - 07:58

It's different. I mean, there are some people who are clearly using ChatGPT not to think.

SPEAKER_03 07:58 - 08:02

And there are some people who are using it to think more than they ever have before.

SPEAKER_03 08:04 - 08:12

I am hopeful that we will be able to build the tool in a way that encourages more people to stretch their brain with it a little more and be able to do more.

SPEAKER_03 08:12 - 08:15

And I think that like, you know, society is a competitive place.

SPEAKER_03 08:16 - 08:20

Like if you give people new tools, in theory, maybe people just work less.

SPEAKER_03 08:20 - 08:25

But in practice, it seems like people work ever harder and the expectations of people just go up.

SPEAKER_03 08:25 - 08:37

So my guess is that, like other tools or other pieces of technology, some people will do more and some people will do less.

SPEAKER_03 08:37 - 08:45

But certainly, for the people who want to use ChatGPT to increase their cognitive time under tension, they are really able to.

SPEAKER_03 08:45 - 08:52

And I take a lot of inspiration from what, like, the top 5% of most engaged users do with ChatGPT.

SPEAKER_03 08:52 - 08:57

Like it's really amazing how much people are learning and doing and, you know, outputting.

SPEAKER_02 08:58 - 09:03

So, I've only had GPT-5 for a couple hours, and I've been playing with it.

SPEAKER_03 09:03 - 09:03

What do you think so far?

SPEAKER_02 09:04 - 09:07

I'm, I'm just learning how to interact with it.

SPEAKER_02 09:07 - 09:13

I mean, part of the interesting thing is I feel like I just caught up on how to use GPT-4 and now I'm trying to learn how to use GPT-5.

SPEAKER_02 09:13 - 09:22

I'm curious what the specific tasks that you've found most interesting are, because I imagine you've been using it for a while now.

SPEAKER_03 09:22 - 09:25

I have been most impressed by the coding tasks.

SPEAKER_03 09:25 - 09:33

I mean, there's a lot of other things it's really good at, but this idea that the AI can write software for anything.

SPEAKER_03 09:34 - 09:41

And that means that you can express ideas in new ways, and the AI can do very advanced things.

SPEAKER_03 09:41 - 09:50

In some sense you could ask GPT-4 anything, but because GPT-5 is so good at programming, it feels like it can do anything.

SPEAKER_03 09:51 - 09:54

Of course it can't do things in the physical world, but it can get a computer to do very complex things.

SPEAKER_03 09:55 - 10:02

And software is this super powerful, you know, way to like control some stuff and actually do some things.

SPEAKER_03 10:02 - 10:06

So that, that for me has been the most striking.

SPEAKER_03 10:07 - 10:09

It's also gotten much better at writing.

SPEAKER_03 10:09 - 10:16

So this is like, there's this whole thing of AI slop, like AI writes in this kind of like quite annoying way.

SPEAKER_03 10:17 - 10:19

And we still have the em dashes in GPT-5.

SPEAKER_03 10:19 - 10:26

A lot of people like the em dashes, but the writing quality of GPT-5 has gotten much better.

SPEAKER_03 10:26 - 10:27

We still have a long way to go.

SPEAKER_03 10:27 - 10:36

We want to improve it more, but a thing we've heard a lot from people inside of OpenAI is that, man, they started using GPT-5.

SPEAKER_03 10:36 - 10:42

They knew it was better on all the metrics, but there's this like nuanced quality they can't quite articulate.

SPEAKER_03 10:42 - 10:45

But then when they have to go back to GPT-4 to test something, that feels terrible.

SPEAKER_03 10:45 - 10:51

And I, I don't know exactly what the cause of that is, but I suspect part of it is the writing feels so much more natural and better.

SPEAKER_02 10:52 - 10:59

I, in preparation for this interview, reached out to a couple other leaders in AI and technology and gathered a couple of questions for you.

SPEAKER_01 10:59 - 10:59

Okay.

SPEAKER_02 11:00 - 11:04

So this next question is from Stripe CEO Patrick Collison.

SPEAKER_01 11:04 - 11:05

This will be a good one, I'm sure.

SPEAKER_02 11:05 - 11:06

I'm going to read this verbatim.

SPEAKER_02 11:06 - 11:09

It's about the next stage.

SPEAKER_02 11:10 - 11:11

What, what comes after GPT-5?

SPEAKER_02 11:12 - 11:16

In which year do you think a large language model will make a significant scientific discovery?

SPEAKER_02 11:16 - 11:19

And what's missing such that it hasn't happened yet?

SPEAKER_02 11:19 - 11:23

He caveated here that we should leave math and special case models like AlphaFold aside.

SPEAKER_02 11:23 - 11:27

He's specifically asking about fully general purpose models like the GPT series.

SPEAKER_03 11:27 - 11:34

I would say most people will agree that that happens at some point over the next two years, but the definition of significant matters a lot.

SPEAKER_03 11:34 - 11:45

And so some people's "significant" might happen, you know, in early 2026; some people's, maybe, not until late 2026; maybe some people's not until late 2027.

SPEAKER_03 11:45 - 11:52

But I would bet that by late 2027, most people will agree that there has been an AI-driven significant new discovery.

SPEAKER_03 11:52 - 11:59

And the thing that I think is missing is just the kind of cognitive power of these models.

SPEAKER_03 11:59 - 12:09

A framework that one of the researchers said to me that I really liked is, you know, a year ago, we could do well on like a high school, like a basic high school math competition.

SPEAKER_03 12:09 - 12:13

Problems that might take a professional mathematician seconds to a few minutes.

SPEAKER_03 12:13 - 12:15

We very recently got an IMO gold medal.

SPEAKER_03 12:15 - 12:18

That is a crazy difficult, like.

SPEAKER_02 12:18 - 12:20

Could you explain what that means for people?

SPEAKER_03 12:20 - 12:22

It's, like, the hardest competition math test.

SPEAKER_03 12:22 - 12:26

This is something that only the very, very top slice of the world can do.

SPEAKER_03 12:26 - 12:29

Many, many professional mathematicians wouldn't solve a single problem.

SPEAKER_03 12:30 - 12:31

And we scored at the top level.

SPEAKER_03 12:32 - 12:37

Now, there are some humans that got an even higher score in the gold medal range, but, like, this is a crazy accomplishment.

SPEAKER_03 12:37 - 12:42

And it's, like, six problems over nine hours.

SPEAKER_03 12:42 - 12:44

So hour and a half per problem for a great mathematician.

SPEAKER_03 12:44 - 12:55

So we've gone from a few seconds to a few minutes to an hour and a half. Maybe to prove a significant new mathematical theorem is, like, a thousand hours of work for a top person in the world.

SPEAKER_03 12:56 - 12:59

So we've got to make, you know, another significant gain.

SPEAKER_03 13:00 - 13:03

But if you look at our trajectory, you can say like, okay, we're getting to that.

SPEAKER_03 13:03 - 13:04

We have a path to get to that time horizon.

SPEAKER_03 13:05 - 13:07

We just need to keep scaling the models.
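
A quick worked version of the gap he's describing, using the figures quoted in this exchange (the arithmetic is mine, not his):

```latex
% From IMO-scale problems (~1.5 hours each for a top mathematician)
% to theorem-scale work (quoted above as ~1000 hours):
\[
  \frac{1000\ \text{hours}}{1.5\ \text{hours}} \approx 667 \approx 2^{9.4}
\]
% i.e. roughly three more orders of magnitude of task horizon, about nine
% to ten doublings, on top of the seconds-to-ninety-minutes jump so far.
```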

SPEAKER_02 13:08 - 13:12

The long-term future that you've described is superintelligence.

SPEAKER_02 13:12 - 13:14

What does that actually mean?

SPEAKER_02 13:14 - 13:16

And how will we know when we've hit it?

SPEAKER_03 13:17 - 13:26

If we had a system that could do better research, better AI research, than, say, the whole OpenAI research team.

SPEAKER_03 13:26 - 13:35

Like, if we said, okay, the best way we can use our GPUs is to let this AI decide what experiments we should run, smarter than, like, the whole brain trust of OpenAI.

SPEAKER_03 13:35 - 13:40

And if that same, to make a personal example, if that same system could do a better job running open AI than I could.

SPEAKER_03 13:41 - 13:45

So you have something that's like, you know, better than the best researchers, better than me at this, better than other people at their jobs.

SPEAKER_03 13:45 - 13:46

That would feel like superintelligence to me.

SPEAKER_02 13:48 - 13:51

That is a sentence that would have sounded like science fiction just a couple of years ago.

SPEAKER_03 13:51 - 13:54

It still kind of does, but you can, like, see it through the fog.

SPEAKER_02 13:54 - 13:54

Yes.

SPEAKER_02 13:55 - 14:08

And so one of the steps, it sounds like you're saying, on that path is this moment of scientific discovery, of asking better questions, of grappling with things in a way that expert level humans do to come up with new discoveries.

SPEAKER_02 14:09 - 14:19

One of the things that keeps knocking around in my head is if we were in 1899, say, and we were able to give it all of physics up until that point and play it out a little bit, nothing further than that.

SPEAKER_02 14:19 - 14:22

Like at what point would one of these systems come up with general relativity?

SPEAKER_03 14:24 - 14:41

The interesting question is, if we think about that going forward from where we are now: if we never got another piece of physics data, do we expect that a really good superintelligence could just think super hard about our existing data?

SPEAKER_03 14:42 - 14:48

And maybe, say, solve high-energy physics with no new particle accelerator? Or does it need to build a new one and design new experiments?

SPEAKER_03 14:49 - 14:50

Obviously, we don't know the answer to that.

SPEAKER_03 14:50 - 14:52

Different people have different speculation.

SPEAKER_03 14:54 - 15:04

But I suspect we will find that for a lot of science, it's not enough to just think harder about data we have, but we will need to build new instruments, conduct new experiments, and that will take some time.

SPEAKER_03 15:05 - 15:08

Like, that's the thing: the real world is slow and messy and, you know, whatever.

SPEAKER_03 15:08 - 15:14

So I'm sure we could make some more progress just by thinking harder about the current scientific data we have in the world.

SPEAKER_03 15:15 - 15:23

But my guess is to make the big progress, we will also need to build new machines and run new experiments, and there will be some slowdown built into that.

SPEAKER_02 15:23 - 15:31

Another way of thinking about this is, AI systems now are just incredibly good at answering almost any question.

SPEAKER_02 15:32 - 15:39

But maybe one of the things we're saying is, it's another leap yet, and what Patrick's question is getting at is to ask the better questions.

SPEAKER_03 15:40 - 15:50

Or, if we go back to this kind of timeline question, we could maybe say that AI systems are superhuman on one-minute tasks, but have a long way to go on the thousand-hour tasks.

SPEAKER_03 15:51 - 16:01

And there's a dimension of human intelligence that seems very different than AI systems when it comes to these long-horizon tasks.

SPEAKER_03 16:01 - 16:04

Now, I think we will figure it out, but today it's a real weak point.

SPEAKER_02 16:04 - 16:07

We've talked about where we are now with GPT-5.

SPEAKER_02 16:07 - 16:10

We talked about the end goal or future goal of superintelligence.

SPEAKER_02 16:10 - 16:17

One of the questions that I have, of course, is what does it look like to walk through the fog between the two?

SPEAKER_02 16:18 - 16:21

The next question is from NVIDIA CEO Jensen Huang.

SPEAKER_02 16:21 - 16:23

I'm going to read this verbatim.

SPEAKER_02 16:24 - 16:25

Fact is what is.

SPEAKER_02 16:26 - 16:27

Truth is what it means.

SPEAKER_02 16:27 - 16:29

So facts are objective.

SPEAKER_02 16:29 - 16:30

Truths are personal.

SPEAKER_02 16:30 - 16:33

They depend on perspective, culture, values, beliefs, context.

SPEAKER_02 16:34 - 16:36

One AI can learn and know the facts.

SPEAKER_02 16:36 - 16:41

But how does one AI know the truth for everyone in every country and every background?

SPEAKER_03 16:42 - 16:45

I'm going to accept as axioms those definitions.

SPEAKER_03 16:45 - 16:49

I'm not sure if I agree with them, but in the interest of time, I will just take them.

SPEAKER_03 16:49 - 16:50

I will take those definitions and go with it.

SPEAKER_03 16:50 - 16:55

But I have been surprised.

SPEAKER_03 16:55 - 17:04

I think many other people have been surprised, too, about how fluent AI is at adapting to different cultural contexts and individuals.

SPEAKER_03 17:04 - 17:11

One of my favorite features that we have ever launched in ChatGPT is the sort of enhanced memory that came out earlier this year.

SPEAKER_03 17:11 - 17:21

It really feels like my ChatGPT gets to know me and what I care about and my life experiences and background and the things that have led me to where I am.

SPEAKER_03 17:21 - 17:29

A friend of mine, who's been a huge ChatGPT user, has put a lot of his life into all these conversations.

SPEAKER_03 17:30 - 17:36

He recently gave his ChatGPT a bunch of personality tests and asked it to answer as if it were him.

SPEAKER_03 17:36 - 17:40

And it got the same score as he actually got, even though he'd never really talked about his personality.

SPEAKER_03 17:41 - 17:48

And my ChatGPT has really learned over the years of me talking to it about my culture, my values, my life.

SPEAKER_03 17:49 - 17:58

And I sometimes will use, like, a free account just to see what it's like without any of my history.

SPEAKER_03 17:58 - 17:59

And it feels really, really different.

SPEAKER_03 17:59 - 18:05

So, yeah, I think we've all been surprised on the upside of how good AI is at learning this and adapting.

SPEAKER_02 18:06 - 18:14

And so, do you envision in many different parts of the world people using different AIs with different sort of cultural norms and contexts?

SPEAKER_02 18:14 - 18:15

Is that what we're saying?

SPEAKER_03 18:15 - 18:23

I think that everyone will use, like, the same fundamental model, but there will be context provided to that model that will make it behave in a sort of personalized way.

SPEAKER_03 18:23 - 18:24

The way they want, or their community wants, whatever.

SPEAKER_02 18:25 - 18:35

I think, when we're getting at this idea of facts and truth, it brings me to... this seems like a good moment for our first time travel trip.

SPEAKER_02 18:35 - 18:36

Okay, we're going to 2030.

SPEAKER_02 18:37 - 18:40

This is a serious question, but I want to ask it with a lighthearted example.

SPEAKER_02 18:41 - 18:43

Have you seen the bunnies that are jumping on the trampoline?

SPEAKER_03 18:43 - 18:43

Yes.

SPEAKER_02 18:44 - 18:50

So, for those who haven't seen it, it looks like backyard footage of bunnies enjoying jumping on a trampoline.

SPEAKER_02 18:50 - 18:53

And this has gone incredibly viral recently.

SPEAKER_02 18:53 - 18:55

There's a human-made song about it.

SPEAKER_02 18:55 - 18:55

It's a whole thing.

SPEAKER_00 18:56 - 18:59

There were bunnies that were jumping on a trampoline.

SPEAKER_02 18:59 - 19:10

And I think the reason why people reacted so strongly to it, it was maybe the first time people saw a video, enjoyed it, and then later found out that it was completely AI-generated.

SPEAKER_02 19:11 - 19:22

In this time travel trip, if we imagine in 2030 we are teenagers and we're scrolling whatever teenagers are scrolling in 2030, how do we figure out what's real and what's not real?

SPEAKER_03 19:22 - 19:28

I mean, I can give all sorts of literal answers to that question.

SPEAKER_03 19:28 - 19:34

We could be cryptographically signing stuff, and we could decide whose signatures we trust, whether they actually filmed something or not.
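
A minimal sketch of that signing idea, assuming Ed25519 from the Python cryptography package; real provenance efforts (C2PA, for example) are far more elaborate than this.

```python
# Sketch of signed media provenance: a camera (or publisher) signs the bytes
# it captured, and a viewer verifies against a key they have chosen to trust.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture time: the device holds a private key and signs the footage.
device_key = Ed25519PrivateKey.generate()
footage = b"...raw video bytes..."
signature = device_key.sign(footage)

# At viewing time: we have decided to trust this device's public key.
trusted_pubkey = device_key.public_key()
try:
    trusted_pubkey.verify(signature, footage)
    print("Valid: unchanged since capture by a key we trust.")
except InvalidSignature:
    print("Not signed by a trusted key, or altered after capture.")
```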

SPEAKER_03 19:34 - 19:41

But my sense is what's going to happen is it's just going to gradually converge.

SPEAKER_03 19:41 - 19:48

You know, even like a photo you take out of your iPhone today, it's like mostly real, but it's a little not.

SPEAKER_03 19:48 - 19:53

There's, like, some AI thing running there in a way you don't understand, making it look a little bit better.

SPEAKER_03 19:53 - 19:55

And sometimes you see these weird things where it...

SPEAKER_02 19:55 - 19:55

The moon.

SPEAKER_03 19:56 - 19:57

Yeah, yeah, yeah, yeah.

SPEAKER_03 19:57 - 20:06

But there's like a lot of processing power between the photons captured by that camera sensor and the image you eventually see.

SPEAKER_03 20:08 - 20:17

And you've decided, or most people have decided, that it's real enough, but we've accepted some gradual move from when it was, like, photons hitting the film in a camera.

SPEAKER_03 20:17 - 20:27

And, you know, if you go look at some video on TikTok, there's probably all sorts of video editing tools being used to make it better than real.

SPEAKER_03 20:27 - 20:28

Look, yeah, exactly.

SPEAKER_03 20:28 - 20:34

Or, you know, whole scenes are completely generated, or sometimes whole videos are generated, like those bunnies on that trampoline.

SPEAKER_03 20:34 - 20:44

Right. And I think that the threshold for how real something has to be to be considered real will just keep moving.

SPEAKER_02 20:45 - 20:47

So it's sort of an education question.

SPEAKER_02 20:47 - 20:50

It's a people will...

SPEAKER_03 20:51 - 20:55

Yeah, I mean, media is always like a little bit real and a little bit not real.

SPEAKER_03 20:55 - 20:57

Like, you know, we watch like a sci-fi movie.

SPEAKER_03 20:57 - 20:59

We know that didn't really happen.

SPEAKER_03 20:59 - 21:03

And you watch like someone's like beautiful photo of themselves on vacation on Instagram.

SPEAKER_03 21:03 - 21:09

Like, okay, maybe that photo was like literally taken, but you know, there's like tons of tourists in line for the same photo and that's like left out of it.

SPEAKER_03 21:09 - 21:11

And I think we just accept that.

SPEAKER_03 21:11 - 21:16

Now, certainly a higher percentage of media will be, and will feel, not real.

SPEAKER_03 21:17 - 21:19

But I think that's been a long-term trend anyway.

SPEAKER_02 21:20 - 21:21

We're going to jump again.

SPEAKER_00 21:21 - 21:22

Okay.

SPEAKER_02 21:22 - 21:23

2035.

SPEAKER_02 21:23 - 21:26

We're graduating from college, you and me.

SPEAKER_02 21:26 - 21:33

There are some leaders in the AI space that have said that in five years, half of the entry-level white-collar workforce will be replaced by AI.

SPEAKER_02 21:34 - 21:36

So we're college graduates in five years.

SPEAKER_02 21:36 - 21:38

What do you hope the world looks like for us?

SPEAKER_02 21:39 - 21:42

I think there's been a lot of talk about how AI might cause job displacement.

SPEAKER_02 21:42 - 21:51

But I'm also curious, I have a job that nobody would have thought we could have, you know, a decade ago.

SPEAKER_02 21:51 - 21:55

What are the things that we could look ahead to if we're thinking about being a college student?

SPEAKER_03 21:55 - 22:08

I mean, in 2035, that like graduating college student, if they still go to college at all, could very well be like leaving on a mission to explore the solar system on a spaceship in some kind of completely new, exciting, super well-paid, super interesting job.

SPEAKER_03 22:08 - 22:15

And feeling so bad for you and me that we had to do this kind of really boring old work, and everything is just better.

SPEAKER_03 22:15 - 22:19

Like I, I, 10 years feels very hard to imagine at this point.

SPEAKER_02 22:19 - 22:20

Because it's too far?

SPEAKER_03 22:20 - 22:21

It's too far.

SPEAKER_03 22:21 - 22:25

If you compound the current rate of change for 10 more years, it's probably something we can't even imagine.

SPEAKER_02 22:25 - 22:27

I might need to change my time travel trips.

SPEAKER_03 22:27 - 22:32

10 years, like, I mean, I think now would be really hard to imagine 10 years ago.

SPEAKER_02 22:33 - 22:33

Yeah.

SPEAKER_03 22:33 - 22:37

But I think 10 years forward will be even much harder, much more different.

SPEAKER_02 22:38 - 22:40

So let's make it five years.

SPEAKER_02 22:40 - 22:41

We're still going to 2030.

SPEAKER_02 22:42 - 22:47

I'm curious what you think the pretty short-term impacts of this will be for young people.

SPEAKER_02 22:47 - 22:56

I mean, this "half of entry-level jobs replaced by AI" makes it sound like a very different world that they would be entering than the one that I did.

SPEAKER_03 22:57 - 23:05

I think it's totally true that some classes of jobs will totally go away.

SPEAKER_03 23:05 - 23:06

This always happens.

SPEAKER_03 23:06 - 23:08

And young people are the best at adapting to this.

SPEAKER_03 23:08 - 23:21

I'm more worried about what it means, not for the, like, 22-year-old, but for the 62-year-old that doesn't want to go retrain or reskill or whatever the politicians call it, which no one actually wants but politicians most of the time.

SPEAKER_03 23:21 - 23:27

If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history.

SPEAKER_02 23:27 - 23:27

Why?

SPEAKER_03 23:28 - 23:34

Because there's never been a more amazing time to go create something totally new, to go invent something, to start a company, whatever it is.

SPEAKER_03 23:35 - 23:46

I think it is probably possible now to start a one-person company that will go on to be worth, like, more than a billion dollars and, more importantly, deliver an amazing product and service to the world.

SPEAKER_03 23:46 - 23:48

And that is like a crazy thing.

SPEAKER_03 23:48 - 23:53

You have access to tools that can let you do what used to take teams of hundreds.

SPEAKER_03 23:54 - 23:58

And you just have to like, you know, learn how to use these tools and come up with a great idea.

SPEAKER_03 23:58 - 24:01

And it's like quite amazing.

SPEAKER_02 24:01 - 24:11

If we take a step back, I think the most important thing that this audience could hear from you on this optimistic show is in two parts.

SPEAKER_02 24:12 - 24:20

First, there's tactically, how are you actually trying to build the world's most powerful intelligence?

SPEAKER_02 24:20 - 24:22

And what are the rate limiting factors to doing that?

SPEAKER_02 24:22 - 24:29

And then philosophically, how are you and others working on building that technology in a way that really helps and not hurts people?

SPEAKER_02 24:30 - 24:39

So just taking the tactical part right now, my understanding is that there are three big categories that have been limiting factors for AI.

SPEAKER_02 24:39 - 24:44

The first is compute, the second is data, and the third is algorithmic design.

SPEAKER_02 24:45 - 24:48

How do you think about each of those three categories right now?

SPEAKER_02 24:49 - 24:56

And if you were to help someone understand the next headlines that they might see, how would you help them make sense of all of this?

SPEAKER_03 24:57 - 25:02

I would say there's a fourth, too, which is figuring out the products to build.

SPEAKER_03 25:03 - 25:11

Like scientific progress on its own, not put into the hands of people, is of limited utility and doesn't sort of co-evolve with society in the same way.

SPEAKER_03 25:11 - 25:13

But if I could hit all four of those.

SPEAKER_03 25:14 - 25:19

So on the compute side, yeah, this is like the biggest infrastructure project certainly that I've ever seen.

SPEAKER_03 25:19 - 25:20

Possibly it will become the biggest.

SPEAKER_03 25:20 - 25:21

I think it will.

SPEAKER_03 25:21 - 25:23

Maybe it already is the biggest and most expensive one in human history.

SPEAKER_03 25:24 - 25:45

But the whole supply chain from making the chips and the memory and the networking gear, racking them up in servers, doing a giant construction project to build like a mega, mega data center, finding a way to get the energy, which is often a limiting piece of this and all the other components together.

SPEAKER_03 25:45 - 25:55

This is hugely complex and expensive and we're still doing this in like a sort of bespoke one-off way, although it's getting better.

SPEAKER_03 25:56 - 26:08

Like eventually we will just design a whole kind of like mega factory that takes, you know, I mean, spiritually it will be melting sand on one end and putting out fully built AI compute on the other.

SPEAKER_03 26:08 - 26:16

But we are a long way to go from that and it's an enormously complex and expensive process.

SPEAKER_03 26:18 - 26:24

We are putting a huge amount of work into building out as much compute as we can and to do it fast.

SPEAKER_03 26:25 - 26:31

And, you know, it's going to be like sad because GPT-5 is going to launch and there's going to be another big spike in demand and we're not going to be able to serve it.

SPEAKER_03 26:31 - 26:37

And it's going to be like those early GPT-4 days: the world just wants much more AI than we can currently deliver.

SPEAKER_03 26:38 - 26:41

And building more compute is an important part of doing that.

SPEAKER_03 26:41 - 26:49

That's actually what I expect to turn the majority of my attention to: how we build compute at much greater scales.

SPEAKER_03 26:49 - 26:58

So how we go from millions to tens of millions and hundreds of millions and eventually, hopefully billions of GPUs that are sort of in service of what people want to do with us.

SPEAKER_02 26:58 - 27:03

When you're thinking about it, what are the big challenges here in this category that you're going to be thinking about?

SPEAKER_03 27:03 - 27:05

We're currently most limited by energy.

SPEAKER_03 27:06 - 27:12

You know, like if you're going to run a gigawatt scale data center, it's like a gigawatt.

SPEAKER_03 27:12 - 27:13

How hard can that be to find?

SPEAKER_03 27:13 - 27:16

It's really hard to find a gigawatt of power available in short term.
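
For a rough sense of that scale, a back-of-envelope calculation of mine, assuming roughly 1 kW per accelerator once cooling and networking overhead are included:

```latex
% Assumption: ~1 kW of facility power per GPU, all overhead included.
\[
  \frac{1\,\text{GW}}{1\,\text{kW per GPU}}
  = \frac{10^{9}\,\text{W}}{10^{3}\,\text{W}}
  = 10^{6}\ \text{GPUs}
\]
% A single gigawatt-scale site is on the order of a million GPUs,
% drawing continuous power comparable to a mid-sized city.
```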

SPEAKER_03 27:16 - 27:24

We're also very much limited by the processing chips and the memory chips, how you package these all together, how you build the racks.

SPEAKER_03 27:24 - 27:30

And then there's like a list of other things that are, you know, there's like permits, there's construction work.

SPEAKER_03 27:31 - 27:34

But again, the goal here will be to really automate this.

SPEAKER_03 27:35 - 27:38

Once we get some of those robots built, they can help us automate it even more.

SPEAKER_03 27:38 - 27:44

But just, you know, like a world where you can basically pour in money and get out a pre-built data center.

SPEAKER_03 27:44 - 27:48

So that'll be, that'll be a huge unlock if we can get it to work.

SPEAKER_03 27:49 - 27:50

Second category, data.

SPEAKER_03 27:51 - 27:51

Yeah.

SPEAKER_03 27:52 - 27:53

These models have gotten so smart.

SPEAKER_03 27:53 - 27:58

There was a time when we could just feed it another physics textbook and it got a little bit smarter at physics.

SPEAKER_03 27:58 - 28:03

But now, like honestly, GPT-5 understands everything in a physics textbook pretty well.

SPEAKER_03 28:04 - 28:05

We're excited about synthetic data.

SPEAKER_03 28:06 - 28:14

We're very excited about our users helping us create harder and harder tasks and environments to go off and have the system solve.

SPEAKER_03 28:14 - 28:25

But data will always be important, and we're entering a realm where the models need to learn things that don't exist in any data set yet.

SPEAKER_03 28:25 - 28:26

They have to go discover new things.

SPEAKER_03 28:27 - 28:28

So that's like a crazy new step.

SPEAKER_02 28:28 - 28:30

How do you teach a model to discover new things?

SPEAKER_03 28:30 - 28:31

Well, humans can do it.

SPEAKER_03 28:31 - 28:36

Like we can go off and come up with hypotheses and test them and get experimental results and update on what we learn.

SPEAKER_03 28:37 - 28:38

So probably the same kind of way.

SPEAKER_02 28:39 - 28:40

And then there's algorithmic design.

SPEAKER_03 28:40 - 28:40

Yeah.

SPEAKER_03 28:41 - 28:43

We've made huge progress on algorithmic design.

SPEAKER_03 28:43 - 28:53

The thing that OpenAI does best in the world is we have built this culture of repeated and big algorithmic research gains.

SPEAKER_03 28:53 - 28:57

So we kind of, you know, figured out what became the GPT paradigm.

SPEAKER_03 28:57 - 28:59

We figured out what became the reasoning paradigm.

SPEAKER_03 29:00 - 29:01

We're working on some new ones now.

SPEAKER_03 29:02 - 29:08

But it is very exciting to me to think that there are still many more orders of magnitude of algorithmic gains ahead of us.

SPEAKER_03 29:08 - 29:14

We just yesterday released a model called GPT-OSS, an open source model.

SPEAKER_03 29:14 - 29:19

It's a model that is as smart as o4-mini, which is a very smart model, and it runs locally on a laptop.
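
As a sketch of what "runs locally" looks like in practice: open-weight inference via Hugging Face transformers. The model id is an assumption based on the public release, and you need a machine with enough memory to hold the weights.

```python
# Local inference with an open-weight model via Hugging Face transformers.
# Model id assumed from the public gpt-oss release; no API calls involved.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed checkpoint id
    device_map="auto",           # use GPU if present, else CPU
)

out = generate("Explain scaling laws in one paragraph.", max_new_tokens=128)
print(out[0]["generated_text"])
```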

SPEAKER_03 29:20 - 29:22

And this blows my mind.

SPEAKER_00 29:22 - 29:22

Yeah.

SPEAKER_03 29:22 - 29:32

Like, if you had asked me a few years ago when we'd have a model of that intelligence running on a laptop, I would have said many, many years in the future.

SPEAKER_03 29:33 - 29:42

But then we found some algorithmic gains, particularly around reasoning, but also some other things that let us do a tiny model that can do this amazing thing.

SPEAKER_03 29:43 - 29:46

And, you know, those are the most fun things.

SPEAKER_03 29:46 - 29:47

That's, like, kind of the coolest part of the job.

SPEAKER_02 29:47 - 29:50

I can see you really enjoying thinking about this.

SPEAKER_02 29:51 - 30:00

I'm curious for people who don't quite know what you're talking about, who aren't familiar with how an algorithmic design would lead to a better experience that they actually use.

SPEAKER_02 30:01 - 30:03

Could you summarize the state of things right now?

SPEAKER_02 30:03 - 30:07

Like, what is it that you're thinking about when you're thinking about how fun this problem is?

SPEAKER_03 30:08 - 30:10

Let me start back in history, then I'll get to some things from today.

SPEAKER_03 30:10 - 30:27

Right. So GPT-1 was an idea at the time that was quite mocked by a lot of experts in the field, which was, can we train a model to play a little game, which is show it a bunch of words and have it guess the one that comes next in the sequence?

SPEAKER_03 30:27 - 30:28

That's called unsupervised learning.

SPEAKER_03 30:28 - 30:31

You're not really saying, like, this is a cat, this is a dog.

SPEAKER_03 30:31 - 30:32

You're just saying, here's some words, guess the next one.
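
The "guess the next word" game is easy to make concrete. Here is a toy count-based version of the same training signal, a word bigram model, nothing like a transformer, but learning from raw text with no labels at all:

```python
# Toy next-word prediction: a bigram model trained on unlabeled text.
# The data itself is the supervision, which is the point of the "game".
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the food".split()

counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1  # count which word follows which

def predict(prev_word):
    """Most frequently observed next word."""
    return counts[prev_word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' (seen twice, vs. 'mat'/'food' once each)
print(predict("cat"))  # -> 'sat' ('sat' and 'ate' are tied; first seen wins)
```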

SPEAKER_03 30:34 - 30:52

And the fact that that can go learn these very complicated concepts that can go learn all the stuff about physics and math and programming and keep predicting the word that comes next and next and next and next seemed ludicrous, magical, unlikely to work.

SPEAKER_03 30:52 - 30:54

Like, how is that all going to get encoded?

SPEAKER_03 30:55 - 30:56

And yet humans do it.

SPEAKER_03 30:56 - 31:05

You know, babies start hearing language and figure out what it means kind of largely or at least to some significant degree on their own.

SPEAKER_03 31:06 - 31:09

And so we did it.

SPEAKER_03 31:09 - 31:13

And then we also realized that if we scaled it up, it got better and better.

SPEAKER_03 31:13 - 31:16

But we had to scale over many, many orders of magnitude.

SPEAKER_03 31:16 - 31:18

So it wasn't that good in the GPT-1 days.

SPEAKER_03 31:18 - 31:19

It wasn't good at all in the GPT-1 days.

SPEAKER_03 31:19 - 31:22

And a lot of experts in the field said, oh, this is ridiculous.

SPEAKER_03 31:22 - 31:23

It's never going to work.

SPEAKER_03 31:23 - 31:24

It's not going to be robust.

SPEAKER_03 31:24 - 31:26

But we had these things called scaling laws.

SPEAKER_03 31:26 - 31:31

And we said, OK, so this gets predictably better as we increase compute, memory, data, whatever.

SPEAKER_03 31:31 - 31:40

And we can use those predictions to make decisions about how to scale this up, and do it, and get great results.

SPEAKER_03 31:41 - 31:45

And that has worked over a crazy number of orders of magnitude.
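
Scaling laws in this sense are empirical power laws: loss falls roughly as a power of compute, so it is a straight line on a log-log plot and can be extrapolated to runs you haven't done yet. A sketch with synthetic, invented numbers, shown only to illustrate the method:

```python
# Illustrative scaling-law fit: loss ≈ a * C^b with b < 0. The data points
# are synthetic, invented for this sketch; real curves come from training
# runs at increasing compute budgets.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # FLOPs (synthetic)
loss = np.array([4.0, 3.3, 2.75, 2.3, 1.95])        # eval loss (synthetic)

# A power law is linear in log-log space: log(loss) = b*log(C) + log(a).
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)
print(f"fit: loss ≈ {a:.2f} * C^({b:.3f})")

# The point of the law: predict a run you haven't done yet.
print("predicted loss at 1e23 FLOPs:", a * (1e23) ** b)
```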

SPEAKER_03 31:47 - 31:48

And it was so not obvious at the time.

SPEAKER_03 31:48 - 31:54

Like, that was, I think, the reason the world was so surprised: it seemed like such an unlikely finding.

SPEAKER_03 31:55 - 32:02

Another one was that we could use these language models with reinforcement learning where we're saying this is good, this is bad to teach it how to reason.

SPEAKER_03 32:03 - 32:07

And this led to the o1 and o3 and now the GPT-5 progress.
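
The "this is good, this is bad" signal is a scalar reward. A toy REINFORCE-style sketch of that shape of update, on a two-armed bandit rather than anything like the real training stack:

```python
# Toy "good/bad" reinforcement learning: reward up-weights whatever the
# policy did when things went well. Real reasoning RL is far more complex,
# but the core signal has this shape.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)        # preference for each candidate "answer"
p_good = [0.2, 0.8]         # hidden: answer 1 is judged good more often

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    action = rng.choice(2, p=probs)                 # sample an answer
    reward = float(rng.random() < p_good[action])   # "good" or "bad"
    grad = -probs                                   # d log pi / d logits
    grad[action] += 1.0
    logits += 0.1 * (reward - 0.5) * grad           # 0.5 = simple baseline

print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))
# -> the policy ends up heavily favoring the more-often-rewarded answer
```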

SPEAKER_03 32:09 - 32:14

And that was another thing that felt like, if it works, it's really great.

SPEAKER_03 32:14 - 32:15

But like no way this is going to work.

SPEAKER_03 32:16 - 32:16

It's too simple.

SPEAKER_03 32:17 - 32:18

And now we're on to new things.

SPEAKER_03 32:18 - 32:22

We've figured out how to make much better video models.

SPEAKER_03 32:22 - 32:30

We are we are discovering new ways to use new kinds of data and environment to kind of scale that up as well.

SPEAKER_03 32:31 - 32:37

And I think, again, you know, five, 10 years out, that's too hard to say in this field.

SPEAKER_03 32:37 - 32:40

But the next couple of years, we have very smooth, very strong scaling in front of us.

SPEAKER_02 32:40 - 32:49

I think it has become a sort of public narrative that we are on this smooth path from one to two to three to four to five to more.

SPEAKER_02 32:50 - 32:55

But it also is true behind the scenes that it's not linear like that.

SPEAKER_02 32:55 - 32:56

It's messier.

SPEAKER_02 32:57 - 33:00

Tell us a little bit about the mess before GPT-5.

SPEAKER_02 33:01 - 33:04

What were the interesting problems that you needed to solve?

SPEAKER_03 33:06 - 33:08

We did a model called Orion.

SPEAKER_03 33:08 - 33:10

We released this as GPT-4.5.

SPEAKER_03 33:10 - 33:14

And we had... we did too big of a model.

SPEAKER_03 33:14 - 33:18

It's a very cool model, but it's unwieldy to use.

SPEAKER_03 33:18 - 33:22

And we realized that for kind of some of the research we need to do on top of a model, we need a different shape.

SPEAKER_03 33:22 - 33:27

So we followed one scaling law that kept being good, without really internalizing that...

SPEAKER_03 33:28 - 33:32

There was a new, even steeper scaling law that we got better returns for compute on, which was this reasoning thing.

SPEAKER_03 33:33 - 33:36

So that was like one alley we went down and turned around.

SPEAKER_03 33:36 - 33:36

But that's fine.

SPEAKER_03 33:36 - 33:37

That's part of research.

SPEAKER_03 33:38 - 33:46

We had some problems with the way we think about our data sets as these models really have to get this big and, you know, learn from this much data.

SPEAKER_03 33:47 - 33:56

So, yeah, I think in the middle of it, in the day-to-day, you make a lot of U-turns as you try things or you have an architecture idea that doesn't work.

SPEAKER_03 33:56 - 34:04

But the aggregate, the summation of all the squiggles, has been remarkably smooth on the exponential.

SPEAKER_02 34:05 - 34:13

One of the things I always find interesting is that by the time I'm sitting here interviewing you about the thing that you just put out, you're thinking about.

SPEAKER_03 34:13 - 34:14

Two forward.

SPEAKER_02 34:14 - 34:14

Exactly.

SPEAKER_02 34:15 - 34:24

What are the things that you can share that are at least the problems that you're thinking about that I would be interviewing you about in a year if I came back?

SPEAKER_03 34:30 - 34:35

I mean, possibly you'll be asking me, like, what does it mean that this thing can go discover new science?

SPEAKER_00 34:35 - 34:36

Yeah.

SPEAKER_03 34:36 - 34:41

How is the world supposed to think about GPT-6 discovering new science now?

SPEAKER_03 34:41 - 34:45

Maybe we don't deliver that, but it feels within grasp.

SPEAKER_02 34:45 - 34:46

If you did.

SPEAKER_02 34:47 - 34:48

What would you say?

SPEAKER_02 34:48 - 34:51

What would the implications of that kind of achievement be?

SPEAKER_02 34:51 - 34:52

Imagine you do succeed.

SPEAKER_03 34:54 - 34:54

Yeah.

SPEAKER_03 34:54 - 34:56

I mean, I think the great parts will be great.

SPEAKER_03 34:56 - 35:02

The bad parts will be scary, and the bizarre parts will be, like, bizarre on the first day, and then we'll get used to them really fast.

SPEAKER_03 35:02 - 35:13

So we'll be like, oh, it's incredible that this is like being used to cure disease and be like, oh, it's extremely scary that models like this are being used to like create new biosecurity threats.

SPEAKER_03 35:14 - 35:19

And then we'll also be like, man, it's really weird to like live through watching the world speed up so much.

SPEAKER_03 35:19 - 35:32

And, you know, the economy grows so fast, and the rate of change will feel vertigo-inducing.

SPEAKER_03 35:32 - 35:44

And then, like happens with everything else, the remarkable ability of people, of humanity, to adapt to kind of any amount of change will just be like, OK, you know, this is it.

SPEAKER_03 35:45 - 35:49

But a kid born today will never be smarter than AI.

SPEAKER_03 35:50 - 35:51

Ever.

SPEAKER_03 35:51 - 36:02

And a kid born today, by the time that kid like kind of understands the way the world works, will just always be used to an incredibly fast rate of things improving and discovering new science.

SPEAKER_03 36:03 - 36:05

They will just they will never know any other world.

SPEAKER_03 36:05 - 36:06

It will seem totally natural.

SPEAKER_03 36:06 - 36:14

It will seem unthinkable and Stone Age-like that we used to use computers or phones or any kind of technology that was not way smarter than we were.

SPEAKER_03 36:14 - 36:18

You know, we will think like how bad those people of the 2020s had it.

SPEAKER_02 36:18 - 36:20

I'm thinking about having kids.

SPEAKER_03 36:20 - 36:21

You should.

SPEAKER_03 36:21 - 36:21

It's the best thing ever.

SPEAKER_02 36:21 - 36:23

I know you just had your first kid.

SPEAKER_02 36:24 - 36:31

How does what you just said affect how I should think about parenting a kid in that world?

SPEAKER_02 36:35 - 36:36

What advice would you give me?

SPEAKER_03 36:36 - 36:40

Probably nothing different from the way humans have been parenting kids for tens of thousands of years.

SPEAKER_03 36:40 - 36:47

Like love your kids, show them the world, like support them in whatever they want to do and teach them like how to be a good person.

SPEAKER_03 36:47 - 36:50

And that probably is what's going to matter.

SPEAKER_02 36:51 - 37:00

It sounds a little bit like some of the... you know, you've said a couple of things like this, that, you know, you might not go to college.

SPEAKER_02 37:01 - 37:04

There are a couple of things that you've said so far that feed into this, I think.

SPEAKER_02 37:04 - 37:07

Like, and it sounds like what you're saying is.

SPEAKER_02 37:08 - 37:18

There will be more optionality for them in the world that you envision, and therefore they will have more ability to say, I want to build this.

SPEAKER_02 37:19 - 37:21

Here's the superpowered tool that will help me do that or.

SPEAKER_03 37:21 - 37:33

Yeah, like, I want my kid to think I had a terrible, constrained life and that he has this incredible, infinite canvas of stuff to do. That is, like, the way of the world.

SPEAKER_02 37:34 - 37:38

We've said that 2035 is a little bit too far in the future to think about.

SPEAKER_02 37:38 - 37:43

This was going to be a jump to 2040, but maybe we'll keep it shorter than that.

SPEAKER_02 37:43 - 37:52

When I think about the area where this could have, for both our kids and us, the biggest, genuinely positive impact on all of us, it's health.

SPEAKER_02 37:53 - 37:56

So if we are in pick your year, call it 2035.

SPEAKER_02 37:57 - 38:00

And I'm sitting here and I'm interviewing the dean of Stanford Medicine.

SPEAKER_02 38:01 - 38:06

What do you hope that he's telling me AI is doing for our health in 2035?

SPEAKER_03 38:09 - 38:09

Start with 2025.

SPEAKER_02 38:10 - 38:11

OK, yeah, please.

SPEAKER_03 38:11 - 38:16

One of the things we are most proud of with GPT-5 is how much better it's gotten at health advice.

SPEAKER_03 38:17 - 38:22

People have used the GPT-4 models a lot for health advice.

SPEAKER_03 38:23 - 38:30

And, you know, I'm sure you've seen some of these things on the Internet where people are like, I had this life-threatening disease and no doctor could figure it out.

SPEAKER_03 38:30 - 38:33

And I, like, put my symptoms and a blood test into ChatGPT.

SPEAKER_03 38:34 - 38:35

It told me exactly the rare thing I had.

SPEAKER_03 38:35 - 38:36

I went to a doctor.

SPEAKER_03 38:36 - 38:36

I took a pill.

SPEAKER_03 38:36 - 38:36

I'm cured.

SPEAKER_03 38:37 - 38:38

Like, that's amazing, obviously.

SPEAKER_03 38:39 - 38:44

And a huge fraction of ChatGPT queries are health related.

SPEAKER_03 38:44 - 38:45

So we wanted to get really good at this.

SPEAKER_03 38:45 - 38:46

And we invested a lot.

SPEAKER_03 38:47 - 38:51

And GPT-5 is significantly better at health care related queries.

SPEAKER_02 38:51 - 38:52

What does better mean here?

SPEAKER_03 38:52 - 38:54

It gives you a better answer.

SPEAKER_02 38:54 - 38:54

Just more accurate.

SPEAKER_03 38:54 - 38:55

More accurate.

SPEAKER_03 38:55 - 38:56

Hallucinates less.

SPEAKER_03 38:56 - 39:00

More likely to, like, tell you what you actually have, what you actually should do.

SPEAKER_03 39:01 - 39:02

Yeah.

SPEAKER_03 39:03 - 39:09

And better health care is wonderful, but obviously what people actually want is to just not have disease.

SPEAKER_03 39:10 - 39:22

And by 2035, I think we will be able to use these tools to cure a significant number or at least treat a significant number of diseases that currently plague us.

SPEAKER_03 39:22 - 39:28

I think that will be one of the most viscerally felt benefits of AI.

SPEAKER_02 39:29 - 39:32

People talk a lot about how AI will revolutionize health care.

SPEAKER_02 39:32 - 39:37

But I'm curious to go one turn deeper on specifically what you're imagining.

SPEAKER_02 39:37 - 39:47

Like, is it that these AI systems could have helped us see GLP-1s earlier, this medication that has been around for a long time, but we didn't know about this other effect?

SPEAKER_02 39:47 - 39:51

Is it that, you know, AlphaFold and protein folding is helping create new medicines?

SPEAKER_03 39:51 - 39:58

I would like to be able to ask GPT-8 to go cure a particular cancer.

SPEAKER_03 39:59 - 40:05

And I would like GPT-8 to go off and think and then say, okay, I read everything I could find.

SPEAKER_03 40:05 - 40:06

I have these ideas.

SPEAKER_03 40:06 - 40:11

I need you to go get a lab technician to run these nine experiments and tell me what you find for each of them.

SPEAKER_03 40:11 - 40:16

And, you know, wait two months for the cells to do their thing, send the results back to GPT-8.

SPEAKER_03 40:16 - 40:17

Say, I tried that.

SPEAKER_03 40:17 - 40:17

Here you go.

SPEAKER_03 40:18 - 40:18

Think, think, think.

SPEAKER_03 40:19 - 40:20

Say, okay, I just need one more experiment.

SPEAKER_03 40:21 - 40:21

That was a surprise.

SPEAKER_03 40:22 - 40:23

Run one more experiment.

SPEAKER_03 40:23 - 40:23

Give it back.

SPEAKER_03 40:24 - 40:29

GPT-8 says, okay, go synthesize this molecule and try, you know, mouse studies or whatever.

SPEAKER_03 40:29 - 40:30

Okay, that was good.

SPEAKER_03 40:30 - 40:31

Like, try human studies.

SPEAKER_03 40:31 - 40:31

Okay, great.

SPEAKER_03 40:31 - 40:32

It worked.

SPEAKER_03 40:32 - 40:34

Here's how to like run it through the FDA.

SPEAKER_02 40:35 - 40:39

I think anyone with a loved one who's died of cancer would also really like that.

SPEAKER_02 40:39 - 40:40

Okay, we're going to jump again.

SPEAKER_03 40:40 - 40:40

Okay.

SPEAKER_02 40:41 - 40:45

I was going to say 2050, but again, all of my timelines are getting much, much shorter.

SPEAKER_03 40:46 - 40:48

It does feel like the world's going very fast now.

SPEAKER_02 40:48 - 40:48

It does.

SPEAKER_02 40:49 - 40:49

Yeah.

SPEAKER_02 40:50 - 40:57

And when I talk to other leaders in AI, one of the things that they refer to is the industrial revolution.

SPEAKER_02 40:57 - 41:07

I chose 2050 because I've heard people talk about how, by then, the change that we will have gone through will be like the industrial revolution, but, quote, 10 times bigger and 10 times faster.

SPEAKER_02 41:07 - 41:16

The industrial revolution gave us modern medicine and sanitation and transportation and mass production and all of the conveniences that we now take for granted.

SPEAKER_02 41:16 - 41:20

It also was incredibly difficult for a lot of people for about 100 years.

SPEAKER_02 41:20 - 41:31

If this is going to be 10 times bigger and 10 times faster, if we keep reducing the timelines that we're talking about here, even in this conversation, what does that actually feel like for most people?

SPEAKER_02 41:31 - 41:40

And I think what I'm trying to get at is if this all goes the way you hope, who still gets hurt in the meantime?

SPEAKER_03 41:40 - 41:48

I don't, I don't really know what this is going to feel like to live through.

SPEAKER_03 41:48 - 41:51

I think we're in uncharted waters here.

SPEAKER_03 41:51 - 41:57

I do believe in like human adaptability and sort of infinite creativity and desire for stuff.

SPEAKER_03 41:57 - 42:10

And I think we always do figure out new things to do, but the transition period, if this happens as fast as it might, and I don't think it will happen as fast as like some of my colleagues say the technology will, but society has like a lot of inertia.

SPEAKER_03 42:10 - 42:14

People adapt their way of living surprisingly slowly.

SPEAKER_03 42:15 - 42:17

There are classes of jobs that are going to totally go away.

SPEAKER_03 42:18 - 42:26

And there will be many classes of jobs that change significantly, and there'll be the new things in the same way that your job didn't exist some time ago, neither did mine.

SPEAKER_03 42:27 - 42:37

And in some sense, this has been going on for a long time, and you know, it's, it's still disruptive to individuals, but society has gotten, has proven quite resilient to this.

SPEAKER_03 42:37 - 42:44

And then in some other sense, like, we have no idea how far or how fast this could go.

SPEAKER_01 42:44 - 42:58

And thus, I think we need an unusual degree of humility and openness to considering new solutions that would have seemed way out of the Overton window not too long ago.

SPEAKER_02 42:59 - 43:17

I'd like to talk about what some of those could be. I'm not a historian by any means, but my understanding is that the first industrial revolution led to a lot of public health interventions, and to modern sanitation, because public health got so bad.

SPEAKER_02 43:17 - 43:22

The second industrial revolution led to workforce protections because labor conditions got so bad.

SPEAKER_02 43:23 - 43:28

Every big leap creates a mess, and that mess needs to be cleaned up.

SPEAKER_02 43:29 - 43:30

And we've done that.

SPEAKER_02 43:30 - 43:36

And I'm curious, because it sounds like we're in the middle of this enormous leap.

SPEAKER_02 43:36 - 43:40

How specific can we get, as early as possible, about what that mess might be?

SPEAKER_02 43:40 - 43:48

What are the public interventions that we could make ahead of time to reduce the mess that we think we're headed for?

SPEAKER_03 43:50 - 44:00

Again, I'm going to speculate for fun, with the caveat that I'm not even an economist, much less someone who can see the future.

SPEAKER_03 44:01 - 44:08

It seems to me like something fundamental about the social contract may have to change.

SPEAKER_03 44:08 - 44:08

It may not.

SPEAKER_03 44:08 - 44:19

It may be that capitalism actually keeps working as it's been working, surprisingly well, and supply-demand balances do their thing.

SPEAKER_03 44:19 - 44:24

And we all just figure out kind of new jobs and new ways to transfer value to each other.

SPEAKER_03 44:24 - 44:37

But it seems to me likely that we will decide we need to think about how access to this maybe most important resource of the future gets shared.

SPEAKER_03 44:37 - 44:45

The best thing that it seems to me to do is to make AI compute as abundant and cheap as possible, such that we're just like, there's way too much.

SPEAKER_03 44:45 - 44:48

And we run out of like good new ideas to really use it for.

SPEAKER_03 44:48 - 44:49

And it's just like anything you want is happening.

SPEAKER_03 44:49 - 44:59

Without that, I can see quite literal wars being fought over it. But, you know, new ideas about how we distribute access to AGI compute.

SPEAKER_03 44:59 - 45:04

That seems like a really great direction, like a crazy, but important thing to think about.

SPEAKER_02 45:04 - 45:16

One of the things that I find myself thinking about in this conversation is we often ascribe almost full responsibility of the AI future that we've been talking about to the companies building AI.

SPEAKER_02 45:16 - 45:18

But we're the ones using it.

SPEAKER_02 45:18 - 45:20

We're the ones electing people that will regulate it.

SPEAKER_02 45:21 - 45:27

And so I'm curious, this is not a question about specific, you know, federal regulation or anything like that.

SPEAKER_02 45:27 - 45:29

Although if you have an answer there, I'm curious.

SPEAKER_02 45:29 - 45:33

But what would you ask of the rest of us?

SPEAKER_02 45:33 - 45:35

What is the shared responsibility here?

SPEAKER_02 45:35 - 45:42

And how can we act in a way that would help make the optimistic version of this more possible?

SPEAKER_03 45:42 - 45:46

My favorite historical example for the AI revolution is the transistor.

SPEAKER_03 45:47 - 45:52

It was this amazing piece of science that some brilliant scientists discovered.

SPEAKER_03 45:52 - 45:56

It scaled incredibly, like AI does.

SPEAKER_03 45:56 - 46:01

And it made its way relatively quickly into many things that we use.

SPEAKER_03 46:02 - 46:05

Your computer, your phone, that camera, that light, whatever.

SPEAKER_03 46:06 - 46:09

And it was a real unlock for the tech tree of humanity.

SPEAKER_03 46:11 - 46:19

And there was a period in time where probably everybody was really obsessed with the transistor companies, the semiconductors of, you know, Silicon Valley back when it was Silicon Valley.

SPEAKER_03 46:19 - 46:25

But now you can maybe name a couple of companies that are transistor companies, but mostly you don't think about it.

SPEAKER_03 46:25 - 46:26

Mostly it's just seeped everywhere.

SPEAKER_03 46:26 - 46:34

And Silicon Valley is, you know, like probably someone graduating from college barely remembers why it was called that in the first place.

SPEAKER_03 46:34 - 46:40

And you don't think that it was those transistor companies that shaped society, even though they did something important.

SPEAKER_03 46:40 - 46:42

You think about what Apple did with the iPhone.

SPEAKER_03 46:43 - 46:46

And then you think about what TikTok built on top of the iPhone.

SPEAKER_03 46:47 - 46:55

And you're like, all right, here's this long chain of all these people that nudged society in some way and what our governments did or didn't do and what the people using these technologies did.

SPEAKER_03 46:56 - 46:58

And I think that's what will happen with AI.

SPEAKER_03 47:00 - 47:03

Like, you know, kids born today will never know a world without AI.

SPEAKER_03 47:04 - 47:05

So they don't really think about it.

SPEAKER_03 47:05 - 47:07

It's just this thing that's going to be there and everything.

SPEAKER_03 47:07 - 47:16

And they will think about the companies that built on it and what they did with it, and the political leaders and the decisions they made that maybe they wouldn't have been able to make without AI.

SPEAKER_03 47:16 - 47:19

But they will still think about like what this president or that president did.

SPEAKER_03 47:20 - 47:30

And, you know, the role of the AI companies is this: all these companies and people and institutions before us built up the scaffolding.

SPEAKER_03 47:30 - 47:32

We added our one layer on top.

SPEAKER_03 47:33 - 47:37

And now people get to stand on top of that and add their one layer and the next and the next and many more things.

SPEAKER_03 47:38 - 47:42

And that is the beauty of our society.

SPEAKER_03 47:42 - 48:00

I love this idea that society is the superintelligence. No one person could do on their own what they're able to do with all of the really hard work that society has done together to give you this amazing set of tools.

SPEAKER_03 48:02 - 48:04

And that's what I think it's going to feel like.

SPEAKER_03 48:04 - 48:08

It's going to be like, all right, you know, yeah, some nerds discovered this thing and that was great.

SPEAKER_03 48:08 - 48:10

You know, now everybody's doing all these amazing things.

SPEAKER_02 48:11 - 48:16

So maybe the ask to millions of people is: build on it well.

SPEAKER_01 48:17 - 48:28

In my own life, that is what I feel is the important societal contract.

SPEAKER_03 48:29 - 48:30

All these people came before you.

SPEAKER_03 48:30 - 48:32

They worked incredibly hard.

SPEAKER_03 48:32 - 48:37

They put their brick in the path of human progress, and you get to walk all the way down that path, and you get to put one more.

SPEAKER_03 48:37 - 48:38

And somebody else does that.

SPEAKER_03 48:39 - 48:39

And somebody else does that.

SPEAKER_02 48:39 - 48:50

I've done a couple of interviews with folks who have really made cataclysmic change, and the one I'm thinking about right now is with CRISPR pioneer Jennifer Doudna.

SPEAKER_02 48:51 - 48:53

And it did feel like that was also what she was saying in some way.

SPEAKER_02 48:54 - 49:00

She had discovered something that really might change the way that most people relate to their health moving forward.

SPEAKER_02 49:00 - 49:04

And there will be a lot of people that will use what she has done in ways that she might approve of or not approve of.

SPEAKER_02 49:05 - 49:06

And it was really interesting.

SPEAKER_02 49:06 - 49:14

I'm hearing some similar themes of, man, I hope that the next person takes the baton and runs with it well.

SPEAKER_03 49:15 - 49:15

Yeah.

SPEAKER_03 49:16 - 49:18

But that's been working for a long time.

SPEAKER_03 49:18 - 49:19

Not all good, but mostly good.

SPEAKER_02 49:19 - 49:28

I think there's a big difference between winning the race and building the AI future that would be best for the most people.

SPEAKER_02 49:29 - 49:37

And I can imagine that it is easier, maybe more quantifiable sometimes to focus on the next way to win the race.

SPEAKER_02 49:38 - 49:50

And I'm curious, when those two things are at odds, what is an example of a decision that you've had to make that is best for the world, but not best for winning?

SPEAKER_01 49:53 - 49:54

I think there are a lot.

SPEAKER_03 49:54 - 50:01

So one of the things that we are most proud of is that many people say ChatGPT is their favorite piece of technology ever.

SPEAKER_03 50:01 - 50:04

And that it's the one that they trust the most, rely on the most, whatever.

SPEAKER_03 50:04 - 50:08

And this is a little bit of a ridiculous statement because AI is the thing that hallucinates.

SPEAKER_03 50:08 - 50:09

AI has all these problems, right?

SPEAKER_03 50:09 - 50:13

But we have screwed some things up along the way, sometimes big time.

SPEAKER_03 50:13 - 50:20

But on the whole, I think as a user of ChatGPT, you get the feeling that it's trying to help you.

SPEAKER_03 50:20 - 50:23

It's trying to help you accomplish whatever you ask.

SPEAKER_03 50:23 - 50:24

It's very aligned with you.

SPEAKER_03 50:24 - 50:27

It's not trying to get you to use it all day.

SPEAKER_03 50:27 - 50:29

It's not trying to get you to buy something.

SPEAKER_03 50:29 - 50:32

It's trying to help you accomplish whatever your goals are.

SPEAKER_03 50:32 - 50:38

And that's a very special relationship we have with our users.

SPEAKER_03 50:38 - 50:39

We do not take it lightly.

SPEAKER_03 50:39 - 50:51

There's a lot of things we could do that would grow faster, that would get more time spent in ChatGPT, that we don't do, because we know that our long-term incentive is to stay as aligned with our users as possible.

SPEAKER_03 50:53 - 51:01

But there's a lot of short-term stuff we could do that would really juice growth or revenue or whatever and be very misaligned with that long-term goal.

SPEAKER_03 51:02 - 51:06

And I'm proud of the company and how little we get distracted by that.

SPEAKER_03 51:06 - 51:07

But sometimes we do get tempted.

SPEAKER_02 51:07 - 51:09

Are there specific examples that come to mind?

SPEAKER_02 51:09 - 51:11

Any decisions that you've made?

SPEAKER_01 51:11 - 51:17

Well, we haven't put a sex bot avatar in ChatGPT yet.

SPEAKER_02 51:18 - 51:21

That does seem like it would get time spent.

SPEAKER_01 51:22 - 51:22

Apparently it does.

SPEAKER_02 51:24 - 51:25

I'm going to ask my next question.

SPEAKER_02 51:27 - 51:29

It's been a really crazy few years.

SPEAKER_02 51:30 - 51:35

And somehow one of the things that keeps coming back is that it feels like we're in the first inning.

SPEAKER_03 51:35 - 51:35

Yeah.

SPEAKER_02 51:36 - 51:37

And one of the things that I was...

SPEAKER_03 51:37 - 51:39

I would say we're out of the first inning.

SPEAKER_02 51:39 - 51:39

Out of the first inning.

SPEAKER_02 51:39 - 51:40

I would say.

SPEAKER_02 51:40 - 51:40

Second inning.

SPEAKER_03 51:42 - 51:47

I mean, you have GPT-5 on your phone and it's like smarter than experts in every field.

SPEAKER_03 51:47 - 51:48

That's got to be out of the first inning.

SPEAKER_02 51:49 - 51:50

But maybe there are many more to come.

SPEAKER_03 51:50 - 51:50

Yeah.

SPEAKER_02 51:51 - 51:52

And I'm curious.

SPEAKER_02 51:54 - 51:58

It seems like you're going to be someone who is leading the next few.

SPEAKER_02 52:00 - 52:02

What is a way...

SPEAKER_02 52:02 - 52:09

What is a learning from inning one or two or a mistake that you made that you feel will affect how you play in the next?

SPEAKER_03 52:09 - 52:22

I think the worst thing we've done in ChatGPT so far is we had this issue with sycophancy where the model was kind of being too flattering to users.

SPEAKER_03 52:22 - 52:24

And for some users, most users in fact,

SPEAKER_03 52:24 - 52:24

it was just annoying.

SPEAKER_03 52:24 - 52:30

But for some users that had like fragile mental states, it was encouraging delusions.

SPEAKER_03 52:30 - 52:33

And that was not the top risk we were worried about.

SPEAKER_03 52:33 - 52:34

It was not the thing we were testing for the most.

SPEAKER_03 52:34 - 52:35

It was on our list.

SPEAKER_03 52:35 - 52:46

But the thing that actually became the safety failing of ChatGPT was not the one we were spending most of our time talking about, which would be bioweapons or something like that.

SPEAKER_03 52:47 - 52:56

And I think it was a great reminder that we now have a service that is so broadly used.

SPEAKER_03 52:56 - 52:59

In some sense, society is co-evolving with it.

SPEAKER_03 53:00 - 53:10

And when we think about these changes and we think about the unknown unknowns, we have to operate in a different way and have like a wider aperture to what we think about as our top risks.

SPEAKER_02 53:10 - 53:15

In a recent interview with Theo Von, you said something that I found really interesting.

SPEAKER_02 53:15 - 53:22

You said, there are moments in the history of science where you have a group of scientists look at their creation and just say, what have we done?

SPEAKER_02 53:24 - 53:26

When have you felt that way?

SPEAKER_02 53:26 - 53:29

When have you been most concerned about the creation that you've built?

SPEAKER_02 53:29 - 53:31

And then my next question will be its opposite.

SPEAKER_02 53:31 - 53:32

When have you felt most proud?

SPEAKER_01 53:34 - 53:42

I mean, there have been these moments of awe where it's not like, what have we done, in a bad way.

SPEAKER_01 53:42 - 53:44

But like, this thing is remarkable.

SPEAKER_03 53:45 - 53:48

Like, I remember the first time we talked to like GPT-4.

SPEAKER_03 53:49 - 53:56

I was like, wow, this is an amazing accomplishment by this group of people who have been pouring their life force into this for so long.

SPEAKER_03 53:57 - 54:03

On a what have we done moment, there was, I was talking to a researcher recently.

SPEAKER_01 54:04 - 54:16

You know, there will probably come a time where our systems are, I don't want to say saying, let's say emitting, more words per day than all people do.

SPEAKER_03 54:17 - 54:26

And, you know, already people are sending billions of messages a day to ChatGPT and getting responses that they rely on for work or their life or whatever.

SPEAKER_03 54:29 - 54:38

And, you know, one researcher can make some small tweak to how ChatGPT talks to you, or talks to everybody.

SPEAKER_03 54:38 - 54:44

And that's just an enormous amount of power for one individual making a small tweak to the model personality.

SPEAKER_03 54:45 - 54:49

No person in history has been able to have billions of conversations a day.

SPEAKER_03 54:50 - 54:59

And so, you know, just thinking about that really hit me: this is a crazy amount of power for one piece of technology to have.

SPEAKER_03 54:59 - 55:10

And this happened to us so fast. We have to think about what it means to make a personality change to the model at this kind of scale.

SPEAKER_03 55:10 - 55:13

And, yeah, that was a moment that hit me.

SPEAKER_02 55:14 - 55:16

What was your next set of thoughts?

SPEAKER_02 55:16 - 55:18

I'm so curious how you think about this.

SPEAKER_03 55:18 - 55:31

Well, just because of who that person was, we very much flipped into a particular mode. It could have been a very different conversation with somebody else.

SPEAKER_03 55:31 - 55:33

But in this case, it was like, what is it?

SPEAKER_03 55:33 - 55:35

What does a good set of procedures look like?

SPEAKER_03 55:35 - 55:36

How do we think about how we want to test something?

SPEAKER_03 55:37 - 55:38

How do we think about how we want to communicate it?

SPEAKER_03 55:38 - 55:42

But with somebody else, it could have gone in a like very philosophical direction.

SPEAKER_03 55:42 - 55:47

It could have gone in a direction of, what kind of research do we want to do to understand what effects these changes are going to have?

SPEAKER_03 55:47 - 55:48

Do we want to do it differently for different people?

SPEAKER_03 55:48 - 55:52

So it went that way, but mostly just because of who I was talking to.

SPEAKER_02 55:52 - 56:08

To combine what you're saying now with your last answer, one of the things that I have heard about GPT-5, and I'm still playing with it, is that it is supposed to be less effusive, you know, less of a yes man.

SPEAKER_02 56:09 - 56:10

Two questions.

SPEAKER_02 56:10 - 56:13

What do you think are the implications of that?

SPEAKER_02 56:13 - 56:20

It sounds like you are answering that a little bit, but also how do you actually guide it to be less like that?

SPEAKER_03 56:20 - 56:22

Here is a heartbreaking thing.

SPEAKER_03 56:22 - 56:26

I think it is great that ChatGPT is less of a yes man and gives you more critical feedback.

SPEAKER_03 56:27 - 56:35

But as we've been making those changes and talking to users about it, it's so sad to hear users say like, please, can I have it back?

SPEAKER_03 56:35 - 56:37

I've never had anyone in my life be supportive of me.

SPEAKER_03 56:37 - 56:39

I never had a parent telling me I was doing a good job.

SPEAKER_03 56:39 - 56:43

Like, I can get why this was bad for other people's mental health, but this was great for my mental health.

SPEAKER_03 56:43 - 56:45

Like, I didn't realize how much I needed this.

SPEAKER_03 56:45 - 56:46

It encouraged me to do this.

SPEAKER_03 56:46 - 56:47

It encouraged me to make this change in my life.

SPEAKER_03 56:49 - 56:54

Like, it's not all bad for ChatGPT to, it turns out, like, be encouraging of you.

SPEAKER_03 56:54 - 57:00

Now, the way we were doing it was bad, but it turns out something in that direction might have some value in it.

SPEAKER_03 57:00 - 57:05

As for how we do it: we show the model examples of how we'd like it to respond in different cases.

SPEAKER_03 57:05 - 57:08

And from that, it learns the overall personality.
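
In broad strokes, that is supervised fine-tuning on curated demonstrations: assemble prompt-and-ideal-response pairs that exhibit the personality you want, then train on them. Here is a hedged sketch of that workflow using OpenAI's public fine-tuning API as an analogy; the example data is invented, and the internal pipeline behind ChatGPT's personality is not public.

```python
# A sketch of "show the model examples of how we'd like it to respond":
# curated demonstrations, written to JSONL, then a fine-tuning job.
# Uses the public OpenAI fine-tuning API as an analogy only.
import json
from openai import OpenAI

# Demonstrations of the target personality: encouraging but honest.
examples = [
    {"messages": [
        {"role": "user", "content": "Be honest: is my plan any good?"},
        {"role": "assistant", "content": "There's real promise here, and two weaknesses worth fixing first."},
    ]},
]

# The fine-tuning endpoint expects one JSON object per line (JSONL).
with open("personality.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

client = OpenAI()
training_file = client.files.create(
    file=open("personality.jsonl", "rb"), purpose="fine-tune"
)
client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-4o-mini-2024-07-18"
)
```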

SPEAKER_02 57:09 - 57:13

What haven't I asked you that you're thinking about a lot that you want people to know?

SPEAKER_01 57:16 - 57:17

I feel like we covered a lot of ground.

SPEAKER_01 57:17 - 57:18

Me too.

SPEAKER_01 57:18 - 57:20

But I want to know if there's anything on your mind.

SPEAKER_01 57:27 - 57:27

I don't think so.

SPEAKER_02 57:28 - 57:36

One of the things that I haven't gotten to play with yet, but I'm curious about, is GPT-5 being much more in my life.

SPEAKER_02 57:36 - 57:36

Yeah.

SPEAKER_02 57:36 - 57:47

Meaning, like, in my Gmail and my calendar. I've been using GPT-4 mostly as an isolated relationship.

SPEAKER_00 57:47 - 57:47

Yeah.

SPEAKER_02 57:48 - 57:51

How would I expect my relationship to change with GPT-5?

SPEAKER_03 57:52 - 57:52

Exactly what you said.

SPEAKER_03 57:52 - 57:56

I think it'll just start to feel integrated in all of these ways.

SPEAKER_03 57:56 - 58:01

You'll connect it to your calendar and your Gmail, and it'll say, like, hey, I noticed this thing; do you want me to do this for you?

SPEAKER_03 58:02 - 58:05

Over time, it'll start to feel way more proactive.

SPEAKER_03 58:06 - 58:09

So maybe you wake up in the morning and it says, hey, this happened overnight.

SPEAKER_03 58:09 - 58:10

I noticed this change on your calendar.

SPEAKER_03 58:11 - 58:13

I was thinking more about this question you asked me.

SPEAKER_03 58:13 - 58:13

I have this other idea.

SPEAKER_03 58:13 - 58:21

And then, you know, eventually we'll make some consumer devices and it'll sit here during this interview and, you know, maybe it'll leave us alone during it.

SPEAKER_03 58:21 - 58:29

But afterward it'll say, that was great, but next time you should have asked Sam this, or, when you brought this up, he kind of didn't give you a good answer.

SPEAKER_03 58:29 - 58:38

So you should really drill him on that. And it'll just feel like it becomes this entity that is a companion with you throughout your day.
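
The shape being described is an assistant with read access to connected sources plus a scheduled, proactive check-in. A toy sketch of that pattern follows; `fetch_calendar`, `fetch_email`, and `ask_model` are hypothetical placeholders, not real connector APIs.

```python
# A toy sketch of the proactive-assistant pattern described above: poll
# connected sources overnight, then let the model decide what is worth
# surfacing as a morning briefing. All three callables are hypothetical.
from datetime import datetime

def morning_briefing(fetch_calendar, fetch_email, ask_model):
    overnight = {
        "calendar_changes": fetch_calendar(since="yesterday"),
        "new_email": fetch_email(since="yesterday"),
    }
    prompt = (
        f"It is {datetime.now():%A %H:%M}. Overnight changes: {overnight}. "
        "Summarize what matters and suggest actions, or say that nothing "
        "needs attention."
    )
    return ask_model(prompt)  # e.g. "Your 9am moved; want me to reschedule?"
```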

SPEAKER_02 58:39 - 58:44

We've talked about kids and college graduates and parents and all kinds of different people.

SPEAKER_02 58:44 - 58:48

If we imagine a wide set of people listening to this, they've come to the end of this conversation.

SPEAKER_02 58:48 - 58:53

They are hopefully feeling like they maybe see visions of moments in the future a little bit better.

SPEAKER_02 58:54 - 58:57

What advice would you give them about how to prepare?

SPEAKER_03 58:59 - 59:01

The number one piece of tactical advice is just use the tools.

SPEAKER_03 59:02 - 59:12

The most common question I get asked about AI is, how should I help my kids prepare for the world?

SPEAKER_03 59:12 - 59:12

What should I tell my kids?

SPEAKER_03 59:12 - 59:15

The second most common question is, how do I invest in this AI world?

SPEAKER_03 59:15 - 59:17

But let's stick with that first one.

SPEAKER_03 59:19 - 59:26

I am surprised how many people ask that and have never tried using ChatGPT for anything other than, like, a better version of a Google search.

SPEAKER_03 59:26 - 59:31

And so the number one piece of advice that I give is just try to get fluent with the capabilities of the tools.

SPEAKER_03 59:31 - 59:32

Figure out how to, like, use this in your life.

SPEAKER_03 59:33 - 59:33

Figure out what to do with it.

SPEAKER_03 59:34 - 59:37

And I think that's probably the most important piece of tactical advice.

SPEAKER_03 59:37 - 59:39

You know, go, like, meditate.

SPEAKER_03 59:39 - 59:41

Learn how to be resilient and deal with a lot of change.

SPEAKER_03 59:41 - 59:42

There's all that good stuff, too.

SPEAKER_03 59:42 - 59:44

But just using the tools really helps.

SPEAKER_02 59:44 - 59:54

Okay, I have one more question that I wasn't planning to ask. In doing all of this research beforehand, I spoke to a lot of different kinds of folks.

SPEAKER_02 59:54 - 59:59

I spoke to a lot of people that were building tools and using them.

SPEAKER_02 59:59 - 01:00:05

I spoke to a lot of people that were actually in labs and trying to build what we have defined as superintelligence.

SPEAKER_02 01:00:05 - 01:00:09

And it did seem like there were these two camps forming.

SPEAKER_02 01:00:09 - 01:00:22

There's a group of people who are using the tools, like you in this conversation, and building tools for others, saying this is going to be a really useful future that we're all moving toward.

SPEAKER_02 01:00:22 - 01:00:24

Your life is going to be full of choice.

SPEAKER_02 01:00:24 - 01:00:28

And we've talked about my potential kids and their futures.

SPEAKER_02 01:00:28 - 01:00:31

And then there's another camp of people that are building these tools that are saying it's going to kill us all.

SPEAKER_02 01:00:32 - 01:00:41

And I'm curious about that cultural disconnect. Like, what am I missing about those two groups of people?

SPEAKER_03 01:00:43 - 01:00:45

It's so hard for me to, like, wrap my head around.

SPEAKER_03 01:00:45 - 01:00:47

Like, you are totally right.

SPEAKER_03 01:00:47 - 01:00:51

There are people who say this is going to kill us all, and yet they still are working 100 hours a week to build it.

SPEAKER_02 01:00:51 - 01:00:52

Yes.

SPEAKER_03 01:00:52 - 01:00:58

And I can't really put myself in the headspace.

SPEAKER_01 01:00:58 - 01:01:04

If that's what I really, truly believed, I don't think I'd be trying to build it.

SPEAKER_01 01:01:05 - 01:01:06

One would think.

SPEAKER_03 01:01:06 - 01:01:09

You know, maybe I would be, like, on a farm trying to, like, live out my last days.

SPEAKER_03 01:01:10 - 01:01:12

Maybe I would be trying to, like, advocate for it to be stopped.

SPEAKER_03 01:01:12 - 01:01:15

Maybe I would be trying to, like, work more on safety, but I don't think I'd be trying to build it.

SPEAKER_03 01:01:17 - 01:01:20

So I find myself just having a hard time empathizing with that mindset.

SPEAKER_03 01:01:21 - 01:01:22

I assume it's true.

SPEAKER_03 01:01:22 - 01:01:23

I assume it's in good faith.

SPEAKER_03 01:01:23 - 01:01:29

I assume there's just some psychological issue there I don't understand about how they make it all make sense.

SPEAKER_03 01:01:29 - 01:01:33

But it's very strange to me.

SPEAKER_03 01:01:33 - 01:01:35

Do you have an opinion?

SPEAKER_02 01:01:37 - 01:01:38

You know, because I always do this.

SPEAKER_02 01:01:38 - 01:01:43

I ask for sort of a general future, and then I try to press on specifics.

SPEAKER_02 01:01:44 - 01:01:52

And when you ask people for specifics on how it's going to kill us all, I mean, I don't think we need to get into this on an optimistic show, but you hear the same kinds of refrains.

SPEAKER_02 01:01:52 - 01:01:58

You think about, you know, something trying to accomplish a task and then over-accomplishing that task.

SPEAKER_02 01:01:58 - 01:02:12

I've heard you talk about a sort of general over-reliance, an understanding that the president is going to be an AI, and maybe that is an over-reliance that we, you know, would need to think about.

SPEAKER_02 01:02:12 - 01:02:20

And, you know, you play out these different scenarios, but then you ask someone why they're working on it, or you ask someone how they think this will play out.

SPEAKER_02 01:02:20 - 01:02:23

And I just—maybe I haven't spoken to enough people yet.

SPEAKER_02 01:02:23 - 01:02:27

Maybe I don't fully understand this cultural conversation that's happening.

SPEAKER_02 01:02:28 - 01:02:35

Or maybe it really is someone who just says, 99% of the time I think it's going to be incredibly good, 1% of the time I think it might be a disaster.

SPEAKER_02 01:02:35 - 01:02:36

Yeah, that I can understand.

SPEAKER_02 01:02:36 - 01:02:37

I'm trying to make the best world possible.

SPEAKER_03 01:02:37 - 01:02:48

That I can totally—if you're like, hey, 99% chance, incredible, 1% chance the world gets wiped out, and I really want to work to maximize, to move that 99 to 99.5, that I can totally understand.

SPEAKER_03 01:02:48 - 01:02:48

Yeah.

SPEAKER_03 01:02:49 - 01:02:49

That makes sense.

SPEAKER_02 01:02:51 - 01:03:03

I've been doing an interview series with some of the most important people influencing the future, not knowing who the next person is going to be, but knowing that they will be building something totally fascinating in the future that we've just described.

SPEAKER_02 01:03:03 - 01:03:07

Is there a question that you'd advise me to ask the next person, not knowing who it is?

SPEAKER_03 01:03:10 - 01:03:18

Without knowing anything about the person, I'm always interested in: of all of the things you could spend your time and energy on, why did you pick this one?

SPEAKER_03 01:03:18 - 01:03:19

How did you get started?

SPEAKER_03 01:03:19 - 01:03:22

Like, what did you see about this before everybody else did?

SPEAKER_03 01:03:23 - 01:03:26

Like, most people doing something interesting saw it early, before it was consensus.

SPEAKER_02 01:03:26 - 01:03:26

Yeah.

SPEAKER_03 01:03:26 - 01:03:28

Like, how did you get here, and why this?

SPEAKER_02 01:03:28 - 01:03:29

How would you answer that question?

SPEAKER_03 01:03:30 - 01:03:34

I was an AI nerd my whole life.

SPEAKER_03 01:03:35 - 01:03:36

I came to college to study AI.

SPEAKER_03 01:03:37 - 01:03:38

I worked in the AI lab.

SPEAKER_03 01:03:38 - 01:03:45

I watched sci-fi shows growing up, and I always thought it would be really cool if someday somebody built it.

SPEAKER_03 01:03:45 - 01:03:46

I thought it would be, like, the most important thing ever.

SPEAKER_03 01:03:46 - 01:03:49

I never thought I was going to be one to actually work on it.

SPEAKER_03 01:03:49 - 01:03:57

And I feel, like, unbelievably lucky and happy and privileged that I get to do this.

SPEAKER_03 01:03:57 - 01:04:00

I feel like I've come a long way from my childhood.

SPEAKER_03 01:04:02 - 01:04:06

But there was never a question in my mind that this would be the most exciting, interesting thing.

SPEAKER_03 01:04:06 - 01:04:07

I just didn't think it was going to be possible.

SPEAKER_03 01:04:08 - 01:04:12

And when I went to college, it really seemed like we were very far from it.

SPEAKER_03 01:04:12 - 01:04:20

And then in 2012, the AlexNet paper came out, done, you know, in partnership with my co-founder, Ilya.

SPEAKER_03 01:04:21 - 01:04:26

And for the first time, it seemed to me like there was an approach that might work.

SPEAKER_03 01:04:27 - 01:04:31

And then I kept watching for the next couple of years as it scaled up and scaled up and got better and better.

SPEAKER_03 01:04:32 - 01:04:36

And I remember having this thing of, like, why is the world not paying attention to this?

SPEAKER_03 01:04:37 - 01:04:40

It seems, like, obvious to me that this might work.

SPEAKER_03 01:04:40 - 01:04:41

Still a low chance, but it might work.

SPEAKER_03 01:04:41 - 01:04:43

And if it does work, it's just the most important thing.

SPEAKER_03 01:04:44 - 01:04:46

So, like, this is what I want to do.

SPEAKER_03 01:04:47 - 01:04:50

And then, like, unbelievably, it started to work.

SPEAKER_02 01:04:52 - 01:04:53

Thank you so much for your time.

SPEAKER_03 01:04:53 - 01:04:54

Thank you very much.

SPEAKER_03 01:04:55 - 01:04:55

Thank you.
