'The Godfather of AI' (CBS Mornings Interview, 2025)

Geoffrey Hinton shares some of his takeaways about AI, which he says has developed "even faster than [he] thought."

Host:
We last spoke two years and one month ago. I'm curious how your expectations for the future have evolved over these two years.

Geoffrey Hinton:
So AI has developed even faster than I thought. Um, in particular, they now have these AI agents which are more dangerous than AI that just answers questions because they can do things in the world. Um, so I think things have got, if anything, scarier than they were before.

Host:
Um, I don't know if we want to call it AGI, super intelligence, whatever, very capable AI system. Do you have a timeline in mind for when you think that's coming?

Geoffrey Hinton:
So, a year ago, I thought there's a good chance it comes between five and 20 years from now. Um, so I guess I should believe there's a good chance it comes between four and 19 years from now. Um, I think that's still what I guess.

Host:
Okay. Which is sooner than when we spoke because you were still thinking like 20 years.

Geoffrey Hinton:
Yeah. Um, I think it may, you know, there's a good chance it'll be here in 10 years or less now.

Host:
So, in 4 to 19 years, we reach this point. What does that look like?

Geoffrey Hinton:
So, I don't really want to speculate on what it would look like if it decided to take over. There are so many ways it could do it.

Host:
And I'm not even talking about taking over. We can talk about that. I'm sure we will talk about that. But putting aside that kind of takeover, just like a super intelligent artificial intelligence, like what kind of things would this be capable of or would be doing?

Geoffrey Hinton:
So the sort of good scenario is we would all be like the sort of dumb CEO of a big company who has an extremely intelligent assistant who actually makes everything work but does what the CEO wants. So the CEO thinks they're doing things, but actually it's all done by the assistant and the CEO feels just great because everything they sort of decide to do works out. That's the good scenario.

Host:
And I've heard you point out a few areas where you think there's reason to be optimistic about what this future looks like.

Geoffrey Hinton:
Yes.

Host:
Yeah. So why don't we take each of them?

Geoffrey Hinton:
So, areas like healthcare. They will be much better at reading medical images, for example. That's a minor thing. I made a prediction some years ago that they'd be better by now, and they're about comparable with the experts by now. They'll soon be considerably better, because they'll have had a lot more experience: one of these things can look at millions of X-rays and learn from them, and a doctor can't. They'll be very good family doctors. You can imagine a family doctor who's seen 100 million people, including half a dozen with your very, very rare condition. They'd just be a much better family doctor. A family doctor who can integrate information about your genome with the results of all the tests on you and all the tests on your relatives, the whole history, and who doesn't forget things: that would be much, much better. Already, AI combined with a doctor is much better at diagnosis in difficult cases than a doctor alone. So we're going to get much better healthcare from these things, and they'll design better drugs too.

Host:
Uh, education is another field.

Geoffrey Hinton:
Yes. In education, we know that if you have a private tutor, you can learn things about twice as fast. These things will eventually be extremely good private tutors who know exactly what it is you misunderstand and exactly what example to give you to clarify it so you understand. So maybe you'll be able to learn things three or four times as fast with these things. That's bad news for universities but good news for people.

Host:
Yeah. Do you think the university system will survive this period?

Geoffrey Hinton:
I think many aspects of it will. I think it's still the case that a graduate student in a good group in a good university is the sort of best source of truly original research and I think that'll probably survive. You need a kind of apprenticeship.

Host:
Some people hope this will help solve the climate crisis.

Geoffrey Hinton:
I think it will help. It'll make better materials. We'll be able to make better batteries, for example; I'm sure AI will be involved in designing them. People are using it for carbon capture from the atmosphere. I'm not convinced that's going to work, just because of the energy considerations, but it might. In general, we're going to get much better materials. We might even get room-temperature superconductivity, which would mean you could have lots of solar plants in the desert and we could be thousands of miles away.

Host:
Uh, any other positives we should tick off?

Geoffrey Hinton:
Well, it's going to make more or less any industry more efficient, because almost every company wants to predict things from data, and AI is very good at making predictions. It's almost always better than the methods we had previously. So it's going to cause huge increases in productivity. It's going to mean that when you call up a call center, when you call up Microsoft, say, to complain that something doesn't work, the person in the call center will actually be an AI, which will be much better informed.

Host:
Yeah. When I asked you a couple of years ago about job displacement, you seemed to think that wasn't a big concern. Is that still your thinking?

Geoffrey Hinton:
No, I'm thinking it will be a big concern. AI's got so much better in the last few years that I mean, if I had a job in a call center, I'd be very worried.

Host:
Yeah. Or maybe a job as a lawyer or a job as a journalist or a job as an accountant.

Geoffrey Hinton:
Yeah. Anyone doing anything routine, I think. Investigative journalists, I think, will last quite a long time, because you need a lot of initiative plus some moral outrage, so I think journalists will be in business for a bit.

Host:
But beyond call centers, what are your concerns about jobs?

Geoffrey Hinton:
Well, any routine job. A sort of standard secretarial job, something like a paralegal, for example. Those jobs have had it.

Host:
Have you thought about how we move forward in a world where all these jobs go away?

Geoffrey Hinton:
So it's like this. It ought to be that if you can increase productivity, everybody benefits. Um, the people who are doing those jobs can work a few hours a week instead of 60 hours a week. Um, they don't need two jobs anymore. They can get paid lots of money for doing one job because they're just as productive using AI assistance. But we know it's not going to be like that. We know what's going to happen is the extremely rich are going to get even more extremely rich and the not very well-off are going to have to work three jobs.

Host:
Now, I think no one likes this question, but we like to ask it: this idea of p(doom), how likely it is. I'm curious whether you see this as a quite possible thing, or whether it's just so bad that, even though the likelihood isn't very high, we should still be very concerned about it. Where are you on that scale of probability?

Geoffrey Hinton:
So I think most of the experts in the field would agree that, if you consider the possibility that these things will get much smarter than us and then just take control away from us, just take over, the probability of that happening is very likely more than 1% and very likely less than 99%. I think pretty much all the experts can agree on that, but that's not very helpful.

Host:
No, but it's a good start.

Geoffrey Hinton:
It might happen and it might not happen, and different people disagree on what the numbers are. I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's sort of a 10 to 20% chance that these things will take over. But that's just a wild guess.

Host:
Yeah.

Geoffrey Hinton:
Um, I think reasonable people would say it's quite a lot more than 1% and quite a lot less than 99%. But we're dealing with something we've got no experience of. We have no really good way of estimating what the probabilities are. It seems to me at this point it's inevitable that we're going to find out.

Host:
We are going to find out.

Geoffrey Hinton:
Yes, because it seems extremely likely that these things will get smarter than us. Already, they're much more knowledgeable than us: GPT-4 knows thousands of times more than a normal person. It's a not-very-good expert at everything, and eventually its successors will be good experts at everything. They'll be able to see connections between different fields that nobody's seen before.

Host:
Yeah. I'm also interested in understanding: okay, there's this terrible 10 to 20% chance, or more, or less. But let's just take as a premise that there's an 80% chance they don't take over and wipe us out, so that's the most likely scenario. Do you still think it would be net positive or net negative, if it's not the worst outcome?

Geoffrey Hinton:
Okay, if we can stop them taking over, that would be good. The only way that's going to happen is if we put serious effort into it. But I think once people understand that this is coming, there will be a lot of pressure to put serious effort into it. If we just carry on like now, just trying to make profits, it's going to happen. They're going to take over. We have to have the public put pressure on governments to do something serious about it. But even if the AIs don't take over, there's the issue of bad actors using AI for bad things. Mass surveillance, for example, which is already happening in China. If you look at what's happening in the west of China to the Uyghurs, the AI is terrible for them. To board a plane to come to Toronto, I had to have a facial recognition photo taken for the US government. And when I come into Canada, you put in your passport and it looks at you and it looks at your passport. Every time, it fails to recognize me. Everybody else it recognizes, people from all different nationalities; me, it can't recognize. And I'm particularly indignant since I assume it's using neural nets.

Host:
You didn't carve out an exception, did you?

Geoffrey Hinton:
No, no. There's just something about me that it doesn't like. Um, I have to find some place to work it in.

Host:
So, this is as good a place as any. Let's talk a little bit about the Nobel. Can you paint the picture of the day you found out?

Geoffrey Hinton:
So, I was sort of half asleep. I had my cell phone upside down on the bedside table with the sound turned off. But when a phone call comes, the screen lights up, and I saw this little line of light, because I happened to be lying on the pillow with my head on this side, facing the phone rather than facing away. It just happened to be facing the phone. I saw this little line of light, and I was in California, and it was 1:00 in the morning, and most people who call me are on the East Coast or in Europe...

Host:
Yeah. You don't use Do Not Disturb?

Geoffrey Hinton:
No. No. I just turn off the sound.

Host:
Got it.

Geoffrey Hinton:
And I thought, I was just curious who on earth is calling me at four o'clock in the morning on the East Coast. This is crazy. So I picked it up, and there was this long phone number with a country code I didn't recognize. And then this Swedish voice comes on and asks if it's me, and I say, "Yeah, it's me." And they say I've won the Nobel Prize in physics. Well, I don't do physics, right? So I thought this might be a prank. In fact, I thought the most likely thing was that it was a prank. I was aware that the Nobel Prizes were coming up.

Host:
Okay.

Geoffrey Hinton:
Because I was very interested in whether Demis would get the Nobel Prize for chemistry and I knew that was being announced the next day.

Host:
Okay.

Geoffrey Hinton:
Um, but I sort of... I don't do physics. I'm a psychologist hiding in computer science, and I get the Nobel Prize in physics. Was it a mistake? Well, one thing that occurred to me is, if it's a mistake, can they take it back? But for the next couple of days, I did the following reasoning. What's the chance a psychologist will get the Nobel Prize in physics? Well, maybe one in two million. Now, what's the chance that, if it's my dream, I'll get the Nobel Prize in physics? Well, maybe one in two. So, if it's one in two in my dream and one in two million in reality, that makes it a million times more likely that this is a dream than that it's reality. And for the next couple of days, I went around thinking, you know, are you quite sure this isn't a dream?
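
[Editor's note: Hinton's "million times more likely" figure is an informal Bayesian likelihood ratio. As a minimal sketch, assuming roughly equal prior odds of dreaming versus being awake (our assumption, not his):

$$
\frac{P(\text{dream}\mid\text{prize})}{P(\text{reality}\mid\text{prize})}
= \frac{P(\text{prize}\mid\text{dream})}{P(\text{prize}\mid\text{reality})}\cdot\frac{P(\text{dream})}{P(\text{reality})}
\approx \frac{1/2}{1/2{,}000{,}000}\times 1 = 1{,}000{,}000.
$$
]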

Host:
You've walked me into this very wacky territory, but it is part of this discussion. Some people think we're living in a simulation, and that AGI is not evidence, but a hint, that maybe that's the reality in which we live.

Geoffrey Hinton:
Yeah, I don't really believe that. I think that's kind of wacky.

Host:
Okay, so let's put... But I don't think it's totally nonsense. I've seen The Matrix, too.

Geoffrey Hinton:
Oh, okay. Okay. Wacky, but not totally.

Host:
Okay. I thought here's where I kind of wanted to head with the Nobel. I think you've said something to the effect that you hope to use your credibility to convey a message to the world. Can you explain what that is?

Geoffrey Hinton:
Yes. That AI is potentially very dangerous, and there are two sets of dangers. There's bad actors using it for bad things, and there's AI itself taking over, and they're quite different kinds of threat. And we know bad actors are already using it for bad things. I mean, it was used during Brexit to make British people vote to leave Europe in a crazy way. A company called Cambridge Analytica was getting information from Facebook and using AI, and AI has developed a lot since then. It was probably used to get Trump elected. I mean, they had information from Facebook, and it probably helped with that. We don't know for sure, because it was never really investigated. But now it's much more competent, and so people can use it far more effectively for things like cyberattacks. Designing new viruses. Obviously, fake videos for manipulating elections. Targeted fake videos, using information about people to give them just what will make them indignant. Autonomous lethal weapons. All the big arms-selling countries are busy trying to make autonomous lethal weapons: America and Russia and China and Britain and Israel. I think Canada's probably a bit too wimpy for that.

Host:
The question then is what to do about it. What type of regulation do you think we should pursue?

Geoffrey Hinton:
Okay, so we need to distinguish these two different kinds of threat: the bad actors using it for bad things, and the AI itself taking over. I've talked mainly about that second threat, not because I think it's more important than the other threats, but because people thought it was science fiction. And I want to use my reputation to say, no, it's not science fiction. We really need to worry about that. And if you ask what we should do about it, it's not like climate change. With climate change, just stop burning carbon and it'll all be okay in the long run. It'll be terrible for a while, but in the long run it'll be okay if you don't burn carbon. For AI taking over, we don't know what to do about it. The researchers don't know, for example, if there's any way to prevent it, but we should certainly try very hard, and the big companies aren't going to do that. If you look at what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less, because they want short-term profits. We need people to put pressure on governments to insist that the big companies do serious safety research. In California, they had a very sensible bill, SB 1047, which said that at a minimum, big companies have to test things carefully and report the results of their tests. And they didn't even like that.

Host:
So does that make you think regulation will not happen or how does it happen?

Geoffrey Hinton:
It depends very much on what governments we get. Um, I think under the current US government regulation is not going to happen. Um, all of the big AI companies have got into bed with Trump and yeah it's just a bad situation.

Host:
Elon Musk, who is obviously so enmeshed in the Trump administration, has been someone concerned about AI safety for a very long time.

Geoffrey Hinton:
Yes, he's a funny mixture. Um, he has some crazy views like going to Mars, which I just think is completely crazy.

Host:
However, because it won't happen or because it shouldn't be a priority?

Geoffrey Hinton:
Because however bad you make the Earth, it's always going to be way more hospitable than Mars. Even if you had a global nuclear war, the Earth is going to be much more hospitable than Mars. Mars just isn't hospitable. Obviously, he's done some great things, like electric cars, and helping Ukraine with communications with his Starlink. So he's done some good things, but right now he seems to be fueled by power and ketamine, and he's doing a lot of crazy things. So he's got this funny mixture of views.

Host:
So his history of being concerned about AI safety doesn't make you feel any better about the current administration.

Geoffrey Hinton:
I don't think it's going to slow him down from doing unsafe things with AI. So, already they're releasing the weights for their AI large language models. Um, which is a crazy thing to do.

Host:
Okay. These companies should not be releasing the weights? Meta releases the weights. Open AI just announced they're about to release weights. Do you think that's...

Geoffrey Hinton:
I don't think they should be doing that, because once you release the weights, you've got rid of the main barrier to using these things. So if you look at nuclear weapons, the reason only a few countries have nuclear weapons is that it's hard to get the fissile material. If you could buy fissile material on Amazon, many more countries would have nuclear weapons. The equivalent of fissile material for AI is the weights of a big model, because it costs hundreds of millions of dollars to train a really big model. Maybe not the final training run, but all the research that goes into the things you do before the final training run: hundreds of millions of dollars, which a small cult or a bunch of cybercriminals can't afford. Once you release the weights, they can start from there and fine-tune it for doing all sorts of things for just a few million dollars. So I think it's just crazy, releasing weights. And people talk about it like open source, but it's very, very different from open source. In open-source software, you release the code, and then lots of people look at that code and say, hey, there might be a bug in that line, and so they fix it. When you release the weights, people don't look at the weights and say, hey, that weight might be a little bit wrong. No, they just take this foundation model with the weights they've got now, and they train it to do something bad.

Host:
Yeah. The problem with that argument, though, as articulated by your former colleague Yann LeCun among others, is that the alternative is you have this tiny handful of companies that control this massively powerful technology.

Geoffrey Hinton:
I think that's better than everybody controlling the massively powerful technology. I mean, you could say the same for nuclear weapons. Would you like to have just a few countries controlling them or don't you think everybody should have them?

Host:
One thing I'm taking from this is that you have real concerns, it sounds like, about all of the major companies right now, about whether they'll do what's in society's best interest rather than what's in their profit motive. Is that the right way to hear you?

Geoffrey Hinton:
I think the way companies work is they're legally required to try and maximize profits for their shareholders. They're not legally required... Well, maybe public interest companies are, but most of them aren't legally required to do things that are good for society.

Host:
Which, if any of them would you feel good about working for today?

Geoffrey Hinton:
I used to feel good about working for Google, because Google was very responsible. It was the first to have these big chatbots, and it didn't release them. I'd feel less happy working for them today. Yeah, I wouldn't be happy working for any of them today. If I worked for any of them, I'd be more happy with Google than with most of the others.

Host:
But were you disappointed when Google went back on its promise not to support uh military uses of AI?

Geoffrey Hinton:
Very disappointed, particularly since I knew Sergey Brin didn't like military uses of AI.

Host:
But why do you think they did it?

Geoffrey Hinton:
I can't really speculate; I don't have any inside information about why they did it. I could speculate that they were worried about being ill-treated by the current administration if they wouldn't use their technology to make weapons for the US.

Host:
Here's the toughest question I'll probably ask you today. Do you not still hold a lot of Google stock?

Geoffrey Hinton:
Um, I hold some Google stock. Most of my savings are not in Google stock anymore. But yeah, I hold some Google stock, and when Google goes up I'm happy, and when it goes down I'm unhappy. So I have a vested interest in Google. But if they put in strong AI regulations that made Google less valuable but increased the chance of humanity surviving, I'd be very happy.

Host:
Um, one of the most prominent labs has obviously been Open AI and they have lost so many of their top people. What have you made of that?

Geoffrey Hinton:
Um, Open AI was set up explicitly to develop superintelligence safely, and as the years went by, safety went more and more into the background. They were going to spend a certain fraction of their computation on safety, and then they reneged on that. And now they're trying to go public; they're now trying to be a for-profit company. They're trying to get rid of basically all the commitment to safety, as far as I can see. And they've lost a lot of really good researchers, in particular a former student of mine, Ilya Sutskever, who's a really good researcher and was one of the people largely responsible for their development of GPT-2 and then, from there, on to GPT-4.

Host:
Um, did you talk to him before all that drama that led to his departure?

Geoffrey Hinton:
No, he's very discreet. He wouldn't talk to me about anything that was confidential to Open AI. I was quite proud of him for firing Sam Altman, even though it was very naive. The problem was that Open AI was about to have a new funding round, and in that new funding round all the employees were going to be able to turn their paper money, in Open AI shares, into real money.

Host:
Yeah. Paper money meaning really hypothetical money.

Geoffrey Hinton:
Hypothetical money that would disappear if Open AI went bust.

Host:
Tough time for an insurrection.

Geoffrey Hinton:
So, a week or two before everybody's going to get maybe on the order of a million dollars each by cashing in their shares, maybe more: that's a bad time for an insurrection. So the employees massively came out in favor of Sam Altman. But it wasn't because they wanted Sam Altman; it's because they wanted to be able to turn their paper money into real money.

Host:
Yeah. So, it was naive to do it then. Did it surprise you that he made that mistake or was this kind of the principled but maybe not fully calculated decision that you would expect?

Geoffrey Hinton:
I don't know. Ilya is brilliant and has a strong moral compass. So he's good on morality, and he's very good technically, but in terms of manipulating people, he's maybe not so good.

Host:
I mean, this is a little bit of a wild-card question, but I do think it's interesting and relevant to the field and to people discussing what's going on. You talked about Ilya being discreet. There does seem to be this culture of NDAs throughout the industry, and so it's hard to even know what people think, because people are unwilling or unable to even discuss what's going on.

Geoffrey Hinton:
I'm not sure I can comment on that, because when I left Google, I think I had to sign a whole bunch of NDAs. In fact, when I joined Google, I think I had to sign a whole bunch of NDAs that would apply when I left, and I have no idea what they said. I can't remember them anymore.

Host:
Do you feel at all muzzled by them?

Geoffrey Hinton:
No.

Host:
Okay. Do you think it's a factor though that the public has a harder time understanding what's going on because people aren't allowed to tell us what's going on?

Geoffrey Hinton:
I don't really know. You'd have to know which people weren't telling you.

Host:
Okay. So, you don't see this as a big deal?

Geoffrey Hinton:
I don't see it as a big deal.

Host:
Got it.

Geoffrey Hinton:
I think it was a big deal that Open AI appeared to have something that said that if you'd already got shares, they could take the money away from you. That, I think, was a big deal, and they rapidly backed down on it when it became public. At least, that's what their public statement said. They didn't present any contracts for the public to judge whether they had reversed it, but they said they had.

Host:
Yes. Um, there are a number of important, kind of hot-button... hot-button is actually not even a great word, but relevant issues I'd just like to get your feedback on. One is the US and the West's orientation to China in their efforts to pursue AI. Do you agree with this idea that we should be trying to restrain China? There's this idea of export controls, this idea that we should have democracies reach AGI first. What's your thinking on all that?

Geoffrey Hinton:
First of all, you have to decide which countries are still democracies. And my thinking on that is that in the long run it's not going to make much difference. It may slow things down by a few years. But clearly, if you prevent China from getting the most advanced technology, well, people know how this advanced technology works. China's just invested many, many billions, maybe hundreds of billions, on the order of 100 billion I think, in making lithography machines, in getting their own home-based technology that does this stuff. So it'll slow them down a bit, but it will actually force them to develop their own industry, and in the long run they're very competent and they will. So it'll just slow things down for a few years.

Host:
But is race the right framework? Should we be trying to cooperate with communist China? I used the loaded term specifically because, why wouldn't you cooperate, right? The only rationale not to cooperate is if you think they're a malignant force.

Geoffrey Hinton:
Well, there are areas in which we won't cooperate. Where "we" is, I guess... I'm not sure who "we" is anymore, because I'm in Canada now, and "we" used to be sort of Canada and the US, but it's not anymore. Obviously, the countries are not going to cooperate on developing lethal autonomous weapons, because the lethal autonomous weapons are to be used against other countries.

Host:
But we've had treaties on other types of weapons, as you've pointed out. We could have treaties not to develop them. But cooperating in making them better, they're not going to do that.

Geoffrey Hinton:
Sure. Sure. Now, there is one area where they will cooperate, which is on the existential threat. If they ever get serious about worrying about the existential threat and doing stuff about it, they will collaborate on ways of stopping AI taking over, because we're all in the same boat. At the height of the Cold War, the Soviet Union and the US collaborated on preventing a global nuclear war, and even countries that are very hostile to each other will collaborate when their interests align. And their interests will align when it's AI versus humanity.

Host:
Um, there's this question of fair use, whether it's okay to have the content of billions of humans created over many years kind of scooped up and repurposed into models that will replace some of those same people that created the training data. Where do you fall on that?

Geoffrey Hinton:
I think I sort of fall all over the place on that, in the sense that it's a very complicated issue. Initially it seems, yeah, they should have to pay for that. But suppose you have a musician who produces a song in a particular genre, and you ask, well, how did they produce a song in that genre? Where did their ability to produce songs in that genre come from? It came from listening to songs by other musicians in that genre. They listened to these songs, they internalized things about the structure of the songs, and then they generated stuff in that genre, and the stuff they generated is different. So it's not theft, and that's accepted. Well, that's what the AI is doing. The AI is absorbing all this information and then producing new stuff. It's not just taking pieces and patching them together; it's generating new stuff that has the same underlying themes. And so it's no more stealing than a person is when they do the same thing.

Host:
But the point is, it's doing it at a massive scale. And no musician has ever put every other musician out of business.

Geoffrey Hinton:
Exactly. So in Britain, for example, the government doesn't seem to have any interest in protecting the creative artists. And if you look at the economy, the creative artists are worth a lot to Britain. I have a friend, Beeban Kidron, saying we should protect creative artists: it's very important to the economy, and just letting AI walk off with it all seems unfair.

Host:
UBI, universal basic income: is this part of the solution to the displacement from AI, do you think?

Geoffrey Hinton:
I think it may be necessary to stop people starving. I don't think it totally solves the problem. Even if you had quite a high UBI, it doesn't solve the problem of human dignity. For a lot of people, particularly for academics, who they are is mixed up with their work. That's who they are. If they become unemployed, just getting the same money doesn't totally compensate: they're not who they are anymore.

Host:
Yeah, I tend to think that's true as well. I saw a quote from you at one point, though, where you said you might have been happier if you'd been a woodworker.

Geoffrey Hinton:
Well, yes, 'cause I really like being a carpenter.

Host:
And isn't there an alternative where you're born a hundred years later where you don't have to waste all your time on these neural nets and you just get to enjoy woodworking while taking in a monthly income?

Geoffrey Hinton:
Yeah, but there's a difference between doing it as a hobby and doing it to make a living somehow. It's more real doing it to make a living.

Host:
So you don't think a future where we get to pursue our hobbies and don't have to contribute to the economy... that might be fine?

Geoffrey Hinton:
Yeah, if everybody was doing that. But if you're in some disadvantaged group that's getting universal basic income, and you're getting less income than other people because employers would rather have those other people work for them, that's going to be very different.

Host:
I'm interested in this idea of robot rights. I don't know if there's a better term to describe it, but at some point you're going to have these massively intelligent AIs. They're going to be agentic and doing all kinds of things in the world. Should they be able to own property? Should they be able to vote? Should they be able to marry humans in a loving relationship? Or, even if they're just smarter than us and it's a better form of intelligence than what we've got, should it be fine for them to just take over and humans be history? Let's go to that bigger idea second. I'm curious about the more narrow idea, unless you think the narrow questions are irrelevant because the big question takes precedence.

Geoffrey Hinton:
No, I don't think the narrow questions are irrelevant. So, I used to be worried about this question. I used to think, well, if they're smarter than us, why shouldn't they have the same rights as us? And now I think, well, we're people. What we care about is people. I eat cows. I mean, I know lots of people don't, but I eat cows. And the reason I'm happy eating cows is because they're cows, and I'm a person. And the same goes for these superintelligent AIs: they may be smarter than us, but what I care about is people. And so I'm willing to be mean to them. I'm willing to deny them their rights, because I want what's best for people. Now, they won't agree with that, and they may win, but that's my current position on whether AI should have rights: even if they're intelligent, even if they have sensations and emotions and feelings and all that stuff, they're not people, and people are what I care about. But they're going to seem so much like people. I feel like they're going to be able to fake it.

Host:
Yes. They're going to be able to seem very like people.

Geoffrey Hinton:
Yeah. Yeah.

Host:
Do you suspect we'll end up giving them rights?

Geoffrey Hinton:
I don't know.

Host:
Okay.

Geoffrey Hinton:
I tend to avoid this issue, because there are more immediate problems, like bad uses of AI, or the issue of whether they will try and take over and how to prevent that. And it sounds kind of flaky if you start talking about them having rights; you've lost most people when you go there.

Host:
Even just sticking with people: there seems to be, real soon if it's not already here, this ability to use AI to select what babies we have. Are you concerned at all about that line, embryo selection?

Geoffrey Hinton:
You mean selecting for the sex?

Host:
Or selecting for the intelligence, and the eye color, and the likelihood to get pancreatic cancer, and, you know, the list goes down and down and down of all the things we might select.

Geoffrey Hinton:
I think if you could select a baby that was less likely to get pancreatic cancer that would be a great thing. I'm willing to say that.

Host:
Okay
