
Yuval Harari: You Need an Information Diet. Read "The Atlantic"

64 min read

Preface

This post takes up a recent remark by Harari. From the immediately preceding context, his argument amounts to: step away from social media such as Facebook (*1).

DeepL's translation was clumsy, so I had ChatGPT-4o translate it instead.

(FasterWhisper AI (large-v2 model) + ChatGPT-4o)

And one last point. Just as with the example of food, I think we have now reached a stage where many people need an "information diet." The simplistic idea that "more information is always better" is wrong, just as too much food is not always good for us.

That's exactly why everyone should subscribe to The Atlantic. Let's try a thought experiment. Because whether your system settles down and works depends, in a sense, partly on who defines "benevolence" and who defines these values.

▼ Expand: original English

And one last point. I think like in the example of food, we have reached a point when I think most people need an information diet. That this simplistic idea that more information is always good for us is simply wrong, the same way that more food is not always good for us. (0:35:39)

Well, that's why everybody should subscribe to The Atlantic. Let me give you a thought experiment. I mean, because part of what I think is going to... For your system to come down and to work, it's going to depend a little bit on who defines benevolence and who defines these values. (0:35:54)

Comment 1

For ordinary people, social media is an addiction machine and nothing but a waste of one's life. But they have already been reduced to addicts, so no matter what Harari says, it is probably too late for them to quit.

Comment 2

"The Atlantic"(知識層やエリート層好みの雑誌)を推奨するあたりで、Harari の凡庸な面が浮き出ている。Harari ですら因習的思考に囚われ、それに無自覚。

Still, to think that the Harari (*2) could sink to such mediocrity…

(*2)

The Astonishing Insight of Historian Yuval Noah Harari (2017-12-19)

Video (1:20:07)

How is disruptive technology rewiring the world? | Nicholas Thompson and Yuval Noah Harari

www.youtube.com/watch?v=rhyMitpaFkg

Video description

28,200 views 2024/11/01

Join Yuval Noah Harari and Nicholas Thompson, leading tech journalist and CEO of The Atlantic, as they explore the profound impact of AI on democracy, labor markets, and spirituality. Their conversation examines how AI could reshape our understanding of identity and societal norms, drawing parallels to historical shifts that have defined humanity's essence – and they highlight the dangers of opaque, algorithm-driven decisions that threaten our ability to listen to one another.

Filmed in Washington DC on September 16, 2024 and hosted by Politics and Prose.

Bonus: Full Transcript

▼ Expand: transcript

Hello Yuval, how are you? I'm fine, thank you. This is a real pleasure to be here with Yuval. He is not only a great historian, as you all know, he is a very kind man. In the green room, among his many duties, signing books, answering my questions about Israeli politics, he also helped read a bedtime story to my 10-year-old, who is a huge fan of Unstoppable Us. So, thank you, Yuval. It's my pleasure. (0:00:26)

All right, so what I want to do with this book is go through some of the history, some of the stories you tell. I'm going to ask you a few questions about characters you introduce, the ideas they represent. We'll go through some of the arguments you make about history, about AI, and about democracy. And then I hope there will be some time at the end to go through some of your apocalyptic thought experiments. (0:00:49)

Does that work? Yeah, absolutely. All right, I have a whole series of questions, several thousand that people have sent in. I appreciate that. We'll get to those as well. First question. What is that bird? Oh, that's a difficult question. It's a pigeon. Thank you. What does the bird represent, and why is it on the cover of your book about the history of the world? (0:01:14)

Well, for two main reasons. First of all, in Hebrew there is no difference between a pigeon and a dove. I know that in the English-speaking world, pigeons are often called rats with wings, whereas doves are thought of as these white angels of peace. But in Hebrew they are the same. And there are no doves in the Middle East. Maybe this is why there is no peace in the Middle East. There are only pigeons. (0:01:42)

And in the Bible, in the story of Noah and the flood, he sends a pigeon, not a dove, to see if the flood is over. So we are now living in the midst of a flood of information, and this is kind of my pigeon being sent to see if the flood is over. And the other reason is that one of the chief characters of the book is a pigeon called Cher Ami, which 100 years ago was basically the most famous bird in the world. (0:02:16)

And I think he is still on display in the Smithsonian, not far from here. And tell the story of Cher Ami, because it's actually quite important to the arguments you make in the book. Yes, so how did this bird become the most famous bird in the world? So during the First World War, when the American Expeditionary Force fought in northern France against the Germans, an American battalion was caught behind German lines, surrounded by the Germans, and American artillery, which tried to provide them with cover fire, didn't know their exact location, and actually dropped the artillery barrage right on the American soldiers, adding to their problems. (0:02:59)

And they tried to send runners to division headquarters to inform the commanders where the battalion actually is, but none of them could get through the German lines. (0:03:11)


So they turned to the only thing that could, which was a carrier pigeon, Cher Ami. And the commander wrote this tiny note on a piece of paper and attached it to the leg of the pigeon, and they released it into the air, and the pigeon flew through the German fire. It was hit several times. It lost one leg, luckily not the leg with the note. (0:03:37)

It was shot through the breast, but it nevertheless managed to get through, and the artillery barrage was lifted, and help was sent to the right place, and the battalion, which was known as the Lost Battalion, was saved. And the pigeon, which was called Cher Ami, was then hailed as the bird that saved hundreds of American soldiers from death or captivity at the hands of the Germans. At least this was the story, which was repeated again and again in Army communiqués, in the newspapers. (0:04:12)

There are movies, there are children's books, so if you want to read a story, there is still a children's book about Cher Ami, the brave pigeon. Recent historical research that delved into the archives has raised a lot of question marks about this whole story. First of all, it now turns out that the headquarters learned about the right location of the battalion before the pigeon arrived, and then it turned out that nobody is sure that the pigeon was actually Cher Ami. It could have been a completely different pigeon. (0:04:48)

But still, Cher Ami was displayed in the Smithsonian for years and became a pilgrimage site for veterans of the First World War, and was the most famous bird in the world. And this is part of what the book is about, about the power of information on the one hand, and about the tension between the truth and the stories we tell, and the effectiveness of stories which are not necessarily always truthful. (0:05:27)

And that's why it caught my attention, because as you may have heard in America, we still have some of these issues, and in fact, often with animals. And so I'm going to read a quote that one of our modern philosophers, dealing with the same issue, J.D. Vance, said yesterday: "If I have to create stories so that the American media actually pays attention to the suffering of the American people, then that's what I'm going to do." You are in favor of that, correct? (0:05:59)

I'm not saying I'm in favor, I'm saying that this is what is happening throughout human history. That, again, most information is not the truth. The truth is a rare and costly subset of information. If you want to write a true story, you need to invest a lot of time and effort and money, whereas fiction or fantasy, they are very cheap. You just write the first thing that comes to your mind. (0:06:34)

The truth also tends to be complicated, because reality is complicated, whereas fiction, you can make it as simple as you like it to be, and people usually prefer simple stories. And the last disadvantage of the truth is that the truth is often painful. (0:06:56)


Whether on the individual level, or if you talk about, say, Israeli politics, on the national level, there are many things people don't want to know about themselves, about their nation, about the world. And, you know, fiction can be made as flattering as you would like it to be. So in a competition between something which is cheap, and simple, and flattering, and something which is costly, and complicated, and sometimes painful, it's obvious who is going to win, unless you give the truth some help. (0:07:31)

This is the exact argument I made to my board yesterday while I'm trying to raise money for our fact-checking department. But let me ask you then about the question of a noble lie, because it's clear that history is determined by the people who tell the stories, which have either truth in them or they don't have truth in them, and you are not as dismissive of the idea of a noble lie as I expected you would be when I began the book. (0:07:57)

Explain in what circumstances it is okay for someone who is telling a story to not tell it exactly truthfully for the greater good of some kind. It is impossible for a story to be an exact replica of reality. There is a famous Borges story about an empire that wants to create a totally truthful map, which will be an exact representation of reality. And they end up producing a map with a scale of 1 to 1. Because this is the only map which will be 100% truthful and accurate and will not simplify anything, will not change anything. (0:08:47)

It will have to be a 1 to 1 map. And so the empire was covered by a map of the empire. And the effort of creating this map exhausted the resources of the empire that therefore collapsed. And we are now in a similar situation to some extent. There is a crisis of representation in the world. That no representation seems good enough for us. (0:09:17)

Because no representation can actually be a 1 to 1 map of the world. This is simply impossible. And we are not sure what to do about it. Now, my position is that every story to some extent is fictional. Every story, you can't tell the whole truth. This is simply impossible. And there are cases that, yes, you need to simplify. As somebody who wrote, you know, the history of the world in 500 pages, I know that sometimes you have to simplify. (0:09:50)

But fiction in itself is not necessarily bad. You know, the rules of football or baseball, they are fictional. We invented them. It doesn't mean that they are bad. Fictional literature is not all bad. The key is that fiction should acknowledge its fictionality and not pretend to be reality. And when you think about the cultural or political implications of that, if you want to unite a large number of people, you need to use some stories, some mythologies. (0:10:31)

And this is not necessarily bad as long as you acknowledge what you are doing. (0:10:37)


So if we compare, for instance, two foundational texts of human history, if you compare the Ten Commandments and the U.S. Constitution, so one text acknowledges its fictionality and the other doesn't. The Ten Commandments doesn't acknowledge that it emerged from human imagination. It claims to be the product of divine intelligence, to come down from heaven. And the downside of that is that it has no mechanism to admit and correct its own mistakes. (0:11:16)

And for instance, the Ten Commandments, as they were written sometime in the first millennium BCE, they endorse slavery. Many people don't think about it or don't notice it. But the Ten Commandments actually endorses slavery. It says it's okay to hold slaves because the Ten Commandments says that you should not covet your neighbor's field or ox or slaves. Which implies that God has no problem with people holding slaves. (0:11:50)

God has a problem only if you covet the slaves of your neighbor. No, no, no, that's not okay. That will make God angry. And compare that to the U.S. Constitution, which also, like the Ten Commandments, has served as the basis for large-scale human cooperation, for legal systems, for political systems. Whereas the Ten Commandments start with, I am your Lord God. The U.S. Constitution starts with, we the people. (0:12:19)

We the people invented this document. We invented these laws. And therefore, because it acknowledges that it emerges from the human imagination. It's humans who wrote this document. It also acknowledges the potential that there might be mistakes in the document. And it has a mechanism to amend itself, which was eventually used to amend the U.S. Constitution. Which again, initially endorsed slavery, and was eventually amended to forbid, to ban slavery. (0:12:58)

Whereas with the Ten Commandments, because they claim to be just, you know, they came down from heaven, there is no mechanism to change the text. There is no 11th Commandment which tells people, well, if you don't like something in the Ten Commandments, if you have a two-thirds majority, you can change the text. No, there is no mechanism. I think, you know, it might be easier for Pope Francis to change the Second Commandment than for the U.S. Congress to change the Second Amendment, but that's where we are. (0:13:30)

Alright, let's stick with religion. I want you to tell... The story that I think comes up in at least three very consequential parts of the book is the story of the Councils of Hippo and Carthage. And the consequences of selecting 1 Timothy into the New Testament instead of the Acts of Paul and Thecla. And that story, I think, is a very important follow-on to what you were just saying about the power of story. (0:13:55)

And I swear to God, we'll get to AI in just a minute, but this is an important premise for that. We can actually start with AI, which is very relevant to the Church Council of Carthage, which took place in what is today Tunisia in 397 CE. Because AI, one of the first big things that we saw it do, shaping human history, is taking the power of recommendation. (0:14:24)

That if you go on social media, what you see there is the result of recommendations made by social media algorithms. (0:14:35)


And the power to recommend to people what stories to read or what videos to watch is extremely important. And one of the best examples in history for the power of recommendation is the editorial process that created the Bible in the New Testament. The people who created the New Testament are not the authors of the texts. They are the editors who decided what will be in and what will be out. (0:15:09)

Because, you know, there was no New Testament, there was no Bible, in the time of Jesus or in the time of Saint Paul. They never read the Bible. It didn't exist. In the first four centuries of Christianity, Christians produced an enormous number and enormous variety of texts. There were stories about Christ, there were prophecies, doomsday prophecies about the apocalypse, there were letters by Saint Paul, by other church leaders, there were lots of fake letters. (0:15:40)

People wrote things in the name of Saint Paul, like 200 years after the man was dead. So Christian communities were getting flooded by a very large number of texts and a question arose, what should good Christians read? They needed a recommendation list. The same way that today we are flooded by TV series and we need a recommendation list. What to watch? So, in the late 4th century, a committee was set up, a church council, theologians, bishops. (0:16:15)

They met first in Hippo, in what is today, I think, Algeria. Then in Carthage, in what is today Tunisia. And they hammered out a recommendation list, top 27 texts every Christian should read. And this became the New Testament. They didn't write the text. They went over, again, very large numbers of texts that existed at the time and chose what will be in and what will be out. (0:16:42)

And this shaped Christianity and the views of billions of people on numerous issues until this very day. And to give you just one example out of many. So, one very popular text with Christians at the time was the Acts of Paul and Thecla. Paul is Saint Paul. And Thecla was one of the most favorite saints of the time. She was a woman disciple of Paul. And she was a leader of the community. (0:17:15)

She preached. She performed miracles. She baptized. And she was hailed as an example that women can be leaders in the church. And women can preach. And women can baptize and perform miracles. So this was one popular text with one view of women. Then there was another text. A letter, allegedly from Saint Paul to Timothy. Which most scholars today think is a much later forgery. (0:17:46)

It was not written by Saint Paul in the first century. Probably forged in his name sometime in the second century. And in this letter, a completely different view of women and their role in the church. It says that women should be obedient. Should be silent. Should never be leaders. They should fulfill themselves by doing whatever men tell them to do. (0:18:10)


And by having children and raising children. This is their role in life. And the committee in Carthage decided to exclude the Acts of Paul and Thecla from the top 27. But to include this letter to Timothy, which is still part of the New Testament around the world, as 1 Timothy. And this has shaped the views of billions of Christians about women in the church and also in general for more than 1,500 years. (0:18:45)

This is the power of recommendation. And now this power, and this connects to AI, this is the power which is increasingly held by AI algorithms. We have now this kind of huge public debate about social media and the spread of fake news and conspiracy theories and so forth on social media. And you hear people like Elon Musk or Mark Zuckerberg saying that we don't want to censor anybody. (0:19:16)

That this is an issue of freedom of speech. But it's not. The problem with the spread of this type of information on social media is not human users producing certain lies or fictions or fake news. The real problem is corporate algorithms deciding which stories to recommend, which stories to promote. And the power that was held by the bishops in the Council of Carthage and the power that was held by newspaper editors in recent generations, this is now the power in the hands of the social media algorithms. (0:19:59)

And this should be at the center of the debate, which we'll get to. On regulation, it's not about the freedom of speech of humans. It's about the responsibility of the corporate algorithms. Because if a corporate algorithm decides to promote a certain conspiracy, this is not on the person who invented it. This is the decision of the algorithm and the decision of its human corporate masters. (0:20:31)

And this is what should be at the center of the debate. So what I'm hearing is if the Council of Carthage had been slightly different, women would have been empowered much sooner, we would have had a feminist revolution several centuries earlier, probably AI would have been invented and we all would be obliterated by now. Is that correct, Yuval? That's one possibility. I mean, history is extremely surprising. (0:20:55)

So you can never predict the outcome. You can't unspool one thread from the tapestry. Let's go to some of the concerns you have about AI. I want to go through some of the concerns very quickly and then I want to go through your philosophy of how these algorithms should be structured. But very briefly, in just a word or two, explain why modern AI might destroy democracy. (0:21:16)

Very briefly. Well, very briefly, democracy is a conversation. Dictatorship is one person dictating everything. That's dictatorship. Democracy is when a group of people have a conversation in order to decide what to do about any major question. Now, to have a conversation is not an easy thing. And there are technical difficulties. If you have 20 people trying to have a conversation, so they can all gather in a room and talk with each other. (0:21:55)

But how can 20 million people have a conversation? (0:21:59)


You need some kind of technology in order to do that. Now, until the modern age, there was just no technology to facilitate large-scale conversations. Which is why there were no large-scale democracies anywhere in the world. The only examples we have of ancient democracies, they are all small-scale. They are city-states like Athens or Republican Rome, just one city. Or they are even smaller, tribes and bands and villages. (0:22:38)

We have many examples of these small-scale democracies. Not a single example of a large-scale democracy in the ancient world. All large-scale polities are authoritarian. We begin to see large-scale democracies only after the rise of modern information technology. The first crucial technology is the newspaper. And then we have the telegraph and the radio and the television. And suddenly it becomes feasible. Now, other conditions have to be met. (0:23:14)

Just having a newspaper doesn't guarantee democracy. You have newspapers and radio also in the Soviet Union. But it becomes possible for the first time in history to have large-scale democracies. And it's important to understand that because it means that information technology is not a side dish. That you have democracy and on the side you have all these issues of information technology. (0:23:41)

No, information technology is the basis of democracy. Democracy is built on top of this technology. So any major change in information technology is likely to cause an earthquake in democracy, which is built on top of it. And this is what we are now seeing all over the world. What we are seeing all over the world is the collapse of the democratic conversation. We have the most sophisticated information technology in human history. (0:24:17)

And people are losing the ability to talk with each other and even more so to listen to each other. And in every country where this is happening, there are these unique explanations of what is happening in our country. Why can't democrats and republicans in the US have a conversation anymore? And you go to Israel, to my country, you hear the unique explanation of what is wrong with Israel. And then you go to Brazil and then you have the unique explanations of what's wrong with Brazilian society. (0:24:49)

But you see the same thing is happening everywhere. The conversation is collapsing and this is not because of some special feature in the history or society or economy of the country. It's a universal earthquake which results from the rise of this new information technology. The developers of the technology promised us that it would spread the truth and bring tyrannies down and strengthen democracies. (0:25:22)

But it is doing the opposite. And very briefly one way to visualize what is happening is imagine that democracy is a group of humans standing and having a conversation. And suddenly a group of robots enter the circle and start talking very loudly, very persuasively, very emotionally and we can't tell who is who. Who is a human and who is a robot? (0:25:53)


This is what has been happening over the last 10-15 years and the result is that the conversation is breaking down. And again this is not a uniquely American phenomenon. It is happening all over the world. Leading more and more to the rise of dictatorships because dictatorships don't need conversations. Again there is one person dictating everything. Well, let's push on that assumption for a second so I can have a different metaphor. (0:26:24)

Which is not a bunch of robots entering the conversation, but a bunch of infinitely intelligent aides joining me. And they help me sort through the conversation and they help me prepare for what I am going to ask. And then not only that, if you look at the last year, obviously Venezuela, tragic example. But, elections are not the same as democracy, but we will use them as a proxy for this hypothetical. (0:26:45)

We had reasonably positive elections, if you are in favor of democracy, in Poland. We had a little bit of progress in Turkey. Serious progress in India, in fact, where an illiberal democracy has actually been challenged. We had an extremely smooth election leading to the election of a Jewish woman in Mexico. We had an extremely swift and effective election, with no deepfakes, in France, even if you may not like the outcome. A similar election in the United Kingdom. I mean, the world is doing all right on these elections despite the challenges of social media, AI, everything else. (0:27:19)

And it's not that it's already a done deal. It's not that democracy has collapsed. But if you look at the health of democracies today compared to 10 years ago, 15 years ago, at least the momentum is very worrying. And again, what really surprises me when I look at the examples from around the world is that it's not about some huge ideological gaps. (0:27:46)

Actually, the ideological gaps between the different camps today in a place like the US are much smaller than 50 years ago. What worries me is the kind of temperature of the argument. And again, this inability to have a reasoned debate, to have a reasoned conversation. Simply having elections is important, but it is not enough. Elections are not democracy. And even in some of the examples you mentioned, it's not about who wins, which 51% win in the end. (0:28:27)

It's about the relation between the 51% and the 49%. And democracy shouldn't feel like every election is a life and death struggle, that if we lose this, it might collapse. And even more so, it shouldn't feel like a war between enemies. If a country reaches a situation when people view their political rivals as enemies, then democracy cannot survive for long. Because then every election, again, it's like a war. You do anything to win. (0:29:06)

If you lose, you have no incentive to accept the verdict. If you win, you only take care of your tribe. What happens in this situation is that a nation breaks down into tribes, leading eventually either to tribal warfare and civil war, or to dictatorship. (0:29:28)


I think that the key thing here also has to do not just with democracy, but also with nationalism, and with the breakdown of national communities. Many people think that democracy and nationalism, or democracy and patriotism are somehow opposites. But they go together, they must go together. Democracy functions well only when there is a national community. Only when you feel that you really care about the other people in your country, and that they really care about you. (0:30:07)

If a nation reaches a point when there is no longer a nation, there are warring tribes, and each tribe cares only about itself, then it's only a question of time before democracy collapses. And this is really the worrying trend that we see in many places around the world, irrespective of the results of the latest elections here or there. And this, again, it goes back to the type of communication between people. (0:30:38)

Can we, for instance, listen to people with different views from our own, without thinking that they are enemies? I learned in the green room that one of the ways Yuval avoids arguing with bots on Twitter is that he, as he says, is stuck in the 90s and uses email and the telephone. Let me ask you a question then. All right, so we're heading into this age. I will certainly agree that democracy is at risk, and I will certainly agree it is at risk for the reasons you give. (0:31:08)

And I will certainly agree that we will soon have extremely powerful AI algorithms that will underlie a lot of the decisions we make and a lot of the thinking that goes on in our own heads. In a very important section in the book, you lay out four values that you think should be embedded in AI systems. And they are benevolence, AI systems should be benevolent, seems reasonable. (0:31:31)

Decentralization. Mutuality, meaning you should understand about it what it understands about you. And the ability to evolve, much as you said about the US Constitution versus the Ten Commandments. So a question for you, what happens when some of these come into conflict with each other? So when I read that section, I thought, well, benevolence kind of is in tension with decentralization. Because if you decentralize these algorithms, and suddenly you have all kinds of algorithms, you have all kinds of options, you have all kinds of different companies, some of which will be benevolent as you define it, some will not be benevolent as you define it. (0:32:04)

How is one supposed to weigh these four principles for designing future AI systems? Because weighing them correctly seems pretty important to get in the world you want. Yeah, with benevolence, I mean something very, very narrow, and something we've known for centuries. Simply that if you get hold of my information, you should use it for my benefit, and not in order to manipulate me. (0:32:31)

Which is a basic principle that we already have with our doctors, and our lawyers, and our accountants, and our therapists. (0:32:39)


And it should be no different with the people who provide us with digital services, like social media or like email. Like my personal physician... But pause for a second. Facebook would not have said it's manipulating us; even in the heyday of its algorithm, it would have said it's giving us what we want. That's one way of putting it. And the Facebook algorithm has enormous power over us. (0:33:05)

And Facebook's business model, and the business model of most of these social media companies, it is based on increasing user engagement. And engagement sounds like a nice thing, but who doesn't want to be engaged? But for them, for the companies, it means that we need to spend more time on the platform, because the more time I spend on it, the more money they make. (0:33:34)

Either by showing me advertisements and commercials, or by collecting my data, and then giving it or selling it to third parties. And whether this is what I want or not, that's a very big question. Do I really want to spend more and more time on the platform? Now, what their algorithms do is, by trial and error, they find my weaknesses. And they use my weaknesses to keep me glued to the screen for longer. (0:34:06)

This is the basic idea of hacking. How do you hack a smartphone, or a computer, or a program? You look for the weaknesses in the code. It's the same with human beings. This is how you hack human beings. You use these algorithms to find the weaknesses in our code. Each person with their own weaknesses. It's not one size fits all. What makes me angry? What do I already hate? What do I already fear? Or what I'm greedy for? (0:34:38)

And they give me more and more and more of that. It's like, you know, the food companies that learned that if you pump something full of sugar and fat and salt, people would want more of it. Now, is it really good for us or not? Again, there is a question here. It's not an easy thing to solve. But this is the key of the dilemma, the key of the discussion. (0:35:03)

And the main message is that ultimately it should not be about the profits of the corporation, but whether consuming all this information is really good for me or not. And one last point. I think like in the example of food, we have reached a point when I think most people need an information diet. That this simplistic idea that more information is always good for us is simply wrong, the same way that more food is not always good for us. (0:35:39)

Well, that's why everybody should subscribe to The Atlantic. Let me give you a thought experiment. I mean, because part of what I think is going to... For your system to come down and to work, it's going to depend a little bit on who defines benevolence and who defines these values. (0:35:54)


And so I want to give you a thought experiment from a conversation I had at a bar not long ago. So I met somebody. I'm going to change some of the details because I can't reveal who they are and what exactly they do. But it was an engineer who works in AI, and they make algorithms. And their current job is that they work for the state of Texas, and they're in charge of a sentencing algorithm. (0:36:16)

And so they're in charge of an AI algorithm that will determine how long somebody who's found guilty will be sent to prison. And so they have a hard task, right? They have to account for the fact that any algorithm trained on historical data will have the biases of that historical data. So if you train a sentencing algorithm on historic Texas data, it will be racist, so you have to control for that. (0:36:33)

You may let the women out sooner. You have to make sure that's proper and right. And you go back through and you control through it, and you try to fix that. And so we're having a long conversation, and I'm asking her, well, how do you control for this? And how do you control for that? And how do you control for this? And eventually she says, you know what I do, Nick? She said, I've rigged the algorithm. (0:36:48)

And I've rigged it in such a way that everybody will be sentenced for much less time than the state of Texas did before. And I've done it in such a way the state of Texas will never figure out that I've done that. Is that something that an AI engineer should be doing? That's extremely dangerous. Extremely dangerous, but it's a... it may be a value that many liberals agree with, and they feel like conservative states have sentenced people for too long. (0:37:14)

So it's embedding a value that that person feels is benevolent into a system and leading to what they believe is justice. At the very least, when we are talking about, you know, the law of a country, this should be left to the citizens and to the voters, and not to, you know, a dictatorship of an engineer. Well, they've given the power to hire whatever contractor they want to the state. (0:37:40)

They've elected, you know, the governor of the state of Texas, and they've hired this person. The state where this is happening is not actually Texas, so don't Google, like, sentencing algorithm. It's a different thing in a different place. You won't be able to figure it out, but it's the same example. I think that the key point raised by this example is the issue of unfathomability. (0:37:59)

To what extent we can still understand the systems that control our lives. I get this question a lot of what really frightens me about AI. And you have this kind of Hollywood science fiction scenarios of the big robot rebellion. And the robots are rebelling and coming to kill us, and this is unlikely to happen anytime soon. (0:38:20)


But what is already happening, this is not a science fiction scenario for the future. What is already happening is that we basically have millions of AI bureaucrats. Millions of bureaucratic algorithms making more and more decisions about our life. We apply to a bank to get a loan, it's an AI deciding. We committed a crime, they send us to prison, it's increasingly an AI deciding for how long. (0:38:49)

And this could rise to the level of key economic and financial decisions, such as what the rate of interest of the Federal Reserve should be. This could increasingly be a decision made by AIs and not human beings. And there are many good reasons to give this kind of power to AIs. But what happens down the road when eventually so many of these crucial decisions about us are made in a way that we simply cannot understand? (0:39:28)

We don't know why the bank refused us a loan. We don't know why we were sentenced to five years and not four years or six years in prison. We don't know why the interest rate is 4% and not 3%. And we don't know, not because somebody is hiding it from us, but simply because it's far too complicated for the human brain. The advantage of AI, kind of the good side of AI, is that it can analyze much more data than any human brain, find patterns that we can't, deal with mathematical complexities which are way beyond what we can deal with. (0:40:08)

But the downside of all that is: what is the meaning, for instance, of democracy, if increasingly all the decisions, or at least many of the decisions, are made in a way which is not transparent and therefore not accountable to human beings? Let's go through another one which is somewhat similar. So one of the companies that makes AI systems is Anthropic. And they use this system called constitutional AI as they choose how they write their prompts and how they structure their algorithm. (0:40:42)

Which is probably the closest of the major AI companies to, as I understand it, the philosophy of Yuval Noah Harari. And so when it gives an answer, it checks whether the answer would abide by the UN Declaration of Human Rights. It actually follows the US Constitution, the UN Declaration of Human Rights, and Apple's Terms of Service. It's very funny. But in any event, the biggest problem that they have, and the other AI companies, is they don't know. (0:41:07)

Just as you said, they don't know why things make decisions. And so they've been trying to understand what's called interpretability. And so they went in and they said, well, let's see what happens if we go into all of our training data and we add a little extra weight to everything that has to do with the Golden Gate Bridge. So if there's a picture of the Golden Gate Bridge, we'll weigh it double. (0:41:25)

If there's a mention of the Golden Gate Bridge, we'll weigh it double. (0:41:28)


If there's a box score about the San Francisco Giants, we'll weigh that a little more than one about the Philadelphia Phillies. They do all that, and then they ask Claude, tell me a love story. And naturally, the love story takes place on the Golden Gate Bridge. If you mess with the training data and the weights, you get some of these interesting outcomes. So then the question for you would be, if we can do this, why not go into the AI and not weight it towards the Golden Gate Bridge, but weight it towards love, compassion, benevolence? (0:42:01)

Is that a good idea? I'm not sure what it means in a technical sense. But when I look at human history, I know that quite often in history, people talk about love, or they start with love, and very quickly they get to hate and to war. You know, Christianity, which sees itself, and going back to the Council of Carthage, sees itself as the religion of love. (0:42:32)

It's all about love. It was responsible for more violence than any other religion or ideology in history. And they somehow found a way, out of love, to wage crusades, and build inquisitions, and burn heretics at the stake. It's all out of love. And they really believed it. They also gave us Bach. But, again, the way that I think this often happens is that if you think that you are motivated by love, and if you think that you are trying to build utopia, whoever stands in your way must be demonic. Whoever stands in your way must be evil. (0:43:22)

The more good I am, the more any opposition is, by definition, not just somebody who thinks differently, but evil. Now, again, I'm not sure how to translate that into, you know, the technical side. But, again, one of the lessons from history is that just thinking that because we have these kind of good weights, we've weighted our holy book, we've weighted the code in favor of love. (0:43:51)

Anything that has love, it got extra. Anything that has compassion, it got extra. And somehow from that, you get the inquisition. So, if it happens with humans, I would also be very worried about AI. The basic thing that, again, we learn again and again in history is that we need a self-correcting mechanism. We can't trust that just because something has these good values at its basis, what will come out of it will also necessarily be benevolent and compassionate. (0:44:25)

We saw it again in the modern age with Marxism. Which begins with these wonderful ideas of equality and of compassion and ends with the gulags. And if you're really convinced that you're coming from a good foundation, and also if you think that you're in the process of building utopia, then it gives you an open check to do the most horrible things on the way to utopia. (0:44:55)

And anything that stands in your way is then transformed from political rivalry to kind of demonic possession. (0:45:08)


In some ways, this is one of the scariest things you've said because the people who are building the AI models genuinely do believe they are leading us to utopia. And that's very, very dangerous because this is what gives them the open check. They say, we are building utopia, so anything that we have to sacrifice on the way is worth it. Because when you look at the bottom line, this was the basic argument also of people like Lenin and Stalin. Yes, we have to murder these millions of people, but in the end, when we build utopia, real existing socialism here on earth, it will turn out that these millions who died in the gulag, this was worth it. (0:45:50)

Man, I had never thought of the Khmer Rouge at the same time I thought of Anthropic, but here we are. But hold on, Yuval. In the next couple of weeks, I think, where were you last? You were last in Toronto. You're probably going to move west, and you're probably going to sit down, and you're going to sit down with all these people because they all read your books, right? (0:46:06)

There's this famous image of Jeff Bezos, and he's giving this interview. And he's got like four books behind him on his bookshelf, and like plants, and you know, and three of them are Yuval's, I believe. So, you're going to be talking to these folks, and so what you're telling me is that if, say, Dario Amodei, the guy who runs Anthropic, is there. He's like, you know, we're trying to figure out what to train on, and I've been thinking we should train it on, you know, we should just train it on sort of positive, uplifting, factual stories. (0:46:34)

I really don't think we should, you know, train it on everything, which includes like serial killers' diaries. And you're going to say like, no, no, put the serial killers' diaries in there, because, you know, like, don't just train it on your definition of love. Is that what you're going to argue to them? No, I mean, I would basically say that I don't know how to train AIs. This is not my field. (0:46:52)

But no matter what kind of positive basis you give it, and no matter what positive intentions you have, your number one assumption should be that this thing is not infallible. I'm not infallible. There is a high chance for mistakes, and therefore I need to leave room for correction. The most important thing when you build it is build a mechanism for identifying and correcting the mistakes. (0:47:24)

Again, this is also the advice that I would have given Lenin in 1917. Like, you're going on this huge experiment. You're thinking that you're building a utopia. Start with the assumption that you will make mistakes. Include in the structure, say, of the Soviet Union, mechanisms for identifying and correcting the mistakes of the system, including the mistakes of Lenin and of whoever is going to succeed you. (0:47:49)

Which is the one thing they didn't do. (0:47:52)


Lenin should have read Sapiens. I have an image of him reading it on the train there. All right, let's talk a little bit about cocoons, which is one of the extremely interesting parts of your book. So you make an argument, and you say that there are some fundamental human issues, like the separation of mind and body. (0:48:09)

You tell the story of Martin Luther. It's actually a way of telling the story of Martin Luther I had not read before. But you argue that what could happen in an age of AI is that you end up with some civilizations that have a totally different understanding of what is the mind and what is the body, and therefore totally different judicial systems. So it's one of the most mind-bending parts of the book. I'd like you to explain this to the audience. (0:48:31)

So there is a lot of explaining. I'll try to keep it short. First of all, about the cocoons: it's the changing of the metaphors in this age of information revolution. 30 years ago, the dominant metaphor was the web. The World Wide Web. And the web was supposed to connect everything and everybody. And over the years, the web kind of closed in on us, and now it's the cocoons. (0:49:00)

That every person or every group is enclosed within an information cocoon. And sometimes, you know, your next-door neighbors are in a different cocoon than you, and there is just no way to access from one to the other. And an extreme example of where this can go, like today, so okay, people don't agree, like in the United States, about who won the last elections. (0:49:26)

This can go to a place where people don't agree what a human is, or what a person is, and what are the relations between mind and body. One of the recurrent arguments throughout history that we see in many traditions, in Judaism, in Christianity, in Hinduism, in Buddhism, is what exactly is a human being? And what is the relation between mind and body, or between soul and spirit and body? (0:50:03)

So if we again go back to the age of the Council of Carthage and early Christianity. So Judaism, and the first Christians which came out of Judaism, they viewed humans as embodied entities. The body was central. We are bodies. Biblical Judaism did not talk at all about the soul. The idea that there could be a soul that exists separately from the body, unheard of in Biblical Judaism. It's all about the body. (0:50:37)

And also the first Christians, they focus on the body. The whole idea originally is that God is incarnated in the flesh. The flesh is at the center. And after Jesus is crucified, he's supposed to come back in the flesh. And the kingdom of God is supposed to be a material kingdom of fleshy bodies on the earth. But eventually this changed, under the influence of Platonic philosophies and other influences, and also for practical purposes, because the kingdom of God was nowhere to be seen. (0:51:19)

And the really big problem for early Christians is that they won. (0:51:23)


You know, when you're a persecuted minority, a sect, you can have all these promises. Okay, when we finally gain power, then we'll have the kingdom of God. And then they have one of the biggest disasters that can happen to any religion. They gain power. They become the dominant religion of the Roman Empire. So, okay, so where's the kingdom of God? And there is no kingdom of God. You still have the same wars and corruptions and civil wars and executions and human greed. (0:51:54)

And it's all the same. So they say, okay, the kingdom of God is not on earth. It's on a different level of reality. It's in heaven. And you can't access it in the flesh. After you die, your soul will get to heaven. And many Christians drift towards a very different view of humans, a dualistic view, actually. That my real essence is my soul, which is entrapped in a material, biological, filthy body with all these sexual passions and all these lusts. (0:52:40)

And the hope, the ideal, is that eventually the soul will be released from this earthly, fleshy prison and get to a purely immaterial realm, which is heaven. Where it will exist forever and ever. And throughout the 2,000 years of Christian history, you see this tension. They can never really abandon the body, partly because the Bible is full of it. And again, Christ was incarnated in the body, in the flesh, and rose and came back to life in the flesh. (0:53:15)

And there is very little about this immaterial realm of pure souls in the Bible itself. It all mostly came later. So there is this constant argument that often leads to blows and to wars of religion. And in the early centuries of the church, one of the biggest arguments was about the nature of Jesus Christ himself. With one camp saying that he was entirely human, made of flesh. (0:53:49)

Another camp saying that he was entirely divine and non-material, a spiritual being. And there was a third camp who said that he was non-binary. And the non-binaries won. This was the eventually official doctrine of the church. He is non-binary. He is both and none at the same time. But huge arguments and also violence around these issues. Now, how does all this relate to AI? (0:54:20)

We are going to have another round of this mind-body debate. We are already in the midst of it. What is your identity? What defines your identity? Is your identity defined by your biological body? Or is your identity defined by what you believe about yourself? By your faith? People like Martin Luther, they said, The only thing that matters is what you believe. And we are now living in a kind of new round of this debate. (0:55:00)

With some people saying, if you go online, you can be whatever you want. The biological body sitting in front of the screen should not limit the identities that you can adopt. Other people say, no, no, no, there is a biological body. (0:55:17)


This is the center of your identity. You cannot ignore your biological body. And as everybody knows, this is a very heated debate right now. And this also, potentially, will influence how we treat AIs. AIs have no bodies. But they will increasingly be able to interact with us. And to press our emotional buttons. And even to pretend to have emotions of their own. Now, people who give primacy to the body in the identity of a person will resist treating AIs as persons. (0:56:00)

People who think that identity has little to do with biology will have a much easier time treating AIs as persons. Even though they have no bodies. And different countries can go different ways. So, you know, arguments about human rights today, between, say, United States and China. Think what it means in 50 years, when you perhaps have billions of entities, which are considered persons with rights in one country. (0:56:40)

But another country doesn't recognize them as persons at all, because they have no biological body. And at least in the US, interestingly enough, there is already a completely open legal path to recognizing non-humans devoid of bodies as legal persons with rights. Because corporations, according to US law, are legal persons that have, for instance, freedom of speech, according to the Supreme Court, since at least 2010. Now, at present, this is a legal fiction. (0:57:15)

Because all the decisions of corporations are made by human beings with biological bodies. So, Google, according to US law, is a legal person. But at present, all the decisions of Google have to be made by some human being. But what happens, in a few years maybe, when you start incorporating AIs as legal persons? You can technically incorporate an AI as a corporation, let's call it Google. And the interesting thing about Google is that it doesn't need any human employees to make its decisions for it. (0:57:56)

The AI can do it by itself. So, the AI, for instance, can open a bank account and can start earning money. It can go on TaskRabbit or Mechanical Turk online and offer its services to people or corporations and earn money. And then it has money. And then it can start investing its money in the stock exchange. And if it's a very intelligent AI, it could potentially become the richest person in the US. So, think about a situation when the richest person in the US is not a human being. (0:58:33)

It's an AI. And again, according to US law, as far as I understand, one of the rights reserved for this non-human person is freedom of speech, which manifests itself, among other things, in making political contributions. So, this AI could donate billions of dollars to politicians in exchange for giving even more rights to AI. So, these are the kinds of science fiction scenarios I think that we should be more concerned with than the Great Robot Rebellion. If a senior AI bot starts to hit on a junior AI bot in this corporation, what should the AI HR department do? (0:59:21)

We have a bunch of flesh and blood bodies here who have to go get pizza and beer. (0:59:27)


So, I'm going to ask just one more question about the apocalypse. And then we're going to go to audience questions. So, I was reading your book on the subway. I was wrapping it up before dinner the other night with my son. And this was my favorite paragraph. I'm just going to read it to you and I want you to explain what it means. (0:59:43)

We have now created a non-conscious but very powerful alien intelligence. Agreed. If we mishandle it, AI might extinguish not only the human dominion on Earth, but the light of consciousness itself, turning the universe into a realm of utter darkness. Very cheerful book here, Yuval. This is the end. It is our responsibility to prevent this. So, one way to prevent it would be to prevent AI from extinguishing us. (1:00:13)

Another would be to try to create consciousness and send it out into the universe. Explain to me... The question that this paragraph raises is what is the thing that humans could do that would allow consciousness to exist even if we extinguish ourselves? Oh, I'm not sure what it is. I mean, the problem is we still don't understand consciousness. We don't know how it emerges in us. (1:00:43)

We don't know how it evolved. That's why we also, with regard to AI, you know, this big question of AI consciousness, I tend to be agnostic about it. I don't think that AIs will necessarily develop consciousness, but I'm not sure that they will never develop consciousness. Could be. So, again, this scenario that AIs destroy human civilization, take over and maybe spread from Earth to the rest of the galaxy and to other galaxies, but in the process they never develop consciousness, this is the dark universe scenario. (1:01:24)

Again, there is huge confusion about these two terms because in humans they go together. Intelligence is the ability to attain goals and solve problems on the way to that goal. Consciousness is the ability to feel things like pain and pleasure and love and hate. Humans solve problems relying on our feelings. In us and in other mammals and animals, consciousness and intelligence go together. (1:01:56)

This is why we confuse them. Now, in computers, so far, we have only seen an advance, a huge advance in intelligence without any advance in consciousness as far as we can tell. In some fields, narrow fields, AI is already far more intelligent than us and still it has no consciousness. When it wins a game of chess, it's not happy. When it loses, it's not sad. (1:02:23)

It doesn't feel anything. Now, in many scenarios, as AI becomes more and more intelligent, at some point it also gains consciousness. But there is no reason to think that this is inevitable. There could be different roads leading to super intelligence. Mammals and humans have been traveling along one road for millions and millions of years, a road which involves the development of consciousness. Computers, AIs, might simply be traveling along a different road, a much faster road, which reaches super intelligence without passing through any phase of developing consciousness. (1:03:12)

And if this happens, and if it gets out of our control, this could spell, again, not just the end of the human dominion on earth, but the end of the light of consciousness itself. (1:03:24)


You can have a galactic empire without any feeling, and nothing feels anything. It's just all dark. But why is that? Explain to me why that is so much a worse outcome than just the obliteration of humans, which is a plenty bad outcome. You know, there are other entities, conscious entities, in the world right now. There are other animals. There is no reason to think that, you know, in the history of billions of years of life, sapiens is definitely the last station. (1:04:04)

Whether through biological evolution, or whether through some kind of combination with AI, it's very likely that if we survive, I don't think that humans like us will still be here in a thousand years or ten thousand years. The technology will be so advanced that there will be sentient beings, but they will have completely different bodies and minds, completely different experiences. And this is not necessarily bad. The same way that, you know, the fact that we are here, and the first human species that existed two million years ago are gone, we don't think about it as a tragedy. (1:04:46)

And most people, at least with regard to their children, they hope that their children will be, at least in some way, a bit more evolved than them. But to think that this will be completely wiped out, that there will still be intelligence, but no consciousness at all, I mean, I think that intelligence is really overrated. The really important thing in life is consciousness, it's not intelligence. I mean, intelligence enables us to do different things, but ultimately, it's all about consciousness. (1:05:21)

You know who's still going to be there, Yuval? These guys. Pigeons are going to be there, and they're going to speak thousands of languages. All right, let's go to some audience questions. This is from Martin Gallardo: in your last book, Homo Deus, you began by arguing that humanity has, quote, almost defeated, unquote, hunger, plagues, and war. Do you still stand by that idea, or have you reconsidered your argument? (1:05:48)

I think humanity has the capacity to rein in famine, plague, and war, but whether we actually do it depends on our decisions. And we have been making some terrible decisions over the last 10 years, which is why we are seeing the return of these calamities, of these conditions. We are now on the verge of a third world war, which, if it happens, is also likely to be accompanied by famine and potentially by plague. (1:06:22)

And the key thing to understand is that the decline, for instance, of war in recent generations was not the result of a change in the laws of nature. It was not some divine miracle. It was simply humans making good decisions and building good institutions. And if we start making bad decisions and neglecting the institutions that preserved peace, then war returns. This question is from Hamed Alavi. There is growing concern regarding the impact of AI on employment, particularly for low-skilled jobs and low-income communities. (1:07:01)

Considering the rapid advancements and uncertainties associated with AI, what strategies or measures can be implemented to alleviate these negative effects? (1:07:09)


The safest thing is to slow down. I think that humans and human societies are extremely adaptable, but in order to adapt, you need time. So, you know, if 10% or 20% of people suddenly lose their jobs, this is a huge political crisis. If it's kind of more drawn out over several years, we have time to adapt. And the most important thing is what happens on the global level. (1:07:39)

Because when I look at a country like the United States, for instance, I'm not so worried. I mean, you know, many jobs will disappear in the coming decade or two. Other jobs will emerge. The big question is whether people will be able to retrain themselves to fill the new jobs. And for that, they will need support. And the countries that lead the AI revolution, they will have immense resources to support the retraining of the population and also to support those members of society that will not be able to go through the transition. (1:08:19)

The big problem will be in other countries that might face complete economic collapse and will just not have the resources to retrain the workforce and to adapt to the new AI economy. What do you think of the premise of the question that AI will affect employment, particularly for low-skilled jobs and low-income communities? People argue both sides of that. I'm not sure. There are good reasons to think it will also impact high-income jobs: accountants, lawyers, doctors, coders, engineers. (1:08:55)

There is no reason to think that it will focus specifically on low-income jobs. I think it's probably going to wipe out media CEOs and historians first. This is from Mike Sexton. My favorite thing about your historical writing is that it has implications not just for how we live, but how we should live. If you were a lifestyle guru instead, what message or recommendations would you have for your audience? (1:09:19)

I'm not a lifestyle guru, so I don't know. As most of you know, this man meditates two hours a day, takes a month-long retreat. You are a lifestyle guru. But I don't tell all people to do it. I know that it works for me, but I know it doesn't work for other people. If, for somebody else, it's better to take a hike in the woods, then do that instead. (1:09:43)

The idea that meditation works for me, so it must work for all people, is simply not true. I tried a month-long silent retreat and it didn't work for my employees and my three kids. How do you think AI and disinformation will impact minority rights, especially for those in the LGBTQ+ community? It depends on our decisions. It's not deterministic. It can work both ways. From my own life, I know that the Internet and social media have been wonderful in many ways to the LGBTQ community. (1:10:15)

I met my husband 22 years ago on one of the first social media websites for gay people in Israel. It was really a revolution, because if you think about minorities in history, there are two types of minorities. (1:10:33)


You have concentrated minorities and you have dispersed minorities. Concentrated minorities, you think like Jewish communities. So, if you're born a Jew, let's say in Europe in the Middle Ages, there aren't many Jews around, but you're surrounded by them. You're born into a Jewish family, in a Jewish neighborhood, in a Jewish ghetto, in a Jewish community, you know lots of Jews. So, you have no problem finding other Jews. But I, for instance, I was born in Israel in the 1970s. I grew up in the 80s and 90s in a very homophobic society. (1:11:11)

And I wasn't born in Tel Aviv. I was born in a small suburb of Haifa. And I didn't know anybody who was gay. And this is a dispersed minority. Most gay boys are not born to a gay family in a gay community. Sometimes it happens, but it's very rare. So, the first question you encounter is, how do I find the others? It's a question that Jews didn't have to deal with, but gay people had to deal with throughout history. (1:11:38)

And then the Internet came along, and to a large extent solved the problem. Because suddenly it became very easy, at least much easier than before, to find each other. So, you know, I often criticize, and in the book there is a lot of criticism of information technology and social media, but they also have enormous positive potential. Could extinguish the light of the universe, but good for the gays. (1:12:01)

All right. Thank you. If you could resurrect one other species of human that is now extinct, which one would you bring back and for what purpose? I think it would be very bad for them. I mean, given the way that our species treats its own members just because we have a slightly different language or skin color, I wouldn't like to be a Neanderthal, or a member of any such minority, in a sapiens world. (1:12:32)

Should AI be raised with human parents, this is from Kathleen Landers, along a curated developmental trajectory to create a sense of love and attachment and connection, on the assumption that it's possible to instill a moral code in the way one is instilled in humans? Should we try? AIs are not organic. I mean, they function in completely different ways. (1:12:58)

They are not, I don't know, chimpanzees or Neanderthals that you can think, okay, I'll raise a chimpanzee in a human family so it will become human-like. And this is the key kind of misconception we often have about AI. Like you have all these people asking, when will AI reach human-level intelligence? The answer is never. It's not on the path to human-level intelligence. It's not human, it's not even organic. (1:13:26)

For me, the acronym AI traditionally stood for artificial intelligence. I think it should stand for alien intelligence. Alien not in the sense of coming from outer space, alien in the sense that, yes, this is intelligence, but it makes decisions, it processes information, it invents ideas in a fundamentally alien way. (1:13:52)


Again, it's not organic. One thing, just one very important thing is that organic beings, we work by cycles. Day and night, winter and summer, growth and decay. And sometimes we're active, sometimes we need to rest. One of the problems we encounter more and more in the world is that now, the world is increasingly run by these non-organic intelligences that never need to rest. (1:14:19)

And they don't have cycles. And they pressure us to be the same: instead of them becoming like us, they pressure us to become like them. And if you force an organic being to be on all the time, to be active all the time, eventually it just collapses and dies. You know, think even about something like the financial system. So, traditionally, the financial system is an organic system which sometimes takes breaks. (1:14:50)

Wall Street is... the market is open only, I think, Monday to Friday, 9:30 to 4 o'clock in the afternoon. That's it. Weekend, it's off. Christmas, it's off. And this is good for human beings. But if you give AIs greater control of finance, then the system is always on. And this puts pressure on human financiers and bankers to be always on, which is just humanly impossible. (1:15:15)

And the same thing is happening to journalists, and the same thing is happening to politicians, who consequently collapse. And I often say that the most misunderstood word in the English language, at least in the United States, is the word excited. People overuse it as a good thing. Like, they meet you and they say, oh, I'm so excited to meet you. You publish a book, oh, it is so exciting. (1:15:42)

And they think that excited means happy, but it doesn't. Excited means that your nervous system and your brain are fully on. And if you keep the nervous system of an organic entity on all the time, it leads to collapse. So, the whole system is far too excited. We need to relax. Like, we meet each other, I'm so relaxed to be here with you today. (1:16:09)

And not excited at all. And just think how good it would be if politics was less exciting. Like, what we need, I think, above all in politics is boring politicians. Like, I would say, vote for boring politicians. Alright, well, I am so relaxed to ask the very last question of the night, Yuval. This is from Misha. Do you think that future reliance on artificial intelligence will impact and compete with religion as a source of spirituality? (1:16:48)

Could be, quite likely. I think that many religions have always fantasized about having access to a superhuman intelligence. And suddenly we have it. Think about texts, about holy texts. The idea of the holy text is that this is coming from a non-human intelligence which is superior to us. Now, the problem with holy texts throughout human history until today was that they couldn't really talk back to us. (1:17:24)

Like, there was something in the text we couldn't understand. (1:17:27)


And the text could not explain itself. Like, what is the correct interpretation of this passage in Scripture? So, even though in theory the highest authority in the religion was the holy text, in practice a human institution grew around the holy text. And the real authority was in the hands of the people who interpreted the text. And the same way that you have today this fight in the tech world between people who believe in open source and people who believe that no, it should be closed and just a couple of experts will... It's the same with Catholics and Protestants. Like, the Catholics say no, you have the experts of the church, they should interpret the holy code. (1:18:13)

And you have the Protestants who believe in open source. Anybody can read the code and interpret it by themselves. But what happens when for the first time in history the text can talk back? Whether the text of the traditional holy books... You can train an AI to read every single treatise written by every theologian or bishop or thinker in the 3rd century or in 11th century Byzantium. And that AI will understand the text of Christianity better than any human being. (1:18:48)

Would it be more authoritative than human theologians and bishops? That's one big question. The other question: what happens if you have new religions with texts written by a non-human intelligence? And this could already be happening right now, with AIs maybe disseminating a new holy text online, which creates the kernel for the next big religion. Which will have, again, a text that can talk back. (1:19:27)

Coming from a superhuman, non-human intelligence. So I definitely think that some of the most interesting developments in AI will be in the field of theology and religion. And maybe to end with a recommendation that Google and Microsoft, they should hire a few theologians. Because they will need it. Alright, that is the perfect note to end on. Thank you so much to Politics and Prose. Thank you so much to the theatre. (1:19:59)

And thank you most of all to Yuval Noah Harari. (1:20:02)


(*1)

Harari has also previously had a conversation with Mark Zuckerberg.

Video (1:33:30)

2,886,000 views 2019/04/27

(2024-11-08)