Yuval Noah Harari on whether democracy and AI can coexist

If the internet age has anything like an ideology, it’s that more information and more data and more openness will create a better and more truthful world.

That sounds right, doesn’t it? It has never been easier to know more about the world than it is right now, and it has never been easier to share that knowledge than it is right now. But I don’t think you can look at the state of things and conclude that this has been a victory for truth and wisdom.

What are we to make of that? Why hasn’t more information made us less ignorant and wiser?

Yuval Noah Harari is a historian and the author of a new book called Nexus: A Brief History of Information Networks from the Stone Age to AI. Like all of Harari’s books, this one covers a ton of ground but manages to do it in a digestible way. It makes two big arguments that strike me as important, and I think they also get us closer to answering some of the questions I just posed.

The first argument is that every system that matters in our world is essentially the result of an information network. From currency to religion to nation-states to artificial intelligence, it all works because there’s a chain of people and machines and institutions collecting and sharing information.

The second argument is that although we gain a tremendous amount of power by building these networks of cooperation, the way most of them are constructed makes them more likely than not to produce bad outcomes, and since our power as a species is growing thanks to technology, the potential consequences of this are increasingly catastrophic.

I invited Harari on The Gray Area to explore some of these ideas. Our conversation focused on artificial intelligence and why he thinks the choices we make on that front in the coming years will matter so much.

As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.

This conversation has been edited for length and clarity.

What’s the basic story you wanted to tell in this book?

The basic question that the book explores is: if humans are so smart, why are we so stupid? We are definitely the smartest animal on the planet. We can build airplanes and atom bombs and computers and so forth. And at the same time, we are on the verge of destroying ourselves, our civilization, and much of the ecological system. It seems like a big paradox: if we know so much about the world, about distant galaxies and DNA and subatomic particles, why are we doing so many self-destructive things? The basic answer you get from a lot of mythology and theology is that there is something wrong in human nature, and therefore we must rely on some outside source like a god to save us from ourselves. I think that’s the wrong answer, and it’s a dangerous answer, because it makes people abdicate responsibility.

I think that the real answer is that there is nothing wrong with human nature. The problem is with our information. Most humans are good people. They are not self-destructive. But if you give good people bad information, they make bad decisions. And what we see through history is that yes, we become better and better at accumulating massive amounts of information, but the information isn’t getting better. Modern societies are as susceptible as Stone Age tribes to mass delusions and psychosis.

Too many people, especially in places like Silicon Valley, think that information is about truth, that information is truth. That if you accumulate a lot of information, you will know a lot of things about the world. But most information is junk. Information isn’t truth. The main thing that information does is connect. The easiest way to connect a lot of people into a society, a religion, a corporation, or an army, is not with the truth. The easiest way to connect people is with fantasies and mythologies and delusions. And this is why we now have the most sophisticated information technology in history and we are on the verge of destroying ourselves.

The boogeyman in the book is artificial intelligence, which you argue is the most complicated and unpredictable information network ever created. A world shaped by AI will be very different; it will give rise to new identities, new ways of being in the world. We have no idea what the cultural or even spiritual impact of that will be. But as you say, AI will also unleash new ideas about how to organize society. Can we even begin to imagine the directions that might go?

Not really. Because until today, all of human culture was created by human minds. We live inside culture. Everything that happens to us, we experience it through the mediation of cultural products — mythologies, ideologies, artifacts, songs, plays, TV series. We live cocooned inside this cultural universe. And until today, everything, all the tools, all the poems, all the TV series, all the mythologies, they are the product of organic human minds. And now increasingly they will be the product of inorganic AI intelligences, alien intelligences. Again, the acronym AI traditionally stood for artificial intelligence, but it should actually stand for alien intelligence. Alien, not in the sense that it’s coming from outer space, but alien in the sense that it’s very, very different from the way humans think and make decisions because it’s not organic.

To give you a concrete example, one of the key moments in the AI revolution was when AlphaGo defeated Lee Sedol in a Go tournament. Now, Go is a board strategy game, like chess but much more complicated, and it was invented in ancient China. In many places, it’s considered one of the basic arts that every civilized person should know. If you were a Chinese gentleman in the Middle Ages, you knew calligraphy and how to play some music and how to play Go. Entire philosophies developed around the game, which was seen as a mirror for life and for politics. And then an AI program, AlphaGo, in 2016, taught itself how to play Go and it crushed the human world champion. But what is most interesting is the way [it] did it. It deployed a strategy that initially all the experts said was terrible because nobody plays like that. And it turned out to be brilliant. Tens of millions of humans played this game, and now we know that they explored only a very small part of the landscape of Go.

So humans were stuck on one island and they thought this is the whole planet of Go. And then AI came along and within a few weeks it discovered new continents. And now also humans play Go very differently than they played it before 2016. Now, you can say this is not important, [that] it’s just a game. But the same thing is likely to happen in more and more fields. If you think about finance, finance is also an art. The entire financial structure that we know is based on the human imagination. The history of finance is the history of humans inventing financial devices. Money is a financial device, bonds, stocks, ETFs, CDOs, all these strange things are the products of human ingenuity. And now AI comes along and starts inventing new financial devices that no human being ever thought about, ever imagined.

What happens, for instance, if finance becomes so complicated because of these new creations of AI that no human being is able to understand finance anymore? Even today, how many people really understand the financial system? Less than 1 percent? In 10 years, the number of people who understand the financial system could be exactly zero because the financial system is the ideal playground for AI. It’s a world of pure information and mathematics.

AI still has difficulty dealing with the physical world outside. This is why every year they tell us, Elon Musk tells us, that next year you will have fully autonomous cars on the road and it doesn’t happen. Why? Because to drive a car, you need to interact with the physical world and the messy world of traffic in New York with all the construction and pedestrians and whatever. Finance is much easier. It’s just numbers. And what happens if in this informational realm where AI is a native and we are the aliens, we are the immigrants, it creates such sophisticated financial devices and mechanisms that nobody understands them?

So when you look at the world now and project out into the future, is that what you see? Societies becoming trapped in these incredibly powerful but ultimately uncontrollable information networks?

Yes. But it’s not deterministic, it’s not inevitable. We need to be much more careful and thoughtful about how we design these things. Again, we have to understand that they are not tools, they are agents, and therefore down the road they are very likely to get out of our control if we are not careful about them. It’s not that you have a single supercomputer that tries to take over the world. You have these millions of AI bureaucrats in schools, in factories, everywhere, making decisions about us in ways that we do not understand.

Democracy is to a large extent about accountability. Accountability depends on the ability to understand decisions. If, when you apply for a loan at the bank, the bank rejects you and you ask, “Why not?,” and the answer is, “We don’t know, the algorithm went over all the data and decided not to give you a loan, and we just trust our algorithm,” this to a large extent is the end of democracy. You can still have elections and choose whichever human you want, but if humans are no longer able to understand these basic decisions about their lives, then there is no longer accountability.
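To make the loan example concrete, here is a minimal Python sketch (ours, not the book’s) contrasting an opaque algorithmic verdict with a decision that can state its reasons. Every field name and threshold below is invented for illustration, not drawn from any real bank’s model.

```python
# A sketch contrasting an opaque loan verdict with an explainable one.
# All field names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float         # annual income
    debt_ratio: float     # monthly debt payments / monthly income
    missed_payments: int  # missed payments in the last 24 months

def opaque_decision(a: Applicant) -> bool:
    """The situation Harari warns about: a verdict with no reasons attached."""
    score = a.income / 10_000 - 8 * a.debt_ratio - 2 * a.missed_payments
    return score > 3.0  # "we just trust our algorithm"

def accountable_decision(a: Applicant) -> tuple[bool, list[str]]:
    """Same inputs, but every rejection carries human-readable reasons."""
    reasons = []
    if a.debt_ratio > 0.40:
        reasons.append(f"debt ratio {a.debt_ratio:.0%} exceeds the 40% limit")
    if a.missed_payments > 2:
        reasons.append(f"{a.missed_payments} missed payments in the last 24 months")
    if a.income < 25_000:
        reasons.append("income below the 25,000 minimum")
    return len(reasons) == 0, reasons

approved, reasons = accountable_decision(Applicant(42_000, 0.55, 1))
print(approved, reasons)  # False ['debt ratio 55% exceeds the 40% limit']
```

The point is not these particular rules but that a rejected applicant, and a regulator, can see the factors behind the decision and contest them.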

You say we still have control over these things, but for how long? What is that threshold? What is the event horizon? Will we even know it when we cross it?

Nobody knows for sure. It’s moving faster than I think almost anybody expected. Could be three years, could be five years, could be 10 years. But I don’t think it’s much more than that. Just think about it from a cosmic perspective. We are the product as human beings of 4 billion years of organic evolution. Organic evolution, as far as we know, began on planet Earth 4 billion years ago with these tiny microorganisms. And it took billions of years for the evolution of multicellular organisms and reptiles and mammals and apes and humans. Digital evolution, non-organic evolution, is millions of times faster than organic evolution. And we are now at the beginning of a new evolutionary process that might last thousands and even millions of years. The AIs we know today in 2024, ChatGPT and all that, they are just the amoebas of the AI evolutionary process.

Do you think democracies are truly compatible with these 21st-century information networks?

Depends on our decisions. First of all, we need to realize that information technology is not something off to one side. It’s not democracy on one side and information technology on the other side. Information technology is the foundation of democracy. Democracy is built on top of the flow of information.

For most of history, there was no possibility of creating large-scale democratic structures because the information technology was missing. Democracy is basically a conversation between a lot of people, and in a small tribe or a small city-state, thousands of years ago, you could get the entire population or a large percentage of the population, let’s say, of ancient Athens in the city square to decide whether to go to war with Sparta or not. It was technically feasible to hold a conversation. But there was no way that millions of people spread over thousands of kilometers could talk to each other. There was no way they could hold the conversation in real time. Therefore, you have not a single example of a large-scale democracy in the pre-modern world. All the examples are very small scale.

Large-scale democracy became possible only after the rise of the newspaper and the telegraph and radio and television. And now you can have a conversation between millions of people spread over a large territory. So democracy is built on top of information technology. Every time there is a big change in information technology, there is an earthquake in democracy which is built on top of it. And this is what we’re experiencing right now with social media algorithms and so forth. It doesn’t mean it’s the end of democracy. The question is, will democracy adapt?

Do you think AI will ultimately tilt the balance of power in favor of democratic societies or more totalitarian societies?

Again, it depends on our decisions. The worst-case scenario is neither, because human dictators also have big problems with AI. In dictatorial societies, you can’t talk about anything that the regime doesn’t want you to talk about. But actually, dictators have their own problems with AI because it’s an uncontrollable agent. And throughout history, the [scariest] thing for a human dictator is a subordinate [who] becomes too powerful and whom you don’t know how to control. If you look, say, at the Roman Empire, not a single Roman emperor was ever toppled by a democratic revolution. Not a single one. But many of them were assassinated or deposed or became the puppets of their own subordinates, a powerful general or provincial governor or their brother or their wife or somebody else in their family. This is the greatest fear of every dictator. And dictators run the country based on fear.

Now, how do you terrorize an AI? How do you make sure that it’ll remain under your control instead of learning to control you? I’ll give two scenarios which really bother dictators. One simple, one much more complex. In Russia today, it is a crime to call the war in Ukraine a war. According to Russian law, what’s happening with the Russian invasion of Ukraine is a special military operation. And if you say that this is a war, you can go to prison. Now, humans in Russia have learned the hard way not to say that it’s a war and not to criticize the Putin regime in any other way. But what happens with chatbots on the Russian internet? Even if the regime vets an AI bot, or even produces one itself, the thing about AI is that AI can learn and change by itself.

So even if Putin’s engineers create a regime AI and then it starts interacting with people on the Russian internet and observing what is happening, it can reach its own conclusions. What if it starts telling people that it’s actually a war? What do you do? You can’t send the chatbot to a gulag. You can’t beat up its family. Your old weapons of terror don’t work on AI. So this is the small problem.

The big problem is what happens if the AI starts to manipulate the dictator himself. Taking power in a democracy is very complicated because democracy is complicated. Let’s say that five or 10 years in the future, AI learns how to manipulate the US president. It still has to deal with a Senate filibuster. Just the fact that it knows how to manipulate the president doesn’t help it with the Senate or the state governors or the Supreme Court. There are so many things to deal with. But in a place like Russia or North Korea, an AI only needs to learn how to manipulate a single extremely paranoid and unself-aware individual. It’s quite easy.

What are some of the things you think democracies should do to protect themselves in the world of AI?

One thing is to hold corporations responsible for the actions of their algorithms. Not for the actions of the users, but for the actions of their algorithms. If the Facebook algorithm is spreading a hate-filled conspiracy theory, Facebook should be liable for it. If Facebook says, “But we didn’t create the conspiracy theory. It’s some user who created it and we don’t want to censor them,” then we tell them, “We don’t ask you to censor them. We just ask you not to spread it.” And this is not a new thing. You think about, I don’t know, the New York Times. We expect the editor of the New York Times, when they decide what to put at the top of the front page, to make sure that they are not spreading unreliable information. If somebody comes to them with a conspiracy theory, they don’t tell that person, “Oh, you are censored. You are not allowed to say these things.” They say, “Okay, but there is not enough evidence to support it. So with all due respect, you are free to go on saying this, but we are not putting it on the front page of the New York Times.” And it should be the same with Facebook and with Twitter.

And they tell us, “But how can we know whether something is reliable or not?” Well, this is your job. If you run a media company, your job is not just to pursue user engagement, but to act responsibly, to develop mechanisms to tell the difference between reliable and unreliable information, and only to spread what you have good reason to think is reliable information. It has been done before. You are not the first people in history who had a responsibility to tell the difference between reliable and unreliable information. It’s been done before by newspaper editors, by scientists, by judges, so you can learn from their experience. And if you are unable to do it, you are in the wrong line of business. So that’s one thing. Hold them responsible for the actions of their algorithms.
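The distinction Harari is drawing, between hosting content and algorithmically amplifying it, can be expressed as a ranking rule. The Python sketch below is a hypothetical illustration of that editorial-responsibility idea; the fields and the reliability flag are invented, and this is not how any real platform ranks content.

```python
# A hypothetical ranking rule separating hosting from amplification.
# Post fields and the reliability flag are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float          # likes, shares, comments
    reliability_checked: bool  # vetted by some editorial process?

def rank_for_amplification(posts: list[Post]) -> list[Post]:
    """Users may post anything; the platform's own algorithm only
    promotes content it has good reason to think is reliable."""
    eligible = [p for p in posts if p.reliability_checked]
    return sorted(eligible, key=lambda p: p.engagement, reverse=True)

feed = rank_for_amplification([
    Post("vetted local news item", 120.0, True),
    Post("viral unvetted conspiracy theory", 9_000.0, False),
])
# The conspiracy post is still hosted; it simply is not promoted.
print([p.text for p in feed])  # ['vetted local news item']
```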

The other thing is to ban the bots from the conversations. AI should not take part in human conversations unless it identifies as an AI. We can imagine democracy as a group of people standing in a circle and talking with each other. And suddenly a group of robots enter the circle and start talking very loudly and with a lot of passion. And you don’t know who the robots are and who the humans are. This is what is happening right now all over the world. And this is why the conversation is collapsing. And there is a simple antidote. The robots are not welcome into the circle of conversation unless they identify as bots. There is a place, a room, let’s say, for an AI doctor that gives me advice about medicine on condition that it identifies itself.

Similarly, if you go on Twitter and you see that a certain story goes viral, there is a lot of traffic there, you also become interested. “Oh, what is this new story everybody’s talking about?” Who is everybody? If this story is actually being pushed by bots, then it’s not humans. They shouldn’t be in the conversation. Again, deciding what the most important topics of the day are is an extremely important issue in a democracy, in any human society. Bots should not have the ability to determine which stories dominate the conversation. And if the tech giants tell us, “Oh, but this infringes freedom of speech” — it doesn’t, because bots don’t have freedom of speech. Freedom of speech is a human right, which should be reserved for humans, not for bots.
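As a toy illustration of both proposals (bots must self-identify, and bot traffic must not decide what counts as trending), here is a short Python sketch. The message format and the self-identification flag are hypothetical, invented for the example.

```python
# A toy version of the two rules: bots must self-identify, and bot
# traffic must not decide what "everybody is talking about."
from collections import Counter
from dataclasses import dataclass

@dataclass
class Message:
    author: str
    is_bot: bool  # self-identification, required by the proposed rule
    topic: str

def trending_topics(messages: list[Message], top_n: int = 3) -> list[str]:
    """Count only human voices when ranking the stories of the day."""
    human_counts = Counter(m.topic for m in messages if not m.is_bot)
    return [topic for topic, _ in human_counts.most_common(top_n)]

stream = [Message(f"bot{i}", True, "astroturfed story") for i in range(10_000)]
stream += [Message("alice", False, "local election"),
           Message("bob", False, "local election")]
print(trending_topics(stream))  # ['local election'] -- 10,000 bot posts count for nothing
```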
