Tuesday, June 27, 2017

Homo Deus: A Brief History of Tomorrow, by Yuval Noah Harari, and a review of reviews


Reviewed By Bill Creasy

At a recent meeting of the Human Values Network, we used a review of Yuval Harari's book Homo Deus, "In a robot showdown, humanity may happily surrender" by Matthew Hutson (Washington Post, March 9, 2017), as the starting point for a discussion. Harari's book considers the future of humanity in response to advances in genetic engineering and artificial intelligence. The review raised some interesting as well as irritating issues, so I will point out issues raised by some of the other reviewers along with my own. The reviews make interesting points in their own right, and it's easier for me to be critical by quoting someone else. For example, here is a comment by Adam Gopnik in the New Yorker about the book: "with Harari's move from mostly prehistoric cultural history to modern cultural history, even the most complacent reader becomes uneasy encountering historical and empirical claims so coarse, bizarre, or tendentious." I wouldn't be able to top a comment like that.

To be fair, there are many ideas in the book that are sensible and justifiable. Harari's previous book, Sapiens, was a capsule history of human civilization. This book continues that story with a summary of the past, a consideration of present society, and speculation about the future of humanity. According to Harari, most of human history and prehistory has been a fight against the three problems of famine, pestilence (disease and plague), and warfare. To a large extent, these problems have been solved, at least to the extent that we humans decide to solve them. We know what to do about them, and we are no longer at the mercy of random events that we must attribute to a deity. This is a recent development. Harari believes in progress, and he believes that progress will be driven by science and technology. The principle of evolution is a starting point for his arguments.

So the question is: what will concern people in the future with the same urgency as the struggle against famine, pestilence, and warfare? The book is an effort to ask this question, but its answers are less satisfactory. Part of the problem is the basic difference between describing the past and trying to predict the future, which is obviously harder. Still, the book is witty and well written, with little technical jargon, so it provides food for thought. I'm particularly interested in the way he treats the following four issues.

I. Harari's humanism
The major issue for Humanists (with a big “H”) is Harari's ideas about, or definition of, humanism (with a small “h”). Harari wrote that "humanists worship humans" (Chap. 2). The statement is questionable on its face, since most Humanists would disagree, and it is hard to interpret. It appears that Harari means something new by the term for the purposes of his argument.

According to a review by Adam Gopnik in the New Yorker,
"'Humanism,' for instance, ordinarily signifies, first, the revival of classical learning in the Italian Renaissance... to place a new value on corporeal beauty, antique wisdom, and secular learning.... By 'humanism' Harari means, instead, the doctrine that only our feelings can tell us what to do--that 'we ought to give as much freedom as possible to every individual to experience the world, follow his or her inner voice and express his or her inner truth.'”
A reviewer called Flatiron John on Amazon.com is harder on Harari, saying, 
“He really dislikes humanism: he inaccurately states its tenets, and then repeatedly mocks it (for example, as promoting indulgent consumerism and sex). He claims that humanism is what is giving rise to an emerging cybernetic dystopia.... Harari is abusing the word 'humanism,' as a canvas on which to paint his caricature of modern liberal culture ('liberal' in the classical sense, not in the sense of left-wing politics). He is not really interested in what humanist writers and philosophers have actually said, and does not reference their works. He claims that humanism promotes the belief in a supernatural free will (when in fact, humanists value agency and freedom, but have differing opinions on free will). He claims that humanism believes in an indivisible self/soul (when in fact, psychologists since Freud have a different understanding). And he claims that humanism believes that individuals always know best about their own needs (when in fact, many have emphasized the importance of education in our development--he does not even reference John Dewey).”
Harari also wrote this about humanism:
"In fact, humanism shared the fate of every successful religion, such as Christianity and Buddhism. As it spread and evolved, it fragmented into several conflicting sects. All humanist sects believe that human experience is the supreme source of authority and meaning, yet they interpret human experience in different ways."
According to Harari, the three rival branches of humanism are orthodox humanism, socialist humanism, and evolutionary humanism. Then, even more strangely, he reinterprets the history of the 20th century as a conflict among these three branches: orthodox humanism represents liberal democracy, socialist humanism is Soviet communism, and evolutionary humanism is Nazism and Fascism. It goes without saying that no modern Humanist (with a capital H) would claim that communism or Nazism is part of humanist thought. Yet Harari's definition is broad enough to encompass them.

The reviewer Rod Dreher quoted this passage in “Three Rival Humanisms” in The American Conservative (March 28, 2017), and the long discussion in the comments following his article includes thoughtful conservative and Christian humanist points of view.

Harari avoids specialized jargon, but instead he redefines common terms to mean things that most people wouldn't accept. He uses the term “humanism” differently than anyone in the Humanist movement would use it; no Humanist would say that humanism has “sects.” In some ways, his definition seems a little condescending, as if he is trying to distance himself from humanism. For example, he writes seriously about whether animals have real emotions, but in the chapter on humanism, he treats only human “feelings” as the measure of importance and meaning. Rationality doesn't seem to have much place in his humanism. Instead, he treats humanism as a kind of cultural trend that uses people's happiness or suffering as the measure of good or bad actions. He has some interesting ideas along the way, and he offers an unusual perspective. But he paints in broad, general strokes and avoids specifics.

It's not easy to know how to interpret Harari's ideas in terms of the Humanist movement. But perhaps the lesson is simply this: humanism is an important term and an old concept, and if we want to keep it as a designation of the Humanist movement, we have to be careful about its meaning and usage. We have to criticize people like Harari who try to make it mean something else.
 
II. Religion
Harari has some clever words about religion. Again, he uses the term “religion” to mean something that most religious people wouldn't accept, so broadly that humanism itself can be classified as a religion. According to Harari,
“religion is any all-encompassing story that confers superhuman legitimacy on human laws, norms, and values. It legitimises human social structures by arguing that they reflect superhuman laws.” (Chap. 5).
Harari redefines religion as a general worldview, but he eliminates many common features of religion, like ritual and church organization. Conferring legitimacy may be one aspect of religion, but his definition ignores many other aspects that people associate with religion. In addition, it implies that there is something about humanism that involves superhuman legitimacy.

There are indications that Harari views religion in a flippant or condescending way. Nate Hopper quotes Harari in his Time magazine interview:
“How might Homo sapiens find a sense of self-worth if technology can do their work better than they? One answer from experts is that computer games will fill the void. And they sound scary and dystopian until you realize that actually for thousands of years humans have been playing virtual reality games. Up until now, we simply called them religions.” 
So his thoughts on religion have to be interpreted cautiously, with an effort to understand whether he is talking about actual religion or his own definition of it. That makes it particularly easy to take his quotes out of context.

III. Future human goals
In the last third of the book, Harari describes some future scenarios for goals that humans may pursue. In general, he suggests that humans will seek “immortality, bliss, and divinity.” These are absolutist extensions of the fight against famine, pestilence, and warfare: immortality continues the progress against death, bliss is the search for ideal happiness and the satisfaction of our material needs, and divinity represents power and control over nature. Humans may never fully achieve these goals, but that won't keep people from trying or from making progress.

The book is weaker when discussing the technologies that are supposed to drive this progress. These involve extrapolating current technologies toward speculative or science-fiction ideas: genetic engineering to produce humans with biologically superior physical or mental abilities, and artificial intelligence to produce evolving computers that could surpass human intelligence. Neither is a particularly novel idea, and the book contributes little to either understanding the technologies or anticipating the ethical dilemmas. For example, Ray Kurzweil and Gregory Paul, among others, have advocated for the development of artificial intelligence that may surpass human intelligence. Harari refers to these superior humans as the “Homo Deus” of the title, as if they become literal gods, even if only analogous to the Greek pantheon. But his use of the word “god” is vague and misleading, in keeping with his definitions of humanism and religion. He proposes that “techno-humanism” will be a new religion, with humans still at the center of philosophy and values, but with technologically improved humans replacing the current variety. His vision of the goal of future humans sounds like a theistic pursuit of bliss, immortality, and divinity rather than practical progress toward those ideals with real technology.

Ashutosh S. Jogalekar wrote in a customer review on Amazon.com:
 “The problem is that Mr. Harari is an anthropologist and social scientist, not an engineer, computer scientist or biologist, and many of the questions of AI are firmly grounded in engineering and software algorithms. There are mountains of literature written about machine learning and AI and especially their technical strengths and limitations, but Mr. Harari makes few efforts to follow them or to explicate their central arguments. Unfortunately there is a lot of hype these days about AI, and Mr. Harari dwells on some of the fanciful hype without grounding us in reality. In short, his take on AI is slim on details, and he makes sweeping and often one-sided arguments while largely skirting clear of the raw facts. The same goes for his treatment for biology. He mentions gene editing several times, and there is no doubt that this technology is going to make some significant inroads into our lives, but what is missing is a realistic discussion of what biotechnology can or cannot do. It is one thing to mention brain-machine interfaces that would allow our brains to access supercomputer-like speeds in an offhand manner; it's another to actually discuss to what extent this would be feasible and what the best science of our day has to say about it.”
The other possible future religion that Harari proposes is “dataism,” the idea that “the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing” (Chap. 11). This is an interesting idea, but it is odd to think that the quantity of data is what matters, rather than the way it is processed into useful information. Consider a website like Wikipedia, which is notable not for the quantity of its information (though it is large), but for the fact that the information is well organized, well written, and comprehensive. I recently heard a National Capital Area Skeptics lecture by Susan Gerbic, who organizes a group, Guerrilla Skepticism on Wikipedia, dedicated to increasing the skeptical content of Wikipedia entries. Adding to Wikipedia is certainly a calling, or perhaps even an obsession, and it takes a librarian's interest in cataloging information so it's accessible. But it doesn't seem like a religion. (It may qualify as one under Harari's definition, but it's hard to tell.) So it isn't clear why “dataism” would appeal to anyone in any sense of the term religion. Why would it satisfy a human need for meaning in life?

IV. Group Evolution 
Group evolution can contribute to answering the questions that Harari addresses. Groups and organizations are important for human evolution, and they will continue to be important in the future. Large groups may become more important than individuals in determining the direction of future society. In some ways, individuals may have to tolerate inconveniences in order to keep society working well.

Harari mentions the importance of groups and cooperation among humans in producing society. In fact, he discusses an interesting classification of information he calls “intersubjective,” in addition to objective and subjective information. Intersubjective information “depends on communication among many humans rather than on the beliefs and feelings of individual humans” (Chap. 3). For example, money, language, and law are intersubjective, since they don't exist unless many people use them. Harari makes the mistake of referring to these items as “fictions” because they aren't objectively real in the way physical objects are.

But this kind of information is exactly what evolves in group evolution, so it is far from fictional. In fact, it is important to understand how this information is stored, passed along, and selected for. We probably need to know a lot more about that.

David Runciman says in his review in The Guardian:
“Harari thinks the modern belief that individuals are in charge of their fate was never much more than a leap of faith. Real power always resided with networks. Individual human beings are relatively powerless creatures, no match for lions or bears. It's what they can do as groups that has enabled them to take over the planet. These groupings - corporations, religions, states - are now part of a vast network of interconnected information flows.”
But the importance of groups doesn't imply that individual humans are unimportant, that they don't matter, or that they are powerless to influence the future. Individuals are important, but not in the way people commonly think. We aren't cowboys who must fend for ourselves and our families. We are stuck with each other, whether we like it or not. We have to think of the best ways to get along, and there's nothing fictional about that.

Individuals come up with new technologies and new kinds of organization. More important, new inventions only matter because a large number of people adopt them and find them useful. For example, the cell phone was developed and improved by a large number of people, and it influences current culture because almost everyone has one. This doesn't indicate that individuals are powerless; it shows that they have similar needs and adopted a new technology that helps meet them. It also shows that humans pass along the “intersubjective” information that drives group evolution.

Group evolution indicates that a selection process will operate on many kinds of new technology. There may be new biological modifications that can be made to humans, as Harari suggests. The modifications with the most impact will be those accepted by many people, perhaps the ones that lengthen lifespan. We can also imagine genetic engineering that turns people blue or gives them wings, but if such changes are not widely accepted, or if they don't solve a social problem, they won't make much difference. Some people may make the change out of vanity, or because they have money to spend on a luxury, but those with the alterations will be a small minority. This is the kind of selection criterion that group evolution applies to any plan for the future, and Harari should have tried to take it into account.

Artificial intelligence will likely make a significant difference once the right kinds of algorithms are developed. Again, the ones that make a difference will be the ones that solve a problem for the group. For example, modern economists try to understand a country's economy and how the distribution of money affects it, making rules and generalizations that simplify the economy enough to analyze it. A sufficiently powerful artificial intelligence would not need to simplify; it could simply keep track of all transactions by brute data processing. If a person loses a job because the job is obsolete, an artificial intelligence could identify that person, find a related new job, and make sure the person is trained for it. Does this mean the person is not in control? Not really, since they can refuse the new job. But artificial intelligence will solve a problem for them, if they want it solved. That will be progress.

It is likely that artificial intelligence programs will start to evolve by themselves, since they will be too complex for human programmers. The real problem is setting up an artificial intelligence so that it evolves toward socially useful purposes. An AI shouldn't be designed to evolve toward finding better ways to kill people; that would be a mistake. It might succeed too well. This isn't a small problem. The Department of Defense has a lot of money to spend on the problem of targeting “bad guys.” But if an AI gets smart enough, will it notice that it can be really difficult to tell the difference between good guys and bad guys? Will it decide that the bad guys are the ones asking it to kill people? Or will it just notice that there are too many human beings to support comfortably on the planet, and that things would be better with fewer people? From our perspective, these might be unfortunate conclusions for it to arrive at, if it has the power to do something about them.

A lot of the current generation of internet technology is designed to keep people online and using the technology. Facebook tries to keep people on the site, because that is how it makes money from advertising. Television programs, from the originals in the 1950s to the current generation, are usually paid for by advertising, so networks are paid for the “eyeballs” of people watching, and the programs are designed to keep people watching. Does this solve a real social problem?

An AI may not need to be designed to act like a human being. We have enough human beings; why build more? But if robots can be built to perform jobs that humans are not very good at, they will probably be built and used. The problem is then finding things for the humans to do to earn a living. This isn't an impossible problem, as long as the robots are producing all the things that humans need. It is just a question of distributing the things, and then telling the humans that they can do whatever they want. Would that be so bad?

V. Conclusion 
It is difficult to say that Harari's book is not good, since it contains a lot of good information, makes some reasonable assumptions about the future, and offers many interesting ideas. But it has limitations. It defines terms like humanism and religion in ways that aren't accurate and could lead to misunderstandings. “Techno-humanism” and “dataism” are odd notions of what humans need to make life matter. And because the book doesn't consider group evolution, it lacks an important criterion for evaluating future changes. Still, it provides food for thought, and that is not a bad thing.
 
