The Board of Trustees of the Peace Prize of the German Book Trade has chosen the American computer scientist, musician and writer Jaron Lanier to be the recipient of this year's Peace Prize. The award ceremony took place on Sunday, October 12, 2014, in the Church of St. Paul in Frankfurt am Main, Germany. The laudatory speech was delivered by Martin Schulz.
Statement of the Jury
The German Publishers and Booksellers Association is delighted to award its 2014 Peace Prize to the American computer scientist, musician and writer Jaron Lanier. In honoring Lanier, the association and its members have chosen to pay tribute to a true pioneer in the digital world – one who has always recognized the inherent risks contained in this new world with regard to each individual's right to shape his or her own life.
Throughout his career, Lanier has consistently and effectively spotlighted the threats our open society faces when deprived of the power to control its own progress and development. While acknowledging the gains in diversity and freedom that accompany the growth of the digital world, Lanier has nevertheless always pointed to the dangers involved when human beings are reduced to digital categories. His most recent work "Who Owns the Future?" is an appeal for vigilance in the face of oppression, abuse and surveillance – a call to equip the digital universe with structures that respect the rights of individuals while simultaneously fostering democratic participation.
Lanier's concept of assigning a sustainable and economic value to the creative contributions made by each individual on the Internet reflects his commitment to the enshrinement of the human values that form the very basis of peaceful coexistence – in the real and digital worlds alike.
The Award Ceremony
Heinrich Riethmüller - Greeting
President of the German Publishers and Booksellers Association
The sick butterfly will soon recall the sea,
this stone with the inscription of the fly has given itself into my hand.
Instead of home, I hold the transformations of the world.
It was fifty years ago that the poet Nelly Sachs stood on this very site and read her poetry aloud. In those days, the discussion was still determined by Adorno’s dictum that it would be barbaric to write poetry after Auschwitz. Whether consciously or subconsciously—this endowed Nelly Sachs’s speech with highly topical and lasting relevance.
Recipients of the Peace Prize bear witness to their times. They advocate peace and freedom, they conceive ideas for a peaceful world, they overcome obstacles and break alleged taboos:
There are the recipients of the early 1950s—Max Tau, Albert Schweitzer, and Martin Buber—all of whom hold a mirror to their German heritage; one that was to break the mould of the country’s cultural and social isolation.
There are scientists, political scientists and philosophers such as Carl Friedrich von Weizsäcker, Alexander Mitscherlich and Ernst Bloch, whose utopias turn into hope, who reveal that which is inhumane, who draw up blueprints for a peaceful future.
There are the victims and witnesses of Germany’s ghastly and violent past, such as Janusz Korczak, Fritz Stern, and Saul Friedländer—whom we meet with humility—and who, in spite of all the inflicted suffering, continue to act in the spirit of reconciliation.
There are the children’s author Astrid Lindgren, the musician Yehudi Menuhin, and Siegfried Lenz, who passed away this week and who was, for me, one of the most important writers.
All of them, alongside the recipients in most recent years—David Grossman, Liao Yiwu, Boualem Sansal, and Svetlana Alexiyevitch, whose books all deal with the subject of war and oppression—have left a decisive imprint upon the history of the Peace Prize, which is also the history of Germany and its neighbours.
This year, the Börsenverein is awarding the Peace Prize to the American musician, artist and computer scientist Jaron Lanier, one of the fiercest critics of digital capitalism. This choice stands in stark contrast to prior decisions, and yet it is also entirely consistent with them.
Jaron Lanier shows clearly that we are in one of modern mankind’s decisive epochs. We all make use of the seemingly free and all-embracing wealth of information so as to communicate and further our knowledge. For the first time in human history, it takes us only seconds to arrive at conclusions and satisfy our material and virtual desires with just one click. For oppressed individuals and peoples, these new communication platforms hold the promise of liberation – the chance to start revolutions and topple dictators. Democratic societies, meanwhile, profit from informational diversity and new decision-making processes.
This brave new world makes us self-sufficient and independent; it represents progress, security and prosperity. But as we surf the Internet, place orders online, keep up with old friends and voice our opinions, we leave traces and data to be collected and analysed by international corporations. We make ourselves dependent upon the global monopolists who evade not just federal and social scrutiny but, indeed, the very conversation about these matters.
Those who conceived of this new world succumb to the fantasy—indeed, the certainty—that human beings, in their words and deeds, are predictable and calculable, that all the world’s problems can be solved and that there are models for everything. Servers, databases and giant machines rule the world; they are superior to people in many ways. He who owns the greatest data storage, claims Jaron Lanier, is the most powerful. And he who is the most powerful determines the way of the world.
Are human beings in the process of doing away with themselves and are we about to give up on the values we had hitherto deemed important? What is the price that we will have to pay for being so insouciant, so complacent? Will we be able to tame the spirits we’ve summoned? Will artificial intelligence develop in the service of mankind or will we soon live in the sort of society that even Aldous Huxley or George Orwell could not have imagined?
At the heart of the debate sparked by Jaron Lanier some years ago lies the question as to whether humanity will be able to uphold the individuality — and thus the personal freedom—of each single person, without foregoing the advantages of the digital world, or whether we will enter into an ever-increasing dependency on machines, making man himself into an algorithm, into a mathematical model. Jaron Lanier does not content himself with the role of analyst and admonisher, but he develops strategies that may enable us to surmount the danger of becoming altogether dependent on technology and machines.
Everyone who looks into these matters and gives thought to the dangers of our new world runs the risk of being labelled a cultural pessimist, a luddite. There is little pleasure in questioning modernity; complacency’s sweet poison seeps through us. But critical engagement is not to be confused with pessimism, nor with pathological resistance towards progress.
Does the diversity promised by digital life not turn into uniformity if we consent to being reduced to “gadgets” and if we allow our virtual selves to be based on the trail of data we have left online? Are we not relinquishing something that is fundamental to our humanity and our capacity for growth? Do we not, in fact, give up a part of our personality—steeped as it is in our imagination, our capability of abstraction, our creativity, and, not least, in our inadequacies?
It is already possible today to analyse someone’s reading habits on the basis of data stored in e-book readers. It is already possible today to coerce authors to write books based on these insights. It would also be possible today to offer readers a book that meets exactly the wishes and expectations wrested from their e-book readers. But isn’t this alleged gourmet menu in fact just fast food, fully seasoned yet mind-numbingly boring?
Do we want authors to pander to an audience, or do we want artists to give voice, in writing, to their true concerns?
Ein Mensch ist ein Mensch ist ein Mensch – a human being is a human being is a human being: all human beings are full of errors and shortcomings. This is why there is personal growth—in which technology should aid but not command us. Only then can the advantages inherent to it become ours.
When our world is described in terms that are all too beautiful, when those in power—and in our day and age they are often no longer politicians—potently suggest that we do not live in fear but in the belief in progress, then a poem by one of Nelly Sachs’s contemporaries comes to mind. Günter Eich wrote his famous “Wacht Auf” ("Wake Up") over 60 years ago. It reads:
No, sleep not, while the masters of the world get to business!
Be wary of the might to which they claim to be entitled on your behalf.
Take caution that your hearts be not empty when their emptiness is presumed!
Take to folly, sing songs not expected from your mouths!
Be nuisances, be sand, not oil in the wheels of the world.
Jaron Lanier once said in an interview: “People are not computers. People have a mystical quality. If you lose faith in humanity, you lose faith in a society that acts in the service of humanity.” It is his struggle for a society in the service of humanity that connects him with other Peace Prize recipients. In this sense, Lanier’s views on digital humanism are reminiscent of Martin Buber, who considered a real dialogue between two people to be feasible only if neither one sees the other as an object—or, in the parlance of our times—as a gadget.
Ladies and gentlemen, if we succeed in not being blinded by the self-proclaimed designers of our world, if we question critically what modernity has in store for us, if we accept ideas that are controversial and that seek to sustain our individuality, then I am not worried about our future. But for such a future we need convincing and optimistic thinkers. Thinkers such as Jaron Lanier.
Translated into English by The Hagedorn Group.
This text is protected by copyright. Reprinting or any other form of reproduction, in whole or in part, that is not permitted under copyright law will be prosecuted. Please direct inquiries regarding the use of the speeches, or excerpts thereof, to: email@example.com.
Martin Schulz - Laudation for Jaron Lanier
We find ourselves on the threshold of the digital age—at the turn of an era that leaves in its wake the long 19th and short 20th centuries; within a process that calls into question our social relationships, the way we run our economies, our constitutive disposition, our values, indeed our culture. We find ourselves in a process that presents societies all over the globe with challenges of an enormity not seen since the Industrial Revolution so powerfully changed the face of our world.
A wealth of articles and books dedicated to the analysis and evaluation of the process of digitisation has appeared in recent months. They examine the opportunities spawned by the technological revolution: increased transparency and the opportunity to participate in decision-making processes, easier access to knowledge, more effective medicine, better services, improved efficiency and much more. But they also deal with the risks inherent to these changes.
Hardly anyone has pointed out such dangers and risks more trenchantly than Jaron Lanier. His criticism, however, is not culturally pessimistic, nor is it luddite; instead, Lanier seeks to caution his readers from the vantage point of a knowledgeable oppositionist who still remains fundamentally loyal to the cause. This is what endows his convictions—which he has presented in books, articles, speeches and interviews—with such an illuminating quality. And this is exactly why he will be awarded the Peace Prize of the German Publishers and Booksellers Association today.
Much has been written about him. He’s been labelled a pioneer of the internet, a vanguard, a visionary. Here is an example: “Jaron Lanier is one of America’s cyber gurus and a protagonist of a new intellectual scene of which Europe doesn’t even seem to be aware, even though it should be, so as to wake up from the haze of the last century.” An intelligent sentence, articulated by a man who was himself a great visionary and humanist and whose tragic and premature death we mourned here at the Paulskirche a few weeks ago: Frank Schirrmacher.
Schirrmacher wrote this sentence as a way to shake Europe to its senses, and he wrote it—and here I would love to make you guess when—in the year 2000. That’s 14 years ago. So it did take a while before the debate on the opportunities and risks of the internet—which has been raging in California for decades—reached the so-called “Old World”. Now, we also engage in these controversial discussions, and we’re able to draw on many things that have already been thought and said; indeed, on much of what Jaron Lanier himself has thought and said.
Today’s recipient is an impressive polymath. He is a writer, musician, scientist, entrepreneur, teacher, activist and inventor. His biography is a dazzling example of a patchwork identity that may appear postmodern at first glance, but is more reminiscent of Humboldt’s educational ideal upon closer scrutiny. The multitude of his talents links Lanier with an ancient, centuries-old conception of being a scholar; when scholars were philosophers, architects, painters and doctors in one and did not shy away from entering into debates of socio-political relevance.
Lanier’s biography has a European stamp on it. While he was born in New York in 1960, his mother grew up in Europe and his father, too, has European roots. His family suffered the persecution of the Jews; his mother survived the worst rupture in civilisation in the history of mankind, the Holocaust; she survived the war and managed to flee, making her way to a new, better world across the Atlantic.
Today, their son vehemently defends the individuality of each person in the digital age. This places him in a great humanist tradition. Lanier cautions us not to put computers and networks above all that is human, not to belittle man and, as he writes, “not to lower our standards so as to make information technology look better”.
Lanier calls on us—as free, self-determined, motivated, and creative individuals—to work towards a better future. And it is thus that this American with European roots leads us back to our own tradition of thought and reminds of our best capabilities. He reminds us that man should never be degraded to an object; not for any idea or ideology, regardless of its aim.
Lanier provides a poignant description of how for some in Silicon Valley the belief in a smart internet world has become an ideology, if not a new religion. Google founder Larry Page once claimed that “human programming”, to use his own words, would require fewer bytes than a simple operating system for computers. But if people were only to become the sum of their data—i.e. a collection of their biodata plus information on all the places they’ve ever been, everything they’ve ever read, heard or said—then we’d be able to save this information-person in his entirety, as a file. According to this logic, our digital twin might even attain immortality. To quote Lanier: “But if you want to make the transition from the old religion, where you hope God will give you an afterlife, to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive.” He then concludes: “Man does not occupy a particularly special position (within such a world).”
Many who adhere to this belief ascribe to the global network a sort of higher consciousness—one that is superior to the consciousness of man. They believe that the digital consciousness is more reasonable than we are and knows much better what’s good for us. On the most basic level this just means that word processors—whether we want to or not—end up correcting our writing; but soon, our fridges will fill up themselves or we will be sent goods that we didn’t even know we wanted to buy. And not long after that, some algorithm will determine that we need to pay higher health insurance premiums or deserve to be cast out socially because we’ve refused to have our bodies hooked up to cables, because we don’t exercise daily and travel to the wrong countries on vacation.
According to this logic, it’s a good thing that the internet should take so many decisions off our backs, since it looks out for us around the clock, taking care even of our social relations. The internet turns into a doting mother, an alert and strict father. Welcome to this brave new world.
To avoid any misunderstandings: I’m not against digital technologies per se. On the contrary: whenever an innovation improves our lives, I’m all for it. But the belief that we are merely the sum of our data reduces and degrades man; what is more, it does not recognise who the actual creator of culture is. For it is the writers, musicians, filmmakers, engineers, programmers, journalists and other creatives themselves who come up with the content that fills up the internet. In short, this content is created by actual human beings, and it is these people who are the first to lend meaning to that which they have created. This is why it is also unacceptable that only a very few make billions off these cultural achievements, whilst many creators emerge empty-handed. An act of creation should be honoured and we should not succumb to the illusion that anything on the internet is in fact free.
Because, in the end, we will all have to pay. Lanier writes: “If music is free, then your cell phone bill goes up, however crazy that sounds.” He continues: “There is certainly nothing wrong with that, but since the web is killing the old media, we face a situation in which culture is effectively eating its own seed stock.” Cultural achievement, ladies and gentlemen, should and must have a value and a price. And as a trained bookseller I would like to add: some of the arrogant attacks on the German fixed retail price of books truly irritate me.
We end up paying for everything that is seemingly free on the internet, not least with our data, which global internet giants with gigantic servers suck up so greedily. That’s not nothing. Because data will be one of the most important future resources and digital standards will become a decisive infrastructure. It is for this reason that our data does not belong in the hands of the few. For even in light of all the great opportunities provided by Big Data, this data collection mania makes it all the easier to monitor and control us. Knowledge is power and whoever knows what we buy, where we are, who we’re friends with and the stuff of our innermost dreams and desires knows too much. These are things that we entrust only to those closest to us, to our most intimate friends. Which is why I must insist that the collection and control of all our data is inimical to the very concept of a free and self-determined society. To be quite clear: not everything that is technically possible should also be permissible. Not everything that is more efficient is also better. The morality of feasibility is not compatible with our ethical standard.
Please allow me to make one remark on a recent debate, because it applies to many writers and also to the book market as a whole: diversity is a value in itself! If individual internet platforms—which, in the analogue world, were known to us as department stores, shops and markets—attain the kind of size that enables them to determine not just the price of a product but an artist’s income, bestseller lists, the format of publications, the date of shipment, etc., then there is no diversity but just one all-consuming monopoly. The golden calf of efficiency and of ever-sinking prices invalidates the principle of plurality that lies at the heart of our economy. That is not acceptable and that is why I share the concerns voiced by so many writers against these power-hungry monopolists.
The point of an argument as basic and wide-ranging as Lanier’s is not to make minor readjustments. What Lanier voices is a fundamental critique that precludes the possibility of fixing certain ill-developed aspects—either presently existing or predictable for the future—through technology or engineering. There is no easy way out, no app to download quickly; even if some people may believe this is the case.
But beware: Lanier doesn’t buy into a simple scheme of good and evil. He presents his critique as someone who helped initiate this new world, who eagerly pushed it forward, who himself has acted as a programmer. Lanier is a digital native; in his critique, he also criticises his own work and points towards instances where something might have gone awry. Anyone who thinks that Lanier turned his back on the digital world in frustration is quite wrong. No, he is at the forefront of the debate, pursuing the highly moral agenda of making things better than they are today. In his book “Who Owns the Future?” Lanier writes, “My hope for the future is that it will be more radically wonderful, and unendingly so, than we can now imagine but that it will also unfold in a lucid enough way that people can learn lessons and be wilful.” He continues: “It was about making the world more creative, more expressive, more sensitive, and more interesting. But it wasn’t about escaping the world.” So if we want to take our destiny into our own hands and make the internet a place from which the many and not the few benefit, then we will have to make an effort.
We will need rules for that digital world. Rules that represent our values. We will need a new charter of fundamental digital rights to set out what is permissible and what is forbidden in light of new technological possibilities. Because there have to be restrictions on what companies or states are allowed to know about people. In the same way in which we debate subjects such as human cloning, euthanasia, war and peace and what goes into our food, we will need to talk about the digital world we would like to inhabit in the 21st century. We will need to talk about what is important to us, as a society. The party is over, and now we’ve got to clean up; after all the upheaval, we will have to end this state of virtual disorderliness.
But what could a different, more humane internet look like? Let’s attempt a change of perspective first: it’s not about what “the internet” should look like. The internet could never be human, even if engineers become better and better at simulating a kind of “internet consciousness”. The internet is not a subject. At best, it is an infrastructure serving many people that has the potential to create a lot of good. It can even be fun.
We will have to do away with the idea that there is a clear separation between the analogue and digital worlds. That separation ceased to exist a long time ago. We only have this one world and we will have to figure out how to live together in peace. Almost all questions on the subject of internet politics concern the same socio-political questions that we know from the days of the analogue world. This is why not just internet politicians and activists need to stand up, but also those who are not digital natives; they, too, have a right to participate in the discussion. Because if we were to leave all these questions to the tech experts, the programmers and the nerds, we’d end up living in a self-referential system ruled by engineers and mathematicians; a government of experts in the Platonic sense. But that wouldn’t be a democracy in our sense.
It is clear that there isn’t one right way to set up such rules. But perhaps we Europeans will be able to provide some suggestions and experiences to help with the next push of innovation on the internet. At the moment, the standards enforced by US corporations as well as an increasing number of Asian companies dominate. But it doesn’t have to be that way. Because we could think of different standards, a few of which I will now list:
- A standard that honours creative output and people’s work and does not exploit this as a free and readily available resource.
- A standard that respects privacy. Not every single one of my search terms needs to be stored.
- A standard that guarantees data security. I don’t want anybody to read my letters, to know what music I listen to, which books I read, and who scans my vacation shots on the lookout for something usable, creating a profile on the basis of that information.
- A standard that calls the internet economy’s “winner takes all” ideology into question. Indeed, a constellation of power that is too vast would be at odds with competition and plurality.
In order to establish these new standards we might even use the instruments that have served us well in the past. Juli Zeh, for example, encourages us to develop seals of quality that already guide our decisions in shopping for food, in car safety and in other areas. We could create institutions dedicated to technology impact assessment and nominate an ethics panel to accompany new technical innovations. We could modernise our copyright, data protection, consumer protection and antitrust laws and give them legal security in the context of a global trade law. There are many things we could do and we need to start right now.
I would like to live in a democratic society. In a digital, democratic society. For it is a mistake to believe that we can resist digitisation; social integration and participation in our society are increasingly dependent on our ability to move confidently through the internet. And because this is the case we can’t simply chalk up responsibility to the individual and claim that “you don’t have to join in, just stay offline!” No, our school curriculums need to react to these changes so that our children can reap the benefits of the digital age and are not released into the world without protection.
However, even in this world the individual needs to be able to opt out—even if the majority doesn’t. Even if many give up their data enthusiastically, even if their bodies are hooked up to all kinds of cables, even if they voluntarily save all their biodata in cloud storage—even so, those in the minority should not suffer if they refuse. To put it plainly, the protection of minorities applies equally in the digital and analogue realm!
Today, Jaron Lanier is being awarded an important Peace Prize. He is the rightful winner of this Prize and also acts as a representative for all those who are part of the seminal debate on our digital future. The German Publishers and Booksellers Association would like to invite more people to take part in this debate, regardless of whether they are experts or not. Because this process of negotiation—the question as to which digital vision will dominate the 21st century—is also a question of peace. It concerns us all. It will determine our future freedom and justice, and whether we will live in a world of solidarity, pluralism and creativity.
After this year’s recipient was announced in June, Frank Schirrmacher said that Jaron Lanier was to receive an “eminently political prize”. As usual, he was right, but I would like to add: thank you, German Publishers and Booksellers Association, for this courageous and eminently political decision. It is good that you made this decision. It is good that Jaron Lanier is the recipient of today’s prize.
I would like to congratulate this year's recipient. I would like to congratulate you, dear Jaron Lanier.
Translated into English by The Hagedorn Group.
Jaron Lanier - Acceptance speech
This storied award cannot be given just to me. I can only accept it on behalf of the global community of digital activists and idealists, even though many of us disagree profoundly with each other.
I also accept this award in honor of the life of the late Frank Schirrmacher, who was a fountain of light in our times. He will be terribly missed.
Even though I’d like to give a talk that is mostly positive and inspiring, in order to be a realist I must sometimes be a little dark. When one trusts in realism enough, one can burn through the indulgences of darkness. It often turns out that there is light waiting on the other side.
Ours is a confusing time. In the developed world we have enjoyed affluence for long enough to have a hard time appreciating it. We especially love our gadgets, where we can still find novelty - but we also have strong evidence that we would be peering over the edge of a precipice if we opened our eyes more often.
It pains me to intone the familiar list of contemporary perils: Climate change first of all; population and depopulation spirals utterly out of sync with our societies; our inability to plan for the decline of cheap fossil fuels; seemingly inescapable waves of austerity; untenable trends of wealth concentration; the rise of violent extremisms in so many ways in so many places… Of course all of these processes are intertwined with one another.
Given this big picture, it certainly came as a surprise to many of us (to me most of all) that this year’s Peace Prize of the German Book Trade was given to a figure such as myself who is associated with the rise of digital technologies. Aren’t digital toys just a flimsy froth that decorates big dark waves?
Digital designs have certainly brought about noisy changes to our culture and politics.
Let’s start with some good news. We have gotten a first peek at what a digitally efficient society might be like, and despite the ridiculousness of the surveillance economy we seem to have chosen so far, we must not forget that there’s a lot to like about what we have seen.
Waste can be systemically reduced, it turns out, just when we must become more efficient to combat climate change. For instance, we have learned that solar power performs better than many suspected it would, though it must be combined with a smart grid to be enjoyed with reliability. This is just the sort of positive option that my colleagues and I had hoped might come about through digital networking.
But the practical hopes for digital networks have also been accompanied by a symbolic, almost metaphysical project. Digital technology has come to bear the burden of being the primary channel for optimism in our times. This, after so many Gods have failed. What an odd fate for what started out as a rather sterile corner of mathematics!
Digital cultural optimism is not insane. We have seen new patterns of creativity and perhaps have even found a few new tendrils of empathy transcending what used to be barriers of distance and cultural difference. This sort of pleasure has perhaps been over-celebrated by now, but it is real. For a trivial but personal example, how lovely that I now am in touch with oud players around the world, that I can rehearse a concert over the ‘net. It really is great fun.
I just mentioned some of the good stuff, but we have also famously used digital toys to acquiesce to cheap and casual mass spying and manipulation; we have created a new kind of ultra-elite, supremely wealthy and untouchable class of technologists; and all too often we now settle into a frenzy of digitally efficient hyper-narcissism.
I still enjoy technology so much that I can hardly express it. Virtual Reality can be fun and beautiful. And yet here I am, so critical. To avoid contradictions and ambiguities is to avoid reality.
Whether digital technology is good or bad is a question pondered by online commentators many thousands of times a day. To render opinions on Internet culture can seem as useless as dripping water from an eyedropper onto a sidewalk in a rainstorm. Anyone who speaks online knows what it’s like these days. You either huddle with those who agree, or else your opinion is instantly blended into grey mush by violent blades.
Thesis and antithesis, one hand and the other, no longer lead to a higher synthesis in the online world. Hegel has been beheaded. Instead there are only statistical waves of data, endlessly swirled into astonishing fortunes by those who use it to calculate economic advantages for themselves.
The Peace Prize of the German Book Trade is associated with books, so in this era of digital takeover we must ask, “What is a book?”
The Internet is used to comment on the Internet as much as it is used for pornography or cat pictures, but it is really only media external to the Internet – books in particular – that can provide perspective or synthesis. That is one reason the Internet must not become the sole platform of communication. It serves us best when it isn’t both subject and object.
Thus a creature of digital culture such as myself writes books when it is time to look at the big picture. There is a chance that a reader will read a whole book. There is at least an extended moment that I and a reader might share.
If a book is only a type of manufactured object made of paper, then it can only be celebrated in the way we might celebrate clarinets or beer. We love these things, but they are only particular designs, evolved products with their own trade fairs and sub-cultures.
A book is something far more profound. It is a statement of a particular balance between individual personhood and human continuity. Each book has an author, someone who took a risk and made a commitment, saying, “I have spent a substantial slice of my short life to convey a definite story and a point of view, and I am asking you to do the same to read my book: Can I earn such a huge commitment from you?” A book is a station, not the tracks.
Books are a high stakes game, perhaps not in terms of money (compared with other industries), but in terms of effort, commitment, attention, the allocation of our short human lives, and our potential to influence the future in a positive way. Being an author forces one into a humanizing form of vulnerability. The book is an architecture of human dignity.
A book in its very essence asserts that individual experience is central to meaning, for each book is distinct. Paper books are by their nature not mushed together into one collective, universal book. We have come to think it is normal for there to be a single Wikipedia article about a humanities topic for which there really can’t be only one optimized telling; most topics are not like math theorems.
In the print era there were multiple encyclopedias, each announcing a point of view, and yet in the digital era there is effectively only one. Why should that be so? It is not a technical inevitability, despite “network effects.” It is a decision based on unquestioned but shoddy dogma that ideas in themselves ought to be coupled to network effects. (It is sometimes said that the Wikipedia will become the memory for a global artificial intelligence, for instance.)
Books are changing. Some of the metamorphosis is creative and fascinating. I am charmed by the thought of books that will someday synchronize to virtual worlds, and by other weird ideas.
But too much of the metamorphosis is creepy. You must now, suddenly, subject yourself to surveillance in order to read an eBook. What a peculiar deal we have made! In the past we struggled to save books from the flames, but now books have been encumbered with duties to report your reading conduct to an opaque network of high tech offices that analyze and manipulate you. Is it better for a book to be a spying device or ashes?
Books have always helped us undo problems we bring upon ourselves. Now we must save ourselves by noticing the problems we are forcing upon books.
Beyond books, a “peace prize” is obviously associated with peace, but what do we mean by peace?
Certainly peace must mean that violence and terror are not used to gain power or influence, but beyond that, peace must also have a creative character.
Most of us do not want to accept some sort of static or dull existence, even if it is free of violence. We do not want to accept the peaceful order that authoritarian or imposed solutions claim to offer, whether digital or old fashioned. Nor should we expect that future generations will accept our particular vision of a sustainable society forever, no matter how smart we are or how good our intentions might be.
So peace is a puzzle. How can we be free and yet not veer into the freedom to be nasty? How can peace be both capricious and sustainable?
The resolutions between freedom and stability that we have come to know have tended to rely on bribery – on ever-increasing consumption – but that doesn’t appear to be a long term option.
Maybe we could stabilize society with virtual rewards, or at least that’s an idea one hears around Silicon Valley quite often. Get people to reduce their carbon footprints by wooing them with virtual trinkets within video games. It might work at first, but there’s a phony and patronizing quality to that approach.
I don’t believe we know everything we need to know yet about solutions to the long term puzzle of peace. That might sound like a negative comment on first hearing, but it is actually an overtly optimistic statement; I believe we are learning more and more about peace as we go.
My darkest digital fear concerns what I call the “pack switch.” This is a thesis about a persistent aspect of human character that is opposed to peace.
People are like wolves, according to this theory; we are members of a species that can function either as individuals or in packs. There is a switch inside us. We are prone to suddenly fall into pack thinking without even realizing it.
If there is one thing that terrifies me about the Internet, this is it. Here we have a medium which can elicit “flash mobs” and routinely creates sudden “viral” popularities. So far, these effects have not been evil on an epochal level, but what is there to prevent that? When generations grow up largely organized and mediated by global corporate cyber-structures like proprietary social networks, how can we know who will inherit control of those designs?
Traditional definitions of “peace” are often only of peace within the pack or clan, so clannishness might be the most pernicious of our sins. It undermines us at our core.
Hive identity is almost universally perceived as a virtue. The Book of Proverbs in the Old Testament lists a set of sins, including lying, murder, pride, and so on, but also “sowing discord among brethren.” Similar injunctions exist in every culture, political system, or religion I have studied. I do not bring this up to suggest an equivalency between all cultures or creeds, but rather a common danger within us, in our nature, that we all face and must learn to deflect. Becoming a loyal part of a pack is confused with goodness again and again, even – especially! – when the people fancy themselves to be rebels. It is always pack against pack.
It is as true for those who identify with pop styles or a particular approach to digital politics, as it can be for traditional ethnicity, nationality, or religion. Within digital culture, one can be vilified for not adhering strictly enough to the dogma of the “open” movement, for instance.
Again and again, our crude “sins” like greed or pack identity obsession emerge rudely but stealthily from our carefully cultivated patterns of perfect thinking – in fact, just when we think we’re close to technical perfection.
The lovely idea of human rights is being confounded by gamesmanship during our present algorithmic era. After generations of thinkers and activists focused on human rights, what happened? Corporations became people, or so said the Supreme Court in the United States! A human right is an absolute benefit, so sneaky players will connive to calculate multiples of that benefit for themselves and their pack-mates. What are we to do with our idea of human rights in America? It's been inverted.
For another example, it is just when digital companies believe they are doing the most good, optimizing the world, that they suddenly find themselves operating massive spying and behavior modification empires. Consider Facebook, which is the first large public company controlled by a single individual, who is mortal. It governs much of the pattern of social connection in the world today. Who might inherit this power? Is there not a new kind of peril implicit in that quandary?
Of course this topic has special resonance in Germany. I would like to say something profound about that angle, but honestly I don’t fully understand what happened. My mother was from Vienna, and many of her relatives were lost to the evil and the shiny mega-violence of the Nazi regime. She suffered horribly as a young girl, and almost perished as well. Were I not so close to those events, were the impact more muted for me, I might be more ready to pretend that I understand them more fully, as so many scholars pretend to do.
In all honesty I still find it terribly hard to understand the Nazi era, despite much reading. At the very least, the Nazis certainly proved that a highly technical and modern sensibility is not an antidote to evil. In that sense, the Nazi period heightens my concerns about whether the Internet could serve as a superior platform for sudden mass pack/clan violence.
I don’t think outright repudiation of pack/clan identity is the best way to avoid falling into the associated violence. People seem to need it. Countries more often than not resist losing identity in larger confederations. Very few people are ready to live as global citizens, free of national association. There’s something abstract and unreal about that sort of attempt to perfect human character.
The best strategy might be for each individual to belong to enough varied clans that it becomes too confusing to form coherent groups in opposition to one another. Back in the digital beginning, decades ago, I held out exactly this hope for digital networks. If each person could feel a sense of clan membership in a confusing variety of “teams” in a more connected world, maybe the situation would become a little too tangled for traditional rivalries to escalate.
This is also why I worry about the way social networks have evolved to corral people into groups to be well-targeted for what is called advertising these days, but is really more like the micromanagement of the most easily available options, through link placement.
I always feel the world becomes a slightly better place when I meet someone who has ties to multiple sports teams and can’t decide which one to cheer at a game. Such a person is still enthused, but also confused: suddenly an individual and not part of a pack. The switch is reset.
That kind of reset is interesting because it is a change in outlook brought about by circumstances instead of the expression of ideas, and that type of influence is exactly what happens with technology all the time.
In the past an idea in a book might have been persuasive or seductive, or might in some cases have been forced into belief and practice by the means of a gun or a sword held near. Today, however, ideas are often implicit in the computer code we use to run our lives.
Privacy is an example. Whatever one thinks about privacy, it’s the code running in faraway cloud computers that determines what ideas about privacy are actually in effect.
The concept of privacy is multifaceted, widely varying, and always hard to define, and yet the code which creates or destroys privacy is tediously – banally – concrete and pervasive. Privacy is hardly a personal decision anymore, which means it’s no longer even something that can be thought about in the old sense. Only fanatical scholastics waste time on moot questions.
The only useful thinking about privacy is that thinking which leads to changes in the code. And yet we’ve mostly “outsourced” our politics to remote corporations, so there is often no clear channel between thinking and coding, meaning between thinking and social reality. Programmers have created a culture in which they expect to outrun regulators.
We ask governments to tiptoe into the bizarre process of attempting to regulate how cloud-based corporations channel our communications and coordinated activities with one another. But then programmers will sometimes contravene whatever the company has been forced to do, rendering government action into an absurdity. We have seen this pattern with copyright, for instance, but also in different ways with issues like the right to be forgotten or in certain arenas of privacy, particularly for women online. (Current architectures and practices favor anonymous harassers over the women they harass.)
In each case, many of the most creative and sympathetic activists don’t want people to be able to contravene the “openness” of the network. But at the same time many digital activists have a seemingly infinite tolerance for gargantuan inequities in how people benefit from that all-seeing eye.
For instance, big data fuels the algorithmic concentration of wealth. It happened first in music and finance, but is spreading to every other theater of human activity. The algorithms don’t create sure bets, but they do gradually force the larger society to take on the risks associated with profits that benefit only the few. This in turn induces austerity. Since austerity is coupled with a sharing economy (because certain kinds of sharing provide the data that run the scheme), everyone but the tiny minority on top of the computing clouds experiences a gradual loss of security.
This, in my view, is the primary negative consequence that has occurred thus far through network technology. To observe that is not to dismiss another problem which has gained much more attention, because it is sensational. A side effect of the rise of the algorithmic surveillance economy is the compelled leakage of all that data into the computers of national intelligence services. We know much more about this than we would have because of Edward Snowden’s revelations.
Curbing government surveillance is essential to the future of democracy, but activists need to keep in mind that in the big picture what is going on at the moment is a gradual weakening of governments in favor of the businesses that gather the data in the first place, through the mechanisms of wealth disparity and austerity. That is only true for democracies, of course; non-democratic regimes take control of their own clouds, as we see, for instance, in China.
I do sometimes wonder if we’ve outsourced our democracies to the tech companies simply in order to not have to face it all. We deflect our own power and responsibility.
Here I feel compelled to foresee a potential misunderstanding. I am not “anti-corporate.” I like big corporations, and big tech corporations in particular. My friends and I sold a startup to Google and I currently have a research post in Microsoft’s labs. We must not put each other through purity tests, as if we were cloud algorithms classifying one another for targeted ads.
The various institutions that people invent need not annihilate each other, but can balance each other. We can learn to be “loyal opposition” within all the institutions we might support or at least tolerate, whether government, business, religion, or anything else. We don’t always need to destroy in order to create. We can and ought to live with a tangle of allegiances. That is how to avoid the clan/hive switch.
Learning to think beyond opposition can yield clarity. For instance, I disagree equally with those who favor a flat distribution of economic benefits and those who prefer the winner-take-all outcomes that the high tech economy has been yielding lately. The economy need not look like either a tower overlooking a sea of foolish pretenders, or a salt flat where everyone is forced to be the same by some controlling authority.
One can instead prefer a dominant middle block in an economy. An honest measurement of anything in reality ought to yield a bell curve. If an economy yields a bell curve of outcomes, not only is it honest, but it is also stable and democratic, for then power is broadly distributed. The focus of economic justice should not be to condemn rich people in principle, but to condemn a hollowed-out basin where the middle of the distribution ought to be.
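To make the contrast concrete, here is a toy sketch of my own (the distributions and numbers are invented for illustration, not measured): it compares a bell-curve economy with a winner-take-all one by asking what share of total wealth the top one percent of people hold.

```python
import random

random.seed(0)  # reproducible toy run

def top_share(incomes, frac=0.01):
    """Fraction of total wealth held by the top `frac` of people."""
    incomes = sorted(incomes, reverse=True)
    k = max(1, int(len(incomes) * frac))
    return sum(incomes[:k]) / sum(incomes)

n = 10_000
# A bell-curve economy: outcomes cluster around a broad middle.
bell = [max(0.0, random.gauss(50_000, 15_000)) for _ in range(n)]
# A winner-take-all economy: a heavy-tailed (Pareto) distribution.
winner_take_all = [random.paretovariate(1.2) for _ in range(n)]

print(f"bell curve:      top 1% hold {top_share(bell):.1%}")
print(f"winner-take-all: top 1% hold {top_share(winner_take_all):.1%}")
```

In the bell-curve case the top one percent hold only a few percent of the total; in the heavy-tailed case they hold a dramatically larger share, which is the instability the speech is pointing at.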
The conflict between the Left and Right has been so acute for so long that we don’t even have an honest vocabulary to describe the honest mathematics of the bell curve. We can’t speak of a “middle class” because the term has become so fraught. And yet that impossible-to-articulate middle is the heart of moderation where we must seek peace.
As boring as it might seem at first, moderation is actually both the most fascinating and promising path forward. We are constantly presented with contrasts between old and new, and we are asked to choose. Should we support old-fashioned taxis and their old-fashioned benefits for drivers or new types of services like Uber that offer digital efficiencies?
These choices are false choices! The only ethical option is to demand a synthesis of the best of pre-digital and digital designs.
One of the problems is that technologists are often trapped in old supernatural fantasies that prevent us from being honest about our own work. Once upon a time, scientists imagined coming up with the magic formulas to make machines come alive and become self-sufficient. After that, artificial intelligence algorithms would write the books, mine the fuels, manufacture the gadgets, care for the sick and drive the trucks. That would lead to a crisis of unemployment, perhaps, but society would adjust, perhaps with a turn towards socialism or a basic income model.
But the plan never worked out. Instead, what looks like automation is actually driven by big data. The biggest computers in the world gather data from what real people – like authors – do, acting as the most comprehensive spying services in history, and that data is rehashed to run the machines.
It turns out that “automation” still needs huge numbers of people! And yet the fantasy of a machine-centric future requires that those real people be rendered anonymous and forgotten. It is a trend that reduces the meaning of authorship, but as a matter of course will also shrink the economy as a whole, while enriching those who own the biggest spying computers.
In order to create the appearance of automatic language translations, for instance, the works of real translators must be scanned by the millions every single day (because of references to current events and the like). This is a typical arrangement.
It’s usually the case that an appearance of automation is actually hiding the disenfranchisement of the people behind the curtain who do the work, which in turn contributes to austerity, which in turn rules out the possibility of socialism or basic income as a way to compensate for all the theatrically simulated unemployment. The whole cycle is a cosmic scale example of smart people behaving stupidly.
“Disrupt” might be the most common word in digital business and culture. We pretend it’s hard to differentiate “creative destruction” – a most popular trope in modern business literature – from mere destruction.
It really isn’t that hard. Just look to see if people are losing security and benefits even though what they do is still needed. Buggy whips are obsolete, but the kinds of services being made more efficient by digital services lately are usually just being reformatted, not rejected.
Whenever someone introduces a cloud service to make some aspect of life easier, like access to music, rides, dates, loans, or anything else, it is also now expected that innocent people will suffer, even if that is not strictly, technically necessary. People will be cut off from social protections.
If artists enjoyed copyright, that will be lost in the new system. If workers were in a union, they will no longer be. If drivers had special licenses and contracts, they no longer will. If citizens enjoyed privacy, then they must adjust to the new order.
The familiar expectation that one must incinerate old rights, like privacy, or security through the labor movement, in order to introduce new technological efficiencies, is bizarre. Techie idealists often focus on how the old protections were imperfect, unfair, and corrupt – all of which was often so – but we rarely admit to ourselves how the new situation offers spectacularly inferior protections and astoundingly greater levels of unfairness.
If you are a technology creator, please consider this: If you need to rely on dignity destruction as a crutch in order to demonstrate a new efficiency through digital networking, it only means you’re not good at the technology. You are cheating. Really efficient technological designs should improve both service and dignity for people at the same time.
We humans are geniuses at confusing ourselves by using computers. The most important example is the way computation can make statistics seem to be an adequate description of reality. This might sound like an obscure technical problem, but it is actually at the core of our era’s economic and social challenges.
There is an exponentially increasing number of observations about how gigantic “big data” is these days; about the multitudes of sensors hiding in our environment, or how vast the cloud computing facilities have become, in their obscure locations, desperate to throw off their excess heat into wild rivers.
What is done with all that data? Statistical algorithms analyze it!
If you would, please raise the tip of your finger and move it slowly through the air. Given how many cameras there are in our present-day world, some camera is probably looking at it, and some algorithm somewhere is probably automatically predicting where it will be in another moment. The algorithm might have been set in place by a government intelligence operation, a bank, a criminal gang, a Silicon Valley company, who knows? It is ever-cheaper to do it and everyone who can, does.
That algorithm will probably be correct for at least a little while. This is true simply because statistics is a valid branch of mathematics.
But beyond that, the particular reality we find ourselves in is friendly to statistics. This is a subtle aspect of our reality. Our world, at least at the level in which humans function, has an airy, spacious quality. The nature of our environment is that most things have enough room to continue on in what they were just doing. For contrast, Newton’s laws (i.e. a thing in motion will continue) do not apply in a common tile puzzle, because every move is so constrained and tricky in such a puzzle.
But despite the apparent airiness of everyday events, our world is still fundamentally like a tile puzzle. It is a world of structure, governed by conservation and exclusion principles. What that means is simple: My finger will probably keep on moving as it was, but not forever, because it will reach the limit of how far my arm can extend, or it will run into a wall or some other obstacle.
This is the peculiar, flavorful nature of our world: commonplace statistical predictability, but only for limited stretches of time, and we can’t predict those limits universally. So cloud-based statistics often work at first, but then fail.
We think we can use computers to see into the future, but then suddenly our schemes fail. (Good scientists who work with theory, beyond statistics, understand this problem and also model the wall that interrupts the progress of your finger. That level of effort is rarely expended in cloud business, however, since billions are still made without it.)
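The finger example can be put in a few lines of toy Python (my own illustration; the "wall" is an invented stand-in for any structural limit): a naive model extrapolates the recent trend forever, while reality obeys a constraint, so the prediction is perfect for a while and then fails.

```python
WALL = 10.0  # hypothetical structural limit (the reach of the arm)

def actual_position(t, velocity=1.0):
    """The fingertip moves steadily, but can never pass the wall."""
    return min(t * velocity, WALL)

def predicted_position(t, velocity=1.0):
    """A naive statistical model: assume the trend continues forever."""
    return t * velocity

for t in [2, 5, 8, 12, 20]:
    error = abs(predicted_position(t) - actual_position(t))
    print(f"t={t:>2}  actual={actual_position(t):5.1f}  "
          f"predicted={predicted_position(t):5.1f}  error={error:5.1f}")
```

Up to the wall the error is exactly zero, which is the seductive early phase; past the wall the error grows without bound, which is the shock.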
This is the universal and seductive pattern of intellectual failure in our times. Why are we so easily seduced? It is hard to describe how intense the seductive quality is to someone who hasn’t experienced it.
If you’re a financier running cloud statistics algorithms, it feels at first like you have the magic touch of King Midas. You just sit back and your fortune accumulates. But then something happens. You might run out of people to offer stupid loans to, or your competitors start using similar algorithms, or something.
Some structural limit interrupts your amazing run of perfect luck, and you are always shocked, shocked, shocked, even if it has happened before, because the seductive power of those early phases is irresistible. (A baseball team where I live in California was celebrated in the book and movie ‘Moneyball’ for using statistics to become winners, and yet now they are losing. This is utterly typical.)
There is also an intense power-trip involved. You can not only predict, but you can force patterns into the ways users express themselves, and how they act.
It is common these days for a digital company to woo some users into a service that provides a new efficiency through algorithms and cloud connectivity. This might be a way of distributing books to tablets, a way of ordering rides in cars or finding places to sleep while travelling, a way of keeping track of family members and friends, of finding partners for sex and romance, or a way of finding loans.
Whatever it is, a phenomenon called “network effect” soon takes hold, and after that, instead of a world of choices, people are for the most part compelled to use whichever service has outrun the others. A new kind of monopoly comes into being, often in the form of a California-based company.
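The "network effect" dynamic I just described can be sketched as a toy simulation (entirely my own illustration, a simple "rich get richer" urn model, not data about any real company): three services start equal, each newcomer joins a service with probability proportional to its current size, and one service ends up with a dominant share.

```python
import random

random.seed(42)  # reproducible toy run

# Three hypothetical competing services, each starting with one user.
services = {"A": 1, "B": 1, "C": 1}

# Each new user joins a service with probability proportional to its
# current user count - the essence of a network effect.
for _ in range(10_000):
    pick = random.choices(list(services), weights=services.values())[0]
    services[pick] += 1

leader = max(services, key=services.get)
share = services[leader] / sum(services.values())
print(services, f"-> leader {leader} holds {share:.0%}")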
The users will typically feel like they are getting tremendous bargains. Free music! They seem to be unable to draw a connection to their own lessening prospects. Instead they are grateful. If you tell them, through the design of algorithms, how to date, or how to present themselves to their families, they will comply.
Whoever runs one of these operations, which I call Siren Servers, can set the norms for society, such as privacy. It is like being king.
That is the raw economic snapshot that characterizes so many aspects of our society in recent times. It was the story of music early on. Soon it will be the story of manufacturing (because of 3D printers and factory automation), health care (because of robotic nurses), and every other segment of the economy.
And of course it has overtaken the very idea of elections in the United States, where computational gerrymandering and targeted advertising have made elections into contests between big computers instead of contests between candidates. (Please don’t let that happen in Europe.)
It works over and over and yet it also fails over and over in another sense. Automated trading crashes spectacularly, and then starts up again. Recorded music crashes, but then the same rulebook is applied to books. Billions are accumulated around the biggest computers with each cycle. The selfish illusion of infallibility appears over and over again – the serial trickster of our era – and makes our smartest and kindest technical minds become part of the problem instead of part of the solution. We make billions just before we slam into the wall.
If this pattern is inevitable, then politics don’t matter much. Politics, in that case, could at most delay a predetermined unraveling.
But what if politics can actually matter? In that case, it is sad that current digital politics is so often self-defeating. The mainstream of digital politics, which is still perceived as young and “radical,” continues to plow forward with a set of ideas about openness from over three decades ago, even though the particular formulation has clearly backfired.
As my friends and I watched the so-called Twitter or Facebook revolution unfold in Tahrir Square from the comfort of Silicon Valley, I remember saying, “Twitter will not provide jobs for those brave, bright young Egyptians, so this movement can’t succeed.” Freedom isolated from economics (in the broad sense of the word) is meaningless.
It is hard to speak of this, because one must immediately anticipate so many objections. One can be convinced, for instance, that traditional social constructions like “jobs” or “money” can and should be made obsolete through digital networks, but any replacement inventions would need to offer some of the same benefits – benefits young people often prefer not to think about. But one cannot enter into only part of the circle of life.
This is a tricky topic and deserves a careful explanation. The “sharing economy” offers only the real-time benefits of informal economies that were previously found only in the developing world, particularly in slums. Now we’ve imported them into the developed world, and young people love them, because the emotion of sharing is so lovely.
But people can’t stay young forever. Sometimes people get sick, or need to care for children, partners, or parents. We can’t “sing for our supper” for every meal. Because of this reality, the sharing economy has to be understood ultimately as a deceptive ritual of death denial. Biological realism is the core reason formal economies came into being in the first place. If we undermine both union protections, through the sharing economy, and trap governments in long term patterns of austerity and debt crisis, through that same economy, who will take care of the needy?
Sometimes I wonder if younger people in the developed world, facing the inevitable onslaught of aging demographics, are subconsciously using the shift to digital technology as a way to avoid being crushed by obligations to an excess of elders. Most parts of the developed world are facing this type of inverted demographic cataclysm in the coming decades. Maybe it’s proper for young people to seek shelter, but if so, the problem is that they too will become old and needy someday, for that is the human condition.
Within the tiny elite of billionaires who run the cloud computers, there is a loud, confident belief that technology will make them immortal. Google has funded a large organization to “solve death,” for instance. There are many other examples.
I know many of the principal figures in the anti-death, or post-human movement, which sits at the core of Silicon Valley culture, and I view most of them as living in a dream world divorced from rational science. (There are also some fine scientists who simply accept the funding; funding for science these days often comes from oddly-motivated sources, so I cannot fault them.)
The arithmetic is clear. If immortality technology, or at least dramatic life extension technology, starts to work, it would either have to be restricted to the tiniest elite, or else we would have to stop adding children to the world and enter into an infinitely stale gerontocracy. I point this out only to reinforce that when it comes to digital technology, what seems radical – what at first seems to be creative destruction - is often actually hyper-conservative and infinitely stale and boring once it has a chance to play out.
Another popular formulation would have our brains “uploaded” into virtual reality so that we could live forever in software form. This despite the fact that we don’t know how brains work. We don’t yet know how ideas are represented in neurons. We spend billions of dollars on simulating brains even though we don’t really know the basic principles as yet. We are treating hopes and beliefs as if they were established science. We are treating computers as religious objects.
We need to consider whether fantasies of machine grace are worth maintaining. In resisting the fantasies of artificial intelligence, we can see a new formulation of an old idea that has taken many forms in the past: “Humanism.”
The new humanism is a belief in people, as before, but specifically in the form of a rejection of artificial intelligence. This doesn’t mean rejecting any particular algorithm or robotic mechanism. Every single purported artificially intelligent algorithm can be equally well-understood as a non-autonomous function that people can use as a tool.
The rejection is not based on the irrelevant argument usually put forward about what computers can or cannot do, but instead on how people are always needed to perceive the computer in order for it to be real. Yes, an algorithm drawing on cloud big data gathered from millions and millions of people can perform a task. You can see the shallowness of computers on a practical level, because of the dependency on a hidden crowd of anonymous people, or on a deeper epistemological one: without people, computers are just space heaters making patterns.
One need not specify whether a divine element is present in a person or not, nor precisely whether certain “edge cases” like bonobos should be considered human beings. Nor must one make absolute judgments about the ultimate nature of people or computers. One must, however, treat computers as less-than-human.
To talk about specific ways out of our stupid digital economics pattern is to enter into a difficult argument. I have mostly explored and advocated one approach, which is to revive the original concept for digital media architecture, dating back to Ted Nelson’s work in the 1960s.
Ted suggested a universal micropayment scheme for digital contributions from people. Once again, this was not a radical reaction, but the historical starting point for all digital media investigations.
I have looked into extending Ted’s idea in order to support the way people’s lives are presently read into big data schemes. For instance, as I pointed out earlier, free language translation services actually depend on scanning the work of millions of real human translators every day. Why not pay those real people? It would be fair and truthful.
If we just admitted that people are still needed in order for big data to exist, and if we were willing to lessen our fantasies of artificial intelligence, then we might enjoy a new economic pattern in which the bell curve would begin to appear in digital economic outcomes, instead of winner-take-all results. That might result in sustainable societies that don’t fall prey to austerity, no matter how good or seemingly “automated” technology gets.
This idea is controversial, to say the least, and I can’t argue it fully in this short statement. It is only an idea to be tested, at any rate, and might very well turn out to be untenable.
But the key point, the essential position from which we must not compromise, is to recognize that there is a space of alternatives. The pattern we see today is not the only possible pattern, and is not inevitable.
Inevitability is an illusion that leeches freedom away.
The more advanced technology gets, the harder it will be to distinguish between algorithms and corporations. Which is Google today, or Facebook? The distinction is already esoteric in those cases and soon will be for many more corporations. If algorithms can be people, then so will corporations, as they already are in the USA. What I declare here today is that neither an algorithm nor a corporation should be a person!
The new humanism asserts that it is okay to believe that people are special, in the sense that people are something more than machines or algorithms. This proposition can lead to crude mocking arguments in tech circles, and really there’s no absolute way to prove it’s correct.
We believe in ourselves and each other only on faith. It is a more pragmatic faith than the traditional belief in God. It leads to a fairer and more sustainable economy, and better, more accountable technology designs, for instance. (Believing in people is compatible with any belief or lack of belief in God.)
To some techies, a belief in the specialness of people can sound sentimental or religious, and they hate that. But without believing in human specialness, how can a humanistic society be sought?
May I suggest that technologists at least try to pretend to believe in human specialness to see how it feels?
To conclude, I must dedicate this talk to my father, who passed away as I was writing it.
I was overcome with grief. I am an only child, and now no parent is left. All the suffering my parents endured. My father’s family suffered so many deaths in pogroms. One of his aunts was mute her whole life, having survived as a girl by staying absolutely silent, hiding under a bed behind her older sister, who was killed by the sword. My mother’s family, from Vienna, lost so many to the concentration camps. After all that, just little me.
And yet I was soon overcome with an even stronger feeling of gratitude. My father lived into his late nineties, and got to know my daughter. They knew and loved each other. They made each other happy.
Death and loss are inevitable, whatever my digital supremacist friends with their immortality laboratories think, even as they proclaim their love for creative destruction. However much we are pierced with suffering over it, in the end death and loss are boring because they are inevitable.
It is the miracles we build, the friendships, the families, the meaning, that are astonishing, interesting, blazingly amazing.
This text is protected by copyright. Reproduction and any other form of duplication, in whole or in part, that is not permitted under copyright law will be prosecuted. Please direct inquiries regarding the use of the speeches, or of excerpts from them, to: email@example.com.
Chronicle of the Year 2014
+ + + Latvia becomes the 18th country to join the euro area. + + + The Ukrainian Parliament removes President Viktor Yanukovych following the Euromaidan protests. + + + The affair surrounding SPD politician Sebastian Edathy triggers a government crisis in Germany and leads, among other things, to the resignation of Federal Minister of Agriculture Hans-Peter Friedrich. + + + In the course of the crisis in Ukraine, pro-Russian separatists and supporters of federalization stage demonstrations and occupy buildings in oblasts in the east and south of the country, culminating in warlike skirmishes with troops of the central government in Kiev. + + + A Malaysia Airlines aircraft with 239 people on board disappears without a trace over the Indian Ocean. + + +
+ + + The Parliament of the Autonomous Republic of Crimea declares its independence from Ukraine. + + + The German Bundestag sets up the NSA Committee of Inquiry on behalf of all parliamentary groups. + + + The Islamist organization ISIS conquers the northern Iraqi city of Mosul as part of a major offensive, which results, among other things, in the destruction of several historical sites. + + + Germany wins the FIFA World Cup. + + + Israel undertakes a military offensive in the Gaza Strip. During heavy bombardments and shelling, more than 2,100 Palestinians are killed and more than ten thousand injured, most of them civilians. + + + The Islamic State commits genocide against the Yazidi in Sinjar, Iraq. + + + Ebola fever spreads in West Africa and reaches epidemic proportions. + + + The Islamic State commits a massacre of Yazidis in Kocho. + + + The number of refugees in Europe reaches its highest level since 1949. + + + In connection with the death of Michael Brown and further deaths due to police violence, demonstrations continue in Ferguson, Missouri, as well as in other places in the USA. + + + A demonstration of the citizens' movement PEGIDA with about 10,000 participants in Dresden triggers a social debate about this group and its supporters. + + + Official end of the combat mission and (partial) withdrawal of the "International Security Assistance Force" from Afghanistan; 13,500 soldiers from several states will remain in Afghanistan. + + +
Biography of Jaron Lanier
Jaron Lanier is a well-known Internet pioneer and one of the most important contributors to the emergence and growth of the digital universe. He is credited with coining the phrase "virtual reality" and spearheading work in that field over the course of the past several decades, both as an entrepreneur and leading researcher. Today he works as a Lead Scientist overseeing a coalition of research universities studying advanced applications for "Internet 2." He is also a scholar at large at Microsoft Research. Lanier's remarkable life story and numerous tech innovations and insights have earned him the reputation of a visionary, with some media observers even calling him a "net intellectual."
Shortly after his birth on May 3, 1960 in New York City, Lanier and his family moved to the vicinity of El Paso, Texas. His mother Lillian was a pianist, painter and dancer who had survived the Holocaust and emigrated from Vienna to New York at the age of 15. His father Ellery was the son of Ukrainian Jews who had fled the pogroms: once in the United States, he worked as an architect, painter, writer, elementary school teacher and radio host. Lanier's eventful childhood was marked by his mother's early death in a car accident, his own growing enthusiasm for music, many moves with his father and numerous jobs. At the age of 14, after having dropped out of high school, Lanier followed the advice of his neighbor, the scientist Clyde Tombaugh (who had discovered the planet Pluto in 1930), and began attending classes in mathematics and chemistry at New Mexico State University.
There, he gained his first significant insights into computer technology. At the age of seventeen, Lanier briefly attended Bard College in New York State. In 1983, after leaving Bard and returning to New Mexico, he moved to Santa Cruz, California, where he developed a video game called "Moondust." This earned him a job at Atari, where he also got to know Tom Zimmerman, the man responsible for constructing one of the first "wired" gloves for virtual interaction. In 1985, the two men joined with friends to found the company VPL Research with the goal of developing further technology for the virtual world. In the subsequent years, Lanier would go on to construct virtual cameras, 3D graphics for feature films and the first avatar – an artificial representative of a real person in the virtual world. In this era, he also pressed ahead with the development of Internet-based networks. In addition, the applications he developed for three-dimensional representations in Web2 programs enabled the use of virtual space for a wide variety of spheres, including the medical-surgical field. In 1999, he sold his company to Sun Microsystems. Since then, he has headed up and overseen work on a number of groundbreaking projects.
Today, in addition to lecturing in computer science at different universities in the US, Lanier is also an internationally respected musician, composer and visual artist. He received his first piano lesson as a child from his mother and later taught himself to play many other instruments. With the help of his collection of more than one thousand rare and old musical instruments, Lanier has composed the music for several concerts, ballet performances and the prize-winning films "Three Seasons" (1999) and "The Third Wave" (2009). He has also played together with Philip Glass, Yoko Ono, Sean Lennon, Ornette Coleman, George Clinton and many other musicians drawn from a wide variety of genres. His paintings, drawings and art installations have been shown in numerous museums and galleries in the USA and Europe. Lanier had his first solo exhibition in 1997 at the Museum of Modern Art in Roskilde, Denmark. He also collaborated on the production of the virtual backdrop for Steven Spielberg's science-fiction film "Minority Report" (2002).
Among the group of individuals known as the "inventors" of the Internet, Jaron Lanier has consistently spoken with a singular and clear voice. Indeed, he has always been a pioneer for whom idealistic beliefs – such as the democratization of education, transparent political processes and scientific innovation – were just as important as the digital advances and gadgets that have most fascinated human beings. In keeping with this commitment, Lanier has devoted ever-increasing attention since the turn of the century to the growing discrepancy between human beings and machines, between actuality and virtual reality, and between financial capitalization and the misuse of knowledge and data. This increasing outspokenness came as a direct reaction to the thesis put forth by leading neuroscientists, who posited that the human brain is nothing more than a highly complex computer. In contrast, Lanier argued that technology should not seek to replace human beings but rather to improve their lives and foster communication among them.
His two books "You Are Not a Gadget" (2010) and "Who Owns the Future?" (2013) – as well as numerous articles examining negative developments in the digital world – have made Lanier one of the most important critics of the digital world in our time. He has focused his criticism primarily on those developments that inhibit Internet users in terms of their freedom, individuality and self-determination. First among these developments is the "information wants to be free" trope of the Internet, which sees users providing knowledge and data without any compensation. As Lanier argues, only a few companies – those which own the large data-collecting servers – profit financially from this process. He warns that when work and production are taken over to an ever-increasing extent by computer-controlled technologies, and when the goods known as "information" traded in the digital world continue to bring no financial value to the people who provide them, then this so-called "digital revolution" has the potential to lead to a collapse of the middle class.
Lanier also addresses the misuse of data by corporations, intelligence agencies and governments, and points to the dangers we all face as individuals when we engage online at an intellectual level that is, as a general rule, quite low. According to Lanier, the radical reduction of our personalities online can and will ultimately affect our inner selves.
What sets Lanier apart from other critics are the concrete solutions he provides alongside the critiques contained in his books and articles. With a call for a "digital humanism," he advocates the self-determination of individuals both in the real world and in virtual reality. In other words, in addition to protecting intellectual property and preventing the inhibition of our creative output as human beings, he also calls for a system in which each individual has control over his or her own online data.
Jaron Lanier lives with his wife and daughter in Berkeley, California, where he also works at UC Berkeley. He has received two honorary doctorates for his inventions and designs. In 2001, he was the recipient of the CMU’s Watson Award. In 2009, he received a Lifetime Career Award from the IEEE, the world's largest professional organization of electrical and electronics engineers. His latest book "Who Owns the Future?" was awarded Harvard's Goldsmith Book Prize in 2014.
2014 Peace Prize of the German Book Trade
2009 Lifetime Career Award of the IEEE
2001 CMU’s Watson Award
several honorary doctorates
Anbruch einer neuen Zeit. Wie Virtual Reality unser Leben und unsere Gesellschaft verändert
Translated from American English by Heike Schlatterer and Sigrid Schmid, Hoffmann und Campe, Hamburg 2018, 448 pages, ISBN 978-3-455-00399-4, €25.00
Zehn Gründe, warum du deine Social Media Accounts sofort löschen musst.
Translated from English by Karsten Petersen and Martin Bayer, Hoffmann und Campe, Hamburg 2018, 208 pages, ISBN 978-3-455-00491-5, €14.00
Wenn Träume erwachsen werden
Ein Blick auf das digitale Zeitalter. Essays und Interviews 1984–2014. Hoffmann und Campe, Hamburg 2015, 448 pages, ISBN 978-3-455-50359-3, €10.00
Wem gehört die Zukunft? Du bist nicht der Kunde der Internetkonzerne. Du bist ihr Produkt.
Translated from American English by Dagmar Mallett and Heike Schlatterer. Hoffmann und Campe, Hamburg 2014, 480 pages, ISBN 978-3-455-50318-0, €24.99
Gadget. Warum die Zukunft uns noch braucht
Translated from English by Michael Bischoff. Suhrkamp, Berlin 2010, 247 pages, ISBN 978-3-518-42206-9, €19.90
Martin Schulz, born on 20 December 1955 in Hehlrath, is a trained bookseller. In 1982 he founded his own bookstore. In 1994 he was elected to the European Parliament as a member of the SPD. In 2012 Martin Schulz assumed the office of President of the European Parliament.
Martin Schulz has received several honorary doctorates for his commitment to European integration as well as to the preservation of democracy and civil liberties, including from the universities of Kaliningrad, Istanbul and Jerusalem.