

How to Regulate the Media when they are ubiquitous and have gone viral:
from utopian science fiction to practical European policy

Alan N. Shapiro

This is the text of the keynote speech that I gave at the European Union conference “Pluralism and Responsibility: Media in the Digital Society” in Berlin on July 7, 2020. I was invited to speak by Monika Grütters, the German Federal Government Commissioner for Culture and the Media.

You can watch the video of my speech here:

[Video: Kick-Off Event]

My name is Alan Shapiro. I am honored to speak at this EU media conference. I thank Monika Grütters, the German culture and media minister, for inviting me. I was invited because I am an out-of-the-box thinker. I am an American media theorist and transdisciplinary thinker, and I am particularly interested in science fiction studies. I published a book on Star Trek and its principles of a future utopian society. Star Trek predicted many technologies and scientific areas of research – from mobile phones and speech interfaces to quantum teleportation and wormhole physics – which later became reality. I am from New York City. I have lived in Europe for about 30 years, first in France, then in Italy, and then for a long time in Germany, where I worked first as a software developer, and later as a university lecturer and professor.

We are gathered here to discuss what the EU should do about media in our current digital society. I will first address what I understand by the term “media.” We are in a situation of interconnectivity that is global, all-encompassing, and viral. Information, messages, and other things we value spread through the networks because they are contagious or infectious for us. This propagation knows no boundaries and is promiscuous, as evidenced in phrases like “going viral” and “viral media.” Media are everywhere. The coronavirus crisis is real and deadly, yet it also serves as a metaphor for the borderless and replicating nature of the media.

As a future designer, I believe that we are living in a science fiction, that science fiction has become our reality. As a thinker about informatics, I believe that the concept of “regulation” – with its emphasis on rules – needs to be significantly revised. Informatics has already lived through a paradigm shift from rule-based to pattern-based. Software code in the past was a series of instructions and if-then-else conditional statements, a laying out of rules and machine states. Now software is increasingly based on Deep Learning algorithms. The artificial neural network looks for patterns in the available Big Data. Software programming sets up layers of inter-contextual intelligence to extract higher-level knowledge from raw data and multiple databases.
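To make this contrast concrete, here is a minimal sketch in Python, assuming a purely hypothetical content-flagging scenario with invented rules and training data. The first function encodes the decision as hand-written if-then-else rules; the second learns a decision boundary from labelled examples, with a simple text classifier standing in for the deep neural networks described above.

```python
# Purely illustrative: the same decision in the rule-based and the pattern-based paradigm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def flag_rule_based(message: str) -> bool:
    """Old paradigm: explicit, hand-written rules and conditions."""
    banned_words = {"scam", "giveaway"}          # hypothetical rule set
    if len(message) > 5000:
        return True
    return any(word in message.lower() for word in banned_words)

# New paradigm: no explicit rules; a model extracts patterns from labelled examples.
training_texts = [
    "free crypto giveaway, click now",           # hypothetical training data
    "act fast, guaranteed investment returns",
    "minutes of the city council meeting",
    "seminar schedule for the winter term",
]
training_labels = [1, 1, 0, 0]                   # 1 = flag, 0 = allow

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, training_labels)       # the "rules" are learned, not written
print(model.predict(["exclusive giveaway for new followers"])[0])
```

The point of the sketch is that the second version has no inspectable rulebook to regulate: its behavior lives in learned parameters, which is exactly why rule-oriented regulation struggles to keep up with it.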

Let me offer a concrete example of how this paradigm shift from rule-based to pattern-based will soon play out in a situation of regulation: The software that I use to determine whether my students have plagiarized in their written work still functions by running a rule-based search for exact matches across a massive corpus of millions of pre-existing texts. But in no time flat my students will be using “creative” AI software that learns the patterns of the relevant subject-specific set of writings and machine-generates a new text that evades discovery. The students will always be one step ahead of me. Similarly, if the EU continues to regulate the media within the rule-based paradigm, then the real pattern-based Deep Learning algorithmic developments in the media will always be one step ahead of us.
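For comparison, here is a toy sketch, assuming a hypothetical corpus and an arbitrary eight-word matching window, of the exact-match logic that such rule-based plagiarism checkers rely on; text that is machine-generated from the same patterns, rather than copied verbatim, scores close to zero and slips straight through.

```python
# Toy sketch of rule-based, exact-match plagiarism checking (hypothetical corpus and window size).

def ngrams(text: str, n: int = 8) -> set[str]:
    """All consecutive n-word sequences in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def exact_match_overlap(submission: str, corpus: list[str], n: int = 8) -> float:
    """Fraction of the submission's n-grams that appear verbatim somewhere in the corpus."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    corpus_ngrams = set().union(*(ngrams(doc, n) for doc in corpus))
    return len(sub & corpus_ngrams) / len(sub)

# A score near 1.0 indicates verbatim copying; paraphrased or generated text scores near 0.
```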

There are two meanings of the word “media.” According to a dictionary from 1991, the media are “the means of communication, as radio, television, newspapers, and magazines.” This definition corresponds to the older “analog media” and “mass media.” “Communication” implies a positive function, associated with free speech; the dissemination of information; the transmission of content; the media as a tool that upholds rational democratic discussions.

The second definition pertains to media research: investigating, as in the academic fields of media studies and media theory, the impact that the newer digital media technologies have on our lives, and on society and the economy. Media theory is arguably the successor to 20th century social and political theory. Yet media theory already questions whether it should half-move beyond calling the object of its research “media.” The media are no longer specifically identifiable or localizable – they are super-positional, as in quantum chemistry or physics. They are hyper-textual, interactive, and immersive. They traverse Augmented Reality, video games, the Internet of Things, robots, 3D printers, and brain-computer interfaces – the list goes on and on.

The first concept of the media – which I am arguing is by now partly obsolete – is rooted in the inherited political theory idea of the separation of the spheres of “private” and “public.” This is a two-level model of the freedom of the individual agent – the citizen or corporation in civil society – and the moral imperatives and restrictions – in short, the regulations – which are deliberated on and imposed onto that individual free agent by the governing body of the state. The assumption – dating from the liberal social contract theories of the 17th century – is that the free individual or organization cannot be trusted to behave morally and must be regulated from the outside and from above. This dualistic model of the liberty of the individual, on the one hand, and the moral responsibilities prescribed by state regulation, on the other hand, can today happily be transcended by something better, thanks to the golden opportunity of digitalization.

The positive opportunity that Deep Learning algorithms and artificial intelligence offer us is to embed moral responsibility into the partly free and partly monitored and controlled conduct of the person and the company, to design a compromise between freedom and democratically guided algorithms written in code. We want to get policy and law written into behavior on a micro level of detail by deploying decentralized software code. This sovereign union of freedom and responsibility is called autonomy. Autonomy has been a major area of reflection in European philosophy from Aristotle to Immanuel Kant to a contemporary thinker like Michel Foucault. Autonomy will become increasingly important in online digital media.
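As a minimal sketch of what such embedding might look like, assuming hypothetical names, data fields, and a hypothetical policy, one can imagine a constraint checked inside a recommender component’s own decision procedure rather than imposed on it afterwards from the outside. A deployed system would ultimately have to learn such constraints in a pattern-based way; the hand-written rule here only marks where the responsibility lives, inside the code that acts.

```python
# Illustrative only: a policy constraint embedded in the decision procedure itself.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    item_id: str
    targets_minors: bool
    uses_sensitive_data: bool
    predicted_engagement: float

def permitted(item: Item) -> bool:
    """Embedded policy check, evaluated on every single decision."""
    if item.targets_minors and item.uses_sensitive_data:
        return False  # e.g. a data-protection obligation enforced in code
    return True

def recommend(candidates: list[Item]) -> Optional[Item]:
    """Autonomy as freedom within responsibility: optimize only over permitted items."""
    allowed = [c for c in candidates if permitted(c)]
    return max(allowed, key=lambda c: c.predicted_engagement, default=None)
```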

Compared to America and China, Europe lags somewhat behind in fourth industrial revolution technologies. But this time gap presents Europe with the chance to develop a better digitalization concept and vision that does justice to its more civilized version of democratic capitalism. America’s Silicon Valley has given us “surveillance capitalism” – the concentrated power of a few monopoly platforms of social media and commerce. China is executing its zealous implementation of the Social Credit System – everyone will be graded with a reputation score for trustworthiness and correct social behavior based on ubiquitous camera observation and algorithmic analysis. The power of simulation of the American media plunges that former democracy into the crisis of “post-truth” as rhetoric replaces consensus agreement about facts. The state capitalism of China treats its population as subjects of its rule rather than as citizens.

The coronavirus track-and-trace app, introduced very recently by the German government, has social and health benefits, and is an example of conscious design in the direction of decentralization and autonomy. The app tracks down new clusters of infections and warns the user of risks to her health. Using Bluetooth data exchange technology, the app saves its data only locally on the user’s handheld device.
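A simplified sketch of this decentralized principle, with invented names and deliberately not the actual implementation of the German app, might look as follows: rotating random identifiers are exchanged over Bluetooth, every record stays on the user’s own device, and the exposure check itself runs locally on that device.

```python
# Illustrative sketch of decentralized, device-local contact tracing (not the real app's code).
import secrets

class LocalTracingStore:
    def __init__(self) -> None:
        self.my_identifiers: list[bytes] = []        # identifiers this phone has broadcast
        self.heard_identifiers: set[bytes] = set()   # identifiers heard from nearby phones

    def new_rotating_id(self) -> bytes:
        """Generate a fresh random identifier; no central server ever learns it."""
        rid = secrets.token_bytes(16)
        self.my_identifiers.append(rid)
        return rid

    def record_encounter(self, other_id: bytes) -> None:
        """Store an observed identifier locally only."""
        self.heard_identifiers.add(other_id)

    def check_exposure(self, published_infected_ids: set[bytes]) -> bool:
        """Matching happens on the device; only a warning, never the contact list, leaves it."""
        return bool(self.heard_identifiers & published_infected_ids)
```

The design choice is the point: because the raw encounter data never leaves the handheld device, no central authority can reconstruct the social graph, which is what makes this an example of autonomy by design rather than surveillance.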

We will need peer-to-peer media platforms that promote good ethical values for health management, education, solving the climate crisis, mutually respectful political debate, and support for victims of abuse. These applications should be designed with privacy protection built in from the start.

We need to carefully design the details of a mutual relationship between human morally steered institutions and artificial intelligence. Morality should be an inherent component of software, not moral rules as input and moral consequences as output. How will moral decisions be made? How can AI be granted some autonomy without giving it too much power?

We have the negative examples of America’s surveillance capitalism and China’s surveillance “communism.” These are science fiction dystopias come to life. The architecture of the Chinese Social Credit System can be stood on its head based on the values of the European Enlightenment. The European Union should work together with European software companies to develop new computer science concepts for moral algorithms and autonomy – and, of course, the digital applications for our future that would follow from those concepts.
