
From Sociology to Media Studies to Software Studies, part two


Friedrich Kittler’s Media Archaeology

What has, to the contrary, been very successful and influential in German universities is the media theory of Friedrich Kittler. Kittler’s media archaeology and media historiography have led to the rise of the Berlin School of media theory, of which Wolfgang Ernst is at the forefront. Kittler opposes the so-called discourse analysis of the study of media practiced in much of the humanities, which he sees as deriving its methods from hermeneutics and literary criticism. He instead advocates a technical materialism of data storage devices, data transmission, processors, automatic writing systems, and so forth, one that claims to examine technologies from within. There is much to respect about Kittler’s work. It is not a question of right and wrong, and a plurality of methods is most desirable. Within my own teaching of media theory to art and design students, I understand Kittler’s body of writing as a valuable contribution to posthumanism and the post-humanities – a gesture that goes beyond the anthropocentric prejudice of placing humans at the center of history and narratives of the future.

Yet when it comes to thinking about software, Kittler’s approach is precisely the opposite of mine. This is evident from the title of Kittler’s famous essay about software called “There is no Software.” As soon as Kittler considers the revolution of the computer and how it has engendered the development of an informatic society and the historical shift from old media to new media, he posits that the rational-calculating and numerical (Alan Turing) logic at the heart of the computer must necessarily extend itself to all levels of the system: to all of the higher-level programming languages; to all the interfaces, applications, and cultural patterns which surround the kernel. The combinatorial, robotic, and digital-binary essence of the hardware deterministically spreads itself everywhere in this cybernetic totality.

My thesis regarding the past and future of informatics is that all layers of the software above the kernel can indeed be anything. Regarding the history and the science fiction futurism of computing, it is a question of studying the past and being open to the future of a succession of cultural paradigms. A technical layer of conversion between the computational-digital-binary center and the more poetic or human-language discursive applications and interfaces at the periphery and at the outer zones is possible. These mechanisms of translation can exist at a certain specific level of the network architecture. I argue against a dualistic opposition between machinic-computational and poetic-linguistic expression. Our intellectual resistance to the freeing of software from the kernel of rational-calculating logic is paradoxically an outmoded humanist clinging to the belief in the specialness of humans – the sublime qualities of the soul and consciousness (the Cartesian cogito) that humans allegedly have and which we claim that machines do not have. Kittler’s “There is no Software” is a brilliant essay, yet it steers media theories of software in the diametrically wrong direction.
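To make the idea of such a conversion layer tangible, here is a minimal sketch in Python. Every name in it (the VERBS vocabulary, the interpret function) is my own illustrative invention, not an existing system: a tiny interpreter that translates loosely human-language commands at the periphery into arithmetic and bit-level operations at the computational center.

```python
# A minimal sketch of a "translation layer" between discursive commands
# and binary operations. All names here are hypothetical illustrations.

VERBS = {
    "brighten": lambda x: min(255, x + 16),   # human-facing vocabulary...
    "darken":   lambda x: max(0, x - 16),     # ...resolved into arithmetic
    "invert":   lambda x: x ^ 0b11111111,     # ...and finally into bit-flips
}

def interpret(sentence: str, value: int) -> int:
    """Translate a loosely phrased instruction into numeric operations."""
    for word in sentence.lower().split():
        if word in VERBS:
            value = VERBS[word](value)
    return value

print(interpret("please brighten the pixel", 100))  # -> 116
print(interpret("now invert it", 116))              # -> 139
```

The point of the sketch is only that the mediation is itself programmable: the human-language periphery and the binary center meet in an explicit, rewritable layer of translation.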

Friedrich Kittler: The Numeric Kernel Is Decisive

Kittler’s position is that everything in computing boils down to the digital code of the hardware, that there is no going beyond the pervasive logic of the binary. But, in my view, digital-binary logic is not the only possibility for computing. Digital-binary logic is not universal and forever. Software theory as an academic field, deriving from both media theory and computer science, can lead to new paradigms and possibilities such as quantum computing, Creative Coding, software that reverse-engineers the human brain, and software as semi-living entities in the sense of Artificial Life, not only inert or dead mechanistic things to be manipulated by the dominating programmer-subject.

In “There is No Software,” Kittler fancies himself to be writing about the end of history and the end of writing. In its own way, it is a sweeping postmodernist or poststructuralist thesis. In a contemporary writing and cultural scene of endlessly expanding and exploding signification, there is ironically an implosion into the no-space and no-time of microscopic computer memory. The effect of information technology on writing, according to Kittler, is that we supposedly do not write anymore. The idea that software code might be a form of writing – or could evolve into a form of writing (software code as the writing of the twenty-first century) – a form of écriture in Jacques Derrida’s deconstructionist sense (an intervention or inscription into language that is more fundamental and effective than speech), does not occur to Kittler. He assumes that the computer must bring about the programmatic automation of reading and writing. “This state of affairs…,” he writes, “hides the very act of writing. We do not write anymore.” Kittler believes that writing done on a computer is no longer an historical act because the writing tools of the computer “are able to read and write by themselves.” For Kittler, computing will forever be the Universal Turing Machine that Alan Turing conceptualized in 1936 – “formalized as a countable set of instructions operating on an infinitely long paper band and the discrete signs thereon.”

Kittler writes: “The all-important property of being programmable has, in all evidence, nothing to do with software. It is an exclusive feature of hardwares, more or less suited as they are to house some notation system.” My view is the opposite of Kittler’s. I think that there can be a shift from programming as the programmability of a microprocessor device to programming as creativity, creative expression, and writing in the deepest sense of effectuating social change. There can be cultural formulations of the alternatives, active rewritings of the mainstream narratives. There can be moral algorithms and dialogical Artificial Intelligence.

For Kittler, the software space is just virtuality or simulation. It is not possible, according to him, to establish a new relationship to the world through software programming or through any aesthetically coded transformation. Art/aesthetics/design and informatics have no possible bridge between them. The miniaturization of hardware is, for Kittler, the proper dimension of simulation, of our postmodern writing scene which is no longer a scene of writing. Baudrillard thought something similar when he wrote in 1981 that “genetic miniaturization is the dimension of simulation.” Yet Baudrillard transcended this conservative-nostalgic position with respect to technology when he practiced his photography as a meditation on the technical imaging media itself, and wrote about photography as the writing of light (a practice of an exemplary media technology as a form of writing). His photographic praxis of seduction and reversibility is one inspiration for my design project of taking the side of (software) objects, assembling the first iteration of code and diagrams that demonstrate the feasibility of human moral institutions (in which humans democratically participate) interacting on micro levels of detail with software agents.

For Kittler, hardware always precedes and determines software. He writes: “There are good grounds to assume the indispensability and, consequently, the priority of hardware in general… All code operations come down to absolutely local string manipulations, that is, I am afraid, to signifiers of voltage differences… The so-called philosophy of the so-called computer community tends systematically to obscure hardware with software, electronic signifiers with interfaces between formal and everyday languages.” The combinatorial logic always wins out and the software can do nothing more than tweak bits and bytes. The software industry is one giant conspiracy to hide the machine from its user. The solution, according to Kittler, is to write programs in low-level assembler code to maintain awareness that the underlying hardware is what the program always resolves itself into in the end.
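Kittler’s reduction can be illustrated in a few lines of code (the example is mine, not his): even a high-level addition can be rewritten entirely as combinatorial bit operations, the level at which, on his account, all code operations finally take place.

```python
# Re-expressing "a + b" as pure combinatorial logic: XOR gives the
# partial sum, AND plus a shift gives the carries, looped until the
# carries are exhausted. (Assumes non-negative integers.)

def add_via_bits(a: int, b: int) -> int:
    while b != 0:
        carry = (a & b) << 1   # positions where a carry is generated
        a = a ^ b              # sum without the carries
        b = carry              # feed the carries back in
    return a

assert add_via_bits(19, 23) == 42
```

Whether this reduction exhausts what the addition means – for the programmer, for the culture that uses it – is exactly the question at issue.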

The Computable and the Expressive

The original conception of the digital-binary computer was made in the 1930s and around the time of the Second World War by major figures in the history of ideas such as Alan Turing, John von Neumann, and Claude Shannon. Turing first conceptualized the computer in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” He developed the idea of the Turing machine: the mathematical model of computation where a mechanism moves above an infinitely long tape, stops over one cell, reads the symbol written in that cell, and changes the symbol to another symbol chosen from a small set of possible symbols. The control mechanism then moves to another cell to carry out the next operation, manipulating symbols according to a table of rules, simulating the logic of any algorithm that is thus proven to be computable or calculable. The thought experiment of the Turing machine is contemporaneous with the idea of representing instructions and data as finite sequences of binary numbers. Von Neumann is generally credited with the innovation of the stored-program concept. Shannon achieved the electrical engineering breakthroughs of showing that Boolean algebra can be deployed to realize digital electronic circuitry and that binary electrical switches can support fast algebraic calculations and digital computer design.
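A machine of this kind can be simulated in a few lines of Python. The following sketch is offered only to make the tape-and-rules model concrete; the particular rule table (a machine that increments a binary number) is my own illustrative choice.

```python
# A minimal Turing machine simulator: a head moves over a tape, reads
# a symbol, writes a symbol, moves left or right, and changes state,
# all according to a fixed table of rules.

def run_turing_machine(rules, tape, state="start"):
    """rules maps (state, symbol) to (write, move, next_state); '_' is blank."""
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rule table for incrementing a binary number (head starts at the left):
rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, carry on
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: write a new leading 1
}
print(run_turing_machine(rules, "1011"))  # -> 1100
```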

Alan Turing’s achievement can be divided into two parts. One part is scientific, and the other part is cultural. It is true that I want to take down a notch the mystique of science, but I want to do this in a more moderate way than it has been done in the science and technology studies (STS) or humanities approach that says that everything is culture. The science part of the invention of computer science: (the hardware and) the algorithms and the data can all be encoded into lengthy binary strings (i.e., stored as computable numbers). The cultural part of the invention of computer science: how one does this (i.e., the relationship between the code and the data) is a cultural decision. When Turing and von Neumann decided to run algorithms on data for the purpose of calculation, this was a cultural decision (influenced by the historical situation of the time and its institutionally needed military applications). They put into practice a certain precise relationship between program and data in their specific deployment of computers during and after the Second World War.
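The science part can be shown in miniature. The toy instruction set below is hypothetical, invented solely to demonstrate the principle: program and data alike can be serialized into, and read back from, one long binary string.

```python
# Encoding a toy program as a binary string (the stored-program idea).
# The opcodes and the 2-bit/6-bit layout are invented for illustration.

program = [("LOAD", 7), ("ADD", 5), ("HALT", 0)]
OPCODES = {"LOAD": 0b01, "ADD": 0b10, "HALT": 0b11}

# Each instruction becomes 2 opcode bits plus 6 operand bits:
binary = "".join(f"{OPCODES[op]:02b}{arg:06b}" for op, arg in program)
print(binary)  # 010001111000010111000000

# The very same string can be read back as plain numbers, showing that
# code and data are indistinguishable at the level of storage:
numbers = [int(binary[i:i + 8], 2) for i in range(0, len(binary), 8)]
print(numbers)  # [71, 133, 192]
```

That both readings are available is the scientific fact; which reading is enacted, and to what end, is the cultural decision.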

Lev Manovich, The Language of New Media

In 2001, The MIT Press published the book The Language of New Media by Lev Manovich. This is a milestone work in the academic theorization of New Media whose importance cannot be overestimated. Manovich investigates cultural software and cultural interfaces, visual culture and especially moving images, and the historical transition from film to digital video and computer games. He develops insightful theses concerning conventions and artefacts of software applications and new media user experiences, covering the areas of interactivity, telepresence, immersion, distance and aura, digital compositing and montage, computer animation, databases, algorithms, the storing and manipulation of information, and the navigating of digital and virtual spaces. For Manovich, the cultural and aesthetic forms of new media are both a continuity with and a break from older media such as the cinema.

In the chapter “What is New Media?” Manovich enumerates five principles which characterize new media: (1) numerical representation (artefacts exist as data or can be stored as numbers); (2) modularity (different elements exist independently); (3) automation (artefacts can be created and modified by automatic processes); (4) variability (artefacts exist in multiple versions); and (5) transcoding (the digital-binary logic and its instances influence us culturally – from technical codes to cultural codes). In summary, new media objects are based on program code, on the limitless re-programmability of the binary structure and the electronic impulses. Software Studies (Lev Manovich, Matthew Fuller, Benjamin Bratton, and other authors in the MIT Press book series of the same name) in effect contests the thesis of Friedrich Kittler that there is no software. This movement in ideas instead points to the primacy of software as a hybrid of technical and cultural patterns that is potentially both critical of society and future-designing in a pragmatic-utopian sense.
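The first principle, numerical representation, can be made concrete in a few lines (a minimal sketch of my own, not Manovich’s): a new media object such as a tiny grayscale image is simply an array of numbers, and automation and variability follow directly from that fact.

```python
# A "new media object" as numerical representation: a 3x3 grayscale
# image is nothing but an array of numbers (0 = black, 255 = white).

image = [
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255],
]

# Automation and variability in one line: an algorithmic transformation
# produces a new version of the same object.
inverted = [[255 - pixel for pixel in row] for row in image]
print(inverted)
```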

In his 2013 book Software Takes Command, Manovich takes the further step of extending media theory to include software theory. Thinking with Manovich, my assertion is that a major challenge for media theory is to consider how websites, social networks, video games, web television, online forums, podcasts, personalized advertising, and mobile apps transform the essence of what media are. One possible avenue for researching this question is to engage in the study of transmedia, which was initiated by Henry Jenkins of MIT with his 2006 book Convergence Culture: Where Old and New Media Collide.

Another set of questions: How do the trends of software and digital media affect the design process? Is the essential nature of design or Gestaltung altered by the fact that it is now practiced almost everywhere with the tools of simulation and software which are built on top of technical design patterns which are object-oriented? Object-orientation is the informatics paradigm that is based on the concept of virtual objects which encapsulate both data and the operations on that data. Object-oriented design devises a system of multiple interacting objects to create a software environment. What is the relation between the object-oriented and business logic (a software development term denoting the middle layer of a system) patterns of software design and the patterns of other kinds of design in the inherited classification system of design – architectural, graphical, fashion, communications, industrial, and product design? And what do media become after they have come under the control of programmatic and AI software?
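For readers outside informatics, here is a minimal sketch of what encapsulation means in practice; the Account class is a generic textbook illustration, not drawn from any of the authors discussed here.

```python
# A virtual object encapsulating both data (state) and the operations
# on that data, the basic unit of object-oriented design.

class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self._owner = owner       # data held inside the object
        self._balance = balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount   # the operation travels with the data

    def balance(self) -> float:
        return self._balance

# An object-oriented design is then a system of such interacting objects:
account = Account("Ada")
account.deposit(100.0)
print(account.balance())  # 100.0
```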

Software Studies: The Expressivity of Code

In the book Speaking Code: Coding as Aesthetic and Political Expression (also published in the MIT Press Software Studies series), Geoff Cox and Alex McLean elaborate a coherent hybrid discourse of software code writing and humanities critical theory. Merging text and code, and musing on code as both script and performance, they locate the arena for understanding the signifying import and linguistic resonance of programming language code in its practical operations online and in the networks. The study of code by Cox and McLean is an existentialist view of software programs as having open-ended possibilities, rather than emphasizing their social and organizational impact as instituting fixed structures and processes. Cox and McLean examine the live-coding scene (displaying source code during an artistic performance) and peer production (self-organizing community efforts such as open-source software projects). Cox and McLean see code as an expressive and creative act, related to the conjuncture of the two activities that have traditionally been called art and politics.

The Italian autonomist thinker Franco “Bifo” Berardi writes in his foreword to Speaking Code:

“If we can say that code is speaking us (pervading and formatting our action), the other way around is also true. We are speaking code in many ways… We are not always working through the effects of written code. We are escaping (or trying to escape) the automatisms implied in the written code… Hacking, free software, WikiLeaks are the names of lines of escape from the determinism of code… The linguistic excess, namely poetry, art, and desire, are conditions for the overcoming and the displacement of the limits that linguistic practice presupposes.”

Many such projects – and more generalized transformations of what code is – are possible. Poetic, musical, and semiotically signifying language will re-emerge within software code to counteract – in a revolutionary paradigm-shifting way – the original historical and scientific assumption (made at the time of the original invention of the computer) that code is a series of instructions to a machine, an exercise in formal logic, and the reduction of language to information. Text and code come together as an embodied cyborg cooperation (Donna Haraway) or as a relationship of uncertainty and indeterminacy where each partner in the human-machine exchange is reciprocally transformed. This can happen in the body of the code, in programmer comments, or in the double frame of reference of code as both readable as directions for the processor and as elegant expression for the human programmer.

In Speaking Code, Cox and McLean refer to the concept of double description as mutual causation or circularity that was elucidated by the thinker of second-order cybernetics Gregory Bateson in his 1979 book Mind and Nature: A Necessary Unity. Starting from this notion, the authors then speak about double coding: a composite of formal logic and linguistic creativity in Codeworks (the term given to the mixing of creative writing and software code by the poet Alan Sondheim) or pseudo-code (an informal description of the steps of a program or algorithm, often a phase of software development immediately preceding the writing of code), a hybrid articulation that is rigorously systematic and at the same time carries the force of writing.
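A small gesture toward double coding might look like the following sketch (my own illustration, not an example from Cox and McLean): the pseudo-code survives inside the program as comments, so that the same text addresses the processor and the human reader at once.

```python
# Pseudo-code, kept alive as the human-facing half of the program:
#
#     for every hour we are given:
#         let the light swell, then let it go

def day(hours: int) -> str:
    light = 0
    for hour in range(hours):        # for every hour we are given
        light = min(light + 1, 12)   # let the light swell
    while light > 0:                 # then let it go
        light -= 1
    return "and the code, too, sets"

print(day(24))
```

Read one way, this is a terminating loop over an integer; read the other way, it is a small poem about a day. The double frame of reference is the point.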
