Alan N. Shapiro, Hypermodernism, Hyperreality, Posthumanism

Blog and project archive about media theory, science fiction theory, and creative coding

Time-Memory-Experience (part 4 of 4), by Anja Wiesinger and Alan N. Shapiro

Timeliness of the Computer
from the Time-Memory-Experience project
(this is part 4 of a 4-part essay)
by Anja Wiesinger and Alan N. Shapiro
Let us now take a look at the specific modes of time of the computer. In what comes next we would like to look into the timeliness of the computer itself, the rhythm created by the machine and by the user, and, lastly, the perception of time produced in the interaction between user and machine.
Basically, computers don’t have a concept of time as we know it. It might be helpful to be reminded again what data are. Data are bits of information which have been translated from continuous signals into discrete digital signals. One could say the same thing about the construction of time in the Western tick-tock culture (where the lived fiction-reality of time is a hybrid of the scientific-objectively real and the socio-culturally constructed). Continuous (human-Western) time needed to be reconstructed on, or transferred to, the computer as discrete time. This is what happened to narrativity in the technological age. Hence the invention – by Alan Turing et al. around the time of the Second World War – of The Computer 1.0. Algorithms and programming languages process data in a succession of discrete moments. The Computer 1.0 – which is essentially a hybrid of text medium and calculation machine – depends on a precise programmatic temporality. To keep this medium going, programmers write all sorts of programs: databases, timestamps, schedulers and dispatchers, filters and time beams. The Computer 2.0 would implement the hybrid fiction-reality of Western time, and its multi-dimensionally expanded successor, as fiction-reality in a conscious and explicit way, rather than semi-consciously and half-in-denial as in The Computer 1.0, which pretends to be aligned with an objectively real, discrete, forwards-running time.
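The discreteness of machine time can be made concrete in a few lines. A computer does not hold "now" as a continuum; it counts ticks since an agreed epoch, and any continuous duration must be chopped into whole ticks before an algorithm can act on it. The following is a minimal illustrative sketch (the function names and the 10 ms tick size are our own assumptions, not any real scheduler's API):

```python
import time

def discrete_now_ms() -> int:
    """Machine 'now': an integer count of milliseconds since the Unix epoch."""
    return int(time.time() * 1000)

def quantize(seconds: float, tick_ms: int = 10) -> int:
    """Round a continuous duration down to a whole number of discrete ticks."""
    return int(seconds * 1000) // tick_ms

# A quarter of a second of lived, continuous time becomes
# exactly 25 discrete 10-millisecond ticks for the machine.
print(quantize(0.25))  # 25
```

Everything between two ticks simply does not exist for the machine: this is the discrete temporality on which The Computer 1.0 depends.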
Recursive functions and time loops return after execution to their original state. The repetition of data calculation allows for an automatic and exact reproduction of queries. The repeatability of the work of the executable instance is the stuff of the extant computer. Data processing systems of businesses and governments all over the world now rely, for every detail of social-technological operationality, on this endless reproducibility of the subroutine’s “carrying out.” Other kinds of algorithms, which may “learn” about their own behaviors, apply certain self-evolving filters, and add more lively kinds of data, already move in the direction of an alteration of the core construction of the computer, retaining what The Computer 1.0 is yet superseding it in the Hegelian sense.
Recursive functions and if-then-else loops return to their initial point after execution. They repeat. But how could they be transformed, or even grow? The point is that, in search algorithms, for example, queries deliver the same results unless the data have changed in the meantime. This preciseness, in combination with the storage capacity of digital archives, is crucial for the functionality of the digital archive. Yet the question before us is how the structure of the database as technological artefact could be upgraded to a relationship of pattern, similarity, and musical resonance, at the grade of quality of user experience that one can already have today, in a “surface” way, in a media-software environment or “computer game” that gathers its information from a brilliantly improvised, dynamically programmed navigational structure. Such a structure in fact prefigures new kinds of databases, software, and the “beyond the digital” computer at a more fundamental level of architecture, design, and implementation.
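This exact repeatability of the query can be sketched directly: a deterministic search over a fixed dataset returns the same result every time it runs, and the result changes only when the underlying data change. A minimal sketch, with invented names and toy data, not any real search engine's interface:

```python
def search(records: list[str], term: str) -> list[str]:
    """A deterministic query: all records containing the term, in stable order."""
    return [r for r in records if term in r]

archive = ["Benjamin 1936", "Kluge and Negt 1972", "Benjamin 1939"]

first = search(archive, "Benjamin")
second = search(archive, "Benjamin")
assert first == second  # exact repeatability of the subroutine's "carrying out"

archive.append("Benjamin 1940")            # only a change in the data...
assert search(archive, "Benjamin") != first  # ...changes the result
```

The "self-learning" algorithms mentioned above break precisely this contract: their internal state evolves with each execution, so the same query need not return the same answer twice.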
Since in a database each item of data is required to carry an identifier, ensuring that it is uniquely identified in its value and location, the data remain atomic, consistent, isolated and durable. This also means that the user does not necessarily have to know the physical location of the data. This is different from a conventional real-world library, where the library card provides information about the location of a book via conventionally agreed-upon coordinates, such as keyword, shelf number, or index number. Online search engines, by contrast, have already done this work for the user in pre-processing, and they instantaneously return the desired information to our fingertips.
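The unique identifier and the irrelevance of physical location can be demonstrated with an in-memory SQLite database: the primary key guarantees uniqueness, a duplicate identifier is refused, and retrieval proceeds by identifier rather than by shelf or coordinate. A minimal sketch with an invented `items` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO items (id, title) VALUES (1, 'Metropolis')")

# A duplicate identifier is rejected: each datum stays uniquely
# identified in its value and location.
try:
    conn.execute("INSERT INTO items (id, title) VALUES (1, 'Twelve Monkeys')")
except sqlite3.IntegrityError:
    print("duplicate id rejected")

# Retrieval is by identifier, not by physical location.
row = conn.execute("SELECT title FROM items WHERE id = 1").fetchone()
print(row[0])  # Metropolis
```

The user never learns, and never needs to learn, on which page of which file the row is stored; the identifier alone suffices.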
This immediacy with which information is accessible is truly amazing. It saves time in the research process. It is a thoroughly rationalized, automated economic practice aimed at efficiency. Efficiency is a prerequisite to human liberation, helping to move from an economy of scarcity to an economy of freedom and abundance, as utopian thinkers like Herbert Marcuse and Murray Bookchin surely would have agreed. The Computer 1.0 can therefore be characterized by its acceleration or intensification of our Western notion of time – and the rational-productivist use of time – whereby work gets accomplished based on the assumption that the information remains technically-operationally valid for the duration of the procedure. The question then becomes: how can this acceleration or intensification of Western time be radicalized further (while at the same time becoming-truly-mainstream)? How can this breakthrough break through to new possibilities for time which go far beyond what we have lived in the everyday-workaday-humdrum-quotidian social reality, but which we have already imagined a thousand times in our collective science fictional imagination?
How might we consider aspects of Artificial Intelligence or Computer Science 2.0 that deal with complex problem-solving, applying what we have learned about memory in Historiographic Studies (for example, of the Holocaust-Shoah or of victim-sacrifice (Opfer) political situations or the preconditions for human empathy) to The Computer 1.0 to create self-learning machines and true pattern-recognition algorithms?
And let us bring embodiment into play. Here we are concerned with the time-frame of memory and experience in the interaction with the machine. What does it really mean for the user when certain work steps are passed on or delegated to the machine? When the user does not seek out the information any longer, but instead the information comes to the user? When the user doesn’t hold a book in her hands any longer, but scans a text on the screen with the eyes? What kind of textuality as materiality has been invented and is to be invented?
Walter Benjamin will help us here. To really think with Benjamin and not only to academically cite and invoke him for the 4,096th time. What was Benjamin’s elaboration of the machine?
In Tune with the Machine
For Walter Benjamin, a foundational human experience in relation to all objects is formed by a gaze — a gaze that is reciprocal. There is a little homuncular man within each object, something that is human about each object, something that returns the gaze. It is that small moment of return-recognition-feedback-acknowledgment that allows for the self-assurance of the subject. Perception and experience are constituted by the relation of the subject to its objects, for example, in the relation of the cameraman to his camera and to the camera-ized world.
The relationship between a worker and his machine (when the machine is considered as a tool and not as techne in the sense of Martin Heidegger’s “The Question Concerning Technology” or as android in the sense of Alan N. Shapiro’s “Toward a Unified Existential Science of Humans and Androids”) can be characterized by the rhythm which is predetermined by the machine. The foundational human experience of the reciprocal gaze, as described by Walter Benjamin, cannot happen. There is no gaze and no acknowledgment. There is domination of man by the machine, which is paradoxically the result of man’s instrumental use of the machine to dominate nature and other humans in the classical paradigm of technology initiated by Stanley Kubrick’s mutated ape-men.
Fritz Lang’s classic science fiction movie Metropolis (1927) illustrates this relationship between worker and machine well. The worker suffers from the “shocks” (literally the physical hits to which the worker must respond) that are imposed by the mechanically consistent rhythm of the machine. The worker is left with no choice but to adjust to the tempo of the machine. Lang’s machines were made of steel and were very heavy. The machines produce heat, and the worker has to mobilize all of his muscle strength to cope. The worker perspires profusely. If the worker falls out of tune with the machine, he – and symbolically all of humanity – loses control. Humanity is sent to the underground, as in Terry Gilliam’s Twelve Monkeys (1995). Production stagnates and descends into a comical or tragic mode.
These shocks generated by the machine release stimuli, according to Benjamin, which the worker has no capacity to fend off. The worker is isolated and distanced from other workers in the factory. Each worker is responsible for managing or monitoring but one small work-step of the machine. The machine rules and YOU ARE NOTHING.
In the well-known essay “Der Erzähler” (1936), Benjamin shows how in pre-industrial times knowledge and experience were passed from generation to generation through the telling of stories. Since pre-industrial work was mainly hand-crafted production, practice and experimentation were crucial for training and knowledge reproduction. This also meant spending a lot of time in the company of the objects.
Production was not as subordinate as it is today to the practice of efficiency and rationalization (time-saving as cost factor) in the modern factory, office and mass production. Alexander Kluge and Oskar Negt, in the book Public Sphere and Experience: Toward an Analysis of the Organization of the Bourgeois and Proletarian Public Sphere (1972), showed that cultural transformations at the scale of society as a whole follow a temporality which is similar to that of pre-industrial times, slower than the speed of industrialization. Kluge and Negt further stated that, in contemporary advanced capitalist society, there is more than one co-existing public sphere, although some of them exist marginally and without official recognition or even notice. Various collective experiences within society may follow a temporality similar to that of pre-industrial times. Such experiences may include sports, play, games, and the refusal of work.
Hence contemporary society as a whole might be slower than the velocities induced by machinic and mechanical acceleration. And, at the same time, faster. Incidentally, although Paul Virilio wrote a lot about speed, it is, following physics, more scientifically correct to speak of velocity.
Rhythm and Automatism of the Computer 

Benjamin already saw the flip of second-nature technics into its opposite, first-nature bare life, via the domination of technology over human beings. This resembled the archaic top-down relationship of nature over humans, in which nature is attributed a mythical superiority. This dysfunctional balance of human beings in relation to their environment was illustrated by the example of the machine and the factory worker, whose actions are determined by the machine.
Compared to this archetypal modernist example which haunted Benjamin and Lang, operating a data processing machine is a game for kids. Handling a computer requires some knowledge, but the machine takes the upper hand only when it breaks down or is infiltrated by a virus or rogue software.
User-Machine Interaction, Human Memory
In the interaction between the user and the machine, distance is reduced. The user makes use of the tools which he or she acquires via the keyboard, mouse and/or touchscreen controls. The user rules the rhythm of work. The machine acts as a servant for the user. It has been programmed to serve and has been domesticated by the human user. It is like a pet. Contrary to the industrial mechanical machine, the “personal computer” is designed for individual — professional as well as personal — usage. As a quasi-universal machine which simulates many singular specific machines, the PC empowers the user to complete a nearly infinite variety of tasks. We at the fictional non-existent company Shapiro Technologies have a positive view of mobile technologies and intelligent objects/surfaces/interfaces, and we look forward to their future development as android companions.
Rhythm is created by both the user and the machine. It is aperiodic, depending on the speed of the connection, how much CPU is being taken up by other processes, and the choices that the user makes. This raises the question: what potential experiences are possible? How does the perception of information unfold in such a dynamic environment? Do we only search, scan and filter information that has, in effect, already been queried and filtered for us by the computer? Does the model still precede the real in the information era? If the dynamic image build-up of the screen, and the automatism under which this happens, precedes experience — then is this similar to Benjamin’s optical unconscious?
Or is it closer to what Rosalind Krauss (see Part 3 of 4 of this essay) described as the curious libidinal view? What are the effects on human memory, both short- and long-term, of computer usage?
In terms of classical cultural theory, does the personal computer trigger a curiosity (Schaulust), or rather other kinds of experiences? What new forms of knowledge are possible? We wish to study the computer and social media in relation to memory, embodiment, practices of the body, and our phenomenological relation to digital technologies.
Re-Configuration of the Storage Memory 

Benjamin’s discernment of the catastrophe was contingent on the acceleration of time in modernism, in which the past was sent into oblivion and the glimpse into the future was blocked. In a similar manner, Sean Cubitt concludes that strictly rationalized computers have already calculated the future. For Aleida Assmann, the presence of fluxes of data constantly rewrites the past. Assmann claims that the metaphor of the trace fails to illuminate today’s digital archives, because all traces are lost.
As I – Anja Wiesinger – have tried to show in my study of the digital image archive ARTstor, a new archival paradigm has been instituted in which nothing is deleted anymore. Due to the immense memory capacities of today’s computers, everything is saved, even the movements of data in add, update, and delete operations of records of the archive itself. The history of the archive is inscribed into the archive.
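This archival paradigm, in which nothing is deleted and the movements of the data themselves are retained, corresponds to what software design calls an append-only event log: the present state of the archive is merely a replay of its complete history. The following is an illustrative sketch of the principle under our own invented names, not a description of ARTstor's actual implementation:

```python
class EventLogArchive:
    """An archive in which nothing is deleted: every add, update, and
    delete is appended as an event, so the history of the archive is
    inscribed into the archive itself."""

    def __init__(self):
        self.log = []  # append-only list of operations

    def record(self, op: str, key: str, value=None):
        self.log.append({"op": op, "key": key, "value": value})

    def current_state(self) -> dict:
        """Replay the full history to reconstruct the present."""
        state = {}
        for event in self.log:
            if event["op"] == "delete":
                state.pop(event["key"], None)
            else:  # "add" or "update"
                state[event["key"]] = event["value"]
        return state

archive = EventLogArchive()
archive.record("add", "img-1", "Metropolis still")
archive.record("update", "img-1", "Metropolis still, restored")
archive.record("delete", "img-1")

print(archive.current_state())  # {} — the image is gone from the present...
print(len(archive.log))         # 3 — ...but its whole history remains
```

Even after the deletion, the record of the image's addition, revision, and removal survives in the log: the "deleted" past remains present.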
One could attribute to the ARTstor library the role of the Speichergedächtnis, and to the individual collections created by the user the role of the Funktionsgedächtnis. The digital archive is not solely a Speichergedächtnis in the sense of the traditional archive. It presents and merges both types of memory on one platform. Historiography is active in the image collections. Teachers and students save material for research, seminars and publications in which the material actively “works.” Images are inserted into different contexts, depending on the research question and/or research interest. Images produce different meanings and get interpreted in multiple ways.
The organization into collections has different effects. Since knowledge production is no longer an individual but a more collaborative process (it happens not outside the archive, but rather on the platform itself), perspectives and points of view more immediately stand out and are open for discussion. Placing historiography and memory at the forefront also means becoming more conscious of one’s own position as a writer who has an explicit cultural and historical perspective. It is important that not all processes get automated, engineered, or pressed into algorithms. When designing for digital technologies, it is good to think about where it makes sense to apply automation, and which and how many processes should be passed on to the machine.
The trace, I would claim, does not disappear. And not because it is overwritten by a presence, but because past and present are equally present. There is no more “grand narrative” (Lyotard), but since the selection of sources and memory is multiple, neither is there a “grand selection.” A new metaphor that replaces the trace has yet to be found.
It is up to us, up to the designers, to decide which processes are handed off to the computer for efficiency. An experience of the digital archive that is more and more immersive and multi-media will enable the inclusion of alternate forms of knowledge, of multiple views, of more subjective takes on the material, of more narratives and storytelling, more sharing, and more involvement of emotions in the archive.
Ironically, all this critique of modernism, deriving from the lineage of those thinkers whom I – Anja Wiesinger – have discussed and who left their imprint on the 20th century, this critique of rationality, time, history, and objectivity, gets overturned by something that first appears under the guise of the ultimate metaphysical machine: the computer.
In the archive, all stories — including the story of the archive — get recorded. They contribute to the assembling of a more powerful memory (computer- and user-memory) in which there is a plurality of pasts and presents which will shine a light onto a brighter future.
