The assumption that technology is a bigger part of our lives than ever before must be questioned. It seems true because we are an industrious species with a short memory. Many of us can’t even remember what everyday life looked like before we had mobile phones. It might be a fact that new inventions are adopted more quickly because digital capitalism and global mass consumption act as accelerators of technological evolution. But while technology evolves faster, I am not sure it is a bigger part of who we are than before.
I am not even sure our obsession or fascination with technology is stronger than before. Take the notorious Manifesto of Futurism (1909). Marinetti felt that the technological progress of his time, for example the spread of household electricity or the invention of the automobile, was a “new sunrise on earth”. He looked at speed as a new ontological absolute, the “splendor of the world.” Yet the Ford Model T, in production since 1908, had a top speed of only about 40 miles per hour.
Technology is what the Greeks called a pharmakon (a term popularized by Derrida and Stiegler, among others): both the cure and sometimes a new disease, never to be taken without a political horizon. For example, studies tend to show that so-called “digital natives” have a short attention span: is this compatible with deep thinking or political consciousness? Probably not. But it might help them think faster, or differently. My first job – actually my “military service” – was at the French economic expansion bureau in New York, in 1995: from my office in Manhattan, I sometimes communicated with David, a French friend of mine, via fax, because email had yet to be democratized. David and I would write letters that we faxed to each other across the Atlantic in minutes. The general meaning of our dialogue would have remained the same if we had used email: enjoying and cultivating friendship. But the phenomenological experience was different. For example, I had to run to the fax machine in order to be the first to receive my correspondence rather than have it glanced at by someone else in the office. Today we receive emails in what feels like an instantaneous intimate bubble, as if we were having an internal monologue with the rest of the world. Of course, we may suspect that our workplace emails are monitored, but most of the time we don’t think about digital security. It feels as if it’s just us and the messages: the mechanical work we have to perform to receive them, and the time lapse, tend to be minimal, creating an illusion of near telepathy.
I would distinguish between a technology that is visible, thickly embedded socially, and openly artificial, and a technology that is almost invisible, almost asocial, and natural-like: a quasi-instantaneous automated email announcing the death of a friend might feel like Persephone herself whispering in your ear, or it might feel more abstract. But it would be wrong to affirm that quasi-natural technology is new. Take an institution like marriage. It is a reproductive cultural technology that was long presented as natural. It regulated social perpetuation as a social algorithm, defining who could marry whom and who could procreate with whom. Marriage was (and still is) a psychological technology that regulated social class, loneliness, physical and mental health, and so on. We know, for example, that people who remain single late in life tend to be in poorer health: they eat less well, they take more everyday risks because they go out more often, and they don’t benefit from the affective and immunological effects of having children and living in an environment of mutual care (in the case of a non-stressful marriage).
We have been anthrobots since we started to cooperate and organize our tasks. As anthrobots, we are a hybrid unity made of flesh and protocols, creation and creature. It would not make much sense to speak of a human species without technology. Ancient tribes had their own technology: rituals, prayers, traditions, languages, more or less extended divisions of labor. We realize that we have always been a technological species when we understand that a technology is an enabling set of algorithms that don’t need to be enclosed in a computer. A technology can be an institutional ritual. As social beings we follow protocols, codes, techniques that define us as an anthrobotic species, a species that has relied on forms of automation since the building of the Great Pyramids and probably before. This is what the historian of technology Lewis Mumford called the “megamachine.”
More recently, the philosophers Deleuze and Guattari defined humans as “desiring machines”: after all, we have electricity in our bodies, and we are connected to many social protocols. There is no such thing as our bodies and minds existing separately, nor are they totally separate from technology. We are anthrobotic units, not just robotic ones: as anthropos, we are more than robots and protocols, because we can co-create, or at least coordinate, new protocols and novel social machines. The development of more accessible technological devices for all, like personal computers, is without a doubt a positive factor politically.
Another important point is that the underlying principle of our technological condition is: to be able to is to do. If we have new possibilities, we will use them, which also means that with great power can come a greater loss of time and responsibility. The more techno-capitalist societies multiply the points of contact between each of us and machines, the more we will be constantly mobilized to produce data. This can become alienating, because we are not here on earth merely to produce data. We are here to shape it into meaningful worlds. What is positive is the (at least virtual) democratization of world-shaping via technology.
An anthrobot is not a robot. This means that we are more than the sum of our automatisms. We have a special narrative relation to our emotions, to care, to desire, to co-creation, to conflict, to harmony. The human danger, the “negative” aspect, is to identify with our technologies as if they were natural: for example, to think of the technology of marriage as something absolutely and eternally necessary, or to think of social networks as a territory outside of which there is no survival. Or to think that a given technology is good for everyone because it serves a need. There is no such thing as a universal social need, even if there are biological needs. Here we should remember Sartre’s existentialist philosophy: we are “condemned to be free”, which means that we are a plural species with many possible worldviews and possible world-forming techniques and values, possible life-worlds or forms of life.
I define a robot as any algorithmic enabler, which allows me to suppose that we have always been surrounded by robots of some sort. According to Lewis Mumford, a megamachine is an invisible structure composed of living human parts, each assigned to a special office, role, and task, in order to make possible the immense work-output and designs of a collective organization. The famous example is the building of the pyramids, in which thousands of laborers were organized into a vast human machine, each subgroup performing a simple task as a cog.
Artificial intelligence is nothing new. Take languages, for example: grammar and vocabulary form an artificial extension that, when organized in specific ways and uttered in specific circumstances, produces knowledge and action beyond the sum of its parts. This is what philosophers like Andy Clark today call the extended mind thesis, but sociologists like Emile Durkheim were already studying such forms of collective consciousness in the nineteenth century. We are not only a cognitive species, we are a cog-native species. We organize our worlds as social machines.
Of course, I am ready to admit that we do feel that today there is a rise of robots and artificial intelligence. What new phenomenology will this perception and feeling produce? Let’s use an example, perhaps a metaphor. We all know that there are public rooms in which the light turns on when sensors perceive a movement. This is a very minimal artificially intelligent device. It spares us the gesture of turning the light on, and it reduces electricity consumption, since the light eventually turns off when the detector interprets the room as inactive. Now, imagine I am sitting alone in such a public room, say a common dining room, and I am thinking. I am not moving, so after a certain period of time the light turns off. This is why some public spaces now use acoustic detectors instead. Recently I found myself in such a vast room, alone, silently writing on my computer. It was just me and the sound detector, with which I developed an ephemeral relationship. That is the other side of the coin of our tendency to naturalize human inventions, for example transforming marriage into a natural law, or a sound detector into a spiritual entity that listens to us. We tend to anthropomorphize anything that we interact with on a regular basis, for example attributing personality traits to our car or our bike. This animistic mode of thinking, the idea that most things around us have a soul, is probably hardwired in our reptilian brain or limbic system: it is probably a survival advantage to presuppose that anything could be an agent and attack us.
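The “intelligence” of such a device is trivially small, which is part of the point. As a purely illustrative sketch, here is how the occupancy-light logic described above might look in Python; the class, the five-minute timeout, and the method names are my own assumptions for the sake of the example, not the design of any particular detector.

```python
import time

# A minimal, hypothetical sketch of an occupancy-controlled light:
# the light turns on when the sensor reports an event (motion or sound)
# and turns off once no event has been detected for a timeout period.

class OccupancyLight:
    def __init__(self, timeout_seconds=300):
        self.timeout = timeout_seconds   # inactivity window before switching off
        self.last_event = None           # time of the last detected motion or sound
        self.is_on = False

    def sensor_event(self, now=None):
        """Called whenever the detector registers motion or sound."""
        self.last_event = now if now is not None else time.time()
        self.is_on = True                # someone seems to be in the room

    def tick(self, now=None):
        """Called periodically; switches the light off after a still or silent interval."""
        now = now if now is not None else time.time()
        if self.is_on and self.last_event is not None:
            if now - self.last_event > self.timeout:
                self.is_on = False       # the room is interpreted as inactive

# A writer sitting still produces no motion events, so a motion-only detector
# eventually switches the light off; a cough or a keystroke heard by an
# acoustic detector would reset the timer.
light = OccupancyLight(timeout_seconds=300)
light.sensor_event(now=0)    # someone enters and moves
light.tick(now=200)          # within the window: light stays on
light.tick(now=400)          # 400 s of stillness > 300 s: light turns off
print(light.is_on)           # False
```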
Conversely, it might be convenient, in order to avoid being constantly afraid of everything, to assume that we can domesticate objects to the point that they become our friends. Now take the openly anthropoid robots that will soon invade our households. Clearly, we will be more inclined to consider them spiritual beings, especially if we live alone, as predicted in many movies and novels. Take the movie Her. The real theme of such a movie is not that technology will replace humans; it is the pathology of being without a world, without a collectively shared symbolic space that allows us to develop our individuation by co-creating ourselves and our peers in a conscious and dialogic manner. More and more urban inhabitants, despite living in vast cities, do not belong to any embodied mind-sharing community. We must reinvent meaningful corporations, or symbolic value-producing associations, that respect personal freedom and our need for well-belonging.
Intelligent systems around us can create time for us humans to become what we are best at: world-formers, community builders, co-creators of values, provided that other new technological devices do not channel our free time into futile tasks of data production or blind entertainment. Most of our daily hours are not spent producing anything new; they are spent reproducing the reality we live in, our social capital. The most dangerous robot is perhaps hiding within us: it is the automatic reproduction of thoughts and beliefs that maintain the world as we know it, the same social reality yesterday, today and tomorrow. Of course, some traditional values are worth maintaining, recalling, and reapplying when forgotten. But the main danger of technology is the unquestioned automatic reproduction of embedded beliefs and values as if they were eternal and natural: crystallized biases.
The question is: will technological development catch up with our full potential? We have to stop thinking that robots and artificially intelligent systems are the next new thing, above or ahead of us. In fact, they are behind us. Being fully human is the real ultimate achievement, not a regression. A certain nostalgic humanism has caused much damage by supposing that the humanity in each of us is a given, and that we don’t have to do much to be human. I prefer a more active view of what it takes to be human: a fully realized human being is part of a collective worldforming, a co-creative ensemble, and we still need to fully invent the human species, which remains an elaborate but confused draft. Perhaps it is better to start from a blank page, because the previous draft has become almost unreadable to many.