In writing a book on artificial intelligence I have found that it is impossible to fully separate the technology itself from the sometimes bizarre ideas that we have for it and about it. I am fascinated by how technologies are not only anthropomorphised, but how some are considered menacing and others amazing, and some both at once. Whether the aliens are reading your thoughts through radio waves and dental implants or Google is reading your thoughts through email, the borderlands between schizophrenia and sensible vigilance – delusion and deduction – are disputed territory. Perhaps it comes down to a level of discomfort not just with how some people see technologies, but with how some people think technologies see them. The occult artifacts of planetary-scale computation form a cracked mirror, and the reflections seen there are given to a bipolar political-psychiatric imaginary.
Machine aesthetics, high in the Alps
This summer while teaching in Saas-Fee, Switzerland, I gave a public talk that included an extended rumination on apophenia, or the perception of causes and patterns that are not really there. The term is attributed to Klaus Conrad and derived from his 1958 monograph on schizophrenic confusion between illusion and revelation. Faces in clouds and gambler’s luck are classic examples of psychological apophenia, and conspiracy theory – the assignment of first-cause agency to some master force that causes over-personalised historical effects – is a kind of political apophenia. A few nights later, Hito Steyerl gave a talk in the same lecture series and, not having seen my talk, also discussed apophenia and its importance for understanding our contemporary condition. Coincidence? (Yes, of course.) My own interest in the term comes from discussions with Ed Keller as we browsed a book on exotic psychopathologies he had just bought. In my talk, I discussed apophenia as both symptomatic of the sorts of superstitious faulty reasoning that curtail our ability to design our societies intelligently, and as an irreducible by-product of our ability to abstract patterns from circumstance. It is also not limited to humans (more on that below). In Steyerl’s talk, apophenia was considered as an alternative way of seeing complexity, less the fabrication of false patterns than the uncensored irruption of new links, irregular inference and problem-solving insights. Both of these versions of apophenia are the right one: as you read on, hold them together in your mind side by side.
While we were all holed up in the Alps, a post from Google on their Research Blog about a project called Deep Dream was the main topic of breakfast debate for a few days, and for good reason. The project involved using feature-recognition algorithms to interpret various photos, provoking them into seeing things that aren’t actually there and altering the original images accordingly. This machine paranoia uses an initial “find edges” filter that is familiar to any Photoshop user, and which makes the discrete outlines of features and colour fields in each image more pronounced. This way of seeing eliminates ambiguities between this and that; it makes things look more like what they are already seen to be. Next a feature-recognition algorithm (or cluster of algorithms) is put to work to “find” what the algorithm is disposed to find (faces, animals, buildings, etc.). With one pass the process yields inconclusive results, but after feeding the resulting image back into the process, and then again, and again, and again, the outcomes are genuinely amazing.
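That feedback loop can be caricatured in a few lines of code. The sketch below is a deliberately toy version and assumes nothing about Google’s actual networks: a “detector” with a preferred pattern scores an image, each pass nudges the image toward a higher score, and the output is fed back in until a faint, accidental resemblance blooms into a full-blown hallucination.

```python
# Toy caricature of the Deep Dream feedback loop (illustrative only).
# Images here are flat lists of numbers; the "detector" simply prefers
# one fixed template pattern. None of this is Google's actual model.

def detector_score(image, template):
    """How strongly the detector 'sees' its preferred pattern in the image."""
    return sum(p * t for p, t in zip(image, template))

def amplify(image, template, step=0.1):
    """One pass: nudge every pixel toward what the detector wants to see."""
    return [p + step * t for p, t in zip(image, template)]

def deep_dream(image, template, passes=50):
    """Feed the altered image back into the process, again and again."""
    for _ in range(passes):
        image = amplify(image, template)
    return image

# A mostly-noisy image containing only a faint trace of the template...
template = [1.0, -1.0, 1.0, -1.0]
image = [0.05, 0.0, 0.02, -0.01]

dreamed = deep_dream(image, template)
# ...after repeated feedback, the faint trace dominates the image.
print(detector_score(image, template), "->", detector_score(dreamed, template))
```

The essential point survives the simplification: the loop does not discover what is in the image so much as it amplifies whatever the detector was already disposed to find.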
A swirling, acidy bath of seen and barely seen apparitions appears. The images are new, exciting and disturbing in ways that painting, for example, has not managed in some time. You don’t know quite what you’re looking at, but still see things you don’t quite believe yourself to be seeing. More precisely, one doesn’t – at first – know how one is seeing what one is looking at, but sees it nevertheless. That is, the images are remarkable enough on their own, irrespective of provenance, but once the viewer makes the partial empathetic transference into the Other Mind of the feature-recognition algorithms and their way of seeing, the effect is of trying on a new mode of perception and making sense of the world through those eyes: that is the real payoff.
The false positive as evolutionary quirk
A character in Peter Watts’ recent novel, Echopraxia, argues that false pattern recognition is hardwired into Homo sapiens’ evolutionary success. The story goes like this. Tens of thousands of years ago, two guys are hiding in the tall grass. Quietly they are looking all around for both predator and prey. One of them sees a faint but distinct anomaly in the way the light breaks through the grass moving left to right. He recognises this pattern as that of a looming tiger and runs away back to the village, leaving his friend to be eaten. The guy who ran away was able to reproduce. His pattern-recognition genes ensure his and their own survival, and the guy who did not possess these genes did not reproduce. However, there was a third guy sitting in similar grass perhaps a morning earlier. He too saw a weird pattern in the grass. He too thought there was a tiger and got up and ran back to the safety of the village. But in his case there was no tiger – it was all in his mind. He was not cunning; he was paranoid. Yet, he too was able to reproduce and his faulty pattern-recognition genes (which are perhaps actually the same as the first guy’s accurate pattern-recognition genes?) were able to reproduce themselves as well. The lesson is that evolution has greatly rewarded human pattern recognition, but bound inextricably with that bounty and bargain it has also rewarded hallucination and error.
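The asymmetry in that parable can be made explicit with a toy expected-cost calculation. The numbers below are my own illustrative assumptions, not anything from the novel: when a missed tiger is vastly more costly than a false alarm, the jumpy, hallucination-prone strategy wins even though it is wrong almost every time.

```python
# Toy signal-detection arithmetic for the tall-grass parable.
# All numbers are illustrative assumptions, not empirical values.

p_tiger = 0.01              # chance a rustle in the grass really is a tiger
cost_false_alarm = 1.0      # energy wasted fleeing from nothing
cost_missed_tiger = 1000.0  # cost of being eaten

def expected_cost(always_run):
    """Average cost per rustle for the paranoid versus the sceptic."""
    if always_run:  # the paranoid: flees every rustle, real or imagined
        return (1 - p_tiger) * cost_false_alarm
    # the sceptic: never flees, and pays only when the tiger is real
    return p_tiger * cost_missed_tiger

paranoid, sceptic = expected_cost(True), expected_cost(False)
print(paranoid, sceptic)  # the paranoid pays far less on average
```

Under these (made-up) numbers the paranoid strategy is false-alarmed ninety-nine times out of a hundred and still comes out ahead, which is the parable’s point: selection rewards the hallucination along with the perception.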
Moving now from fake evolutionary biology to fake political science, we compare this with Karl Popper’s writing on the “conspiracy theory of society” and with later research on narcissistic attribution errors. Members of informationally and socially isolated groups (which today include most internet subcultures) tend towards a kind of paranoid cognition. They become suspicious and mistrustful of society and susceptible to “sinister attribution errors”. As Cass Sunstein puts it, “This error occurs when people feel that they are under pervasive scrutiny, and hence they attribute personalistic motives to outsiders and overestimate the amount of attention they receive. Benign actions that happen to disadvantage the group are taken as purposeful plots, intended to harm.” And thus a million bullshit memes bloom.
The Urban Dictionary defines “Google-paranoia” as an “intense fear that Google wants to misuse info about your shitty boring life”. Patient Zero of Google-paranoia may as well be Julian Assange, who in addition to several less-impeachable accomplishments, published in 2006 a political philosophical treatise called “Conspiracy as Governance”. A not too off-target summation of this strange and melodramatic text might be Antonio Gramsci meets Carl Schmitt meets William Cooper’s conspiracy-theory tome Behold a Pale Horse. Authoritarian power, which apparently for Assange at this point in his thinking is more or less all public or private governance, is based essentially in conspiracies of secret operatives who, “in collaborative secrecy, work to the detriment of a population”. To counteract this web of lies and unpunished molestation of the innocent, Assange encourages those who value “truth, love and self-realisation” to sever the conspiracy’s information flows and to divulge the primal scene of the otherwise hidden, illegitimate, ubiquitous surveillance/mindreading. One can of course hold different opinions about Assange’s means and ends.
I find this way of thinking to be less than helpful. The apophenia of conspiracy-theory geopolitics invents an otherwise absent pilot as the true first cause of all the things that have actually evolved according to chance, complexity and contradiction. In this way, it is a form of secular Creationism, or at least of intelligent design. It holds that systems can’t evolve according to irregular and imperfect processes; rather, some invisible authority (either Good or Evil) must have caused all this to take place. Google or the NSA, or Bush or Obama, or the Jews or Goldman Sachs, “drones” perhaps: the absent and abstract Oedipal first mover must be the source of this confusion and misery. There is, for example, an unfortunate gravity field drawing together the overestimation of WikiLeaks’ actual accomplishments and Truther websites. This suggests to me that apophenia may have risen to the level of a political movement, and that the Influencing Machine (see below) is not only a psychiatric curiosity but also a geopolitical phenomenon. To be clear, I am not suggesting whatsoever that the surveillance programs that came to light through Snowden’s leaks or the war footage revealed by Manning’s, for example, are somehow “not true” in the same way that Obama’s African birth is not true. To the contrary, it is the warping of deduction and delusion by the apophenic gravity field that makes the adjudication of real and banal evil more politically difficult. Fredric Jameson’s discussion of conspiracy theory as a form of pop politics of globalisation, a way of apprehending totalities, is well taken. However, this kind of political apophenia means something very different when scaled from the relatively impotent convictions of autodidacts, or popcorn movie audiences, to a global political language that sees planetary-scale computing infrastructure through the lens of some sort of Protocols of the Elders of Silicon Valley.
It is impossible to overlook the coincidence (or is it?!) of Google being the particular research institution that composed and publicised Deep Dream. The corporation claims a universal mandate to “organise the world’s information and make it universally accessible and useful”, so it would be remarkable if it were not Influencing Machine Number One among today’s discerning borderline personalities. (“On the Origin of the ‘Influencing Machine’ in Schizophrenia” is an influential article by psychoanalyst Viktor Tausk, first published in 1919 and translated into English in 1933. Per the Wikipedia entry: “The delusion often involves their being influenced by a ‘diabolical machine’, just outside the technical understanding of the victim, that influences them from afar. It was typically believed to be operated by a group of people who were persecuting the individual, whom Tausk suggested were ‘to the best of my knowledge, almost exclusively of the male sex.’”) Google is obviously a very important and powerful project, and its machinations shouldn’t go uncriticised, but for some of us the dark currency of Google’s “meaning” is something more. Recently, people have had Google Glass headsets ripped from their faces by affordable-housing advocates, because… Google Buses. Hobbyists have had their drones smashed to bits by strangers who are certain that they are personally the target of the flying camera. People who are not Laura Poitras place tape over the microphones on their Android phones because they know that Google and the government are using them to listen in right now. In some circles, it is possible to directly compare Google AdWords with the East German Stasi with a straight face. It is not that online surveillance is not real, or that it is not a big deal, but it cannot possibly be dealt with in a way that will grow into a viable geopolitics of computation if it is understood according to mystical parables.
We are all dogs
That said, it is not as if Google’s algorithms are not themselves paranoid; after all, they see psychedelic dog faces in everything. The reason that Deep Dream hallucinates dogs is not spooky. Its object-recognition faculties were trained using the ImageNet data set, which contains a disproportionately large number of dog-breed categories. It will therefore hallucinate dogs where there are none and identify people as dogs if it is told to look at them hard enough. The latter may seem insulting, but it shouldn’t. Diogenes’ proto-cosmopolitanism is based on the glorious dog-like commonality of all. (However, some machine-vision identification and categorisation systems have tagged African-Americans as “gorillas” and some can’t recognise dark-skinned faces as being faces at all. This is something quite different, quite symptomatic and altogether not OK.) If the Deep Dream images are artifacts of a computer’s hallucinations of phantom conclusions, then the conspiratorial figure cut by Evil Google Stepfather is that same paranoid vision turned inside out and back on itself. Perhaps Google, the AI, is as paranoid in how it sees us as some of us are in how we see it. It is not surprising that as AI matures its own pattern-recognition faculties would reach a plateau of creative apophenia. From Antonin Artaud to Seymour Cray, many have perceived homologies between abnormal human psychology and various machine behaviours. Beyond simply anthropomorphising intentions (“the Xerox is mad today”), our deduction and induction of machinic intelligence trace along the paths of understanding of our own intelligence. It is one of the few such homologies our humanism will allow. Still, the trove of allegories available from this history of machine psychiatry holds considerable interpretive and even creative potential despite itself. (In Kubrick’s 2001: A Space Odyssey, for example, HAL was obviously a deeply paranoid creature. Tormented by some fearful obsessive-compulsive disorder, the AI decided that the mission to Jupiter could not withstand the dark conspiracy of humans and so jettisoned them.)
Is machine vision paranoid? Is our popular understanding of machine vision paranoid? What is Google up to? Are they watching you, and if so who (or what) are they seeing? Is our understanding of Google’s paranoid machine vision itself paranoid, making the paranoid AI that much more paranoid in response? In a word, yes. Is this how artificial intelligence should evolve, in relation to and autonomously from human intelligence? If the evolution of human pattern recognition is any indicator – based on both productive and destructive apophenia, deception and delusion – then yes and no. §