Rorschach Computing
Figure 1: "Gathering is peculiar, because you see nothing but what you're looking for. If you're picking raspberries, you see only what's red, and if you're looking for bones you see only the white." – Tove Jansson
Davos 2026 was a pantomime, an "Oh yes it is! Oh no it isn't" difference of opinions on "AI". On one side there are folks who've invested maybe a hundred trillion in what's probably a lemon. On the other are scientists claiming an existential threat to life on Earth.
Given such strong biases, a concise way to understand the great "AI" CON is that "we see what we want to see, we hear what we want to hear".
Already, perceptual psychologists, magicians and confidence tricksters will be sitting up and guessing where I'm going with this…
The emperor's new computer
"AI" in the guise of language models (LLMs) is a new type of computing… psychologically.
It is not technically different. It still relies on boring old transistors. Deterministic state transitions happen in matrix processors following the age-old laws of electronics, and getting hot for it. It rests on very traditional hardware. There's nothing magical, spooky, or "quantum" going on.
A deterministic computer is, at heart, a simple series of logic gates, and given a well-formed problem like adding two numbers, it completes in a finite time and always gives the same answer.
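As a throwaway illustration (a Python sketch, not how your CPU is actually wired), here is an adder built from nothing but logic operations; run it as many times as you like, and the same inputs give the same answer:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder expressed purely as logic gates (XOR, AND, OR)."""
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return s, carry_out

def add(x, y, bits=8):
    """Ripple-carry addition of two small integers, gate by gate."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add(19, 23) == 42   # a well-formed question, the same answer every time
```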
Indeed this extraordinary reproducibility is the basis of our modern narrative mythology of computers. Computers are "always right". Thousands of people's lives were ruined in the Horizon Post Office scandal as a result of the repeal of section 69 of the Police and Criminal Evidence Act 1984, which had sensibly assumed they may not be.
When we come to the question of correctness in computer science there are really two positions: a computer can produce something we can understand, or something we don't understand. Here, "understanding" means testing against a pre-ordained set of expectations.
This is where Turing touches Quine and Shannon on the philosophical question: "If we give a computer a problem, would we recognise the right answer if it gave it to us?". How would we know the output wasn't an error? The Halting Problem is a special case where the outcome might not be determinable at all, but we have no way of knowing, except to wait forever.
Once again Douglas Adams was way ahead of us all on this. As devotees will know, the Earth was the replacement computer for Deep Thought, commissioned because its creators had never really understood the Ultimate Question of Life, the Universe, and Everything, and so could make nothing of the answer, 42.
This is not so much a GIGO (Garbage In, Garbage Out) problem as a Not Knowing What The Fuck You're Doing With A Computer problem.
And there's a lot of that going about at the moment.
To explain this let's start with something we can all relate to about watching films.
Sound design
In my first studies of sound design back in the early 1990s, I found a lot of deep expert knowledge couldn't be encoded in expert systems. This latent, tacit or "ineffable" knowledge can't be written down. When I asked prominent sound designers - who had spent tens of thousands of hours in study, practice and immersion in their culture of film, radio and music - they couldn't tell me how they do it. On the surface it seems they don't know, and yet obviously they do know! Ineffable knowledge must be teased out obliquely or simply osmosed by proximity - understudying masters (so I went into the music and film business and understudied great masters for 10 years).
It's worth remarking here that, from an "AI" research perspective, that was always the problem with expert systems. The cost of knowledge acquisition and encoding is extremely high.
And yet every single one of us already has that knowledge deep within us, and we use it when we watch a movie.
Visualise a beach. The sun is shining and we pan over to the sea. Water is already a symbol of what is hidden or emergent. So, as the shot lingers on the waves, we already have a question in our minds, "What will emerge from the water?".
If the music is dreamy, sexy, James Bond music, we already expect a bikini-clad goddess. But replace that with restless staccato cello in grating semitones (Jaws music), and we'll expect a shark fin to emerge menacingly. The same visual input is "prompted" into two distinct interpretations that speak to our own ineffable knowledge.
EVP
A fellow sound artist, Joe Banks, introduced me to the idea of Rorschach Audio and Disinformation with his investigation into "Electronic Voice Phenomena" (EVP).
EVP is a parapsychological pseudo-science related to ghost hunting. Usually white noise, distant radio signals, perhaps from outer space (SETI), crackly records or other ambiguous signals are claimed to contain the voices of lost souls, aliens, ancestors and spirits. They have messages for us.
James Randi did some good lectures on this too, around the craft of cold-readers, spirit-mediums, hypnotists and other tricksters where the mark (victim) is told what to expect and then hears it. Terms for this are "priming", or "leading".
Listening in readiness
However there's a really solid and fascinating scientific basis to all this.
You are asleep. It's midnight, but you hear a scurrying in the corner of the room. Even while we are still asleep the brain can process sound, and at the hint of certain threats - like scratching claws in the periphery of consciousness - our neurons burst into action. "What was it?!"
We may dream of a rat, or wake in an alert state. Either way we enter a state called listening in readiness, highly attuned to any further scurrying sounds. Meanwhile any very different noise, like the slow rumble of a passing car outside, becomes psychologically inaudible. This ability to focus attention within a signal space is related to the similar "cocktail party effect" by which we select voices in a crowd.
When I was working on "machine listening" and reading auditory neuroscience, Schnupp, Nelken and King's Auditory Neuroscience (MIT Press) was the latest and greatest text. What scientists discovered is that auditory perceptual systems are a bit like advanced radio receivers. Our brains contain many negative (suppressing) and positive (enhancing) feedback loops, such that as we listen we apply a sliding template in which expectation is very important.
How does an advanced radio work using filters? In digital systems a statistical Kalman filter or linear predictor, and in analogue systems a superheterodyne stage, allows extraordinary sensitivity to certain signals, so long as they are signals we've met before or expect. In other words, it's easier to find what you're looking for. The cochlea is remarkable too: not the passive transducer the long-held traditional model presumed (a spectrographic filter-bank), but an adaptive amplifier. Our spiking neurons create a time-frequency map, better understood as a wavelet representation, and efferent neurons tune the cochlea towards what we anticipate!
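To make the "easier to find what you're looking for" point concrete, here is a toy matched-filter sketch in Python/numpy. It is not a model of the cochlea, and every number in it is invented for the demo; it just shows that a signal far below the noise floor is trivial to locate once you hold a template of what you expect:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                                   # sample rate, Hz
t = np.arange(0, 0.25, 1 / fs)              # 250 ms template
template = np.sin(2 * np.pi * 440 * t)      # the sound we are "listening for"

# Bury one faint copy of the template somewhere in ten seconds of much louder noise.
recording = rng.normal(0, 3.0, fs * 10)
true_pos = 31000
recording[true_pos:true_pos + template.size] += template

# "Listening in readiness": slide the expected template along the recording
# and score how well each position matches it (a matched filter).
score = np.correlate(recording, template, mode="valid")
found = int(np.argmax(np.abs(score)))

print(found, true_pos)   # the correlation peak sits on (or next to) the hidden signal
```

Without the template the recording is just hiss; with it, detection is almost embarrassingly easy. That asymmetry is the whole point.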
Figure 2: "I've been expecting you…" – Stromberg (Curd Jurgens)
We fill in the spaces of expectation. This means that perception is not simply sensory input: we co-construct reality. That 'reality' lies on a spectrum from objective, justifiable, measured truths to total fantasy and hallucination. Using sound as an example, consider four levels:
- objective: we hear what is actually there. It is verifiable, and the signal is unambiguous.
- directed: we hear what we're told to hear. The signal has some amount of ambiguity, accidental or deliberate, and perception is accompanied by some direction, priming or steering/correction.
- intentive: we hear what we want to hear. The signal has a lot of ambiguity. We are not primed but have a prior expectation or wish. This might be termed self-confirmation bias.
- residual: there is no signal as such, just noise. We hear whatever might be meaningful amidst complete ambiguity and no clear context. This is close to EVP: we make sense of noisy signals.
Rorschach Computing
Seek and ye shall find - Matthew 7:7.
Patternicity is all around in how we construct reality.
In my field of expertise, digital signal processing, we synthesise sounds from noise. Perfect noise contains all possible signals - on a long enough timeline. Subtractive synthesis, where we construct a filter to carve desired signals out of noise, is the opposite of, but equivalent to, additive synthesis, where we build what we want from small components.
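A quick numpy sketch of the two routes to (roughly) the same tone; the frequencies and the crude FFT band-pass below are just placeholders for illustration, where a real synth would use resonant filters:

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Additive synthesis: build a 220 Hz tone up from its first few harmonics.
additive = sum((1 / k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 6))

# Subtractive synthesis: start from noise, which contains everything,
# and filter away everything except a narrow band around 220 Hz.
noise = rng.normal(0, 1, t.size)
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[(freqs < 200) | (freqs > 240)] = 0     # crude brick-wall band-pass
subtractive = np.fft.irfft(spectrum, n=t.size)
```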
This filtering of signals out of noise actually has profound philosophical implications when applied to law, evidence, intelligence gathering and computing.
Give a bunch of paranoid radio operators the task of monitoring chatter and eventually you'll hear what you want: conspiracies, terror plots, troop movements, whatever! Every slightest signal becomes a threat.
If you're looking for terrorists in Iraq and Afghanistan, just round-up a bunch of innocent people, tell them what you want to hear and then torture them until they tell you back what you want to hear. If, in epistemological (information) craft, there was a furthest point from rational scientific method, this is it.
Mirror, Mirror, on the Wall
This is where we're heading with the project of language models as computing utilities.
You provide an a priori set of biases, assumptions, cultural norms, prejudices, hidden desires and hopes to prime the machine. It mixes them with another set of biases, "guardrails" and political agendas, invisibly baked in by its owner/creator, and it spurts out what it "thinks you need to hear".
That's not computing. That's not even close to computing. Impressive as language model inference may be, it's some strange instrument that functions on the border between propaganda, influence, advertising, perception management, and reinforcement of orthodoxy and dogma, dressed up as an objective oracle. Deployed widely it's a perfect psyops weapon. But only if people can be tricked into taking it seriously. Like the lie behind the lie detector, it's a great bit of pseudoscience that works so long as people believe in it.
Those who understand the mathematics of language models (Markov chains, attention-driven transformers etc.) still struggle to explain "How can models 'answer' a question when they don't have any actual knowledge encoded?". The standard retort is that the knowledge is "implicit" in the training examples and so is "found" by statistical association. My way of expressing it, at the moment, is this: LLMs don't answer questions.
They fulfil the expectations implied in your question.
I think this formulation, in terms of expectation, is more useful and powerful. LLMs perform a dance to try to entertain you and satisfy what it is they "think" you want to hear.
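Here's a deliberately silly illustration: a bigram Markov model in pure Python (the tiny corpus and prompt are invented for the demo). It holds no knowledge and has no notion of a right answer; given a prompt it simply continues with whatever usually followed those words in its training text, which is the expectation-fulfilling behaviour described above, in miniature:

```python
import random
from collections import defaultdict

corpus = (
    "the shark emerges from the water "
    "the goddess emerges from the water "
    "the answer emerges from the question"
).split()

# Count which word follows which in the training text.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def continue_prompt(prompt, n_words=6, seed=7):
    """Extend the prompt with statistically expected words, nothing more."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = following.get(words[-1])
        if not options:          # no expectation left -> stop
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_prompt("the"))    # a plausible continuation, not an answer
```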
Figure 3: "Great Expectations" – Dickens
That's why, as Bruce Schneier agrees with us, the fact that OpenAI has started adding explicit advertising to ChatGPT is moot. Advertising is already implied in the very structure of "AI". With a slight adjustment to the interpretation of your question, ChatGPT is already waiting to offer all kinds of helpful product recommendations, or more subtly steer you away from ideas that are not in the interests of its masters.
What they avoid is what you probably wanted: the invisible, ineffable structure. A good description of latent knowledge is given by Leonard Cohen. It is what "everybody knows" but we don't want or need to say out loud. It's what everyone's thinking. Most critically, "AI" cannot locate any of this ineffable knowledge. It cannot process what philosopher Rick Roderick called the body of folk or "feminine" knowledge. That's a problem, because - going back to sound design and art - most of the really interesting knowledge is ineffable. It's not encoded via language, because language is weak.