What "AI" means (part 2)
Figure 1: "Hell is truth seen too late." – Hobbes
Seriously, explain to me what "AI" is…
Kate asked me for a quick layman's summary of what "AI" really is, why the world is turning against it, and why it's a problem that professionals should avoid.
That's actually a helluva challenge! "AI" is a meaningless term to us computer scientists. We get a kind of academic paralysis when asked this straight and simple question, because in a way, the more you know about this subject the harder it is to speak clearly yet simply amidst all the hype and noise.
I thought it might help to list a taxonomy of algorithmic ideas according to task, then relate those to negative effects by way of example. Here goes. We'll consider a small set of these broad ideas:
- object recognition
- find all the badgers in this photo 1
- classification
- what kind of thing is it?
- pattern recognition
- is this like something we've seen before?
- identification
- find exactly this thing
- extrapolation
- what comes next?
- prediction
- interpolation
- what would be in the missing jigsaw piece?
- filling-in
- analytical regression
- why did that happen?
- causal analysis
- selection
- find the best holiday
- constraints
- planning
- in order to achieve this what steps are needed?
- optimisation
- find the best arrangement or way of doing this
- agency
- make this complex thing happen
Machine learning
In order to achieve any of these tasks we could get a computer programmer and a subject expert to sit down together and create an application. The programmer takes the expert's knowledge and experience and encodes it as conditions, if-this-then-that, until the program gets as good as it can be. Two problems with this are: it takes a lot of time and money from valuable people - skilled programmers and experts - and those experts and programmers don't always communicate perfectly. It's hard to explain professional knowledge. Sometimes the knowledge is ineffable. It's hard to turn that knowledge into code.
The alternative is to get rid of the programmer who is seen as a bottleneck in the process, and have experts directly instruct a machine that can modify its own code. The machine learns. For this to happen we don't even need the expert to be present, just observable.
Broadly there are two kinds of machine learning. Supervised learning requires input and guidance to correct the machine until it behaves well. This might be used to train a robot to make cars. Unsupervised learning allows the machine to figure things out for itself and learn in the field. This might apply to a recommendation assistant that figures out your preferences for clothes, restaurants or dates.
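A toy contrast between the two styles, with invented data (a minimal sketch, not any real system):

```python
# Supervised: classify a new value using examples an expert has already labelled.
def nearest_neighbour(labelled, x):
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: group values by proximity alone, with no labels given at all.
def cluster(points, gap=5):
    pts = sorted(points)
    groups, current = [], [pts[0]]
    for p in pts[1:]:
        if p - current[-1] <= gap:
            current.append(p)
        else:
            groups.append(current)
            current = [p]
    groups.append(current)
    return groups

labelled = [(1.0, "small"), (2.0, "small"), (10.0, "large")]
print(nearest_neighbour(labelled, 9.0))   # guided by labelled examples
print(cluster([20, 1, 2, 3, 21, 22]))     # structure found without any guidance
```

The first function needs an expert's labels to say anything; the second discovers groupings "in the field" with no expert present at all.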
Object recognition
Why did people spend collectively billions of hours doing all those CAPTCHA puzzles to find road signs?
It comes down to the problem pondered by philosophers from Plato and Aristotle to Berkeley and Hume:
What is a chair?
To cognize something is to understand it as being some "named thing" due to its qualities. To re-cognize something new and previously unseen is to consider its qualities and suppose it must be that same nameable thing.
But a chair is more than a thing with four legs, a seat and back. Bar stools, hanging cradles and beanbags are also "chairs". Is the function of supporting a human body something that is cognisable from its form? If so why is a bed not a chair?
When a computer algorithm sets out to "find all the road-signs in this photo" it is starting with an a priori set of qualities. Objects are identified if they meet those qualities.
A problem with a priori qualities is that we can always find signals in the noise if we turn the selector gain up high enough. We get hallucinations or false positives. Drop enough acid and you'll think your own hands are road-signs.
Classification
Classification (and categorisation) is like recognition backwards. We explore the qualities in the thing until a likely object emerges that fits one or more predefined forms. It usually presents as a problem like "Put all these things into one of these boxes". In many applications classification and recognition seem superficially identical. Indeed many algorithm "experts" get confused too. An obvious problem occurs when an object fits into more than one box. Or all. Or none. Another twist is estimation of some quality, like age, say from the sound of a voice, or height, say from a visual feed relative to a background. Here the output is quantitative (numerical).
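A toy rule-based sketch of the "boxes" problem, showing how an item can fit several boxes, or none (the boxes, rules and qualities here are all invented for illustration):

```python
# Each "box" is defined by a rule over an item's qualities.
RULES = {
    "chair": lambda q: "sit" in q,
    "stool": lambda q: "sit" in q and "no-back" in q,
    "bed":   lambda q: "lie" in q,
}

def classify(qualities):
    """Return every box the item fits - possibly several, possibly none."""
    q = set(qualities)
    return [name for name, rule in RULES.items() if rule(q)]

print(classify(["sit", "no-back"]))  # fits more than one box
print(classify(["hang"]))            # fits no box at all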
Pattern recognition
A pattern is a form in time or space with distinctive features but is not an object. A serial killer might have a pattern of victims. A financial fraud might have a series of commonly observed steps. A skin tumour or leaf blight might have a recognisable colour, texture, or distribution.
Identification
To find all the known qualities and find exactly some unique thing is what we mostly think of when people say "face recognition". That's a misnomer. Indeed "face recognition" is found in cheap digital cameras to decide what things in a photo are faces (to apply red-eye correction etc). Identification might also apply to vehicle ANPR (automatic number-plate recognition) cameras to track parking or road use by licence plates. It is at the root of many privacy violations and misunderstandings. The question of uniqueness is complex. "AI" that attempts to map identity has to be very good and get it right every time, but in fact it has a high error rate and gets a lot wrong.
Extrapolation
Given the number sequence 1, 2, 3, 4, 5… what comes next? This is also about prediction. It's a big deal for financial traders. Stock market or weather forecasting tries to create a "model", by fitting mathematical functions to various "parameters" we suppose control the overall behaviour. In sports gambling, if Arsenal have won 7 straight games this season, the next game looks a done deal!
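A minimal sketch of model-fitting for prediction: fit a straight line to the sequence by least squares, then extrapolate one step ahead (real forecasting models fit far richer functions with many more parameters):

```python
def fit_line(ys):
    """Least-squares fit of y = a*x + b to points (0, ys[0]), (1, ys[1]), ..."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

seq = [1, 2, 3, 4, 5]
a, b = fit_line(seq)
print(a * len(seq) + b)  # extrapolated next value: 6.0
```

The "model" here is just two parameters, slope and intercept; the prediction is only as good as the assumption that a straight line governs the behaviour.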
Interpolation
Given the sequence 1, 2, 3, ?, 5, 6, 7, what would be the missing number? Filling-in the gaps in data is interpolation. "AI" can do amazing things like draw the contents of a missing jigsaw piece. Not just by continuing the colours and curves, but by training on millions of images an "AI" could recognise part of a face is missing and restore the eye and hair exactly as expected based on so many photos of faces. And it can do it with vases, and landscapes. It "understands" a model of vases, faces, trees and clouds. Error correction on your phone can fill in the dropouts and clicks when you speak in a noisy place because an "AI" audio codec knows what speech should ideally sound like.
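At its simplest, filling the gap is linear interpolation between known neighbours (a toy numerical sketch; restoring a missing eye in a photo needs a learned model far beyond this):

```python
def fill_gaps(seq):
    """Replace None entries by linear interpolation between known neighbours."""
    out = list(seq)
    for i, v in enumerate(out):
        if v is None:
            lo = max(j for j in range(i) if out[j] is not None)
            hi = min(j for j in range(i + 1, len(out)) if out[j] is not None)
            out[i] = out[lo] + (out[hi] - out[lo]) * (i - lo) / (hi - lo)
    return out

print(fill_gaps([1, 2, 3, None, 5, 6, 7]))  # the missing piece comes back as 4.0
```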
Analytical regression
Why did that happen? If we have a good model and some known outcome we can trace backwards and do fault-finding. Given that this aircraft wing fell off mid-flight, what was the likely cause of the mechanical failure? Root cause analysis is great for engineers. But it has dubious utility outside very rigorous and well-mapped models. If it turns out the actual root cause of the plane crash was a cut in the maintenance budget, that final "why" step can easily be massaged.
Selection
To find the best holiday, or car, we "solve constraints" or "cluster" data according to weighted preferences. Like much of the data and signal processing I've touched on here, this is really old craft. It can be done with paper and pen. It goes back to the 1970s or earlier. There's not much modern or "AI" about this tech except its speed and scale. That allows a billion people to do optimal shopping on billions of products every second.
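The weighted-preference arithmetic really is pen-and-paper stuff; a sketch with invented options and weights:

```python
# Positive weight = desirable quality, negative weight = undesirable.
WEIGHTS = {"sunshine": 0.5, "cost": -0.3, "distance": -0.2}

HOLIDAYS = {
    "Cornwall": {"sunshine": 6, "cost": 4, "distance": 3},
    "Crete":    {"sunshine": 9, "cost": 7, "distance": 8},
}

def score(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

best = max(HOLIDAYS, key=lambda name: score(HOLIDAYS[name]))
print(best)  # the option with the highest weighted score
```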
Planning and optimisation
Some optimisation problems are really hard. Box packing, timetabling and suchlike are notoriously befuddling. For large parameter sets (like resource allocation for a building with 2,000 rooms and 10,000 employees) it's pretty much intractable. "AI" offers some advances over classical algorithms. But not always. Some of these problems are adaptive by nature. Running a railway, allocating engines, carriages, lines, and station platforms is also contingent on weather, accidents, delays… To find the best arrangement or way of doing this is an ongoing dynamic problem best done by humans with a lot of experience.
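Why these problems befuddle: the number of candidate arrangements grows factorially, so brute-force comparison dies almost immediately. A quick illustration:

```python
import math

# Even a toy timetable of n tasks over n slots has n! possible arrangements.
for n in (5, 10, 20):
    print(n, math.factorial(n))
# 20 tasks already give roughly 2.4 quintillion arrangements to compare,
# which is why exhaustive search is hopeless at building scale.
```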
Agency
This isn't so much "AI" as control system engineering that combines multiple facets of "AI". Given a high-level description of some desired outcomes, goals for planning and so on, we enable a system to make complex effects, trades, messages… Some see it as the ultimate form of "AI", an autonomous "intelligent" assistant or slave who can act on our behalf. Modern "AI" agents can search websites, find things out, select goals and targets, pay for things… Agency is the wet dream of lazy megalomaniacs (or "high-powered businessmen"). They hope for their digital agents to run the business, make money, and take over the world! All while its "visionary" owner (whose only real skill is having the money to own the machine) sleeps on a beach.
Taxonomy of problems
Given this brief overview of concepts and capabilities, let's consider how this all goes wrong (as if an intelligent person needed it spelling out).
"AI" represents a change in the values of computer users from "reliable enough to bet your life on" to "sometimes good enough". It's a change from deterministic to probabilistic computing: If each time we do something we get a random but useful result, that's not science, it's luck. "AI" is like a one-armed bandit gambling machine. People who make decisions but don't know a lot about computers are mixing up these two styles of computing and putting much at risk.
"AI" is therefore a social phenomenon. It's a de-educational cultural revolution to reign-in and dampen liberal democracy because it subtracts agency and understanding. It subtracts it from people to locate it in (more efficient) machines.
"AI" is a form of theft. Recall that with machine learning we need neither the programmer nor the subject matter expert to be present. We just provide examples from many experts to the learning machine. We don't even need to ask or pay the expert, just spy on their behaviour or steal their published work. A serious moral problem with "AI" is that it's trained on data without the creators' consent. To add insult to injury these companies would like the protection of "intellectual property" law to apply for them and protect their "AI" software from being shared and copied. They are colossal hypocrites.
"AI" erases the interpersonal. As self-learning and agentic, any "AI" displaces all human relations, between us and experts like our accountant, or doctor, therapist, teacher. This fits and extends the social atomisation, reification and anomie experienced under conditions of capitalism
"AI" is intractable. Remember we said that we'd get the machine to "write its own code"? If we left a bunch of pre-verbal 4 year-olds on an island, Lord Of The Flies style, any survivors we encountered 20 years later would have developed their own language. The underlying "code" "AI" writes internally is it's own private alien world, meaningless to humans. We can't read that and understand how or why it decided to do or say something. Agents that are unquestionable make fine attack dogs for power.
"AI" dislocates responsibility. Because of intractability one social effect is to diffuse or hide liability. A misdiagnosis, miscarriage of justice, injury by a robot, bad financial advice… if none of these can be ascribed to a person then that's great for exploitative businesses that shift risk. A smoking gun is the urgency with which "AI" companies are trying to pass laws that would absolve them of the losses they cause through digital technology.
"AI" makes stuff up. Remember the "models" used for interpolation, extrapolation and regression? Generative "AI" can take those models and start with a random seed, then fill in the gaps or flesh-out the expectations. The problem is when a photo of a young stranger is literally "fleshed out" under the instructions to assume that all clothing is naked skin. Making stuff up (aka lying and deception) is the essence of creativity. It's great to emulate or assist artists, writers and so on. But making stuff up can mean anything; "Create me a plausible alibi for not being at the murder scene given all this evidence. Create some photographs or other data points to support the story." An audio codec that can interpolate a click for packet loss is a great use of "AI". Go too far and you get an "AI" codec that presumes to fill in missing words based on your speech model… a very bad idea.
"AI" models are always smaller than reality. The map is not the territory. Arsenal may have won 7 games this season, but any football fan can explain why they'll lose… they have only played easy games so far and now face the mighty Man-U! If the model doesn't capture that "parameter" it fails. The development strategy is to improve the model when new parameters come to light. Lessons learned. The problem with "AI" where lives are at stake is that nobody wants to be the data point that helps the model "learn and improve". For you, there won't be a "next time".
"AI" violates privacy. For example the confusion of identification and recognition we discussed earlier is at the root of many privacy problems. An occupancy sensor need only operate on broad infra-red changes to determine whether there is someone in a room for fire safety. Invariably the building-tech provision misunderstands this and installs a camera that can identify specifically who is in the room.
"AI" is a term of obfuscation that conflates many areas of computer science, signal processing, statistics and human-computer interaction into a more or less useless marketing term. Not scientific. Real computer scientists hate it. It overshadows good uses of machine learning and analysis like "pattern recognition" or "expert systems" that have amazing applications in biological medicine, agriculture etc.
"AI" is inelegant. It's a brute force solution that consumes unlimited energy and physical resources. It is the very opposite of efficiency. While paying human programmers and experts may seem inefficient on face value, the total cost of "AI" is simply hidden - in energy use, in lost opportunity, in errors, in social harms.
"AI" is a political project of dehumanisation based around creating "fake people". Computers are tools and should not be anthropomorphised. Capital is attempting to rewrite labour relations by devaluing worker skill. It attempts to modify politics by creating false persons holding opinions ("bots") who flood social media forums.
"AI" is a means of censorship and information warfare. Search engines and documents give reputable, repeatable, legible results, amenable to fact-checking, analysis of provenance and hermenutics. "AI" is an inscrutable source of pseudo-knowledge easily biased and filtered in invisible ways by actors who control its training and inference.
"AI" is a surveillance technique. The digital world is already appropriated by commerce and government to spy on citizens, but "AI" expands snooping to more subtle, continuous and intimate extraction of personal data. The purported "business need" for data to train "AI" is a perfect cloak for technofascists and a cover for totalitarian regimes.
"AI" is an investment bubble and economic stimulus gamble comparable to Mao's "Great Leap Forward". It's basis in wishful thinking and fear of "being left behind" is ultimately (fortunately) unsustainable and will eventually collapse with a massive impact on the world economy.
"AI" is a solution looking for a problem. It's driven by growth needs of the semiconductor industry whose technology has already largely met all human needs. The remaining problems are political.
"AI" is a psyops (a psychological influence) project that leverages human loneliness; our desire for control and agency, attachment, and being heard and seen. Social media grew to exploit our desire for connection and validation, and "AI" amplifies and extends this to further extract profit from human isolation and uncertainty.
"AI" is an encouragement to laziness and cheating. It undermines many professional areas, like medicine, cybersecurity, policing, teaching that require focus and interpersonal and social presence. Ultimately "AI" undermines academic and political integrity, threatening research and even the scientific/technological basis that created computers, digital society and "AI" itself. It is a self-defeating, self-devouring phenomenon that heralds the end of industrial society.
"AI" compounds errors. We often hear things like "it's 97 percent accurate". But a 3 percent error rate is catastrophic in operations composed of repeated steps - which most real problems are. It's not unusual for an algorithm to have hundreds of steps. With a 0.97 success rate, an "AI" using only 22 steps would yield a 0.496 overall accuracy. That's worse than flipping a coin. There is no specific step to which an error can be traced, which amplifies intractability and diffusion of responsibility.
"AI" has no structural faculties. Neuroscientists back to Jerry Fodor and before have stressed the importance of specialised functions that are isolated and modular. Good software must be like this too, in order to stop one part interfering with another. As a pathology, synesthesia (seeing sounds and hearing smells) occurs if there is cross-talk between faculties. An "AI" can literally be tricked by a person holding up a sign that says "I am not a person". The textural information overloads the visual stimulus and is taken as a kind of guidance or "instruction".
I now believe the negative effects and risks of "AI", in all areas, outweigh the purported benefits. Benefits frequently proffered, like organising health data and controlling self-driving cars, are specious. They've stood as unquestioned totems that fall with only a little critical questioning. Progress for all civilisation would be reducing the number of cars on roads. My conversations with medical doctors who work with hospital informatics leave me in no doubt the claimed benefits mostly serve private profit, not the interests of patients. Digital technology does have great benefits for medicine, but not as proposed or currently configured.
How do we confront these truths?
In conclusion, I consider "AI" only a marketing term. It says so little and hides so much. It is for "entertainment purposes only". In recent years I've heard little or nothing useful from the breathless mass media. Nothing that feels informative, measured, reflective or educational (with perhaps the exception of Dr. Hannah Fry who put a good foot forward). Most of the coverage is dishonest and deliberately misleading, with no note of modesty or rational scepticism.
There's an urgent need for a "Grand Forum" or perhaps a national referendum on technology. The poverty of discussion about "AI" should serve as a positive invitation to deeper, grown-up conversation about the needs and costs of technological "solution" in society.
Unfortunately, fear amongst people - intimidation that comes right from the top of government - seems to silence critics and makes people fear for their jobs if they don't agree with the "approved narrative". A genuine conversation would require a detailed examination of actual (not imagined) problems, diverse ideas, the proposed algorithms and what would actually constitute "success" (we don't know how to measure this stuff except with stupid "efficiency and performance metrics"). Such a conversation should be held relative to existing "non-AI" technologies and discuss where genuine benefits may be worth the enormous risks, and where we have perfectly good ways of doing things, thank you very much!
Now we are past the pant-wetting excitement stage of a six-year-old with a balloon in one hand and an ice-cream in the other, the reality of "AI" is setting in. This offers a great opportunity for people who really do know what they're talking about to step up and throw some light on matters. Real experts from professions like policing, manufacturing, social work and education need to fearlessly join the conversation to say what they think digital tech could helpfully add to their profession, and what is not welcome. They must become fearless in calling out solutionism, techno-bullying, ideological cults and the disrespect shown to them as professionals. People who've dedicated their lives to their work and are not idiots should not have to feel bullied into silence at the risk of being labelled "a Luddite" or some other childish insult.
The important questions to ask
Once again, here's Neil Postman's famous Seven Questions to ask about any technology. Learn them. Internalise them. Use them every time you are down-talked to about "AI".
- Q1: What is the problem for which this new technology is a solution?
- Q2: Whose problem is it? Who will benefit from the technology, and who will pay for it?
- Q3: Suppose we solve this problem and solve it decisively. What new problems might be created because we have solved the problem?
- Q4: Which people and what institutions might be most seriously harmed by these technological solutions?
- Q5: What changes in language are being enforced by new technologies? What is being gained and what is being lost by such changes?
- Q6: What sort of people and institutions acquire special economic and political power because of the technological change?
- Q7: What alternate uses might be made of a technology?
Footnotes:
Figure 2: Answer: 12