What "AI" means now

[Image: egg.webp]

Figure 1: "Philosophy is a battle against the bewitchment of our intelligence by means of language." – Ludwig Wittgenstein

Give me back my "AI"

The "AI" bubble will soon burst. That is a good thing. It's a necessary correction of a mass hysteria that's gripped the world for many years now.

It was supposed to be useful. When I was an "AI" researcher in the 1990s one particular technique we used was Fuzzy Logic. We made simulated "neural" circuits like Perceptrons and Boltzmann Machines to combine lots of sensors. We made decisions based on the weighted contributions of many inputs and then we corrected or trained (back propagated errors) those weights until the system made the right choices more often.
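The kind of weighted-decision circuit described above can be sketched in a few lines. This is a minimal, illustrative perceptron: it combines several inputs through weighted contributions, and a simple error-driven learning rule corrects the weights whenever the system chooses wrongly. The training data and learning rate are invented for the example, not taken from any real system.

```python
def predict(weights, bias, inputs):
    """Weighted sum of inputs, thresholded into a binary decision."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def train(samples, n_inputs, rate=0.1, epochs=50):
    """Perceptron learning rule: nudge the weights after each mistake
    until the system makes the right choices more often."""
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            if error:
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
                bias += rate * error
    return weights, bias

# Learn a simple AND-like decision from two hypothetical "sensors".
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples, n_inputs=2)
```

A single perceptron like this can only learn linearly separable decisions; stacking them into layers and propagating errors backwards through the stack is the "back propagation" mentioned above.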

We didn't call it "AI" then. To do so would have been pompous and pretentious. It was just "DSP" (Digital Signal Processing), or plain old "data processing".

Nobody paid much attention to "AI" then. Not because it didn't work; it did, to some degree. It had already been through two "winters" of great promise followed quickly by more sober realism. In my shoutiest-at-clouds old-man voice I'll grumble: "The AI we hear of in the media now is not the AI I coded in the good old days". The humble, self-critical me wants to say that's because things suddenly advanced and I couldn't keep up. But that's only a little bit true. I've tracked the research quite well and have a fairly deep understanding of recent work.

The fact is that the term has been hijacked. "AI" became a political phenomenon far beyond its scientific foundation, just as nuclear physics did in the 1940s and space travel did in the 1960s.

"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean, neither more nor less".

"The question is," said Alice, "whether you can make words mean so many different things."

"The question is," said Humpty Dumpty, "which is to be master. That's all."

– Lewis Carroll, Through the Looking Glass

To understand how this feels, imagine you are a doctor, a hard-boiled realist who works in genetics. You know that your subject is very young, precarious, complex and yields mixed clinical results. You are humble but determined. You expect to see steady, modest progress in your lifetime that will benefit the next generations.

One day along comes a group of fringe researchers who announce a:

!!! MASSIVE BREAKTHROUGH in homeopathy !!!

They create a cult-like front of Wellness-Woo (WW).

Overnight, the language of medicine changes. Everything becomes WW. The mass media get moist at the very mention of WW. Unless you include WW in your research papers and grant funding applications you get nothing. Outsiders and chancers jump on the WW bandwagon. They get vast investments and whole new research buildings while your "old ways" are relegated.

The traditionally highly-valued virtues of caution and scepticism that make a good scientist become your weakness. You go along with it because it's bringing in huge cash-flow to hospitals and genetic research too, but at the same time people are dying from quack cures. You go along with it because to resist would mean losing your career.

Fake people, fake science

A sentiment we've heard a dozen times at The Cybershow this year is that "The problem is not with technology, it's with the people doing it." By this, commentators mean the Big Tech companies and their celebrity "tech bro" leaders, many of whom are obviously megalomaniacs or suffer from some sort of delusional disorder or malignant narcissism.

Personality is reflected in creation. We tend to make and do things that mirror ourselves. Diligent "detail people" are amazing at building secure and reliable systems. Flamboyant, vain hucksters create exactly the quality of tech you'd expect from them. Their technology is a lie. They are fakes.

If we want good technology we must put good people in charge of it, not simply those ambitious and greedy and audacious enough to pretend that they are.

I now mainly think of "AI" as fake people. It seems to nail what the word means right now. It's not just language models being used to replace human dialogue, but the wider lack of concern for correctness that surrounds "AI" woo.

Words change

The contraction "AI" is a partial anthimeria, not really a new word (neologism) but something that's evolved and changed its meaning very rapidly of late. Of course, words do this over time. But rarely have we seen such extraordinary hysteria as accompanies "AI" and all the things it has come to imply as a noun.

Maybe it's time we scientists and engineers let go of "AI", just as creative programmers had to cede ownership of the word "hacker" when the mass media appropriated it to mean "hostile computer intruder". What fool sincerely says "gay" to mean "happy and joyful"?

I'm still considering banning it on the show. Or rather, letting guests know that whenever someone says "AI" it will be replaced by a honky-horn clown sound, or a random word like "banana" or "hatstand" to emphasise absurdity.

No single change in programming language, processing scale, algorithm, storage breakthrough or whatever accounts for such a sudden change of meaning. As data scientists we still want to do perennial things; learning, recognition, prediction and generation. And every day we're finding great new beneficent applications of better representations and inference from advances in vector processing.

However, none of this explains how "AI", in its success, has obtained new cultural velocity. Of course, as words from specific technologies enter the vernacular they attain meanings well beyond those that scientists and engineers live by. It's a blessing and a curse. It brings money to the pockets of researchers and practitioners from the bank accounts of investors who do not understand technology but want a three-line "elevator pitch" to assure them. This feeds industry.

But it also changes the meaning and cultural attitudes toward what we do as scientists. In the worst scenario we're now facing "AI" as a project of political change, a new phase of class war and a debasement of science, reason, and knowledge.

Sticky Tech

Attachment and identification play a big part in this. Technology is sticky. It is needy. That is to say it attaches itself to human activities, and we to it, as a way of surviving.

To see technology this way is different from how we imagine ourselves as cold, calculating, rational creatures with impartial control of creating and deploying our inventions as tools. It is also very different from its opposite idea, "technological determinism", which is the silly belief that technology has a "mind of its own" and unfolds according to an immutable historical "fate". The first viewpoint is arrogant omnipotence, whereas the second is lazy, degenerate abdication of responsibility. Both are dangerous.

The proper, middle path is to see that technology, under industrial capitalist conditions, is rather like a "Dawkins meme". We must "think like a technology" to understand what it "wants". Sometimes what it "wants" is unwise and dangerous, and it needs to be corrected like a child. Remember the "back propagation" that corrects "AI" by changing its weights? We just need to apply this at a bigger scale now.

To think like a needy technology is to think like "a solution looking for a problem". To not be "adopted" is to die. Clippy just wants to help. A danger arises when we humans, as creators and users, start to identify too much with any technology. We become little Clippys, trying much too hard.

People like Sam Altman are not invested in "AI". They're invested in the idea of "AI". That's a subtle but massively important distinction.

As commerce brought a need to control technology through popular culture (advertising), it supplanted rationally democratic (institutional and scientific) structures and planning. For people who don't like traditional structures of civilisation, institutions, process… that's a good thing. It's disruption. Consumer technology, if it had a will, would be skittish and destructive. It would have a disorganised personality, attaching itself to this or that thing as whims prevail. Disruptive tech laughs at cautious "elites".

Since 2010 technology has leapt forward on impulsive hype cycles tied to some buzzword or other. It's driven by skittish investment flocking, feverish media, and financial bubbles. The rapidity and risk of this effervescence boils over where good governance and long-term strategy are absent. It allows private risk-taking to become a public liability. Indeed, our governments have been weak and remiss on this count for half a century since Thatcher and Reagan. In the ensuing power vacuum, competing gangs engage in a techno-free-for-all. Each wants its own "vision of technology" to prevail. All create public harms as externalities.

In this century we now recognise harmful technologies as distinct from weapons. We can bracket-out weapons because they are harmful by design. Our acceptance of global warming by hydrocarbon fuels brings with it a commensurate understanding of other common technology as a mixed blessing. There is a 'pollutant' element to progress.

Fortunately a popular and permanent shift is happening, not just an academic or a faddish sub-culture. Pragmatic philosophy of science is advancing for the masses who have now experienced new harms directly as financial loss, job insecurity, indignity, mental anguish, suicides and psychosis.

Counter-cultural tech-critiques of 1960s post-modernism are resurfacing in more mature forms. We all experience how mass produced technologies expanding rapidly into a civilian space carry an anti-utility, what Richard Stallman terms "a disservice". Their negative effects on humans start to outpace their benefit, individually and collectively, because they are designed with exploitative aims built-in. In a sense they've become a new breed of weapon, a Gun Fever spreading an "arms race" within and between nations.

It is inescapable that technology is power. From the moment electromagnetism was discovered it was of military interest, and that interest, along with government attention, remains more or less constant. But through clear epochs digital tech has been a baton passed between areas: banking in the 1960s, business throughout the 70s, music, games and entertainments in the 80s, civil communications in the 90s, and so on.

"AI" needs responsible stewards. As a hype phenomenon, it can be seen as the attempt of an over-producing tech industry to hold itself together by recreating "AI" as a commodity product, as a concrete noun. Regimes, in turn, embrace the technology for its social control function. For business it's something to sell to derelict, absentee governments, and ultimately a way to expand its own power. Business - always eager to escape the social contract enforced by government - can usurp governance by creating and selling the "tools to govern", thereby taking control of government itself.

As a cornerstone of technofascism, these uses are a million miles away from the practical applications we innocently chased in the 1990s: optically recognising text, interpreting human gestures as input, synthesising and recognising speech - all of which were once 'hard' problems. It's great that these are now solved, along with a thousand other benefits like faster medical diagnosis, cheaper phone calls, and whatever.

Careless words cost credibility

But the price - in terms of social unrest, psychological insecurity, corrosion of human rights and dignity - is not a feature of any of the hundred distinct technologies people call "AI". It's a project of wicked people who work to conflate and muddy these advances into a political agenda that solves no actual human problems but only creates new ones. They celebrate semantic collapse and weaponise it within a language game to influence affairs. But when people try to re-engineer meaning they risk losing face. They risk their credibility. If their redefinitions fail, as is happening with "AI", the collapse can be sudden and total.

Whatever was good in signal processing is still good today, even if people now call it "AI". The good parts are not destroyed, but they are discredited. That's what we mean by a "winter" for a research field. We can never recover old words or the spirit of their original forms. But we can be savvy about the ways dishonest, malevolent people try to shift meanings for their own political aims.

Neither Wittgenstein nor Quine foresaw modern politics in its present raw state of semiotics, dog-whistles and epistemic crisis. They would recoil at our primitivism. For the old linguistic philosophers, words were defined as a contract, held simultaneously throughout the nomos, a mutually understood and coherent noosphere. Words without a corresponding meaning outside the mind of the speaker are "just noises".

Today people just make noises. Or communicate with emojis. We somehow understand the vocalisations of people like Donald Trump. Incoherent ape-like noises still communicate, but they bypass the logic and critical reasoning of actual language. When someone in the media says "AI", we respond to that noise at an emotional level with some flush of feeling, a reaction entrained by the operant conditioning of repetition, repetition, repetition, by those who own the apparatus of dissemination.

But now we know in our hearts that the meaning isn't what we thought it meant. We hear the artifice, but don't see the intelligence. We're starting to suspect that someone is playing games with words.


Copyright © Cyber Show (C|S), 2026. All Rights Reserved. Site design 2023-2025: Ed. Nevard


Want to be featured? Have an idea or general query? Get in touch by email.

Date: 2026-04-10 Fri 00:00

Author: Dr. Andy Farnell

Created: 2026-04-09 Thu 18:25
