Every time you say "AI" a kitten dies.
Figure 1: "For the love of kittens…"
"AI" is "not okay". It's official.
But those who are deeply invested in the tech are reacting to the backlash by claiming there is a "moral panic" around "AI".
Sorry people, but "moral panic" is an unhelpful term. It sits too close to "conspiracy theory", dismissively implying there's nothing real to be morally concerned about.
The Vatican and the Union of Concerned Scientists, at least, disagree. As do about 75 percent of young adults. So that's basically science, religion and the people, all versus ten or twenty rich dudes.
Indeed, sometimes conspiracies are real and having a 'theory' about them is proper scientific scepticism. Satisfactory evidence for me (just my personal threshold) is when the conspirators publicly stand up and tell you what they're doing, their Bond Villain plans to usurp democracy and run your lives. When they start penning unhinged manifestos about their ill intents, I'll call that a 99 point NINEer.
At the very least we might call it a "well evidenced conspiracy factoid deserving of strong moral outrage", but clearly this is not a moral panic. It's a rapidly growing resistance movement against a new sort of fascism.
As a computer scientist I am saddened. There are enormous benefits to be had from machine learning, advanced signal processing, and agentic approaches that we might more usefully regard as "Next Generation Applications". It's distressing to think work I've done in my own life, in research and education, comes to such a dismal end in the hands of high-IQ hooligans. The problem with talking about "next generation applications" is that "next generation" is always vague and works at any time.
"It doesn't matter what temperature the room is, it's always room temperature" –Steven Wright
"AI" has become a football song for a bunch of rowdy away fans. Our disgrace as scientists, as always, is letting a few technofascists snatch the steering wheel from the hands of democracy. When people are unable to distinguish tools with huge social benefit from social control toys of insecure men, it's a double loss.
Seeing slop and its effects in action
We ran an episode with some research scientists who founded a company to improve cybersecurity. The initial interview didn't go well. We wanted to make our guests look good, as we always do. But hearing experts articulate complex things can be really painful - not because of a lack of understanding in either direction, but because of the different worldview when inquiring as a journalist rather than as a fellow scientist. The bigger aim in that episode became protecting the researchers - who we really liked as people and wanted to help out - from their own gratuitous and meaningless use of the slop term "AI".
The learning experience for me doing this as an independent podcast researcher for a show, instead of as a professor, was to see things from the other side of the cloisters. The higher power to answer to is now concretely our audience, not some examining board, abstract idea of quality, or insignia to be saluted. I sensed that as academics they'd been browbeaten and brainwashed by incessant repetition, repetition, repetition of "AI" in their environment. In their trauma, they could hardly complete a single sentence without saying "AI". (I guess many academics can relate to this).
Their algorithm was actually well thought out for its application, which needed explaining to me. It wasn't obvious at first, but once understood it was clear why you'd want to do it that way. Indeed they did a very good job of exposition in the end. Their use of decision trees and weighted graphs, with custom language models whose output is constrained by document ingestion and a guiding matrix, became clear through our discussion. A better science journalist would quickly transform what I just said into attractive and relatable marketing without once mentioning (the dreaded acronym one may not speak of under pain of kitten disintegration).
Like automated bug finding and other areas of digital quality control, it is potentially a very good product, if properly understood.
I suggested that maybe it's wise for companies like theirs to start distancing themselves from the word "AI" and its hype if they're serious long-term players. They kinda agreed. At this point no mention of kittens or their possible fate was made.
The media got all hot for "AI" because it sounds new. To be cool you had to say it a lot. They need the kids to tell them how quickly things get old these days. Well, that thing you can't say… it just got old very fast, as is the case for seasonal cycles and winters, which wiser minds have studied closely before betting the farm on growing sunflowers all year.
Anyway, we must learn to talk more maturely about systems and products that have some machine learning (ML) or advanced signal processing (DSP) elements. "AI" is now a turnoff. It drives interest away. It's a meaningless term overloaded with politics and wishful thinking.
Commercially, I understand why we'd think people, given the choice between two similar products, one of which is dubbed "AI", might choose it. But two universes are emerging, because just as many people would look at the "AI" moniker and reject it out of hand for being "AI". That's not simply a matter of trust in accuracy of output. Or appetite for risk. Or demographic uptake, or any other silly marketing lens. It's a deep and unshakeable emotional response to the threat of an alien (as David Bowie described digital technology not long before his passing). The moral injury to scientists is that folks now associate amazing applications that very smart people have poured their life-work and soul into, with the spectre of fascism. There's a lot to defend here.
Late to the table, but in time to steal the meal, comes the hoorahing rabble of rich hooligans arriving in a taxi carved from the shoulders of giants… The legitimacy and potential social benefits of our efforts in comp sci. are circling the drain because bad people took over the "tech industry".
That may be a separate story from the accuracy and reliability of language models and so on, but practically, it's hard to separate. Using the forbidden Abbreviated Coded Rendition Of Name Yielding Meaning puts reason on suspend. Why? An effective resistance against shit-tech must focus on each issue with clarity; control, environmental impact, accuracy, cybersecurity, political agendas, property rights, labour relations, each needs its own sphere of effort…
These are complex matters, so a democratic challenge to toxic tech must come from the people/citizens, but also from within the tech industry itself, if for nothing else but self-preservation. One way to do that is to collectively "change the stuck record", which starts with addressing our vocabulary.
Reset your language model
Let alone using it, try not saying AI^H^HREDACTED for a whole day. It's like the first time you tried to not use a smartphone for a whole day, no? It's very hard work because, as the philosophers Wittgenstein and Ayer would say, you're struggling to find a word for something about which you have no concept. Saying it is just lazily making an agreeable noise. It's a noise that stands-in for an ineffable 'thing' that feels required to get along in company. But that's an illusion. Language disguises thought.
Most regular people might feel uncomfortable saying words like "matching", "prediction", "extrapolation", or "recognition" because that would cause them to commit the greatest sin in western culture, of appearing too clever. Okay… find some more hip words. But for technical professionals to do the same is… well, it simply makes you not a technical professional any more. When the function of words becomes making you feel socially comfortable, rather than a tool for exploring life, you're stuck and it's time to hand in your geek card.
We're often timid to the point of being docile and blind to what we do in tech, in order to lead a quiet, focused life free of any more stress than coders and hackers already suffer. It takes a lot of concentration to write code and think about giant invisible digital structures. We're still better at it than machines. Yet more than anyone we want to believe in the magic that underwrites our own existence, and what we thought was our power.
The same words used to build worlds become weapons to destroy them when cloaked in the magical, woo woo "power of AI" in a pseudo-science circus. To name something is to have power over it. To name it accurately and understand it through precise and carefully chosen words is a greater power. That term comprising the first and ninth letters of the alphabet weakens understanding and takes power away from you. I think that many tech workers are as bamboozled by marketing slop as everyone else.
Figure 2: "Oh My! Who dares challenge the Mighty Oz?"
Tech workers have been kept in line by the threat of losing jobs. Ironically "that thing that is the opposite of IA" (as a Marxist reality) removes any last obstacle to dominion of the workplace. Since the now overt agenda of the 'Nerd Reich' is to replace humans, workers have nothing left to lose. We've all effectively lost our jobs anyway, so resistance would be the only logical, rational path.
But resist what?
First is to remove the smoke and dust of vague hand-waving half-arsery that muddies the waters to make them appear deep. Then we can see what we're dealing with.
How do we stop saying "65 73"?
Ernest Vincent Wright wrote Gadsby, a book of 50,000 words without using the letter 'e' (the most common letter in English). Any exercise in constraint is a tough challenge. If Wright could do that, surely we children of tomorrow can utter a few sentences about digital technology without repeating "AI" like a squeaky kids' toy?
So, in some upcoming episodes of The Cybershow we've banned the use of the word "AI"! :) Under Gadsby rules we'll all have to talk for an hour without using the dreaded word. Can we do that? One option is to replace every occurrence, whenever someone says it, with a honking clown horn, or a random word like "hatstand" or "hippopotamus", to emphasise the absurdity. (Sadly, the proposed kitten quiz show did not meet with animal welfare requirements).
It's a tough call. It forces one to speak knowledgeably, with clarity and purpose, and to explain concepts, without falling back on commercial vagaries and sales-speak. The point is to expose the cursed contraction as a deflationary marketing term that subtracts value from any discussion of next generation applications, machine learning, machine agency as a legal and cybersecurity matter, advanced signal processing for capture, ingestion and transcription, and so on.
How this helps
In the case of the cybersecurity company we interviewed, I was satisfied they really do know what they're doing and have basically sound data science beneath it, but were making wishy-washy marketing statements. I said it would be a tragedy if good projects are blown up as collateral damage in the backlash against "0o101 0o111" and its likely financial collapse, because they couldn't mark themselves out using serious language and communicate their wares well, scientifically yet accessibly. In the episode they say it a lot, plus there were many instances I edited out. I mention this episode as an example showing there's lots of great development out there from well meaning and smart people, but who:
- do know what they're doing and offer potentially good outcomes but use "AI" as a marketing term to their own detriment. These are the people we want to help stop down-talking their own work with marketing slop.
- don't know what they're doing, and are just throwing black-boxes at problems but still getting possibly good and useful results. These are an interesting group to discuss because they may well yield great outcomes if only they were more educated about tech. It may also save them from falling into the next category…
- don't know what they're doing and are getting seemingly good initial results, but ultimately dangerous and reputation-ruining outcomes. In all likelihood this is the largest group. This is where the "moral panic" media are feasting. They're having a field-day because, as in cybersecurity, there are lots of fuck-ups out there making a bad show because they are greedy, impatient, inconsiderate of the consequences and ignorant of the tech.
- do know what they're doing and are getting dangerous and sinister outcomes, because they're sinister and dark people who feel "AI" gives them the excuse and opportunity to go full-evil. They are the smallest but most visible group. These guys just need stopping. End of.
This is an arbitrary quadrant, simple but revealing of stances relating thoroughness of knowledge to morally conscious motives. As a movement I sympathise with Stop-AI. However, as disparate "activism" it's already going stochastic, to the point of firebombing and shooting at company CEOs. That violence won't do and is not okay. It's exactly what the fascists want: proof of "savage Luddites and anti-progressives at the gate, who need more of our AI surveillance to contain them!" Law and Order! Law and Order!! To the bunkers!
Instead we ask you not to "Stop AI", but to "Stop Saying AI".
Every time you say "AI" a kitten dies.
Figure 3: "Techies! Before you speak, pleeeeease think of kittens…"
To a great extent the words we use create our reality. Casually saying {{{words we shouldn't say}}} colludes with a derelict media by increasing an "all good versus all evil" polarisation. It collapses discussion. Stop saying "0x41 0x49" and you cut off the oxygen of the malevolent minority.
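For the curious, the numeric stand-ins scattered through this piece ("65 73", "0o101 0o111", "0x41 0x49") are all the same two ASCII character codes written in different bases; a minimal Python check:

```python
# The essay's stand-ins decode to the same two ASCII letters,
# just written in decimal, octal and hexadecimal.
# (Warning: running this prints the word and so kills three kittens.)
encodings = {
    "decimal": (65, 73),
    "octal": (0o101, 0o111),
    "hex": (0x41, 0x49),
}
for base, pair in encodings.items():
    word = "".join(chr(code) for code in pair)
    print(f"{base}: {pair} -> {word}")  # each line ends in the forbidden word
```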
Pay no attention to that man behind the curtain!
Repeating kitten-killing utterances just encourages the bad guys. It signals a willingness to remain ignorant and to believe in their magic tricks. When people stop believing in his magic, the mighty Wizard of Oz is just a sad, lonely man in a costume. His only redemption is to admit that the whole plan was to offer us what we already have.
Figure 4: "Bad dog! Toto, put that curtain back!"