Freedom from ThE B0llOcKS
Figure 1: "Freedom isn't to do what you want at somebody else's expense" – John Lydon
How we talk about the future has a way of defining it. When it comes to avoiding weak language, what do William Godwin (husband of Mary Wollstonecraft and father of Mary Shelley), Richard Stallman and Johnny Rotten (John Lydon) all have in common? It is their relentless, almost aggressive assertion of clarity against artifice. In very different ways, like Dr Martin Luther King and Sir Winston Churchill, each defines their own brand of great oration …against non-identity, against indecision, against complacency…
To speak stark, pragmatic, realist truth in a culture that is wilfully deaf because it's lost in "the warm thrill" of its own involuted confusion is a big ask. It takes more than courage, charisma and doggedness. One must set aside one's entire life, wealth and reputation to go against the flow. A fine example is William Pitt the Younger who, in defence of the dignity of the country he loved, went against every single one of his wealthy peers to oppose slavery. When he spoke in parliament of the "abominable traffic" and the "curse of mankind that brought stigma to our national character", it was shocking. Pitt did not mince his words. Neither did William Godwin, who was in some ways a founder of political science. Godwin believed that deeply-held principles could stand alone as ideas, separate from the group interests of the day. That remains a far more mature politics than exists today, where nothing but expedient alliances holds party identity together. Godwin's writing excoriated the stuck, pretentious artifice of church and state authoritarianism. He implored us to think higher… and made changing the world a family tradition.
Where is such salt speech today? Search for it and you'll find only therapy. Can you hear British politicians damning Big Tech for an epidemic of youth anxiety, mental illness and unemployment? Like Clegg and Sunak, they're only climbing the ladder of British politics on the way up to their Big Tech jobs, where the real power is.
The ability to say what is what, clearly and distinctly, is a superpower that gets rarer each year. We must give more credit to our poets and wordsmiths if we are to survive what's coming. ChatGPT won't have the answers. Every time you say "AI", you make the whole world mumble and gaze at its shoes a little more.
Speak clearly
Years ago I heard an interview in which John Lydon recalled Malcolm McLaren suggesting elocution lessons and telling him to pronounce his lyrics clearly. Punk is as much about powerful words as the raucous din. Side-chain compressors weren't a big thing in seventies sound production, and before The Sex Pistols the trending punk vocal style was to sound ignorant and lazy, like you're a bit brain-damaged from sniffing glue.
"So you want me to sing like Matt fucking Monro or something?!", asked Lydon. Of course McLaren was right, and with "God Save The Queen" the rest is history. Its characteristic, over-enunciated delivery is a trick he and other great vocalists like Mark E Smith of The Fall have since used to make sure their vocals cut through extremely dense, disorientating shred-guitar backings.
Stallman has a punk way of cutting through the epic miasma of bullshit generated by corporate American tech. You won't hear him mumble words like "AI" or "Cloud". He doesn't use those words because they are meaningless marketing terms designed to subtract clarity from conversation. It's hard not to respect, even admire, those who refuse to adopt the language of the enemy.
On the other hand, Stallman has a reputation for insisting others use a certain lens. Nobody likes the language police. Most times I notice it merely makes people double down on bad language they feel is "theirs". So there is a balance to be struck. Effective communication requires us to accept some bad language from others in order to listen, in order to participate. Sometimes to fight the enemy you must move onto his ground, which means approaching a prepared position.
Stallman speaks in simple terms about what is 'fair' and 'just', in a timeless way that seems to leverage some of Brené Brown's wisdom on the power of the lesser position, without making it a Christian take. The power of Stallman's words is that any primary school kid can understand them.
When pitted against a multi-trillion dollar propaganda machine that owns the media and pays news outlets to breathlessly plug the word "AI" into every sentence, it's remarkable that Software Freedom, technical education or any of the more nuanced facets of computer science have any standing at all. It is testament to the sheer power of an idea, and to the innate sense that technological self-determination, knowledge and sovereignty are a birthright. It's why the European Commission is taking very seriously an economic vision for European technology based on those principles.
This week on the Cybershow, as Helen and I discussed toxic news cycles, I had to edit out my own bad language. People may notice we do a fair bit of post work on each episode, mainly to squeeze in dense and complex ideas that could take many hours of rambling on other podcasts. We're all still learning… effective communication requires not being a stickler. Sometimes I'll say "AI" where that is necessary, to stand in for something "most regular folk" would recognise. Other times we'll be very careful to choose the correct word for the job, or to take out what is unhelpful.
Of course, we're not talking about cutting cuss-words. The Cybershow has always had a frank and mature tone. The question is: does our language advance the conversation and the urgent agendas? Dog whistles and technical code words may speak to fellow tech professionals, or even to loyal listeners, but if they carry no broader, useful message then they serve no purpose. We have no need to impress. We know what we are talking about, are generally very prescient, and have proven it many times over.
The roots of bad language
Most bad language has its roots in affectation and artifice. It is the product of institutions, organisations and tribes whose latent insecurity and hostility to rival tribes is woven into their language. We speak it to show that we "belong". For example, guilds and secret societies create language to exclude outsiders, and only incidentally for precision.
Another huge source of pollution comes from the advertising, PR and marketing sector. It is meaningless spin, puff, bluster and deception that invisibly permeates every aspect of language.
For example your supermarket sells "Selected Wines".
What exactly is the function of the word "selected" here? What work is it doing? Naturally the supermarket must carry some subset of all the possible wines in the world, and unavoidably must have 'selected' those on some basis. Do we care? Not really. They were almost certainly selected on the basis of being the cheapest. The word "selected" is a decorator that vaguely implies some standard of quality.
"AI" and cybersecurity are domains blighted by nonsense and tribalism that stop people from knowing what is really going on. It's a land of vague aspersions and neologisms that inhibit clear communication. For example, our language stops psychologists from talking to people in signal processing, or intelligence, because it takes patience, effort and careful listening to realise you are talking about the same things.
A more relatable example is perhaps the way racist, sexist and homophobic phrases become institutionalised, along with assumptions of violence, entitlement and fear of diversity - in our area of interest, specifically, fear of different approaches to technology. It is so built into the language of companies like Google and Microsoft that we hardly notice their relentless talk of "arms races" and "domination".
Language is thus a battleground. The contenders are clarity (light) and confusion (darkness). Many attacks should be familiar, including the Orwellian devices of Newspeak, doublethink and thoughtcrime that constitute a war on clear thought. Deflation is where complex ideas are dumbed down into catch-all clichés, whereas inflation takes clear, simple ideas and muddies them with endless hair-splitting interpretations. Words don't just lose their 'currency'; their meaning is deliberately shifted towards the interests of power groups who appropriate words to their ends. Anatole France made much of this dilution of linguistic potency. As happens with native cultures, the same is seen in tech. The term "AI" is now a conflation of over a dozen distinct ideas into an almost meaningless splurge of media hype.
Today we vigorously grapple with the question: "What do ordinary people even think AI is?". In almost all cases it simply stands in for "digital technology". I cannot think of an area of discussion that would not be equally served if we simply started talking about "magic". Not to recognise this as an assault on the intellectual fabric, on the psychological and social aspects of national security, would be remiss.
It's no counter argument to say that "language is always evolving". That's a truism. Evolution is a fitness adaptation, so we'd hope that language would always get more powerful at describing the present environment. That isn't always so.
For example, the "enshittification" of everything is really a sentiment, not a weapon of precision. It's shorthand for our dawning realisation that the promises of technology under capitalism are seriously impaired - that ultimately everything is a con job to squeeze a buck out of you and leave you hanging. It's a powerful word in its social effect.
But at the same time it is avoidant, and when journalists start using it too liberally I feel something is lost. To conjure up a nebulous phenomenon, "enshittification", is to diffuse the fact that specific persons are cheats, liars and traitors. By taking a shot at everything, but nothing in particular, it can normalise a lack of honour, and make the general contempt for fellow human beings seen in US business feel like a natural phenomenon. However, the niche conditions that allow "enshittification" - corporate structure, acquisitions, shareholder rights, sloppy contract law, low technical literacy, unfair laws and trade arrangements, corrupt judges and captured regulators - come from a broader failure of values that permeates Western society. As Dana Meadows and Jay Forrester taught us, it is always more productive to talk about a system's values than to point at any errant symptom or process. As an abstract noun… a "thing"… "enshittification" is rather a term of politeness that retreats from calling out false advertising, crooks, spivs, gangsters and cartels, despicable criminal companies, bait-and-switch, network lock-in, and a dozen other well-known scuzzy business practices. We forget that we already have powerful, time-tested words.
The Plain English Campaign has been revolutionising the way legal documents are written since 1994. It has had a positive effect on British government digital services - things like tax codes, benefits and healthcare. Despite such progress there remains a culture of deliberate obfuscation in technical professions, not in pursuit of technical precision but to confuse, to dumb us down, and to stymie wider understanding and democratic input.
Meanwhile smartphones and social media did to three generations what sniffing glue did to the punks. In the years since Taylor Mali implored us to speak with conviction, things have worsened. Alongside our insatiable thirst for convenient, immediate gratification has grown a babbling inability to name or even know what we want.
The fact is nobody really knows what "AI" is. Even the experts are extremely hard pressed to give a consistent, relatable account that the most learned and experienced judge or politician could digest.
"AI" is at once:
- a change in the modality of human-computer interaction to an imprecise, iterative form of text-based chat.
- anthropomorphism of common services like search and calculation tasks into fake people.
- a huge change in social relations to explicitly devalue fellow human beings in favour of "efficient" bots.
- a massive land-grab by corporations on people's personal data and intellectual labour.
- a rapid resurgence of superstition and a cult-like, neo-pagan religion of technology.
- an arms race to consume all the electricity and fresh water on the planet.
And yet not one of those definitions explains anything technical about what "AI" actually is or does. Almost nothing in the media attempts to explain how machine learning fits into our existing structures of knowledge and technology. "AI" simply appears, as is, as a "new thing".
In this world of exponentially increasing relative ignorance, the average person does not have any effective legal capacity to enter into the kind of contracts presented by tech giants. They cannot possibly understand the concepts, the risks, their responsibilities, what is really being offered as a service or what they are sacrificing in consideration. It's high time the law recognised this reality.
When Kate interviewed Aaron Balick on "AI Psychosis" I had to step aside. It was all I could do to stop myself attempting to preface the whole discussion by defining "AI", or worse, like Stallman, asking a guest not to use the term "AI" at all. That would have been ridiculous and a disaster, and would make me a very bad producer.
I cut only one word from that interview: "Luddites", a term of abuse bandied about by Big Tech. Historically the Luddites were an exemplar of justified resistance to technological harms, brutally murdered by government mercenaries to protect private profit. Once a term has been dirtied by the opposition it's hard to claw it back without being misunderstood, so it's better to just remove it. Claiming to "not be a Luddite" has also become a popular apologetic trope to preface and soften statements rightfully and properly critical of tech. Usually that happens precisely when we actually need to amplify what we're about to say.
The phrase "AI" is the linguistic form of what Neil Postman called "abject surrender before technology". It is a profound dumbing-down of public discourse around a complex range of technologies now under scrutiny and consideration. Its widespread use is an admission of defeat by everyone, right up to the highest levels of government, prime ministers, civil servants and heads of intelligence services, that they've lost the will to be clear what they're talking about when it comes to digital technology. Moreover, not only do they not want to have to think about tech too hard, they don't really want other people to think too hard about it either.
What do we suppose the average person knows about their "AI companion" with respect to…
- pattern matching, recognition, identification
- expert, general or quiescent knowledge
- supervised or unsupervised learning
- feed forward, feedback and different kinds of neural networks
- data quality, examples, biasing, poisoning
- interpretive, generative, summative modes
- inference and learning
- extrapolation, interpolation, regression or deduction
- deanonymisation, correlation, known plain-text attacks
- security, leakage, prompt injection
- local or remote storage
- local or remote computation
- false positives, negatives and compounded errors
- bit depth and floating point precision
- model over-fitting, under-fitting and quantisation errors
- hallucinations, confabulations, overheating
These are just a random bunch of technical concerns off the top of my head, all extremely relevant to whether the word-salad generator in your smartphone is to be trusted - never mind a vast landscape of concerns from psychology, political science, communications and cyber security…
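To pick just one item from that list - false positives and compounded errors - here is a minimal sketch (my own illustration, not from the show) of why a "99% accurate" detector is not what it sounds like when the thing it detects is rare:

```python
# How false positives compound with rarity (the base-rate effect).
# A detector that catches 99% of real cases and wrongly flags only 1%
# of innocent ones still produces mostly false alarms if the thing it
# looks for occurs once in a thousand.

def precision(sensitivity, false_positive_rate, prevalence):
    """Probability that a positive result is actually true (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99% sensitive, 1% false-positive rate, 1-in-1000 prevalence:
p = precision(0.99, 0.01, 0.001)
print(f"{p:.1%}")  # prints 9.0% - over nine in ten alarms are false
```

None of this arithmetic is exotic, yet it is exactly the kind of reasoning the word "AI" papers over when a product is sold as "99% accurate".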
Today the entire discussion is flattened into;
" Is AI Good? Or is AI Evil? Click to find out! "
That's where all our clever technology has brought us to.
Obviously we cannot educate everyone to become experts in computer science. But a rich and meaningful new vocabulary would at least enable ordinary people to gain a stake in discussions that powerful folk don't want them to have any say in.
Yes, technology is enshittified. We naturally hold out for The Great De-enshittification. But what does that even mean?
- When our technology actually does what we ask it to?
- When we're free from being spied on?
- When people can choose real alternatives to predatory Big Tech?
- When they stop pushing half-baked insecure crap down our throats?
- When all the Big Tech execs are in jail?
We need a bigger and better vocabulary. We need bigger and better visions of a positive technological future. Aaron Balick has suggested we replace "doom scrolling" with "hope scrolling". That phrase has some legs. It immediately attaches to something we feel inside is real and meaningful. Our new modes of speech will need to be punk, brash, rude, irreverent, clever, hopeful, and above all will need to be honest if we are to take on the artifice of "AI".