The New Digital Literacy
Figure 1: "Raise your words, not your voice." – Rumi
Need to say something? Good words are your friend, not volume or "reach". A few well-said words in the right place can topple empires or seed new worlds.
Plummeting education standards and "AI" make us ever more dependent and inarticulate. That poem by Taylor Mali keeps growing in relevance. As it gets harder to say what we want, or don't want, the tone of all conversation becomes frustrated and easily boils over into anger. The discourse around digital technology is headed that way.
Changing value of words
Once upon a time winning arguments was considered important. It's why rich and powerful people sent their children to good schools to learn rhetoric and debate. The ability not only to understand the world but to formulate and present a good account, a good argument, or spot obvious bullshit, are all empowering life-skills.
That world is disappearing. We call it "post truth" or the "epistemic crisis". Reasoned arguments are being displaced by emotional assaults, partly due to modern politics, plummeting education and now the effects of "AI" which undermine reasoning.
It is devastating because few of us have a naturally high emotional intelligence. It takes a long time and lots of human interaction to build emotional intelligence, yet children experience ever less exposure to reality.
On the Internet and in the mass media it's rare to find arguments based on evidence and reason, since we've created a soundbite discourse. We don't have the attention span any more. I have to write in short sentences. Even for intelligent readers. I accept that few people will read this essay to its end.
Our journey to trash discourse passed through several phases in my memory. In the 80s the wittiest rejoinder won the argument regardless of truth content. In the 90s it was the snarkiest, most sarcastic and ironic interlocutor who claimed the cup. After 2000 the person claiming the moral high ground triumphed. After 2010 it was whoever painted themselves the greater victim or identified with the least privileged "intersection". Since 2020, in the Trump era, it's simply whoever can be most openly fucking rude to their opponent. Rarely has truth or fact played much of a part.
Most of the values I was taught about truth, with regard to science and law - not even the moral content but basic engineering safety and common good - have vanished in this century. As we go into the next decade I notice discourse being dominated by those who can act the most sinister, creepy and scary.
Obdurate opposition to reason was once the preserve of deeply religious extremists and truly mad people. Today, everyone has an "AI" in their pocket and considers themselves an expert with access to an oracle. Petty conversations down the pub about peanut allergies soon descend into a death-match of alternative fact-checking, minimisation, downplaying, distraction, overpowering, ad hominem, isolating, smearing and discrediting, intimidating, gas-lighting… the list of ancient bad-faith craft has metastasised into every corner of our culture.
Intellectual self-defence
The ability to stand up against opinionated bullies might be called "intellectual self-defence". It's about maintaining boundaries against hostile claims and impositions.
Let's focus on things we don't want. When people do things we dislike, we must be able to say: "No thank-you, that's not acceptable to me", or "Please stop doing that".
Abusers try to silence. They have a vast playbook of manipulation psychology to attack the vocality of opponents. Niccolo Machiavelli's "The Prince" is a foundational work everyone should know. Compendiums of fallacies and bad-faith arguments abound.
Those who try to "define the narrative" want others to lack words. Inarticulate people are easy to exploit. Even if they know they're being taken advantage of they can't find the words to protest.
Campaigns to bolster confident refusal have included drugs - "Just Say No" - and rape - "No Means No" - but where is the tonic for people eschewing the pushers of toxic, hostile technology? What would our slogans be?
Positively claiming an identity like "Luddite", or campaigning on a single issue like "smartphones in schools" or "limiting social media" is missing the bigger picture. Of course it's a start, and for many people their first foray into critical thinking about technology feels frightening, forbidden. How dare you - mere peon - have an opinion about technical things?!
That courage is a step on the road to a wider understanding of technofascism and its many faces.
Take false concern which is so commonplace we hardly recognise it. It has roots in advertising, where contriving a specious premise always preceded "miraculous solution!". Much of tech is premised on non-existent problems, nebulous ideas of "necessity" and "efficiency" that have never existed in reality. Under this sheep's clothing the pushers present as helpful and concerned. They're always "just" doing something for "your own good", for your "protection and security". Refuse that "help" or show any signs of independence and watch the mask slip.
Lesser-known tactics used today are discombobulation and conflation.
The term "AI", and its use in the mass media, is a perfect example of conflation.
Sorry to question a sacred idol. I love Prof. Hannah Fry too, as our popular, acceptable "science influencer" in the UK, but I'm not impressed by the tone of the recent BBC series. Of course the BBC write the scripts, and the BBC isn't without an agenda. Fry can't personally be held responsible for the shortcomings. That's something we need to remember about popular science journalism.
Carelessly throwing together a dozen different unrelated things from mathematics and computer science into the same pot and calling it "AI" is a dangerous thing.
I'm pleased the BBC made the effort to showcase some benefits alongside the hype and risks, but if we mix up valuable aspects of technology with awful ones under the same words we throw out the baby with the bathwater. We confuse people and lead them into learned helplessness in the face of complexity.
Maybe that's the aim, who knows?
Not to single out the BBC: the problem exists across all media, in print, online and on the air. People should know that the pattern recognition and signal processing revolutionising medicine has barely anything to do with generative language chatbots, or with the predictive spatio-temporal algorithms used for collision avoidance in self-driving cars. Nonetheless they're all diced up and thrown into the same marketing-speak stew, along with a cavalier spoonful of sugar and a dollop of wishy-washy post-modern relativism that "technology is neutral".
Why? One explanation is that the same few US technology companies, which all bank each other, have converged on a general brute-force approach to… mostly selling expensive microprocessors (NVIDIA). Regardless of the very diverse algorithms at play, a hyper-industrial approach based around building massive data-centres with millions of matrix processors has become an arms race. That alone has forced "AI" to be seen as a generalised phenomenon. We are mistaking what is largely a political and economic force for a scientific one.
A lack of historical context is also worrying. Mixing up breakthroughs from only a few years ago with logical and statistical methods that have been in use since the 1950s is unhelpful. This "deflation" of discourse is characteristic of dumbing-down in order to push undesirable and ill-considered tech onto people.
For us as cyber-security people it's so depressing to see computer security become a protection racket, where suppliers have incentives to make things even less secure, to ramp up fear, in order to sell "security". It's terrifying how the mass media try to mix security and "AI" (there are just a few exciting fringe cases of success hunting vulnerabilities automatically, but even that is a double-edged sword) whereas every credible cybersecurity expert is screaming to run in the opposite direction.
The media seem unable to contain and reflect seemingly contradictory tensions.
Constantly using the vacuous term "AI" is now very, very unhelpful. Seriously it's time for thoughtful people to stop saying it. Stop colluding in obfuscation and helping malevolent marketeers turn reason into magic. It's anti-scientific!
Unfortunately, even if Hannah Fry and the BBC wanted to up the tone, to give viewers the benefit of the doubt, and speak in a more nuanced, grown-up way about paradigms now re-shaping the definition of computing, I'm not sure they would be able to find the words.
Not because they don't know those words. We can talk all day about matrix transforms and convolution and achieve nothing. The problem is we still need to invent some new, effective words.
Language is adaptive, it evolves in reaction to the environment. We are all children still finding words for such a rapidly changing world.
It took many years from when Big Tech became grossly abusive to Cory Doctorow's coining of the word "enshittification". That word explains how digital service providers abuse us. It names a rather complex process involving sly, deliberate cultivation of dependency, moving goalposts, broken promises, sabotaged quality, playing people off against each other, and finally cashing out. It describes a pattern of planned treachery.
Maybe what Doctorow doesn't emphasise enough is that enshittification isn't accidental. It's not an "unfortunate natural effect". Tech bros would have you believe it's an inevitable side-effect of economics or "technological determinism" or some such twaddle.
Lots of technology failure is like this. Massive data breaches and shutdowns are a direct effect of choices made by people who put money before safety. The constantly falling quality of digital services and their bullying imposition are direct results of choices made by poor leaders. We need words to explain not just what a technology does, or what it can do, but who uses it and to what ends, and how that affects society.
Very importantly, once we have a word for something we can share it. People can warn each other of dangers, scams and predators. That's why writers and wordsmiths are cultural heroes as well as scientists. We need both. It's not only that they entertain; the poets name things. Often they name things that scientists themselves discover but cannot explain. At its best that's what good science journalism can do.
I wholeheartedly reject any rubbish about how "ordinary folk" (and other cutesy terms) "can't understand the technical stuff".
When I was 10 years old I listened to Radio 4 "Science Now" in bed. I'm not a particularly intelligent person but by the time I was 12 I had a good vocabulary and understanding of subjects as broad as nuclear physics, astrophysics, climate and environment, and advanced organic chemistry. Back then the BBC did not hold back from the layman on technical details nor political implications. I wish such service was available today.
Hearing other people use a powerful word gives us permission to oppose and refuse too. It gives us permission to have an informed layman's opinion, not just a floaty Brian Cox "wow factor".
Labelling everything "AI" cripples peoples' reason and ability to discern. I wish the BBC would stop doing it.
Counter-disinformation
Make no mistake, corporate technofascists feeding the media know exactly what they're doing. Like all manipulators they research, plan and share a playbook of techniques. This video by forbrukerradet is rather funny, and it nails the mentality of the corporate Enshittificator.
However, from a counterintelligence view, once a pattern is exposed and named, it is disarmed. If you know the trick, the magic is broken.
Sometimes this is as simple as re-framing. Aaron Balick turned "Doom Scrolling" around with his realisation that we are "Hope Scrolling". Purveyors of "AI companions" constructed from the memories of dead friends or relatives pitch them as "helping grieving". There's absolutely no credible scientific evidence for that claim. Nor can there be, because the ethics involved in a significant study of digital necromancy are already problematic. Once you realise that it's about "avoiding grieving" - and the terrible long-term consequences of doing that - it's a whole different perspective.
This is the power of sharing what we know about hostile targeting - how digital technology corporations go after "whales", vulnerable people, and so on.
Where the BBC succeed is with programmes like Scam Interceptors, which really bolsters civic cybersecurity. Education is a weapon against abuse, which is why it is so vigorously attacked by those seeking domination.
Right now the word "enshittification" is doing a lot of heavy lifting. Its widespread use is something of a phenomenon in itself, reflecting a pent-up disaffection and a sudden outpouring of realisation. There's a "me too" factor.
Unfortunately it's getting overloaded, sometimes unhelpfully, because Doctorow describes a quite specific ruse. We need to expand our vocabulary in multiple areas of digital literacy.
New digital literacy is how we fight technofascism and take back tech.
The problem is not with technology, it's with the people who make and control it. And let's be honest, that's going to apply to the regulators as much as the pushers. Let's not flip-flop between supporting Mr. Fox and Mr. Wolf for president of the hen-house.
Recognising the need to protect children from corporate vampires and purveyors of gore, hate, and illegal pornography is a big win. Implementing it with "proving age" is naive, disastrous to digital rights and will ultimately fail. Governments have already moved to attack VPNs as an obvious "loophole". Virtual Private Networks are a bedrock of digital security, and attacking them to defend a ridiculously naive "age based" strategy for tackling awful content is simply insane.
If you could "win" that battle you'd then have to regulate operating systems. All of them! Including the ones you never heard of. And if any government could "win" that battle they'd be left trying to regulate microprocessors, yes all of them, including the ones imported on the black market from who knows where, and compilers… and text editors… yes all of them, plus the ones that haven't even been vibe-coded by a 13 year old on a dare yet.
Axiomatically the new kids are always smarter than you, and anyone who doesn't know that hasn't been paying attention to anything in technology over the past 50 years.
We must keep up constant gardening to make sure desirable outcomes aren't ruined by brain-dead implementations.
Even getting the public to understand such problems of implementation is a challenge, but it's doable and vital. Moreover it's inescapable if we want a "digital society" that isn't degenerate and anti-democratic.
Digital Literacy 1.0 was about technical education. In the 80s and 90s we learned at school about disks, networks and microprocessors. That's all great, and technical knowledge of the world we live in is still important. But more is needed.
Digital Literacy 2.0 is about our human rights and basic expectations of social contract that technology threatens. Things like equal access to education, fair legal process, privacy, community, democracy and accurate information. It's also about abuse of technology to trick people and extract from us, and how to react to that.
Digital Literacy 1.0 was optimistic at heart. It heralded the dawn of extraordinary growth and opportunity. Digital Literacy 2.0 is a realist, pessimistic counterpoint in an era of epistemic crises, valuelessness and managed decline for Western democratic civilisation.
I've said for a long time that we need many more new words. We need a vibrant vocabulary full of powerful terms. One of the best sources of neologisms is to listen to how young people are interpreting their world and to select what is meaningful in a wider historical context.
We need understanding not only about what technology is, but how it functions in society and for whom.
Authors like Mumford, Postman and Franklin need to be on the school curriculum.
Thinking through a security lens is helpful. National security thinkers have long understood that we need a population armed against unconventional threats. This has three components:
- Vigilance: understanding of and sensitivity to threats
- Communicative Confidence: ability to speak-up and name our concerns
- Trust: belief in our peers and authorities to remedy threats
Here are the key pillars for digital, intellectual self-defence and civic cybersecurity.
As far back as the Home Guard (Dad's Army), vigilant governments have prepared their people to spot threats hiding in innocuous everyday life. Back in WW2 it was potential Nazi spies. Today, "See it. Say it. Sorted." applies not only to terror threats but to threats to social stability from global digital technology companies. That has been an unfortunate blind-spot for decades.
How do we talk to the public about their devices spying and listening? Isn't this a clear threat to national security by dint of compromising every business meeting?
What should we say when we know a friend is having a reality break because of chat bot use? "You need professional help", might be appropriate but it might not get the desired effect.
What are the words to describe our horror at being asked to submit biometrics and passports to some random fly-by-night "third party" verification company that we never heard of and fundamentally do not trust? Given the daily rate of massive data leaks that's not something a reasonable, conscientious person would do.
In our interviews with world-leading security experts we hear the same tropes presented in different words, with slightly different narratives. They describe people and their governments as unwilling or unable to take control of digital technology because they:
- lack clear understanding
- lack a common vocabulary
- fear challenging incumbent power, and what might replace it
- are enmeshed and dependent for their own power
- fundamentally don't see the digital world as "real"
Governments and their agencies can only help a little with this. It can't be fixed by edict, legislation or injecting money. We must do the rest ourselves.
Something like the Online Safety Act is a start. Many countries are moving that way. But we can't succeed by allowing the state or private entities to become the Internet Police. We have to do the enforcement ourselves, which means supporting parents and teachers against the sources of harms - not punishing the victims (our children) by denying them internet access.
This requires that government take a stand, an actual position. Governments must make a now-urgent choice: it's time to choose citizens over foreign businessmen. In my work on civic cybersecurity we've realised we need security from Big Tech and "AI" billionaires. If government won't help with that, we'll have to rise to the challenge ourselves.