Always crashing in the same car: Bad technology as a preventable social harm.
Figure 1: "Love of goodness without love of learning degenerates into simple-mindedness. Love of strength without love of learning degenerates into recklessness" – Confucius
Why do humans repeat the same mistakes over and over again?
Think of a person, place or job you wish you'd never encountered. You'd have been spared a mountain of pain and loss had your paths never crossed. Even just encountering an idea can derail a life. Ask a few people and they'll say "my ex-wife/husband!". Or maybe it was falling into a vice like alcohol or gambling. It's an unpleasant question we avoid. The point is: how do we come to terms with these mistakes, and avoid making them again?
We often rationalise with a Back To The Future-style time-travel conundrum: if we changed history in this or that way then some other great thing wouldn't have happened. Applying this fatalism to technology, we rationalise the Second World War by saying, "Eighty million deaths at least brought us space travel and nuclear energy". Blind determinism lets us claim that in order to have good things we must inevitably suffer bad. It's a quite stupid, reckless and thoughtless view of life and the world, one peculiar to a very recent and narrow concept of linear history. It's an unrepentant and unreflective philosophy increasingly imposed on the majority by powerful minorities who want their historical narrative - which for them explains "progress" - to go unchallenged.
Faults of Industrialism
The weakness of industrial fetishism, the worship of scale and uniformity, is that it denies forward rationality - real choice. It supposes humanity to be a giant ship set upon some course. Somewhere there must be a captain, some navigation charts and a plan. History shows that all great plans, whether from the pen of Stalin, Pol Pot or Mao Zedong, end in catastrophe. "AI" is becoming just such a plan. This time the plan is, as Neil Postman said, The Surrender of Culture to Technology.
It's not just that 'futurist' visions based on a complete lack of values are problematic. I've regularly advocated for more human values and criticised industry and governments for lacking them. It's that the world, events and discoveries always outpace political schemes and the dream-reality of incumbent power groups who try to reduce "progress" to the idea of a journey - toward some imaginary goal. That is the progress of a singular heavy vessel whose momentum leaves it no agility. It's a recipe for stubbornness, and for the 'visions' of madmen to rule.
Creating an idea of "the future" simultaneously creates "a past" - which is always inferior and must be destroyed in order that we "advance". Philosophically this is a schoolboy account of science, not as an Apollonian but as a Dionysian drama. It supposes there is no way we can advance carefully and peacefully, without destruction.
By contrast, mindful progress has happened throughout history on multiple fronts, at multiple rates, allowing a plurality of approaches and a continual resynthesis of scientific knowledge. That's how nature has done it for millions of years. With mindful, piecemeal evolution, making mistakes is the very essence of progress. It is Science in its purest form: tentative, exploratory, fallible. The trick is to do it without violence.
The problem with global industrialism is the problem of common fates: a single mistake can wipe out the entire planet. The danger of the industrial creed is seen in the management of nuclear power stations versus renewable solar and wind; concentrating risk instead of distributing it. "AI" and Big Tech typify the problem, which is building more powerful tools without growing a culture to manage tentative risk. It's bluntly summarised in this famous quote comparing two programming languages:
"C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off" – Bjarne Stroustrup
Fatal mistakes
Of course it hurts less to think as a fatalist. Certainly we cannot change the past. But we can interpret and deal with it more or less sensibly.
We don't get to selectively unpick history, choosing only the good parts. But we do get to see our errors in an evolving context. Like Edith Piaf, we may have "no regrets", because we understand a decent and ordinary life must be filled with mistakes and the chance to learn from them.
But what do we do when we're moving so fast we cannot even see our mistakes in the rear-view mirror and when each mistake could affect the entire population of the planet?
The Internet allowed small groups of people to use the people of Earth (at least those connected digitally) as their personal laboratory for pet projects without ethical oversight. Had they tried similar reckless engineering in the chemical, nuclear or biological realms they'd have brought down armies upon themselves. Sitting in a Californian suburb sipping lattes while promising an egalitarian utopia, they flew under our radar. Foolishly, most of us believe information technology is essentially harmless (even after we allowed idiots to connect jetliners and nuclear plants to the network). Silicon Valley 'bros' replaced localised, calculated risk-taking with globalised egotistical recklessness.
Only when the scope of progress is bounded in time and space, and opportunity for innovation is widely distributed, can we safely experiment, freely learning lessons as we go, in a natural evolutionary way. This way, many ideas can compete, evolve and recombine, and no single way of doing things can rapidly overtake all other systems. This was the state of science and technology in the 20th century.
Today, however, power is extremely concentrated in monopolies with extraordinary reach, such that pathologically singular ideas disseminate very rapidly through global software supply chains. Moving fast and breaking things means breaking people, societies, nations and whole ways of life. It is recklessness toward the future, in the name of zealous vengeance on the present, for hurt and rejection in the past.
Externally created attacks like Kaseya and SolarWinds, and self-inflicted failures like the CrowdStrike outage, have corresponding internally generated threats to global computing capability, since Microsoft or Google can change one line of code in an update and deny or modify some functionality across the globe. This is an unacceptably dangerous degree of control and reach. The fragility and poor resilience of computing mono-cultures and totalitarian credos cannot be overstated.
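A hypothetical sketch (not any vendor's actual code) of how little it takes: one flag, delivered through an auto-update, gates an entire capability for every installation on Earth.

```c
#include <stdbool.h>

/* Hypothetical vendor-controlled switch, shipped via auto-update. */
static bool feature_enabled(void)
{
    return true;   /* the vendor flips this one line in the next update... */
}

void handle_request(void)
{
    if (!feature_enabled())
        return;    /* ...and the capability vanishes everywhere, at once */

    /* ... the actual functionality lives here ... */
}
```

No attacker required: the "one line of code" is already in the product, and the update channel does the rest.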
So where is the learning part? Particularly in computing technology we seem to fall into the same hole over and over. Each time it is more costly, and it becomes more serious as we approach larger scales amidst diminishing natural resources.
When it comes to technology we often hear a bullheaded, truculent insistence on inevitability. Strange that these "inevitable" courses invariably favour the ruling industrialists. We say, "Things can't be un-invented." "Nobody can resist progress."
In reality strategic limitation and suppression of technologies has been successfully practised in the past, admittedly with difficulty, but effectively nonetheless. Only 50 years ago you could buy arsenic, cocaine, and fuming nitric acid at the local chemist shop.
Why our slow response to harm?
Lots of harmful technologies are, on reflection, not such a great idea, and so we change laws and society to discourage their use and availability. Asbestos is a fantastic insulator and fire retardant that's no longer used in construction. However, the time between its widespread deployment and compelling evidence of harm was many years. By comparison the evidence of harm from "AI" is almost immediate. We have no excuse.
It seems, though, that each year the law gets weaker, more cumbersome and more corrupted. Regulators are captured. Scientific expert advisers are bought and sold. Our institutions are under attack from technofascism and from "AI", which many proponents hope will "replace lawyers". Moreover, there is a general reluctance among people to question technology and its balance of benefits and harms.
Our slow response to the harms of smartphones and social media is a good example to reflect on. Today (January 2026) New York State commenced a statewide smartphone ban for children. Back in 2016, as a teacher in London, I expressed serious concerns that smartphones were having a huge impact on my undergraduates' focus, retention and attitudes. At the time I was not just ignored but rebuked for being a self-centred, old-fashioned "dusty professor". I was 46 - not exactly ancient - and was also pioneering technology as the head of research and development in two leading start-up companies built around interactive music on mobile devices. In my university classes I was teaching cutting-edge signal processing and data science (advanced convolution, wavelets, spectral feature identification and clustering).
I could hardly be called "a Luddite". Nonetheless, as a very experienced teacher and dedicated scientist I could not ignore the evidence before my eyes. Students were getting dumber every year - and it had something to do with these phones. Surely this was something really important that needed urgent investigation? Yet there I was, an expert in the field of mobile computing devices, encountering stonewall opposition from friends and colleagues who could not hear even the mildest criticism of the values enshrined in that technology.
It still puzzles and haunts me. A very significant number of people are predisposed to believing in technology in a very unscientific, quite religious way. What is behind this myopic psychology?
Psychology of technological docility
I believe they were afraid. People employ complex cognitive processes, psychic defences, against hearing critical observations.
But if we are to have any hope of civic cybersecurity and digital self-defence, I think it's most important that as a society we understand those forces, not just in the traditions of anthropological tech-critique (Mumford, Franklin, Illich etc) but through the lens of modern research psychology.
Why is digital technology so often a cultist, hive-minded mass hysteria that knowingly disregards grave harms? The 'excuse' that "nobody knew about the harms" simply doesn't hold up. I was certainly not alone 10 years ago, and while writing Digital Vegan I discovered allied minds like Sherry Turkle, Cal Newport, Shoshana Zuboff, Nicholas Carr and so on. Why did it take us a decade to even begin awakening a critical re-examination of mass technologies?
The claim that the "tech industry is powerful", while true, is not sufficient to explain fearful and self-deceiving behaviours. Today we can hear similar voices resisting any grown-up and impartial discussion of "AI", and a continuing suspicion of more refined, cautious people who decline to "embrace" technologies of domination, control and stupefaction.
Now that we are ready to admit the evidence around social media use, and governments are enacting legislation, it's also a good time to rethink why we are so resistant to technological self-examination. This would be timely, before we sleepwalk into the next wave of damaging "AI". For someone out there, there is an enormous research opportunity to investigate technological conformity and recalcitrance to reflection.
Can we learn not to spread bad ideas?
If only we could spot the pattern and not fall into the same hole again. Can we learn?
It is upsetting to think of the harm done to the educational opportunities of younger generations in the past decade. It's surely an impact every bit as damaging as poor diets and leaded gasoline fumes. So what if we could undo the past years spent on social media? No shortage of people say "social media" feels like a huge mistake and wish it had never happened, to the world, or to them personally.
How did it happen? Here's a slogan you wish you'd heard:
Friends Don't Let Friends Use Social Media
And here's one for today:

Friends Don't Let Friends Use "AI"
Sometimes people with the best of intentions pass something destructive on to us. Herpes. Gonorrhoea. Who gave you your first drugs, your first cigarette, your first party pill? Who got you into a cult you wasted 10 years of your life following?
Chances are it was someone you liked, already close to you, and acting as a friend. In many cases we'll never even know. Life would be too hard if we could root-cause every twist of fate and fortune. In Meet Joe Black, Death (Brad Pitt) reveals to Anthony Hopkins' character that it's his love of the good life, luxury and his wife's cooking that lies behind his heart attack. Don't the noblest sentiments take young men off to war?
Can you remember how you got on social media? Who was the pestering voice, the siren song calling "Come with us. Join with us."? Didn't they genuinely have your best interests at heart? Or was their own neediness the catalyst for recruiting you?
Do you remember waking up one morning and feeling "Today I really need to spend a month's salary on a six inch slab of silicon that will be forever attached to my being, monitor my movements and preferences, require daily charging, maintenance, service fees, and will damage my real-world relationships and mental health"? Or did the "need" for it slowly encroach through the persistent allurement of peers and social norms?
In any case, is it fair to call that a choice? At what point do you remember thinking "Hold on. This is wrong. I want out!"? It's a feeling every addict can relate to. Fortunately, in many circumstances we are strong enough to extricate ourselves - we are able to listen to ourselves and act. The big stumbling blocks are social and peer pressure, and lack of self-confidence in our clear but fragile choices.
This line of thinking points us back to personal responsibility. We must fully own the path we take.
Freedom is the ability to take personal responsibility.
In doing so we let pain, not blame, educate us. Instead of harbouring grudges and resentment about "who or what ruined my life", to live well and learn we must understand the nature of harms, how they propagate, and how to avoid them.
So what will we do to avoid the next round of abusive technology? What lessons have we learned?
If we relate to technology through a model of free markets and democracy, it seems somehow we all have an influence. In this model the technology we have is an emergent, consensual creation. It's a nice idea. But if so, who voted for nuclear weapons? The idea that we can make all our own decisions and are fully in control of our own destiny is as silly as the belief that external forces entirely control us.
In reality we're a synthesis of free will and determinism. Even in the Software Freedom world we must recognise that without Bell Telephone, DARPA, Raytheon and McDonnell-Douglas, we'd have no networks or operating systems to call "the peoples' technology". In turn, none of those barely significant blips in the history of technology would exist without Ampère, Faraday, Fourier, Gauss, Helmholtz, Hertz…
The difference between Software Freedom and corporate Big Tech lies in the choices we make. Monopoly tech is prescriptive. It tries to tell us what we want, how to live our lives. It presents technology as a fait accompli, a total one-stop take-it-or-leave-it package. Free Software is holistic. It is a deliberately unfinished work. It preserves flexibility, openness to change and improvement, choice, configuration and personalisation by the end user with genuine meaningful control. It is the seed of innovation that makes the difference between nations that are leaders and those who follow.
Mastering memes
Benefits as well as harms are infectious. Richard Dawkins conceives this as memetics. Yet Dawkins is only partly right in his appraisal of propagation. In the mind of each individual we get to choose which ideas and values to reproduce. That is what makes us human.
Consider the things we hear and then choose to repeat or not. Disinformation is one sort of infectious harm. Most of it would die at source if we didn't pass it on, as gossip and attention-seeking. We sometimes want to be seen as cool and "in the know", so we share stupid little things without thought. Maybe it comes from an ancient instinct to share knowledge about food, minerals or other resources with our tribe, and about other people's relationships. Jane Austen reminds us how gossip is a natural part of life. Advertisers love to harness this tendency to gossip, as nothing is as powerful as a personal testimonial.
Other times we are more guarded. A cautious voice says "Is that true? I don't want to seem a fool by repeating it". But that same guardedness can be a handicap too. Rigid thinkers, the sort that Alan Watts describes as "prickly" and who gravitate towards STEM subjects because they give a strong, stable and respectable foundation of 'certainty', often fall into the trap of propagating memetic harms as modern dogmas. Not wanting to appear 'irrational' is a fear-driven behaviour that overrides our clear, strong and correct emotional intelligence.
So technological benefits and harms come to us through peers, friends, family or employers as propagated memes (ideas). But if we can pause, be a little less agreeable, a little more sceptical, a little surer of ourselves and our own growth, we can avoid - as David Bowie best put it - "Always crashing in the same car".
Figure 2: "Take it easy driving - the life you save may be mine." – James Dean
The "crash" follows a familiar bandwagon pattern;
We encounter something novel and superficially good. We "embrace" it too fully. Our sense for harm and our ability to moderate are clouded, inhibited by various psychological processes. Only once the harms are extreme, often too late, do we begin to extricate ourselves.
A friend makes a recommendation of some good-thing. We feel initially insecure since the friend clearly knows more about these things, and is empowered. We express gratitude for the suggestion. When good turns to bad, often on purpose, instead of stopping, both parties are held captive by cognitive biases. They continue as if everything were still good. They may even redouble their efforts to convince each other and bring fresh recruits to the mess. This maladaptation to change leads many groups into dependency, addiction and cult-like formations.
This is the real story behind "enshittification". It's the psychological factors behind the network effects that must be understood.
I'm now hearing from people who finally got off social media. They're very angry. They realise they wasted ten or more years of their lives. Some people went down a technological rabbit-hole after 2010 and only recently escaped.
They poured tens of thousands of hours into building "profiles", gaining "likes", making themselves "competitive" and "attractive" by laying sacrifices of selfies and tweets before the machine. A machine that now no longer needs them: a LinkedIn that has no jobs to offer them, a Spotify or Substack over-run with "AI" slop and fake audiences. They installed this or that "app" and joined in little social games dreamed up by Big Tech bros, most of which are now dead or dying. Just good fun, or a tragic waste of humanity?
Finally, social control media is collapsing, and like Web 2.0 before it, leaves those who participated washed-up, empty, bereft, and feeling cheated. The winners are whoever bought up the companies and all the data. One older person tells me "I feel lonely. There's nobody left on Facebook, I only get adverts and robots now". I ask whether they have the email addresses of their old friends. "I'm not sure who is still on email, or how to reach them", they say. Worse, some say, "I'm not really good at writing long messages any more. What am I supposed to say?"
The polynomial value of network effects goes into reverse as social networks fall apart. This leaves people quite suddenly adrift, able to explore the social opportunities of their local community only if they are mobile and socially confident enough to go out. For others, cut off by disability or rural living, the sunset of social media is frightening. They seem greatly at risk from chat-bot psychosis.
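To put a rough figure on that reversal: under Metcalfe's familiar approximation (an assumption, not a law), a network's value scales with the number of possible connections among its n users,

```latex
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2}
```

so when half the users leave, roughly three quarters of the connections - and of the perceived value - vanish with them. The collapse is felt far more sharply than the growth ever was.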
This is the cycle of abuse we must break. Each new all-consuming technology destroys the previous. Social media killed the art of writing letters and long-form thinking. After just a few months of using "AI" people are saying "I'm forgetting how to think for myself".
Fortunately, those who broke-free are too busy finally living to engage in over-reflection. They're just spending time in reality at last, enjoying walking on the beach - instead of making a video of walking on the beach to post on Instagram according to some mass-hysterical social pressure.
But the danger remains: that people will simply jump on to the next addictive and sapping technological trend the broligarchs and pushers have lined up. Whether that is "AI chat bots" that send people psychotic, or new forms of online gambling, for most of us the next big thing that consumes years of our lives and breaks our relationships, health and prosperity will be offered to us by someone close.
The machine feeds on the emptiness we feel.
Next time, take a moment and find the strength to "just say no":
"You know what… I feel lonely, empty and frightened too. But us sharing this drug/technology isn't going to fix that. It's the cause. It's actually what separates us. Don't let me stop you enjoying. Let me know how it works out for you."
But as I wrote in Digital Vegan, that's hard. People want to be liked. We want to go along to get along, and to join in the "fun".
A better approach might be to address the supply-side of memetics. We've built a toxic culture in which "everyone is selling something", even if all they're selling is themselves. This narcissistic "myself as a brand" cult shows no sign of abating.
People who follow "influencers" online are themselves wannabe influencers. They want to repeat what they've heard - including the platitudes and mannerisms of their Internet 'thought-leaders'. The urge to be "at the front" is so insanely destructive to personal development. Early adopters never learn, because they have nobody to learn from. They are someone else's lesson.
Sharing happens because someone is interested in you. They ask. It's a much more feminine/receptive posture.
Nevertheless, a responsible life-stance may be to actively think things over before spreading ideas, even when you are asked. Stop being the pusher. Check your own neediness for validation. As a writer I find it the greatest challenge - separating truthful observations from the desire to have others think a certain way - like me. As the TED project used to say, "Some Ideas Are Worth Spreading". And some are not.
Occasionally a wise internal voice says "That idea is so difficult that I don't want to repeat it, even as a warning." Something I can relate to in Jaron Lanier is his withholding from think-tank-style conversations that "brainstorm worst-case terror scenarios". It can give the wrong sort of people bad ideas, and once something's been said you lie awake at night worrying.
I'd call that "malinformation". It's neither true nor false, nor meant with malign or benevolent intent. It's just ideas that the world would be better off without, and it's good discipline to keep one's mouth shut at such times, not feel a need to look clever, and hope nobody else thought of it.
Changing values, not rules
Dana Meadows observed that fiddling with parameters, conditions and feedback loops is the weakest kind of intervention in a system; real leverage lies in changing its goals and values. Offering superficial remedies to technologies that disrupt entire nations and cultures is weak and disingenuous. In most cases that's all 'regulation' does. It's a sticking plaster. In some ways it even legitimises gross harms, by lending a veneer of acceptability. Regulating something harmful with soft words and half-measures can tacitly condone it.
Take the "bans" on social media for children. I don't think that children or adults should waste their time on that rubbish, maybe because I'm an "elitist". Maintaining real relationships and seeking out knowledge does and should require some effort. Effort is what underpins quality.
But "Age Verification" is the stupidest and most ineffective approach imaginable. Every smart kid already figured out how to get around it. It's nothing but an excuse to roll out more fascist nonsense. It attacks the very important place of anonymity in society, in adolescent development, in keeping young people secure, and ensuring political freedom in an increasingly oppressive and authoritarian world. It's getting the problem completely ass-backwards - or more likely wilfully misunderstanding things in order to ratchet-up tyranny.
At no point has ANY government plainly come out and said:
Social media is harmful to you.
Why not? If governments gave a clear and simple message, millions of parents, teachers, community leaders and kids themselves (who are not stupid) would start to turn-off and move-on. Why is such a simple, clear statement of value so impossible for governments to say?
Why don't we teach kids from their first days in school that smartphones and social media are bad for you?
Are governments scared of Google and Microsoft's lawyers? Oh, shudder! Why do governments around the world lack the basic courage to say "TikTok, Facebook, Instagram… are your enemy. They harm your mental health"?
Because they would like to be in control of social media. They're afraid of what would happen if people started to really talk to one another using freely available technology. They are fundamentally conflicted. What pisses them off is that a bunch of US private companies took the ground they wish they'd had - what the French Minitel would have become - government-run and mandated digital systems.
What a liberal democratic society needs are strong, unambiguous words but weak and flexible laws.
What is needed are government projects that motivate the abandonment of social media and brain-damaging smartphone dependency, using clear, strong language and values.
In other words, lessons-learned must apply to changing the highest values of broken systems. We must nip toxicity in the bud. The toxicity of social control media has little to do with its content, foul though some of it may be. It's structurally toxic, being a centralised and invisibly manipulated technology - regardless of who controls it.
Friends Help Friends Understand Power
So why would we knowingly encourage people to do things that harm them or all of us? If we reject the "neutral technology" idea - accepting that some technology is good and some is bad - we must wonder: where does bad technology come from? Who propagates it?
If we are not too old to learn, and want to avoid crashing in the same car, we can spot patterns: types of people, types of arguments, situations in which we are vulnerable. We must become hardened to malinfluence and being led astray. This is the essence of Digital Self Defence.
Fortunately old tricks have hardly changed at all. Bernays and Lippmann laid out the basics a century ago. You hear the same tired old things…
- "You don't know what you're missing."
- "Everyone's doing it."
- "Don't get left behind."
- "It's the future."
- "It's policy now."
- "It's so convenient."
- "That's the way the world works."
The same old words and ways are used to manufacture consent, to pester, cajole, badger and wheedle folk into doing something. Usually it's something that's making a small group of people very rich and powerful. Sometimes that thing is smoking, or driving automobiles, or fascism. When the show is over it's the people who are left with the wreckage and paying the bill.
Powerful influence comes "top-down". There's a hierarchy of dissemination. Workers take their lead from bosses, who are instructed by managers. Senior managers go to policy meetings, think-tanks, conferences and trade shows, led and instructed by shareholders and investors. Under Soviet Communism these people would have been "inner party leaders" of the Central Committee. Now they have different names and only the furniture is rearranged.
Pause for a moment to realise: none of this has any relation to rational, meritorious judgement and selection of technology. It is patently unscientific. Science may provide us with technology, but it is conspicuously absent when it comes to our use of technology.
Were you around in 1998 when Google burst into the world? If so you'll remember how extraordinarily quickly it happened. It can't be explained by any meritorious market theory, osmosis or mere rhizomic, lateral propagation between peers, even by the most zealous models of percolation and dispersal. The popular narrative that Google was simply a better search algorithm is only partly true. Great amounts of money and coordinated effort also went into ensuring its rapid dominance, using funds from the US National Science Foundation (NSF), the Massive Digital Data Systems (MDDS) programme at the Central Intelligence Agency (CIA), and the National Security Agency (NSA).
So we can trace influence as directives and decrees from governments who now push "AI" even as we've got a comparatively early heads-up on how very deleterious some of it is to society and individuals.
Malinfluence propagates through workplaces with bosses who push policy to use biometrics, policy to use "AI", policies of invasive surveillance. They're largely unaware that they're making a few oligarchs rich, while paying very little attention to any actual benefits to their company or product.
The social side-effects are disastrous because acceptance of militarised and fascist ideas then spills out, into the family, into schools and into civilian public life. We must look to the ultimate sources of investment and cash-flow:
To keep the public funds flowing, justifications are needed. And this generates the need for a credible long-term enemy. In the real world of technology, there are then two tasks for the state, if governments wish to use arms production as an infrastructure for the advancement of technology: the state has to guarantee the flow of money, and the state has to guarantee the ongoing, long-term presence of a credible enemy, because only a credible enemy justifies the massive outlay of public funds. – Ursula Franklin, The Real World of Technology
Let's remember the origins of the Internet, viz. DARPA, and the funding origins of intelligence-gathering networks for Facebook and Google. It is too simple to say that "they are weapons". However, our technology evolves along lines, and acquires components, that are hardly peaceful and civil in nature - consider "lawful" spying tools or the use of face recognition. These are plainly fascist devices.
We create technologies that move us forward out of fear, not love. What we end up with is a civic-state arms race, a fundamental confrontation around encryption wars and counter-surveillance; because in the absence of a "credible enemy" big enough to sustain the market, the sharp end of technology turns inward on a nation's own citizens. All technologies become ostensible weapons. "AI" happens to be a particularly good weapon for jamming, psyops, discombobulation, and monitoring of large populations - things that no friendly, democratic, peacetime government should have any business doing, or allowing others to do to its citizens.
William James, in The Moral Equivalent of War, asks us to redirect our warrior energies into peaceful activities. But it is impossible to see "AI" as anything but a weapon of war, not peace, because every word written on it by industry and government is about an "arms race".
Satya Nadella, the current Microsoft CEO, uses a very funny term to describe a very old problem in technology. "Model overhang" is a way of talking about a solution looking for a problem.
We know how to do lots of things, but we have no idea why we'd actually want to do them.
So "uses" or "purposes" of products created by companies like Google or Microsoft, have to be creatively conjured-up post facto.
This is no sensible way for "progress" to happen. The real challenges in computing now are nothing to do with making machines faster, or smaller, or cheaper, or more efficient. They are about turning technology back into peaceful, universally empowering and liberating forms.
Turning the world into a circus of flying robot cars and boxes that talk to you is patently infantile. These are the half-ideas of men who lack the most fundamental vision and humane creativity.
Any project to free the world from unhappiness, poverty and injustice must begin with renouncing concentrated power and mono-culture.
Adding more capabilities to computers is the least important of our projects today, when we should be figuring out how to use the immense communication and computing resources we already have to do better.
The problem is that most of these new capabilities are essentially fascist in nature. Not all technologies are, but we're somehow increasingly selecting for the ones that are oppressive and weapon-like.
Some of these technologies do have a place: on super-secure military bases protecting nuclear weapons, and so on.
But a centrally run network of camera doorbells on every house in your street - with the only benefit being to stop your Amazon parcel getting stolen or the neighbour's dog shitting on your lawn - makes for a dystopian and insecure society that's barely worth living in. It is infantile and regressive.
A computer that remotely spies on your every thought and feeling, in order to "help you remember and be productive" is the very antithesis of "intelligence amplification". Only Nick Park's evil Gnome-bots approach this level of technological stupidity. It's patently not something a sane, reasonable and dignified person would want or accept.
Figure 3: "Neat and tidy" – Norbot
Therefore we are compelled to ask the question: why is it being pushed and for whose benefit?
Because we see "convenience" only as advantage over others, and we conflate "efficiency" with competitiveness, "AI" can only be a weapon. Misuse of potentially good technology is built into our broken way of seeing the world.
Widespread leakage and normalisation of surveillance and militarised technology is a catastrophe for peaceful civic life. The issue is not physical limitation of production and movement. The arms trade is as well regulated as we can manage.
Some people - criminals, military collectors and enthusiasts, outdoors types who hunt and hike - will always want and acquire advanced offensive and surveillance technologies. We've always been able to licence and police those edges.
You don't need to be a teenage boy to appreciate it's really fun to fly around and blow shit up. It's a deep and primitive impulse. But it belongs on the firing range or within some socially acceptable sport.
The problem now is the systematic, active inculcation of offensive technologies as socially normal. Don't like where your neighbour parks their car? Cover your driveway in CCTV cameras! Neurotic about where your kid is? Give them a phone stuffed with trackers, to "chip" them like a farm animal!
People get the wrong idea that these industrial, capitalist and military models of technology are the only possible ones. They represent only a tiny corner of the vast and rich spectrum of technological possibility, most of which isn't rooted in domination and control. But to even see beneficent technology we must change our whole way of thinking about it.
Taking thoughtful technological responsibility, rather than being compelled by blind economics to enter an arms race, therefore requires resistance against preventable industrial injury on a society-wide scale. It applies as much to digital technologies as to guns and bombs, to the cogs, wheels and pulleys in factories, or the processes in chemical plants.