Sorrybots: Rise of the apology industry
Figure 1: "Sad Robot" – (image: V. Eisenmann)
Sorrybot (Advertisement)
Introducing the Sentient Systems SB-9000
An AI breakthrough in the fight to stop angry mobs from literally tearing your failing business to pieces. Featuring:
Sincere tone | The SB-9000's groundbreaking synthetic empathy engine mimics the nuance of human sorrow, from mild lament to disconsolate humiliation. Choose from a range of non-threatening "squeaky little thing" voices that add a giggle to the misery of ten-hour commutes, missed opportunities and lost productivity.
Believable | A lexicon of over 9000 highly plausible cover stories for your pantomime. You'll never have to fall back on "signalling failure" again, with unprecedented crisis mode and real-time adaptation of live news feeds re-framing freak weather conditions, escaped zoo animals, even astrological events as root causes of your incompetence.
Interminable | Mask your total lack of concern with our patented Apology Wall (tm). Capable of delivering over 120 assurances and deflections per minute, it glosses over glaring inconsistencies and disorients your customers with an avalanche of back-to-back pretexts, leaving them no time to think or feel.
Grovel mode | Trained on a corpus of over ten million histrionic narratives, the SB-9000's CFP (Counterfactual Fantasy Processor) can seamlessly slip into self-blame and melodramatic over-reflection. The SB-9000 can even enter self-destruct mode and beg for forgiveness, or for its own life, buying you time to cash out and make it to the bank.
We're so, so sorry. Really.
Apologising now accounts for the majority of all human labour. And if that's not true - because I simply made it up - then I'm really sorry. And that makes it okay. Right?
Whatever sector of the remains of Western industry you work in, chances are you spend a lot of your day saying sorry.
You're sorry that you failed to deliver the goods and services you're responsible for, whether that's the cancellation of software, trains, buses, appointments, medical procedures, meals, education, parcel deliveries… In Britain I feel we are all now caught up in a perpetual "culture of disappointment". Since the pandemic we have normalised failure.
Now, there's some very obvious bias here. Competence is invisible whereas failure stands out. We forget that over ninety percent of things work well and go smoothly, because the ten percent that fails overshadows the rest. Yet ten percent is actually an intolerable threshold for failure in many areas of modernity. Imagine if your car broke down on one in ten trips, if one in every ten surgical procedures resulted in death, or if one in every ten court cases produced a miscarriage of justice.
Another factor is exposure bias. Modern life has become intolerably complex, mainly because of technology designed to "make things easier". Only twenty years ago we would interact with systems once or twice a month - to book a dental appointment, take the car for a service, or order an occasional takeaway. Today we are enmeshed in an almost suffocating service-culture, daily using dozens of services which bamboozle us with fake choices, and which are designed to give a false sense of agency, accountability and transparency. Simply living is exhausting. As an old Human League song goes:
Your life is like a schedule
You run to meet the bills
No one's awake to tell you
Life kills
The faster we move, the more it hurts when we hit obstacles, so we notice all the bumps of modernity more.
To compound matters, the apology industry is driven by useless regulation. For example, name-and-shame regulation forces failing train operators to publish their cancellation rates. Who does this help, exactly? The idea that transparency naturally leads to remedy is naive. Unless victims of systemic failure are empowered, giving them more information merely increases frustration and anger. There is a tragedy here: our attempts to remedy failure cause more harm.
If we accept these biases and put aside our anger toward broken systems we can more easily face the truth that our bruises are of our own making. The rational and moral response is to slow down, to disengage, to curb ambition and expectation. This is the essence of the slow living movement.
But modernity does not "leave you alone". It intrudes. It forces itself upon us in an aggressive, predatory and abusive way. It demands more "engagement", and encourages ambition and expectation - which it then fails to deliver. In these paradoxical conditions, what sprang up was an apology culture, which is another way of talking about insincerity and wishful thinking. Our culture is dissociated from reality.
Along with it comes passive-aggressive preemption. Every site of failure, be it a surgery, railway station, supermarket or school, proclaims loudly in announcements and notices how "We will not tolerate abuse of our staff…", and so on. It prefigures natural and justified anger in a bid to subdue it.
However this in itself is an aggression. It is a threat. It is an accusation. It is an insult.
Too many businesses and services are now full of puffed-up bluster about their "security" and zero-tolerance of this or that. They make a military parade of their defences, their invincible elite guard with "body-worn cameras" and 24-hour CCTV. If these defences against you, the customer, were ever sincere attempts to reassure us of our safety, they've since mutated into messages designed to intimidate all and sundry. Only a very dishonest or wholly insensitive person would try to claim that it's "only aimed at criminals and if you've done nothing wrong you've nothing to fear".
As orders bark from loudspeakers in British public spaces, noise-cancelling headphones become essential for any sort of peaceful and stress-free existence. This drives people inward and further erodes interpersonal communication, exacerbating rudeness and insularity. Seen in the context of an apology culture, the only conclusion that makes psychological sense is that preemptive aggression is driven by insecurity, guilt and shame associated with failure. Adding cloaked threats to an apology only makes one look weak and ineffectual.
So, your staff were ill, communications failed, the system broke, the internet went off, there was a bug/glitch/hack, workers quit, the Earth's magnetic field reversed… there are no limits to the creative landscape for inaction or abuse nowadays. The apology industry is booming.
The classic shit-sandwich, with upbeat enthusiasm wrapped around a smirking gut-punch, seems old-fashioned in the age of the AI Alibi. One of my favourites is the recursive apology, whereby an unfolding fiasco is blamed on "earlier problems" (which in turn presumably have their own tenuous priors, and so on). The good news is that the causes of your current misery "have now been resolved" - a sparkle of hope on a long dark night.
As with everything else, over time the quality of the average apology has "transformed" (definitely not declined, oh no… "transformed"). Now we all suffer apology fatigue. As emails and announcements border on brazenly taking the piss, I respectfully anticipate the truly honest apology that says, "We simply couldn't be bothered. What are you going to do?"
As well as wondering what kind of economy we can maintain where everyone's job is apologising to each other, I'm concerned about the psychological impact on workers. Having to say sorry all day long is depleting moral labour.
It also creates what Helen here called "security scarecrows". We see more and more that security personnel are not there for anything we'd recognise as actual security. They are used as meaty surveillance robots, fleshy mobile CCTV poles. They become a show of force, to intimidate. They become a line of enforcement for policy that organisations know is wicked and wrong and harms customers.
Here's an obvious place where "AI" is going to flourish, because no human should be put in the position of utter helplessness while tasked to help. "AI" can confabulate, obstruct, protract and forestall all day. It is its most natural application.
Right now, the apology industry employs some of the most vulnerable and desperate "human resources" on the front line of its meat-grinder.
As enshittification intensifies, people will become extremely rude and abusive to "AI". As the uncanny valley narrows we'll no longer know whether a real or a fake human is on the other end of the wire, and we will inadvertently hurt the feelings of people who do not deserve that.
In response, the foolish authoritarian will attempt to "protect" the dignity of "AI", which has become a symbol of capital power, thus placing a box of silicon chips on a par with an actual human. Either by elevating software to have "rights" or by stripping them further from human beings, we'll make a crime of upsetting an imaginary person while real ones lie in the gutter of life.
No help at all
Figure 2: "All our operators are busy right now"
In the apparatus shown in fig.2, the bottom button labelled "Information" occasionally connects to a poor, overworked mother, who after giving a false name reads robotic broken English from a script offering no useful information, and then fails to answer a single question with any semblance of "help". It is already difficult to tell her from "AI".
She, the hopeful soul stuck on the freezing, rain-soaked platform with three heavy suitcases, a sick child and no train service… and the disabled person imprisoned by a lack of ramp access, all scream apologies at one another. Everyone is a victim of the same failed system.
The failure of such systems is a paradox of the information age: we are drowning in data, very little of which is informative and even less of which is useful. Since useful information is power, it is a restricted commodity, and under market conditions it becomes distorted, hidden and managed through highly controlled agents.
We assume that experts and agents of organisations have information, at least about their organisation. They don't. A most eye-opening example for me came while speaking to a senior representative of a British bank. I asked if they had the phone number for their business security department. "Have you tried googling it?", he asked.
There is more than laziness, compartmentalisation or "need to know" going on here. Many organisations today, while presenting the facade of a singular, unified entity, are internally disintegrated. The parts may seem connected by a common policy or just by a common logo/trademark, but more often - for legal, taxation and economic reasons - they are just loosely associated bands of privateers following the diktats of a charismatic and feared leader. Modern Western business is much closer to tribal warlordism than we might care to admit.
In case you're wondering, the top button labelled "Emergency" connects immediately to a well spoken office worker with common-sense and instant access to timetables and live information. Pressing "Emergency" grants a higher level of access.
Which are you going to press - assuming your opening is to apologise for pressing the "wrong" button?
Interestingly, the Office of Rail and Road (ORR), aware from complaints of the extraordinary failure of railway station help points, produced this report on the "reliability of help points at stations". It is rare to see such an artful misreading of the mission. The report covers technical reliability, operational function, communications uptime, adherence to DfT standards, test and fault reporting procedures… and concludes with the empty truism that "If help points are to have any value to passengers, they must be working when they need them."
Nowhere is the efficacy of the "help" service mentioned, let alone examined. The real problem - that the "help" provided is next to useless - is dodged entirely. It is a wonderful example of the "nothingburger" that already typifies the "AI" era. It is all form but no function. It says so much, answers every question, yet tells us nothing. It is like the "cybersecurity by numbers" we see under compliance-driven conditions. It ticks every box, but fails the mission. I'm sure that if I emailed the DfT or ORR I'd get a very nicely worded apology that the report didn't meet my expectations.
Where does that leave us in IT and computing? According to Axios the apology backlash has already started: "People are simply tired" of the "rapid pace of the news cycle".
It struck me in late 2019, before the pandemic started and news reporting priorities changed, that cybersecurity news went quiet. Data breaches had become a daily occurrence. They were no longer news. The main outlets stopped reporting them. From an uninformed standpoint it looked like cybercrime had rapidly fallen. I think what was actually happening was fatigue and the first signs of an overdue adjustment.
Then in 2024 CrowdStrike reportedly first acknowledged its world-stopping outage without any "sorry" or further comment. In a way that seems refreshing. When you've made your software so central to the lives of millions of people, and then so comprehensively screwed up, saying "We failed" and offering $10 Uber Eats vouchers only adds insult to injury. Their first instinct, to simply stand in silent shame, was the correct one, but later the company's Shawn Henry issued a "personal and heartfelt apology". That felt off.
What would really make sense is something altogether different. If Crowdstrike had said:
"Look, cybersecurity is really hard and we try our best. There are a million different ways we can catastrophically mess up, and nobody really knows all the moving parts. If you rely on us, caveat emptor."
In fact, this is what ALL Free Open Source Software says. Most licences for Linux and BSD comprehensively disclaim any liability and license software "as is", without warranty of any kind, explicit or implied… or similar wording.
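For instance, the warranty disclaimer in the standard 2-clause BSD licence reads (capitalisation as in the original):

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.

It goes on to disclaim liability for any direct, indirect, incidental or consequential damages, however caused. There is no ambiguity, and no apology on offer.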
Nobody installs freedom respecting software, screws up and then expects an apology from Debian or BSD… even if the failure is systemic and caused upstream. Users accept it is their responsibility to correctly use, maintain and understand software, or they're smart enough to recognise it as "beyond anybody's reasonable control". What Free Open Source Software gives you is the ability to take responsibility.
Commercial software users sometimes behave like those kids who sue their parents or blame their school because life didn't work out how they wanted. In turn, commercial vendors act with messianic omnipotence, as shepherds of the helpless and masters of all. If we are taken in by them we are very vulnerable to abuse.
I think this schism lay at the heart of the "fundamental structural and governance failings" in the UK Post Office Horizon scandal. The law allowed (and still allows) supposedly infallible megalomaniacs to act "at any cost". The law is stuck in a 19th-century command model of industrial society better suited to the East India Company.
In thousands of pages of court documents spanning decades, surprisingly little is said about the victims' (the subpostmasters') right to examine and technically challenge systems and their owners. It is obvious that a huge change in the law is required to reverse the burden of proof and make discovery of source code, data and algorithms a foundational right in such cases.
Moreover, what is missing in IT generally is healthy co-dependence, risk-sharing, fallibility, humility, those threads of relational respect that bind people together in collective activity such that apology is rarely needed. We each recognise our own faults, parts to play and duties to one another. It is not inequality but the prevailing, toxic idea - that you buy or bully your way out of responsibility - that's the chief failing of technology within a free market. Our focus on relations around money completely blinds us to relations of technical power.
The Horizon software was a pile of garbage. Any cursory examination of it would have revealed that and ended the fiasco. Money would have been lost and people fired, but thousands of lives would not have been ruined. The owners bought and bullied their way out of responsibility. A lesson is that we (via our laws, assumptions and practices) have a partial, one-sided idea of cybercrime/hacking as doing harm by attacking systems. In reality, enormous harms are also done by the imposition and defence of systems. Many of BigTech's "social media" offerings, which inflict mental health damage on young people, and the insecure operating systems from big vendors, fit that profile.
What changes when you pay for big-brand software? You buy into the blame market. You lose the ability to take responsibility, and gain the notional idea that someone else, someone very competent and powerful now has your back. When this turns out to be false, the bigger and more powerful the company the worse the fallout. What good does it do you if you lose a business, or a life, but gain a pizza from Uber Eats?
So, when it comes to the broken promises of the over-reaching technological society, I think the proper attitude at this point is to make peace with failure in a revolution of lowered expectations, as in the words of this song of stoic British acceptance, Chicago by Josienne Clarke:
The cookie's gonna crumble
And money's gonna burn
So make your peace with failure
A lesson that you learn.