Human-AI Hybrid Jobs
Figure 1: "The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.– Warren Bennis"
Several recent blog posts have tried to anticipate the effects of AI on future work. More than anyone, it is programmers, creative artists and "white collar workers" who feel the impending threat of AI to their jobs.
Can humans "work with" AI?
Some suppose that future jobs will be "AI plus humans" and not "AI or humans". I think this is more complex than it seems, and here's why.
My observations of "AI" so far are these: the new tools are presently novel and exciting. Their potential for replacing human creativity looks awesome. Bosses are rushing into the space, fully intending to cut costs and jobs by replacing humans with computers.
The climate is ripe for that. The end of cheap money and reckless investment provides the perfect smokescreen for layoffs. Many employers are simply re-advertising the same jobs at a lower salary as "AI augmented". The implied assumption is that AI-augmented jobs are somehow less skilled than those they replace. It seems a good way to ratchet down wages.
The whole project is a bet on three things:
- that quality will continue to improve at least linearly, if not on a higher-order curve.
- that employees will naturally adapt to versions of their previous roles, but where they supervise or correct an AI that does the heavy lifting.
- that primary users of AI (bosses) will retain enough control over it to continue offering value and some compelling differentiation.
The first of these points has been hotly debated, mostly from a technical standpoint. Whether anything like Moore's Law applies to AI remains to be seen. We may be at the start of a revolutionary spurt of change, or already stuck on a plateau that will last ten years. I will not discuss that here.
The other two issues are more interesting and difficult. They've been only lightly addressed by other commentators.
Here are some things that may go horribly wrong:
The "jobs of the future", which all techno-optimists bank on, is where humans work in harmony with AI". According to this picture we will all be happy to correct or steer AI machines. But there is currently no evidence that such synergy will work out. There's not even an approximate parallel to compare with.
The relation of human labour to machinery, as seen during the industrial revolution, is not comparable with the present relation of intellectual labour to AI. Invoking "Luddites" and the "march of progress" is naive.
Most importantly, in all its manifest forms, AI (artificial intelligence) stands in opposition to IA (intelligence amplification). It is not a tool; rather, it turns its operator into a tool, in the near future mainly as its teacher. AI does not replace the human; it is parasitic upon the human.
We anticipate that under the new "efficiencies" humans will complete "one percent" of the work, mostly correcting the AI. But that is a dismal and tedious job. Yet it must be done by diligent and intelligent workers, at least as intelligent as the AI they are correcting. It is dull and undignified housework, cleaning up the mess made by something else, offering neither agency nor intellectual reward. Hence these proposed jobs rest on a complete mismatch between the skills they demand and the motivations of anyone who has those skills. The only thing fit to augment AI in this regime is other AI.
On the other hand, we might imagine that humans will direct AI, making it a slave that fulfils tedious jobs like "writing an email" or "choosing the best applicant". These apparently replace secretaries and HR people. They allow companies to be run by ever fewer people; perhaps just a single person, or perhaps nobody at all, with an AI as CEO. But here's why that won't work:
The likely trajectory of AI, as we have seen with all communications and storage technologies over the past decades, is a brutal consolidation into two or three companies. Those monopolists will supply the models and access tools, and set the rules of use. Already moves are afoot to regulate and constrain AI in such a way that it favours incumbent power.
Despite the apparent "infinite potential" of AI, in reality all opportunity for differentiation will collapse, driving value and novelty to the margins of criminality and consolidating much of the "information industry" into a spiral of mediocrity. It is an inevitable race to the bottom, towards uniformity.
Expect to see, for a short while, warehouses filled with PhDs doing menial work to wipe the bottoms of generative AIs, whose voluminous output will be consumed by nobody except as bait to drive advertising clicks. This represents the "death of content".
Moreover, it represents the death of creative agency, since AI becomes the barking dog whose only purpose is to discipline the workers into feeding it.
Exploring this space feels urgent
On the Cybershow and at Boudica Cybersecurity, we're very focused on the interface between technology, human psychology and society. We see cyber-threats as nuanced and complex, within a broader context. We consider resilience and threat models that other educators and consultants struggle to see.
"What doth it profit a man, to gain the whole world, and lose his own soul?"
The same can be said for business. What long-term profit comes from maximising efficiency at the cost of all purpose and value? How can you safely incorporate AI? As carefully sandboxed internal projects? Or will you bet the whole farm on promised safeguards and improvements, and on the hope that you will retain any control over it?