Digital Brutalism

Figure 1: "All actions are for the sake of some end (and take) character and colour from the end to which they are subservient" – J.S. Mill (image: Washington Hilton, Washington DC, by John Leszczynski)

"AI" heralds a new era of utilitarianism. If we heed the history of brutalism in architecture, its association with urban decay and totalitarianism, we might be better armed against known pitfalls around human wellbeing. Just as with bricks, steel and cement we can create poverty as a side effect of "efficiency". As in failed "social housing" projects, we do the same in the digital world.

Let's stop pretending that "The Market" has anything to do with driving digital technology in 2025. Digital technology is now largely an ideological project that "solves problems". However, these are not your problems, but the problems facing power - banks, mass media and entertainment businesses, intelligence agencies, the control centre of your "smart city", and so on. Your needs, and the opportunities technology affords you, are being pushed to the margins. The "solutions" offered by BigTech, at the behest of these powers, are the equivalent of those giant concrete boxes of the Soviet era. They are symbols of an emerging consumer-communism in the West, only in electronic form.

The 2024 Turing Award went to Andrew Barto and Richard Sutton for their work in reinforcement learning, a component of currently popular chatbots.

In the pile of books and papers littering my study, Sutton and Barto are mentioned going back to my 1980s undergrad encounters with machine learning. It's great to see people who have been chipping away at their craft for their entire careers get the reward they deserve. I think we call that reward "recognition", although they also get a considerable sum of money.

At a time when science and its institutions are being dismembered, let's shout a very loud "Well done", and also remember that the kind of people who devote an entire life to knowledge and understanding are not those primarily motivated by money and short-term success.

It is no surprise then that Richard Sutton's 2019 essay "The Bitter Lesson" espouses more or less these values - that it is best to take a long-term view based on the most general principles possible, rather than to leverage parochial knowledge that gives satisfying early gains but fails in the long game. I hope Sutton and Barto's Turing Award shows that the "brute force" of perseverance, which we might call a belief or faith in a path, is indeed rewarded.

However, we should note that such principles rarely, if ever, extrapolate into real life - the physical and social world. On that note, I'm reminded that reinforcement learning is one fascinating but small part of any general textbook on "AI", and that, while it is a compelling idea, maximising a value called "reward" surfaces the problem that "reward" itself is not well understood. In other words, although compute and its ability to yield results matter, so do human values. If we build machines that can learn, there must be clear values that guide what is preferable to learn or to ignore.
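To make the point concrete, here is a minimal sketch of tabular Q-learning on an assumed toy two-state world (my own illustration in Python, not Sutton and Barto's code, and not how any production system is trained). The thing to notice is that the "reward" being maximised is just a table of numbers a human chose; change those numbers and the agent dutifully learns a different idea of what is "preferable".

  # Minimal tabular Q-learning on a hypothetical two-state, two-action world.
  # The REWARD table below is the entire value system: a human design choice.
  import random

  STATES = [0, 1]
  ACTIONS = [0, 1]                          # 0 = stay, 1 = move
  REWARD = {(0, 1): 1.0, (1, 1): -1.0}      # hand-specified; everything else is 0
  ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration

  def step(state, action):
      """Toy dynamics: action 1 toggles the state, action 0 stays put."""
      next_state = 1 - state if action == 1 else state
      return next_state, REWARD.get((state, action), 0.0)

  Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
  state = 0
  for _ in range(5000):
      # epsilon-greedy: mostly exploit current estimates, occasionally explore
      if random.random() < EPSILON:
          action = random.choice(ACTIONS)
      else:
          action = max(ACTIONS, key=lambda a: Q[(state, a)])
      nxt, r = step(state, action)
      best_next = max(Q[(nxt, a)] for a in ACTIONS)
      # nudge the estimate toward reward plus discounted future value
      Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
      state = nxt

  print(Q)   # the learned "values" are entirely downstream of the REWARD table

Everything the agent comes to "prefer" flows from that one hand-written table; the algorithm itself is silent on whether the rewards were wisely chosen.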

An irony is that while brutalist systems seem wholeheartedly functional, they end up optimising for non-functional requirements. Consider, for example, Soviet brutalist ideals of "equality". A friend who lived through East European Communism explained to me how it is not possible to design a housing block where every apartment is equidistant from the car-park and the lifts. People will find infinite reasons to be jealous and to resent their neighbours.

One-size-fits-all IT/cybersecurity ideas are just as pernicious. Indeed, most real-world scenarios are a patchwork of ever-changing special cases which packaged "tech solutions" do not address. Those still truly empowered by technology remain the technically able, who can design, build and use their own creations. Alas, developers or "programmers" are themselves under threat, and thus the window of opportunity for democratic tech is closing. Meanwhile the docile hoi polloi are shoehorned into whatever Procrustean fad is popular with BigTech today.

Systems that encode human knowledge are doing more than attempting to replace those humans. The encoding is a form of culture by which systems persist useful knowledge that is beyond the ability of individuals or groups to consciously maintain. Together the human and the system form a man-machine (a philosophical construct that informs Interaction Design, as well as a great Kraftwerk album). It is in this synthesis that we obtain Intelligence Amplification (IA) - superior, and in many ways oppositional, to "AI".

Just as physical buildings are "experiences" of norms, hierarchies and territories, all sorts of invisible human affairs are encoded into the systems that run modern life. This is traditionally what institutions, libraries, legal archives, political structures and applications that encode domain knowledge all do. Not only should knowledge be explicitly encoded, it must be legible to the public. This is the "open" part of "Open Source" philosophy. By contrast, as Richard Stallman framed it, "OpenAI" contains two lies, being neither open nor intelligent. Sadly, secretly trained arbiters are the paradigm for commercial BigTech "AI".

No doubt the mainstream press tell a painfully simplified story of how the "Godfathers of AI" (there are so many) won the "Nobel Prize of computing". A very understandable widespread public sentiment against "AI" will therefore yield a negative chorus, highlighting along the way that it is Google (the world's largest spying company) who sponsor the million-dollar ACM Turing Award. At a time when science and US BigTech are under attack by different groups, each for different reasons, I don't expect the popular press or even science journals to do a good job of unpacking the complexity of RL, doing it justice, or exploring its human complications.

I have heard little of even the most obvious arguments - for example, that it is possible, indeed likely, to reinforce mistakes. "AI" is the technology most susceptible to GIGO (Garbage In, Garbage Out).
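Here is a toy illustration, with entirely synthetic numbers and no real system behind it, of how a learning loop can cement a mistake when its feedback channel is polluted - for instance, when credit for outcomes is regularly attributed to the wrong action.

  # A two-option learner fed garbage feedback: with probability MIXUP_RATE the
  # outcome is credited to the wrong option. The estimates converge confidently
  # on the wrong ranking - the mistake is reinforced rather than corrected.
  import random

  random.seed(0)
  true_value = {"good": 1.0, "bad": 0.2}    # what the options are actually worth
  estimate = {"good": 0.0, "bad": 0.0}      # what the learner comes to believe
  counts = {"good": 0, "bad": 0}
  MIXUP_RATE = 0.6
  arms = list(true_value)

  for _ in range(5000):
      # mostly exploit the current belief, occasionally explore
      arm = random.choice(arms) if random.random() < 0.1 else max(estimate, key=estimate.get)
      credited = arm
      if random.random() < MIXUP_RATE:
          credited = "bad" if arm == "good" else "good"     # garbage in...
      reward = true_value[credited] + random.gauss(0, 0.05)
      counts[arm] += 1
      estimate[arm] += (reward - estimate[arm]) / counts[arm]

  print(estimate)   # ...garbage out: the worse option now looks like the better one

No amount of extra compute fixes this; only better feedback does.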

So let's bring up two obvious and topical criticisms of a philosophy that will no doubt be attributed to Sutton and Barto's reinforcement learning as seen in LLMs: namely, the environment and dehumanisation.

Understandably, the ecological cost of compute was never really on the minds of pure computer scientists. Moore's Law promises ever-increasing efficiency (a dubious word at the best of times). But efficiency is not the only way to get more compute. Pure brute-force exertion (of the paperclip-maximising kind) works too. In other words, just keep building more energy-guzzling data-centres. We produce tens of millions of chips (matrix multipliers) that are obsolete within 12 months.

Industrial inertia, or flywheel effects, preserves the mentality of "growth as progress". The reality is that we live with finite resources and growth potential, as well as complex geopolitics around tech manufacture. We cannot just build ad infinitum in the hope of unknown fruits to harvest, no matter how bountiful and juicy that harvest may seem. Where computer science touches reality, we need to pick a specific telos - aims informed by human values.

Naturally we should expect "AI" (viz. LLMs) to rapidly specialise into boutique, even bespoke, solutions, leaving the generalist approaches struggling for resources and markets. As we saw with microprocessors through the 1970s and 1980s, appliances and embedded systems came to dwarf the more visible but smaller market of general-purpose home and business products.

This is exacerbated by another blind spot, which is the tendency of brute-force systems to always dehumanise. If we build an "AI" to play a game so well it can beat any human opponent, what was the point? Mere victory is empty unless we learned something about how and why humans play that game. Does that knowledge help us play games better and enjoy them more? Approaches that "win at the cost of all other values" are great for building machines of war and destruction, but not so good at mapping out socially useful and progressive knowledge. Ironically, brute-force compute, while being a great strategy for machines, may not be in anyone's long-term interests, as it works against human learning and the enlargement of human knowledge and experience. As the summary of the 2024 Microsoft study put it, "AI makes you stupid".

But I really want to bring this back to security. We speak to many people on the Cyber Show who have big hopes for "AI" to revolutionise cybersecurity. Can we add clarity around those sorts of hopes, discussions and claims?

In "left of bang" philosophy, we must anticipate catastrophic threats. That is another way of saying that often "we can't wait to learn a lesson". There are no "lessons learned" because we're all dead. We must a priori encode our best-guess heuristics as human experts, so an element of expert-systems philosophy seems unavoidable. This goes beyond just supervised versus unsupervised learning.

This philosophy also says that, as developers, we are not allowed to use the world as our personal laboratory for experimenting on people with no ethical oversight. For decades Silicon Valley has asserted this unchecked privilege. For all the good it has done, it has equally bred loss, misery, exclusion, frustrated opportunity, wasted time, and heartache. As much as it has made us "smart" it has left us stupid. As Neil Postman said, "Technology giveth and technology taketh away, and not always in equal measure."

Ideas around iterative, reactive security, in which machine learning is in the loop, seem attractive. But all such systems require failure. They need failures to learn from. And since such systems always hunger for data, or face diminishing returns, their perverse incentive is to perpetuate insecurity, inflexibility or both. At worst they are uncontrollably expansive, since they push the frontier and forever redefine new situations as "failure" in order to have something to learn from. Maybe this only happens with "pure" (no human-in-the-loop) systems, but the forces of convenience and efficiency inevitably drive out moderation and safety.
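A deliberately crude sketch of that structural problem, using synthetic events and a toy linear "detector" (nothing here models a real pipeline): the learner's weights only move when a failure is observed and labelled, so its appetite for training data is literally an appetite for incidents.

  # A reactive learner that can only update when an incident occurs.
  # No failures means no labels, and no labels means no learning signal.
  import random

  random.seed(0)
  weights = [0.0, 0.0]              # toy linear "detector"
  LEARNING_RATE = 0.05

  def predict(features):
      score = sum(w * x for w, x in zip(weights, features))
      return 1 if score > 0 else 0

  updates = 0
  for _ in range(10_000):
      features = [random.uniform(-1, 1), random.uniform(-1, 1)]
      incident_observed = random.random() < 0.01       # real breaches are rare
      if not incident_observed:
          continue                  # nothing to learn from: the loop starves
      error = 1 - predict(features) # the only labels we ever receive are failures
      weights = [w + LEARNING_RATE * error * x for w, x in zip(weights, features)]
      updates += 1

  print(f"{updates} updates from 10000 events")   # learning is rationed by the failure rate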

In cybersecurity, then, we should be extra careful of claims that putting machine learning together with massive open-source intelligence sharing, as collaborative threat modelling, will not lead to convergence on local minima that are dangers to security overall.
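To illustrate that worry with assumed, synthetic numbers (there is no real threat feed or set of organisations here): when every defender tunes its detector against the same shared intelligence, they converge on the same decision boundary - a comfortable local minimum - and therefore share exactly the same blind spot.

  # Hypothetical shared-intelligence monoculture: every organisation derives its
  # alert threshold from the same feed of already-reported attacks, so a novel
  # attack that falls below that threshold evades all of them at once.
  shared_feed = [0.62, 0.70, 0.81, 0.88, 0.93]   # scores of known, reported attacks
  novel_attack = 0.35                            # something nobody has reported yet

  defenders = {}
  for org in ("bank", "hospital", "utility"):
      defenders[org] = min(shared_feed) - 0.05   # each "learns" the same threshold

  for org, threshold in defenders.items():
      detected = novel_attack >= threshold
      print(f"{org}: threshold {threshold:.2f}, novel attack detected: {detected}")

Independently tuned, locally informed defences would at least fail in different ways; a shared optimum fails everywhere at the same time.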

This is about more than a mere balance between false negatives and false positives. The 20th-century solution to the tyranny of the majority was to temper democracy with liberty, so that those "going their own way" could always route around convention and normalcy, provided they did so without disrupting others.

In the 21st century we have lost the required tolerance. We want absolute solutions. Minorities, newly able to express themselves through flexible technology, have become a terror to rigid thinkers, refreshing the right wing of politics. The more technology offers absolute solutions, through "AI", blockchains and the like, the more aroused the authoritarians become, heralding an era of technofascism.

There are many as-yet-unseen, difficult problems we must face when dealing with ideas like "zero trust", "behavioural identity" and "autonomous gatekeepers" that learn but cannot truly reason, and that do so without any regard for human values - or worse, entirely guided by a technofascist philosophy.

Copyright © Cyber Show (C|S), 2025. All Rights Reserved.

Date: March 2025

Author: Dr. Andy Farnell
