Eating your own dog food

Figure 1: "Chunky Goodness"

Following last week's post about Gromit the Dog, I'm wondering: does Wallace feed Gromit a home-made recipe cooked up by a Food-O-Matic machine?

Who eats their own dog food? Does it taste okay?

Thinking about responsibility in cybersecurity, "eating your own dog food" is an expression of our level of comfort and trust in ourselves. It's about our ability to meet the standards we set, particularly those we would have others follow.

If, for example, we set a password policy, do we stick with it ourselves? Even when we have admin rights and can choose to bypass the rules? Even when it's really inconvenient, do we stick to what we preach?
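Just as a sketch of what "no exceptions" might look like in code - the names and thresholds below are entirely my own invention, in Python, not from any real system - the policy check runs for every account, and there is deliberately no admin bypass:

  # A minimal sketch of a "no exceptions" password policy.
  # Names and thresholds are illustrative only.
  MIN_LENGTH = 14

  def meets_policy(password: str) -> bool:
      """Return True only if the password satisfies the policy."""
      return (
          len(password) >= MIN_LENGTH
          and any(c.isdigit() for c in password)
          and any(c.isalpha() for c in password)
      )

  def set_password(account: str, password: str, is_admin: bool = False) -> None:
      # is_admin is accepted but deliberately ignored: eating our own
      # dog food means the rule binds the rule-maker too.
      if not meets_policy(password):
          raise ValueError(f"password for {account} violates policy")
      print(f"password accepted for {account}")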

For key accounts I make myself remember hard passwords and keep only an inconvenient coded record to prompt myself. It's a thing of mine. First week back after Christmas, of course, I'd forgotten some passwords, especially for newer accounts. So I got angry at my own password security policy. When something like this happens it's easy to start blaming other people that the dog food tastes like shit, even though I cooked it, at least by my own choices.

When we make rules we get tested by them, and our integrity is how much we stick to what we commit to. Blaming ISO 27001 or NIST isn't going to help. Blaming the application interface isn't going to help. Caving in to the convenience, but added insecurity, of a shortcut, whether a centralised password policy or a single sign-on authentication tool, would be easy, but I'd specifically set out a policy for those accounts - an act of will for myself to follow. Setting this kind of boundary should feel not too far from the logic of setting boundaries for others: just put yourself in their shoes, because soon enough you will be in them.

If you like, while you're in those shoes, revise the rules so that everything would be perfect for you, and make a note - but don't implement the changes yet (otherwise you'll flip-flop). Take them back to the context where you're setting the rules and re-negotiate with yourself.

We'd call something like this an "active" security policy, and lots of policy frameworks strive for this sort of thing. But in practice the feedback loop rarely closes, and so the model rarely changes much. That happens either because we are not honest with ourselves or because we hide the relevant evidence, often unwittingly.

I think that all programmers must eat their own dogfood via debugging, or negotiation with a compiler as some might frame it. You have a clear thing in mind, what you want, and a clear model of how to instruct a machine to achieve it. If those instructions fail, you revise and re-issue orders. Programming is not so much about knowledge of any language, or the logical skill to imagine complex structures; it's about dogged determination to see the results you want. To make the machine do as it's told.
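As a toy illustration of that loop - entirely my own invention, in Python, not from any particular codebase:

  # Toy illustration of the write/fail/revise loop.
  # First attempt: I want the sum of all prices except the last one.
  def total_except_last(prices):
      return sum(prices[0:len(prices) - 2])   # bug: drops TWO items, not one

  print(total_except_last([1, 2, 3]))         # prints 1; I wanted 3 - revise!

  # Re-issued orders: the machine did exactly what I said, not what I meant.
  def total_except_last(prices):
      return sum(prices[:-1])                 # drop only the last item

  print(total_except_last([1, 2, 3]))         # prints 3 - the result I wanted

The machine never disobeys; it only ever follows the orders we actually gave. Debugging is the act of bringing those orders in line with our intent.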

Back to security: given the number of smartphone settings and apps which attempt to enforce limited use, and their low effectiveness, it seems that failure to uphold a difficult principle when technology tempts us with convenience is commonplace.

A huge part of the problem is "creep". This is something specifically modern and unusual. Until very recently people set up systems and they stayed the same. You set your thermostat to 20 degrees and your house stayed warm enough for your comfort. Now we have "smart" things. We have unattended software updates pushed at our systems through the network. We have highly predatory companies who change their rules secretly in a constant game of "enshittification". Thus the machine is always re-negotiating with you, always boundary-pushing and always wanting you to lower your shields and give more of your trust away for free. The system always wants you to "compromise" a little more.

But if you regard a computer as a tool, then insofar as the interface and operating system fail to consistently enforce your long-term wishes, they are defective.

As I wrote in Digital Vegan, with technology as with food, you are what you eat. Best to choose good wholesome information, because it shapes what you become. Eating our own food is a healthy way to live, nourished from our own kitchen and allotment garden. Self-sufficiency and self-care remain strong cultures around the world. The same goes for the rules we set for ourselves. If we do not uphold our own will through vigilance and effort, authoritarians (or their electronic agents) will choose for us.

When we are invested in something, understand it, and advocate for it, that changes our motivation structure. We are not "alienated" from our interests. Daniel Pink wrote an interesting book called "Drive" that explores this fundamental difference between intrinsic and extrinsic motives. It is highly relevant in computer security:

  • good security cannot be imposed.
  • security is distinct from protection.
  • so security is more an attribute of the operator than of the system.
  • it is, in part, the operator's attention to self-care.

In the "Industrialised West" we struggle to get the balance right between inter-dependency and autonomy, and authority. From the premise that industrialised society is very complex we wrongly conclude that self-determination is therefore impossible. We become enmeshed in co-dependent commercial relations. In this regard digital technology is a disaster area. We know far too little about our devices and rely too much on strangers and opaque systems.

It's hard work cooking delicious, healthy things for ourselves, so why not just buy them from the supermarket?

So we buy tinned dog-food. "Someone Else's Policy" is a popular tinned brand name. Online we call supermarkets "platforms".

Even before platforms, canned policy, and regulatory compliance by numbers, there was the vague term "best practice". By definition "best practice" does not exist, because one can always do better. "Temporarily Good Practice" would be a more humble offering, but we live in a pathologically hyperbolic world. We also live in a fast-changing world. It turns out that best practice today is simply bad practice tomorrow. Password policies later revised by NIST, systems for enumerating badness, and centralised push updates, as in the CrowdStrike and SolarWinds incidents, are examples.

It is also clear that we do not live in a one-size-fits-all world. "Identity politics" is a proxy struggle against the crushing of individualism by mechanical normativity in a technological society. Platforms, particularly BigTech platforms, attempt to impose "solutions" which are wholly inappropriate for some, if not actually the majority of, real-world use cases. Their dominance is due to their financial and political power, not the capability of their products or services. The real technological power of BigTech is to define the lens through which we see the world, set our expectations for privacy and quality very low, and put limits on what we can do. Specifically they are most keen to limit anything that might compete with their dominance, and so BigTech is a serious brake on progress and innovation.

Let's try to understand why much of what we can buy in is poor-quality fat, ground bones, hooves and eyelids swept from the abattoir floor and mixed with high-fructose corn syrup to sweeten it.

Selling functional services is basically the same across industries: you have to sell a perception of comparative quality. My favourite restaurateur sells me the idea that I can't cook as well as he can. And he's right, I can't. I enjoy trying. So my wife and I sometimes eat there, but most of the time we look after ourselves. Similarly, my accountant takes responsibility for filing government forms and reminding me of stuff. But I need to retain a command of my financial life. In professional relationships, as in our technology, we may delegate something away, but not entirely. We make someone an agent, and in doing so we place limited trust in them. We benefit from having more time for the things we're good at and enjoy.

Increasingly we are simply abrogating responsibility, dumping whole swathes of our lives onto mechanical services that promise to take care of everything. Though we inhabit a highly developed technical society, it is one with a very poor human economy. It stretches people beyond their means. Nobody has time to attend to all the things they need to do and also make the money they need to live. Mothers and fathers cannot take care of their children because they are too "busy". Hopefully it does not need emphasising that this is neither a healthy state of affairs, nor is it sustainable. Often we'd actually do better managing things ourselves, if only to retain a vestigial skill and oversight.

A good place to start reclaiming authority in your life is with the devices you use to navigate the world. Everyone needs the computers they use to be secure. But the security industry is presently misconfigured. It sells me the idea that I'm not very good at security. It wants to fill that gap with functional products. Despite the platforms being highly treacherous, they want to force people to trust them. Security vendors are too close to the incumbent monopolies and fiefdoms. Instead they should be antagonistic. We need security FROM the Facebooks, Googles and Amazons of this world. We also need security FROM over-intrusive governments. At present each of these pretends to be a foil for the other. In practice they feed off each other to rob you of as much autonomy and responsibility as they can - all so tragically in the name of "helping you".

A different approach is to sell capability as knowledge, by education. "Principles over products". That is an approach that makes the security business more like the DIY industry. It sells me the idea, "Sure, I can do this!" Why? Because in the end the only thing we can call "security" that's really worth having is whatever you take responsibility for yourself. Otherwise you have "protection and dependency".

In other words security is a state of being, a practice not a product, and it's part of the regular life of the gentleman (and lady) on the Clapham Omnibus. Occasionally on the Cybershow we interview people on the street. Those vox pops show us a changing landscape in civil cybersecurity culture.

So long as the defaults are basically safe, an average person trusts their technology mainly in proportion to their trust in themselves as a competent operator and maintainer. People also believe in having powerful independent authorities to go to for protection and justice when merchants misbehave. People broadly support laws that constrain BigTech.

Though the average person is incapable of fully understanding their technology, people quite rightly do not trust vendors, governments, agencies or anyone else to micromanage their technological affairs. We expect broad, loose rules and a large margin of freedom. This is the space within which we can choose to exercise our freedom to be secure, to be responsible, to cook and eat our own dogfood. That includes the responsibility to make mistakes.

Where people are forced to believe they "have no choice", from that point of coercion they no longer feel like active participants in a liberal democratic economy, but mere "consumers" who can, in legitimate shrugging resignation, disown their civic responsibility. (I will refrain from quoting once again that essential passage from Alexis de Tocqueville.)

In the market, trust is eroded if vendors deliberately make products insecure so they can sell "security" as an add-on. This obvious game is rife and begs for regulation to enforce a strong partition of interests. Safety, or at least shelter, is what we get from regulation, but it also comes from people taking personal responsibility with tech and making strong, clear demands. If we want to avoid more regulation, people need to acknowledge and stand up for their Tech Rights - ownership, repair, access to knowledge (source code) and so on.

There are so many things we don't want to be doing, and having others help us is how we've made sense of one another since the industrial revolution. For inter-connected modern society to prosper we need to trust one another. But that starts with self-trust: knowing what we want, setting limits and mastering our own way.

Only then do we know what things we're good at, and what we can rely on others for. However, we are constantly urged to simply abdicate responsibility and hand over the keys of our kingdom to someone else.

How often do we go into a hairdresser or fast-food bar and say "Oh, you decide!" ? When did you last make a major purchase like a car or a long vacation, and just say "Hey I don't mind, pick whatever you think is best" ? The entire edifice of market capitalism rests on the fundamental assumption that our hard work buys us choices.

Why then, when it comes to the machinery right at the heart of our lives, do we shrug our shoulders and basically say "Yeah. Whatever."? It is partly cultural. We are told constantly that we are "too stupid for technology", yet at the same time that we "must rely on it". That seems like a very thinly veiled pitch from people who want to control your life.

But for the industry itself, it is extremely harmful where it is over-prescriptive and gets over-involved. Selling insecurity seems an obvious, cheap move. However, luring people into a state of not thinking disempowers them, and that destroys the industry from under our feet. Without understanding there is no meaningful demand. The only sales pitch left is fear and coercion.

Have a look at your personal digital self-defence, or your company infosec posture, and ask: When given the choice do I choose a difficult but correct path which I understand, or a convenient, mysterious option that hands over responsibility somehow?

Is just clicking an "OK" button, instead of dismissing the whole site, often your way forward? Do you opt out of 'intellectual stress' by simply not reading "terms and conditions"? Do you use an "AI" to write a personal letter or report even though it feels like cheating to you? You won't have to look far to see that modern digital life is a constant struggle with temptation.

We use all kinds of justifications:

  • I just need to get this job done
  • Just this once and I won't do it again
  • Everybody else does…
  • I already did this once, so it can't hurt

In a world of machines and companies hostile to your security, plenty of practical security comes from having stamina and integrity, as well as skill in balancing self-care with trust. That's a life-stance where you have to find the sweet spot in your own heart, not by listening to others. If you know even a few basic principles of cyber safety you'll start to see how they help. But one of the most powerful tools is determination. It all starts with eating your own dogfood: making the machine do what you want, what you said you'd like it to do, and making it stick to that. It's a thing we call programming. Oh, and debugging - that's eating your own dogfood too.


Copyright © Cyber Show (C|S), 2025. All Rights Reserved.


Author: Dr. Andy Farnell

Created: 2025-01-15 Wed 09:52
