Eating your own dog food
Figure 1: "Chunky Goodness"
Following a post about Gromit the Dog last week, I'm wondering: does Wallace feed Gromit a home-made recipe, prepared by a Food-O-Matic machine?
Who eats their own dog food? Does it taste okay?
Thinking about responsibility in cybersecurity, "eating your own dog food" is an expression about our level of comfort and trust in ourselves. It's about our ability to meet the standards we set, particularly those we would have others follow.
If, for example, we set a password policy, do we stick to it ourselves? Even when we have admin rights and can choose to bypass the rules? Even when it's really inconvenient, do we stick to what we preach?
For key accounts I make myself remember hard passwords and keep only an inconvenient coded record for prompting myself. It's a thing of mine. In the first week back after Christmas, of course, I'd forgotten some passwords, especially for newer accounts. So I got angry at my own password security policy. When something like this happens it's easy to start blaming other people that the dog food tastes like shit, even though I cooked it, at least by my own choices.
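To make the "no bypass" idea concrete, here's a minimal sketch in code. The rules, names and thresholds below are illustrative assumptions, not my actual policy; the point is simply that one check applies to every account, admin or not.

```python
import string

# Hypothetical house rules - the numbers here are illustrative, not prescriptive.
def meets_policy(password: str, min_length: int = 16) -> bool:
    """Return True only if the password satisfies our own stated policy."""
    if len(password) < min_length:
        return False
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    # Require characters from at least three of the four classes.
    return sum(any(c in cls for c in password) for cls in classes) >= 3

def set_password(account: str, password: str, is_admin: bool = False) -> None:
    # Eating our own dog food: admin rights grant no exemption from the check.
    if not meets_policy(password):
        raise ValueError(f"policy violation for {account!r} (admin={is_admin})")
    print(f"password accepted for {account!r}")

set_password("alice", "correct-Horse-battery-9-staple")   # passes
# set_password("root", "letmein", is_admin=True)          # raises ValueError
```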
When we make rules we get tested by them, and our integrity is how well we stick to what we commit to. Blaming ISO 27001 or NIST isn't going to help. Blaming the application interface isn't going to help. Caving in to the convenience, but added insecurity, of a shortcut - whether it be a centralised password policy or a single sign-on authentication tool - would be easy, but I'd specifically set out a policy for those accounts: an act of will for myself to adhere to, if you like. This boundary setting shouldn't feel too far from the logic of boundary making when you're doing it for others; just put yourself in their shoes, because soon enough you probably will be in them.
If you like, when you are in those shoes, revise the rules as if everything were perfect for you and make a note, but don't implement them yet (otherwise you'll flip-flop). Take the notes back to the context where you're rule-setting and re-negotiate with yourself.
We'd call something like this an "active" security policy, and lots of policy frameworks strive for this sort of thing. But in practice the feedback loop rarely closes, and so the model rarely changes much. This happens either because we are not honest with ourselves or because we hide the relevant evidence, often unwittingly.
I think that all programmers must eat their own dogfood via debugging, or negotiation with a compiler as some might frame it. You've a clear thing in mind, what you want, and you've a clear model of how to instruct a machine to achieve it. If those instructions fail, you revise and re-issue your orders.
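As a toy illustration of that loop, consider stating your intent as a test and letting the machine push back. This is a hypothetical sketch; the function and values are invented for the example.

```python
def median(values):
    """Intent: the middle value of a sequence."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    # A first draft that ignored the even case failed the second assert
    # below; this line is the revised order, re-issued after negotiation.
    return (ordered[mid - 1] + ordered[mid]) / 2

# The orders we commit to - if these fail, we revise the code, not the intent.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
print("intent and behaviour agree")
```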
Back to security: given the number of smartphone settings and apps which attempt to enforce limited use, and their low effectiveness, it seems people fail to uphold a difficult principle when technology tempts them with convenience. The machine is always renegotiating with you, however; it always wants you to lower your shields and give more of your trust away for free. To compromise a little more. If you regard a computer as a tool, then insofar as the interface and operating system fail to consistently enforce your long-term will, they are defective.
As I wrote in Digital Vegan, with technology as with food, you are what you eat. Best to choose good wholesome information, because it shapes what you become. Eating our own food is a healthy way to live, nourished from our own kitchen and allotment garden. Self-sufficiency and self-care remain a very strong culture around the world.
When we are invested in something, understand it, and advocate for it, that changes our motivation structure. We are not "alienated" from what we love. Daniel Pink wrote an interesting book called "Drive" that explores this fundamental difference between intrinsic and extrinsic motives. It is highly relevant in computer security:
- Good security cannot be imposed.
- Security is distinct from protection.
- Security is therefore more an attribute of the operator than of the system.
- It is, in part, the operator's attention to self-care.
In the "Industrialised West" we struggle to get the balance right between inter-dependency and autonomy. From the premise that industrialised society is very complex we wrongly conclude that self-determination is therefore impossible. We become enmeshed in co-dependent commercial relations. In this regard digital technology is a disaster area. We know far too little about our devices and rely too much on strangers and opaque systems.
It's hard work cooking delicious, healthy things for ourselves, so why not just buy it from the supermarket? So we buy our tinned dog food.
"Someone Else's Policy" is a popular tinned brand name. Online we call supermarkets "platforms".
Even before platforms, tinned policy, and regulatory compliance by numbers, there was the vague term "best practice". By definition "best practice" does not exist, because one can always do better. "Good practice" would be a more humble offering, but we live in a pathologically hyperbolic world. We also live in a fast-changing world. It turns out that best practice today is simply bad practice tomorrow. Password policies revised by NIST, systems for enumerating badness, and centralised push updates like "CrowdStrike" and "SolarWinds" are examples.
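To see why "enumerating badness" in particular ages badly, compare a default-allow blocklist with a default-deny allowlist. A hypothetical sketch; the hostnames and lists are invented for the example.

```python
BAD_HOSTS = {"evil.example", "malware.example"}      # must grow forever

def blocklist_allows(host: str) -> bool:
    # Default-allow: anything not yet known to be bad slips through.
    return host not in BAD_HOSTS

KNOWN_GOOD = {"intranet.example", "mail.example"}    # finite, under our control

def allowlist_allows(host: str) -> bool:
    # Default-deny: only what we explicitly trust gets through.
    return host in KNOWN_GOOD

# Yesterday's "best practice" blocklist fails on today's brand-new threat:
print(blocklist_allows("new-threat.example"))   # True  - slips through
print(allowlist_allows("new-threat.example"))   # False - denied by default
```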
It is also clear that we do not live in a one-size-fits-all world. "Identity politics" is a proxy struggle against the crushing of individualism by the mechanical normativity of technological society. Platforms, particularly Bigtech platforms, attempt to impose "solutions" which are wholly inappropriate for some, if not the majority, of real-world use cases. Their dominance is due to their financial and political power, not the capability of their products or services.
Let's try to understand why much of what we can buy in is poor-quality fat, ground bones, hooves and eyelids swept from the abattoir floor, mixed with high-fructose corn syrup to sweeten it.
Selling functional services is basically the same across industries: you have to sell a perception of comparative quality. My favourite restaurateur sells me the idea that I can't cook as well as he can. And he's right, I can't. I enjoy trying. So my wife and I sometimes eat there, but most of the time we look after ourselves. Similarly, my accountant takes responsibility for filing government forms and reminding me of stuff. But I need to retain a command of my financial life. In professional relationships, as with our technology, we may delegate something away, but not entirely. We make someone an agent, and in doing so we place limited trust in them. We benefit from having more time for the things we're good at and enjoy.
Though we inhabit a highly developed technical society, it is one with a very poor human economy. It stretches people beyond their means. Nobody has time to attend to all the things they need to do and also make the money they need to live. Mothers and fathers cannot take care of their children because they are too "busy". Hopefully it does not need emphasising that this is neither a healthy state of affairs nor a sustainable one. Often we'd actually do better managing things ourselves, if only to retain a vestigial skill and oversight.
Everyone needs the computers they use to be secure. But the security industry is presently misconfigured. It sells me the idea that I'm not very good at security. It wants to fill that gap with functional products.
A different approach is to sell capability as knowledge, by education. "Principles over products". That is an approach that makes the security business more like the DIY industry. It sells me the idea, "Sure, I can do this!" Why? Because in the end the only thing we can call "security" that's really worth having is whatever you take responsibility for yourself. Otherwise you have "protection and dependency".
In other words security is a state of being, a practice not a product, and it's part of the regular life of the gentleman (and lady) on the Clapham Omnibus. Occasionally on the Cybershow we interview people on the street. Those vox pops show us a changing landscape in civil cybersecurity culture.
So long as the defaults are basically safe, an average person trusts their technology mainly in proportion to their trust in themselves as a competent operator and maintainer. Though the average person is incapable of fully understanding their technology, people do not trust vendors, governments, agencies or anyone else to micromanage their technological affairs, except in shrugging resignation that they "have no choice". From that point of coercion, they no longer feel like active participants in a liberal democratic economy, but mere "consumers".
In the market, trust is eroded when vendors deliberately make products insecure so they can sell "security" as an add-on. This obvious game is rife, and it begs for regulation to enforce a strong partition of interests. Safety, or at least shelter, is what we get from regulations, but it also comes from people taking personal responsibility with tech and making strong, clear demands. If we want to avoid more regulation, people need to wise up and stand up for their Tech Rights - ownership, repair, access to knowledge (source code) and so on.
There are so many things we don't want to be doing. So having others help us is how we've made sense of each other as people since the industrial revolution. For inter-connected modern society to prosper we need to trust one another. But that starts with self-trust, knowing what we want, setting limits and mastering our own way.
Only then do we know what things we're good at, and those we can rely on others for. However we are constantly urged to simply abdicate responsibility and hand over the keys of our kingdom to someone else.
How often do we go into a hairdresser or fast-food bar and say "Oh, you decide!" ?
There is a negative, harmful side of the security industry that is over-prescriptive and over-involved. Selling insecurity is therefore the cheap move. Luring people into a state of not thinking disempowers them, and it destroys the industry from under our feet. Without understanding there is no meaningful demand. The only sales pitch left is fear and coercion.
Have a look at your personal digital self-defence, or your company's infosec posture, and ask: when given the choice, do I choose a difficult but correct path which I understand, or a convenient, mysterious option that somehow hands over responsibility?
Is clicking the "OK" button, instead of dismissing the whole site, often your way forward and out of 'intellectual stress'? Do you use an "AI" to write a personal letter or report even though it feels like cheating to you? You won't have to look far to see that modern digital life is a constant struggle with temptation.
In a world of machines hostile to your security, plenty of practical security comes from having stamina and integrity, as well as skill in balancing self-care with trust. That's a life-stance where you have to find the sweet spot in your own heart, not by listening to others. If you know even a few basic principles of cyber safety you'll start to see how they help. But it all starts with eating your own dogfood - making the machine do what you want, what you said you'd like it to do, and making it stick to that. It's a thing we call programming. Oh, and debugging; that's eating your own dogfood.