Does Autonomy Lead to Greater Clarity?

Mark Burgess is a theoretician and practitioner in the area of information systems, whose work has focused largely on distributed information infrastructure. He is known particularly for his work on Configuration Management and Promise Theory. He was the principal founder of CFEngine and is emeritus professor of Network and System Administration at Oslo University College. He is the author of numerous books, articles, and papers on topics ranging from physics and Network and System Administration to fiction. His new book, Thinking in Promises, is the topic of the next two weeks on the Business901 podcast.

An excerpt from the podcast:

Joe: You make a statement in your book that ‘autonomy leads to greater clarity’, and I think it’s interesting based on this discussion. Can you explain that statement?

Mark Burgess: I think it’s the central point of promises, where Promise Theory really started from. It’s this atomic idea that brings the chemistry of intent, of cooperation, into focus. What it does is hone your focus onto what’s right in front of you. In physics, we’d call this a local theory, and what it means in Promise Theory is that agents can only promise their own outcomes, their own behavior. You can’t make a promise on behalf of somebody who isn’t you. In physics, we’d call that non-local, because it’s not where you are; it’s somewhere else, and theories of obligation are non-local theories. With a remote control in front of the TV, you’re trying to oblige your remote, device, object, person, whatever, to behave in a certain way. We tend to imagine that these things simply must do as they’re told.

We all know that people, machines, devices, even mechanisms we design to do as they’re told, don’t always do as they’re told. We can’t keep their promises on their behalf because we’re not in control of all of their circumstances. They are surrounded by environments that are intertwined and complex, as we call them these days: many influences, information-rich scenarios where we don’t know everything about what’s going to go on.

We can’t make that prediction. Promise Theory tries to focus on the things that we can know with greater certainty. Autonomy leads to greater clarity because it focuses you on the things that you know can be delivered. Each agent makes its own promises and is therefore in control of its own faculties; it has knowledge of its own state, its own information, without having to go to some other party and rely on that.
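The locality constraint Mark describes, that an agent can only promise its own behavior and never someone else's, can be sketched in a few lines of Python. This is a hypothetical illustration (the `Agent` class and its methods are invented for this sketch, not part of any Promise Theory library):

```python
# Minimal sketch of Promise Theory's locality principle.
# Agent, promise, and oblige are hypothetical names for illustration.

class Agent:
    def __init__(self, name):
        self.name = name
        self.promises = []  # promises this agent has made about itself

    def promise(self, body):
        # Local: the agent records a promise about its OWN behavior.
        self.promises.append(body)
        return body

    def oblige(self, other, body):
        # Non-local: an attempt to commit ANOTHER agent to a behavior.
        # In Promise Theory terms this is an obligation, not a valid
        # promise, so the sketch rejects it outright.
        raise PermissionError(
            f"{self.name} cannot promise '{body}' on behalf of {other.name}"
        )

server = Agent("server")
client = Agent("client")

server.promise("respond within 200 ms")  # valid: a local promise

try:
    client.oblige(server, "respond within 200 ms")  # invalid: non-local
except PermissionError as exc:
    print(exc)
```

The design choice mirrors the point of the passage: the only state an agent can truthfully commit to is its own, so only `promise` (self-directed) succeeds, while `oblige` (other-directed) fails by construction.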

In a way, it’s about minimizing the dependencies on things around you that you’re not controlling. With control thinking, obligation thinking, make-it-so thinking, you can stamp your foot and shout and scream, and you still can’t make these things do what you want. But by turning things around and thinking in terms of cooperative promises, you’re maximizing, in a sense, the likelihood of success, because you’re basing your estimates on information which is local, current, quite well known, and straight from the horse’s mouth.