Technological improvement follows a survival-of-the-fittest framework, and as it turns out, agile is a pretty good approximation of evolution. Agile purposefully constrains iterations to two-to-six-week periods: enough time to do quality work, but not enough to get mired in endless decision-making. This makes it, in effect, an unstoppable agent of change, for better and for worse. The underlying beauty of it is that if a change fails or falls short, a newly proposed solution is only a few weeks away.
So many parallels can be drawn between software design and public policy. Each starts with a relatively simple vision followed by layers of complexity. Each has founders or idealists who communicate their broad vision and invite others in to implement it. But the processes diverge starkly when a software application expands versus when public policy does. What causes this? Why are we able to apply agile principles to software but not to society? What are we missing? Is it even possible, or does human nature preclude it? Humans use software, and we can use the agile method to make their experience better. Humans are subject to governmental policies throughout their daily lives. Why can’t we apply agile principles to policymaking?
As I thought about writing this article, this point really jumped out at me. Finding a parallel for it in the software world is difficult, though that hasn’t always been true. A decade ago, for example, it would have been easy to get into a heated argument about Windows vs. Unix vs. OS X. Today, we have differences in databases, programming languages, and even programming styles or frameworks within languages. What is interesting about today’s toolsets, though, is that most seasoned developers realize you can build just about any application with any combination of software tools, and the end user would likely never know the difference. We could certainly argue that some technologies are easier to maintain or scale, but my point is that, as a rule, we can (and do) build amazing software on a broad array of tools chosen according to our individual ideologies.
Something else I’ve noticed is that our philosophical differences in the software world seem to get diluted as the number of choices increases. Unix and its many variants put a nail in the coffin of the Mac vs. Windows debate. We have dozens of programming languages, each with dozens of frameworks. We are free to choose from so many options that no one camp can claim to be the authority. By comparison, our political world offers very few choices. We are religious, spiritual, or not. We are straight or not. We are rich, middle class, or poor. We are incarcerated or not. We have such limited choice in the political world that it essentially forces us to choose sides. We even mold ourselves to the side that approximates our ideology because the alternatives are so incompatible.
This section is difficult to communicate from the perspective of software development. There are some parallels, but I’ll admit it’s a stretch. When we write software, we are fine-tuning the contract. In some cases, we are fixing bugs, an activity that is almost universally appreciated. In other cases, we are either adding features or modifying the user experience. Beyond the initial architectural decisions, these actions require the most diligent communication and are subject to the most scrutiny.
Social contracts are problematic in some very specific ways. First, they only really work when all stakeholders have a similar level of power and influence. Social contracts define the minimum set of rules that we can all agree to. Even those that on the surface seem to garner universal appeal quickly head down a nuanced rabbit hole. We can all agree that killing another person should be against the law, right? What if they killed your loved one? What if the target is a fetus? What if it was self-defense? Social contracts are, by their very nature, a set of rules that target the lowest common denominator: what rules can we all agree to that don’t infringe on our own unique belief systems?
Adding features spawns many questions: Is this outside the scope of what our platform is supposed to do? How many people will actually use this feature? Is feature X as important as feature Y? Can we integrate with some other piece of software to perform this feature rather than building it ourselves? A new feature modifies the social contract and, depending on its nature, has the power to make the contract unrecognizable. There are endless examples of products pivoting into a business model that bears little resemblance to the original social contract. The laws in our country have pivoted in similar ways: women vote; slavery was abolished. Adding features to social contracts relies on a genuine use case and demand.
Modifying the user experience is often just as difficult as adding a new feature. The engineering usually isn’t hard, but the preparation and execution can be hazardous. Those comfortable with the existing experience will be put off by the change, so preparation and education are required prior to release. Planning must include understanding the potential objections to the new experience and blunting their validity in advance. The Jim Crow era could be seen as a political representation of a user experience that needed change: although the Civil War legally abolished slavery, society systematically created a facade designed to circumvent that reality.
I’m not trying to solve the problems of the world, but I am asking the question: what is keeping us as a society from checking our policies into a GitHub repo? Why can’t we iterate in an agile way toward a more perfect set of rules to govern? Our tendency is so heavily weighted toward rip-and-replace. Is this due to limited choice? If we are always ripping and replacing, how can we trust our metrics to tell us whether we are improving or regressing? Are politicians simply an incompatible personality type? Clearly, they can connect with people, but is there another role better suited to architecting policy?
One thing is certain: we need tools to facilitate policy creation and distribution. We need accountability and context. Who crafted the core articles? What was the context around their creation? How did they evolve over time? Who approved each iteration? What conversations took place throughout their history? Understanding the historical significance of a policy helps us understand why it’s necessary or why it may be obsolete. Let’s all spend more time thinking about how we can make incremental improvements to policy instead of ripping and replacing.
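The tooling for this kind of accountability already exists in the software world. As a minimal sketch, here is what tracking a single policy document in a plain git repository could look like; the file name, author, and commit message are illustrative assumptions, not real policy:

```shell
# Sketch: "policy as a versioned repo" using standard git commands.
# Every change becomes a commit with an author, a date, and a rationale.
mkdir -p policy-repo && cd policy-repo
git init -q
git config user.name "City Council"
git config user.email "council@example.gov"

# Draft the policy text as a plain file.
printf 'Quiet hours are 22:00-07:00.\n' > noise-ordinance.md

# Record who adopted it, when, and in what context.
git add noise-ordinance.md
git commit -q -m "Adopt noise ordinance (public hearing 2024-03-12)"

# Accountability and context on demand: author, date, and rationale
# for every change to this policy.
git log --pretty='%an | %ad | %s' -- noise-ordinance.md
```

Each question above maps to an existing primitive: `git log` answers who changed what and when, commit messages capture the context around each change, and a pull-request-style review would record who approved each iteration and the conversations that took place.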