Update (April 18, 2014): provided clearer attribution to cloverlimes for the ideas on realities.
This blog post was inspired by a combination of factors. In part, it comes from my association with cloverlimes1, with whom I’ve had many great discussions. The ideas on individual and collective realities are hers; I elaborate on them below. In part, this post is inspired by the seemingly common belief in the tech community that there are people who Don’t Make Mistakes™. Attitudes like:
Your conclusion that it is unsafe is your opinion. It might be unsafe in the hands of a newbie developer. Not me. – Twitter Talk
So here we go: let’s smash a myth!
When I speak of realities, I mean something very specific. I mean: what is it that you perceive? What does society or a particular group hold as a collective perception? These two notions are respectively the individual reality and the collective reality. Allow me to elaborate.
Everyone holds an individual reality. This is an accumulation of all the experiences, filters, biases, and glitches in our human systems that produce perception. It colors how we think, how we see, how we associate, and how we communicate. By definition, it also means that no other person can fully grasp the extent of another individual’s reality. Hold on to this corollary.
The collective reality is a more relative term. It refers to a consensus reached based on multiple individual perceptions. It can be applied to groups of different sizes: the collective reality of all humanity, the collective reality of a group of friends, the collective reality of a couple. It is an amalgamation of experiences, data, and knowledge on a particular subject. It is necessarily incomplete and consensus is rarely perfect. Consider as an example: the collective reality of all humanity on global climate change.
Let’s step away from perception for a bit and delve into technology.
Making mistakes is nothing to be ashamed of. Mistakes will be made. Managing the complexity of any sizable system is no small feat. Considering the myriad state changes, the interplay between data formats and shapes, and the possibility of hardware faults, it is a wonder that we can make computers do what they do today.
To claim that one does not make mistakes is at best folly, and at worst toxic. Pretending to be infallible affects the self: to not err is to avoid and resist learning, which is a danger to self-improvement. Insisting that one makes no mistakes also affects society. It leads to hero worship, unrealistic expectations, and all the ills that entails. Future generations will try to fill these roles of perfection and suffer for it.
There are healthy ways to handle making mistakes.
This brings me to an interesting point: how can we leverage technology to mitigate our propensity to err, keeping in mind that achieving a consistent collective reality is impossible? Before I share some thoughts, let me paint a scenario.
You’ve been working on a project as the sole developer for a few weeks. Things have been stressful, and you’re fortunate that the hours remain somewhat reasonable. You’re adding features, debugging, refactoring, and doing well to keep up with demands. It’s exhilarating. Things feel great when they work.
The project grows in scope, and some of the assumptions change. A new developer is hired, and they take some time to ramp up on what you wrote.
I make no assumptions about how that goes; it varies. In this choose-your-own-adventure, you could help the newcomer or keep doing your own thing, but every ending is the same in one important regard: together you’re building a collective reality of this code base.
All of the assumptions that you made while building this project must now mesh with the newcomer. That’s not an easy consensus problem. It also doesn’t help that you may have forgotten some of the assumptions you’ve made. I know I’ve forgotten many things about projects I worked on just a month ago (logging prefixes, exceptional cases, configuration parameters, how to handle that one installation error on a platform I don’t use, etc.).
So can we leverage technology to make this a little easier? I contend that we can. We can embed many of our assumptions into a program such that it fails to compile if those assumptions are violated. This requires a language with a modern type system; Haskell, Scala, F#, OCaml, SML, and Idris immediately spring to mind.
What does that buy us?
It buys us something that is there for everyone, all the time, new or experienced, and that helps propagate individual realities: the compiler. By embedding our assumptions in a type system, we embed portions of our individual reality. So what happens next? If that embedding is thorough enough, the compiler will reject any violation of those assumptions at compile time. Given enough of that, we’ve got quite a helper in guiding the collective reality toward a consistent view.
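To make this concrete, here is a minimal sketch of the idea in Rust (whose type system supports the same technique as the languages named above). The scenario, names, and sanitization rule are all invented for illustration: the assumption “this input has been sanitized” is encoded as a distinct type, so handing raw input to a function that requires sanitized input is a compile-time error rather than a runtime surprise.

```rust
// Hypothetical example: encode the assumption "this string has been
// sanitized" in the type system instead of in a comment or a wiki page.

struct RawInput(String);  // input straight from the outside world
struct Sanitized(String); // can only be produced by `sanitize`

fn sanitize(raw: RawInput) -> Sanitized {
    // Invented rule for illustration: keep only alphanumeric characters.
    Sanitized(raw.0.chars().filter(|c| c.is_alphanumeric()).collect())
}

fn store(input: &Sanitized) -> usize {
    // This function's signature documents — and enforces — the assumption.
    input.0.len()
}

fn main() {
    let raw = RawInput(String::from("hello; drop table--"));
    // store(&raw); // would not compile: expected `&Sanitized`, found `&RawInput`
    let clean = sanitize(raw);
    println!("{}", store(&clean)); // prints 14
}
```

The new developer does not need to remember that `store` expects sanitized input; the compiler tells them the moment they forget.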
Yes, there are limits, though those limits are pushed further away each day. Yes, there are costs to this approach, but they are no higher than the cost of writing thorough test suites, and likely lower. Further, type-embedded proofs can be iteratively tightened - just like regression suites.
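A sketch of what that iterative tightening might look like, again in Rust; the domain (ages and discounts) and the bounds are invented for illustration. The first version checks an invariant at each use site; the tightened version moves the invariant into a type with a smart constructor, so it is established once and then carried by the compiler.

```rust
// Step 1: the invariant "age is plausible" is re-checked (or forgotten)
// at every call site.
fn discount_v1(age: i32) -> i32 {
    if age >= 65 { 50 } else { 0 }
}

// Step 2 (tightened): the invariant lives in the type. `u32` already
// rules out negative ages; the constructor enforces an upper bound.
#[derive(Clone, Copy)]
struct Age(u32);

impl Age {
    // Hypothetical bound of 150, chosen only for the sketch.
    fn new(years: u32) -> Option<Age> {
        if years <= 150 { Some(Age(years)) } else { None }
    }
}

fn discount_v2(age: Age) -> u32 {
    // No validity check needed here: an `Age` is valid by construction.
    if age.0 >= 65 { 50 } else { 0 }
}

fn main() {
    assert_eq!(discount_v1(70), 50);
    match Age::new(70) {
        Some(age) => println!("discount: {}%", discount_v2(age)), // prints 50%
        None => println!("not a plausible age"),
    }
}
```

Each tightening retires a whole class of runtime checks, much as a new regression test retires a class of reintroduced bugs.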
Safety is not a matter of opinion. It also doesn’t help to disparage or dismiss the inherent difficulty of programming by blaming the newcomer.
Remember, everyone makes mistakes:
I well remember when this realization first came on me with full force. The EDSAC was on the top floor of the building and the tape-punching and editing equipment one floor below. […] It was on one of my journeys between the EDSAC room and the punching equipment that, hesitating at the angles of [the] stairs, the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs. – Maurice Wilkes
I close with a question: why not leverage technology to make our lives easier?
Cloverlimes: spouse, partner, friend↩