One thing we know about software design is that the future is important. However, we also know that the future is very hard to predict.
I think that I have come up with a way to explain exactly how hard it is to predict the future of software. The most basic version of this theory is:
The accuracy of future predictions decreases relative to the complexity of the system and the distance into the future you are trying to predict.
As your system becomes more and more complex, you can predict smaller and smaller pieces of the future with any accuracy. As it becomes simpler, you can predict further and further into the future with accuracy.
For example, it’s fairly easy to predict the behavior of a “Hello, World” program quite far into the future. It will, most likely, continue to print “Hello, World” when you run it. Remember that this is a sliding scale: sort of a probability of how much you can say about what the future holds. You could be 99% sure that it will still work the same way two days from now, but there is still that 1% chance that it won’t.
However, after a certain point, even the behavior of “Hello, World” becomes unpredictable. For example, here’s “Hello, World” in Python 2.0, in the year 2000:
print "Hello, World!"
But if you tried to run that in Python 3, it would be a syntax error. In Python 3 it’s:
print("Hello, World!")
You couldn’t have predicted that in the year 2000, and there isn’t even anything you could have done about it if you had. With things like this, your only hope is keeping your system simple enough that you can update it easily to use the new syntax. Not “flexible,” not “generic,” but simply simple to understand and modify.
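As an aside, here’s a small sketch of my own (not something anyone had available in the year 2000): Python 2.6 and later let you opt in to the Python 3 syntax with a single import, so the same tiny program runs unchanged under both versions:

from __future__ import print_function
print("Hello, World!")

A bridge like that only helps because the program was trivially simple to begin with, which is exactly the point: the simpler the code, the easier it is to carry it across a change you never saw coming.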
In reality, there’s a more expanded logical sequence to the rule above:
- The difficulty of predicting the future increases relative to the total amount of change that occurs in the system and its environment across the future one is attempting to predict. (Note that the effect of the environment is inversely proportional to its logical distance from the system.)
- The amount of change a system will undergo is relative to the total complexity of that system.
- Thus: the rate at which prediction becomes difficult increases relative to the complexity of the system whose behavior one is attempting to predict.
Now, despite this rule, I want to caution you against basing design decisions around what you think will happen in the future. Remember that all of these happenings are probabilities and that any amount of prediction includes the ability to be wrong. When we look only at the present, the data that we have, and the software system that we have now, we are much more likely to make a correct decision than if we try to predict where our software is going in the future. Most mistakes in software design result from assuming that you will need to do something (or never do something) in the future.
This rule is most useful when you have some piece of software that you can’t easily change as time goes on. You can never completely avoid change, but if you simplify your software down to the level of being stupid, dumb simple, then you’re less likely to have to change it. It will probably still degrade in quality and usefulness over time (because you aren’t changing it to cope with the demands of the changing environment), but it will degrade more slowly than if it were very complex.
It’s true that, ideally, we’d be able to update our software whenever we like. This is one of the great promises of the web: that we can update our web apps and web sites instantaneously without having to ask anybody to “upgrade.” But this isn’t true for all platforms. Sometimes we need to create some piece of code (like an API) that will have to stick around for a decade or more with very little change. In that case, if we want it to still be useful far into the future, our only hope is to simplify. Otherwise, we’re building in an unpleasant future experience for our users, and dooming our systems to obsolescence, failure, and chaos.
The funny part about all this is that writing simple software usually takes less work than writing complex software does. It sometimes requires a bit more thought, but usually less time and effort overall. So let’s take a win for ourselves, a win for our users, and a win for the future, and keep it as simple as we reasonably can.
-Max