No, John, software really *does* evolve

John Haren, of CodeSnipers.com, recently blogged about something I feel pretty strongly about:

> There’s a common trope in CS education that goes something like this: “All software evolves, so be prepared for it.”
>
> Far be it from me to imply that one shouldn’t be able to respond to change; that’s not my intention. But the idea expressed above contains a flaw: software does not evolve.
>
> Duh, John… everyone knows that software changes. Features creep. Scope broadens. New platforms and whizbangs are targeted. Get with it!
>
> I concede the obvious: of course software changes. But repeat after me: software does not evolve. Because change != evolution.
>
> Evolution is a blind, natural process; the result of random mutations in an organism. Now it may just so happen that the result of the mutation is beneficial to the reproductive success of the organism, meaning we’d expect to see creatures with such a trait outperform others without it. That’s how traits are selected for. In the overwhelming majority of cases, mutations are detrimental, and they don’t stick around for long (since there are many, many more ways of being dead than alive).
>
> Now in order to say that software evolves, you’d have to accept that your development process goes something like this: Developer opens a file at random, positions the cursor at random, punches a few keys at random. Developer then recompiles and sees what happens, hoping for the staggering luck that the resulting change actually does improve the software, and everybody loves it, so they buy it, and you’d expect to see more of it.
>
> Okay, insert joke here about how your development process seems that way from time to time.
>
> Jocularity aside, there’s more at issue here than a flawed analogy. Of more significance is the type of thinking it can engender. Nothing “just happens” in software. Whatever it is, somebody made it happen. Someone decided. They may very well have decided in error, but they decided. They decided “well, let’s just try and fit that feature in; it shouldn’t cause too many problems if it goes out only 70% tested… if it breaks, we’ll deal with it then.” Or they think “yeah, a talking paperclip… why not?” In other words, magical thinking. Don’t do that.
>
> And CS departments should stop teaching that. Let them stress peopleware instead.
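For fun, the blind-mutation-and-selection process John parodies can actually be written down as a program. Here’s a toy Python sketch (the `add` function, the `survives` fitness test, and the one-keystroke mutation scheme are all my own inventions for illustration):

```python
import random
import string

# A tiny "codebase": the source of one working function.
source = "def add(a, b):\n    return a + b\n"

def survives(src):
    """A mutant survives only if it still compiles and still behaves correctly."""
    env = {}
    try:
        exec(src, env)
        return env["add"](2, 3) == 5
    except Exception:
        return False

def mutate(src):
    """Blind mutation: put the cursor somewhere at random, punch a random key."""
    i = random.randrange(len(src))
    return src[:i] + random.choice(string.printable) + src[i + 1:]

random.seed(1)
trials = 10_000
alive = sum(survives(mutate(source)) for _ in range(trials))
print(f"{alive} of {trials} random one-keystroke mutations survived")
```

Run it and the overwhelming majority of mutants die on the spot, which is exactly John’s point about there being many more ways of being dead than alive.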

His presumption here, which may seem fair at first, is that all evolution is basically random. And, frankly, that’s not entirely without truth. But the randomness he sees in the system is different from the randomness I see: it’s the users who bring the randomness into the system.

Look, how many times has a user told you, “We need this feature”, only for you to discover six months after shipping it that nobody really used it, but that it in turn pointed at a different problem, one you ended up solving for them with a new feature? See, the software itself doesn’t evolve randomly, but the users’ interactions with the software do. That’s evolution, that’s healthy, and that’s how software evolves.

In short, it’s recognizing that the users are part of the system, too, part of the organism that makes up this bizarre and wonderful world we live in, and their input is often exactly that: random. Which is probably why it’s so important to have the on-site customer, as agile development recommends: you never know when randomness will strike and make your life better/faster/easier.