
11 Mar 2011

Complexity and state

Simplicity

Ever noticed how we're going through a simplicity boom right now? Apple, one of the most profitable companies in the tech world, is renowned for its simple, minimal products. Good software is often described as simple. Some evidence of that was provided by Marco Arment, who compiled a list of the most common words used in five-star reviews in Apple's App Store, with "simple" near the top of the list.

This is not a coincidence or just a passing trend. If you're a computer veteran, you probably know how unreliable computers and their software can be, and how comfortable we've made ourselves with that fact. Your application crashed? Oh, don't worry, it happens; just launch it again. Why is that? Why is it so hard to build reliable software?

Why we lost it

In 1987, F.P. Brooks wrote the influential article "No Silver Bullet", where he divided the difficulties of building software into essential and accidental, and identified four essential properties: complexity, conformity, changeability and invisibility. In particular, he argued that complexity is an essential property of software, not an accidental one.

Brooks's article caused quite a stir at the time, with its gloomy conclusion that we shouldn't expect any great advances in software engineering productivity in the near future. The 23 years since the paper's publication seem to have proved his point. Things have improved a lot, but we're not orders of magnitude more productive than in 1987, and we're still building software as unreliable and buggy as Windows 2.0.

Fortunately, not everyone agrees with Brooks, and many ideas to boost reliability and productivity have come out since. Ben Moseley and Peter Marks, in their 2006 paper "Out of the Tar Pit", proposed that of the four items identified by Brooks only complexity is relevant, adding:

Complexity is the root cause of the vast majority of problems with software today. Unreliability, late delivery, lack of security — often even poor performance in large-scale systems can all be seen as deriving ultimately from unmanageable complexity.

We need to reduce complexity to build reliable systems, but how can we do that? Tell us, Ben:

The single biggest remaining cause of complexity in most contemporary large systems is state, and the more we can do to limit and manage state, the better.

Now I've finally arrived at the main point of this post: complexity is bad, and to reduce complexity we need to limit state as much as we can. And yet this is not the direction software architecture has taken in the last few decades. One of the culprits here is the use of Object-Oriented Programming (OOP) as almost the sole paradigm for programming languages and architecture.

Managing state is hard. Ask any competent C programmer and he will tell you that state leads to anger, anger leads to hate and hate leads to suffering. That's why we have OOP. Object orientation is an abstraction that's supposed to help you manage state by hiding it behind objects, exposing methods to act on that data. OOP has been hugely successful at this task, but this success becomes detrimental when people lose sight of other programming techniques.

State leads to anger. Anger leads to hate. Hate leads to suffering.

While OOP does help to manage and hide state, there's a terrible side effect: adding state to your system suddenly becomes easy, too easy. Large systems maintained by many programmers quickly develop so much state that in no time it becomes impossible to reason about how the system works. Encapsulation acts like the proverbial carpet under which all your dirt gets swept.
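Here's a minimal Scala sketch (my own illustration; the Account names are invented) of the difference between hiding state behind an object and removing it entirely:

    // A mutable "encapsulated" account: the state is hidden, but it is still there,
    // and every method call can change what the next call will observe.
    class Account(private var balance: Int) {
      def deposit(amount: Int): Unit = { balance += amount }
      def current: Int = balance
    }

    // An immutable alternative: each operation returns a new value, so there is
    // no hidden state to keep track of between calls.
    case class AccountV(balance: Int) {
      def deposit(amount: Int): AccountV = AccountV(balance + amount)
    }

    object StateDemo {
      def main(args: Array[String]): Unit = {
        val a = new Account(100)
        a.deposit(50)                          // the answer to a.current just changed underneath us
        println(a.current)                     // 150

        val v  = AccountV(100)
        val v2 = v.deposit(50)                 // v is untouched; v2 is a new value
        println(v.balance + " " + v2.balance)  // 100 150
      }
    }

The mutable version is perfectly encapsulated, yet what a.current returns still depends on every call that came before it; the immutable version has nothing to track.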

Taking it back

So everyone is doing OOP and I'm arguing that this is bad. What else can we do? My answer is Functional Programming (FP). With roots in the lambda calculus, functional languages have been around since the late 1950s, starting with LISP. Functional languages turn the tables: instead of trying to manage state, they try to do away with it completely. Functional systems are defined not by state, but by a series of functions and their application.
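To see what "a series of functions and their application" looks like in practice, here's a tiny sketch in Scala (my choice of example language, nothing more): the same total computed with a mutable accumulator and then as a single fold.

    object Totals {
      // Imperative: an explicit mutable accumulator that every iteration updates.
      def totalImperative(prices: List[Double]): Double = {
        var sum = 0.0
        for (p <- prices) sum += p
        sum
      }

      // Functional: the same result as one function application, with no mutable state.
      def totalFunctional(prices: List[Double]): Double =
        prices.foldLeft(0.0)(_ + _)
    }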

The problem with functional programming today is that it's too far removed from the everyday practice of most programmers. Undergrads hear about FP only in passing, if that much, and once they start their jobs, Java and C# are all they care about. You can't hope to teach someone like that to think in functions overnight. Or can you?

The programming landscape has been changing over the last couple of years, slowly at first but rapidly gaining steam. Languages like Ruby, fully object oriented but with lots of functional constructs, have warmed people up to the idea of functional programming. C# added lambda expressions in 3.0, followed by dynamic binding in 4.0. Groovy has them too, and is winning a lot of mindshare among JVM programmers. Scala positions itself as the perfect balance between object-oriented and functional programming, and even Java itself might get some functional constructs soon, in the next decade or two.

Finally, after 15 years of object-orientation dominance, the programming community as a whole is recognizing that it's not the silver bullet many thought it to be, and that adopting ideas from half a century ago might actually be our best step forward.

Merging object orientation with functional programming might lead to a "best of both worlds" situation, where we can apply all the experience acquired over the last few decades to throw away bad practices and develop new, simpler and more efficient techniques.
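A hybrid sketch along those lines might look like this; the Order type and its fields are made up for the example, but the shape is the point: object-oriented packaging of data and behaviour, with every operation returning a new value.

    // Object-oriented packaging: data and behaviour live together...
    case class Order(id: Int, amount: Double, cancelled: Boolean) {
      // ...but "changing" an order returns a new value instead of mutating this one.
      def cancel: Order = copy(cancelled = true)
    }

    object Report {
      def main(args: Array[String]): Unit = {
        val orders = List(Order(1, 20.0, cancelled = false), Order(2, 35.0, cancelled = false))

        // Functional combinators over plain objects: no loops, no mutable totals.
        val openTotal = orders.filterNot(_.cancelled).map(_.amount).sum
        println(openTotal) // 55.0
      }
    }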

The trap here is that none of this guarantees we'll tame complexity. We'll have more tools, sure, but we still need to be conscious of complexity and always on the lookout for ways to avoid it. In the coming weeks I'll write a series of posts on how we can use these hybrid languages to do exactly that.

10 Sep 2010

How Much Software Testing is Too Much?

I've seen many, many essays about how much testing should be enough. The most recent comes from Ruben Ortega on BLOG@CACM, where he argues that developers should write tests for basic behaviour first, and then extend the test suite with regression tests as bugs emerge.

Is that right? It's not the first time I've heard this advice, and I've actually seen test suites written so faithfully to it that all the thousands of tests in the suite are exactly that: basic, and useless.

Testing is useless

At least, it can be. The problem is that testing can tell you what will happen to your system for a certain set of inputs, but it tells you nothing about the others. E.W. Dijkstra explored this using a simple multiplication operation as an example, and showed that it would take impossibly long to test all possible inputs to it. That's why tests never cover all possible inputs, just a hopefully representative sample of them. But how do we pick that sample? Dijkstra says:

Sampling testing is hopelessly inadequate to convince ourselves of the correctness even of a simple piece of equipment as a multiplier: whole classes of in some sense critical cases can and will be missed! (...)  Testing can be used very effectively to show the presence of bugs but never to show their absence.
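To put rough numbers on Dijkstra's multiplier example (my own back-of-the-envelope, assuming a 32-bit multiplier and a wildly optimistic billion tests per second):

    object ExhaustiveTesting {
      def main(args: Array[String]): Unit = {
        val inputPairs  = math.pow(2, 64)            // every pair of 32-bit operands
        val testsPerSec = 1e9                        // a very optimistic billion tests per second
        val years       = inputPairs / testsPerSec / (365.25 * 24 * 3600)
        println(f"$years%.0f years")                 // roughly 585 years
      }
    }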

What this means is that you can write a million tests and still miss the one that would have caught the bug that crashed the production server and forced you to work the whole weekend. No amount of testing will ever guarantee that your system works correctly, so why bother?

Because no matter how woefully inadequate, testing is one of the few things we can do right now to help us deliver (more) reliable systems. We can't test everything, but we can hope to test the bits that actually matter.

It's not the size that matters

This is true for many things in life, including your test suite. What matters is not how many tests are in your suite, but the quality of those tests. In an ideal world you would have infinite time to write tests covering every possible input to your system, and infinite funds to pay developers for those hours. Unfortunately, it doesn't work like that.

So which tests should you write? Ortega says you should start by writing tests for your basic behaviour, but doesn't elaborate on what he means by basic. Here's my take on it: first, write tests for everything that's specified. This means different things at different testing levels.

At the unit testing level, this means testing against interfaces, and even then only those that sit at the edge of a conceptual layer or reusable software component. These layers or components form a specification between the many parts of your system, and as such should be tested. Don't bother testing the internal workings of a larger component; test only its public façade.
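As a sketch of what that means, assume a made-up PriceCalculator component (the names and the 17% tax rate are invented for illustration); the test exercises the public trait and never touches the private helpers behind it.

    // The public façade of a (made-up) reusable component...
    trait PriceCalculator {
      def totalWithTax(net: Double): Double
    }

    class DefaultPriceCalculator extends PriceCalculator {
      private def taxRate: Double = 0.17           // internal detail, not tested directly
      def totalWithTax(net: Double): Double = net * (1 + taxRate)
    }

    // ...and a unit test that exercises only that façade.
    object PriceCalculatorTest {
      def main(args: Array[String]): Unit = {
        val calc: PriceCalculator = new DefaultPriceCalculator
        assert(math.abs(calc.totalWithTax(100.0) - 117.0) < 0.001)
        println("public façade behaves as specified")
      }
    }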

At the integration testing level, this means testing that your system behaves as specified. Here you should read your functional specifications and try to distill them into something that can be automated. For example, if you have a spec that says a record should be disabled when the user presses the Delete button, write a test that simulates a click on that button and then verifies that the relevant record was indeed marked as disabled. Don't get carried away and test that the record values are still the same, that the last-updated date was changed, or anything else not specified. These tests might be relevant, but remember: you don't have infinite time to write them all, so stick to what's important to the user, and that's the specification.
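A hedged sketch of such a test, with RecordScreen and pressDelete as hypothetical stand-ins for the real UI driver (a real integration test would click the actual button, not an in-memory model):

    // A tiny in-memory stand-in for the system under test.
    case class Record(id: Int, disabled: Boolean)

    class RecordScreen(initial: Map[Int, Record]) {
      private var records = initial

      // What the hypothetical Delete button is specified to do.
      def pressDelete(id: Int): Unit =
        records = records.updated(id, records(id).copy(disabled = true))

      def record(id: Int): Record = records(id)
    }

    object DeleteButtonTest {
      def main(args: Array[String]): Unit = {
        val screen = new RecordScreen(Map(42 -> Record(42, disabled = false)))

        screen.pressDelete(42)

        // Assert exactly what the specification promises, and nothing more.
        assert(screen.record(42).disabled)
        println("record 42 is marked as disabled")
      }
    }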

As we get to higher-level testing it becomes easier to determine what should be tested, as system tests are almost always written with the user in mind and are therefore perfectly relevant. It's usually only in unit and integration testing that developers get carried away and write thousands of useless tests.

Also, please, by all means: every time a bug is found, write a test case that triggers it before fixing it. That's not optional. The whole problem of testing is that we don't know which inputs will cause the system to fail, and thus what we should test. Tests for known bugs are therefore the most valuable and relevant, and you should never miss the chance to write a regression test after a bug has been found. If you hate testing and don't want to write a single test case, swallow at least this one bitter pill and write regression tests.
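For example, suppose (purely hypothetically) the reported bug was that averaging an empty list produced NaN and broke a report downstream; the regression test pins that exact input so the bug can't quietly return.

    // Hypothetical bug: average(Nil) used to return NaN; the isEmpty guard is the fix.
    object Stats {
      def average(xs: List[Double]): Double =
        if (xs.isEmpty) 0.0
        else xs.sum / xs.size
    }

    // The regression test triggers the exact input from the bug report.
    object AverageRegressionTest {
      def main(args: Array[String]): Unit = {
        assert(Stats.average(Nil) == 0.0)
        println("empty-list average no longer misbehaves")
      }
    }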

The root cause

There's one thing I've completely omitted from this post, and that's the root cause of bugs, what makes software unreliable and forces us to lose so much time writing tests. I believe this cause is complexity. Dijkstra wrote:

In the past ten, fifteen years, the power of commonly available computers has increased by a factor of a thousand. The ambition of society to apply these wonderful pieces of equipment has grown in proportion and the poor programmer, with his duties in this field of tension between equipment and goals, finds his task exploded in size, scope and sophistication. And the poor programmer just has not caught up.

And that was in 1971; imagine how many times the power of computers has increased since then. But this whole talk about complexity is for another post. For now, just remember to keep your tests simple: if you can't understand your tests, you sure as hell can't understand your system.