How to manage accidental complexity in software projects?

Practice makes better software.


Stack Exchange
This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

davidk01 asks:

When Murray Gell-Mann was asked how Richard Feynman managed to solve so many hard problems Gell-Mann responded that Feynman had an algorithm:

  • Write down the problem.
  • Think real hard.
  • Write down the solution.

Gell-Mann was trying to explain that Feynman was a different kind of problem solver and there were no insights to be gained from studying his methods. I kinda feel the same way about managing complexity in medium/large software projects. The people who are good are just inherently good at it and somehow manage to layer and stack various abstractions to make the whole thing manageable without introducing any extraneous cruft.

So is the Feynman algorithm the only way to manage accidental complexity or are there actual methods that software engineers can consistently apply to tame accidental complexity?

Look for a better move

Karl Bielefeldt answers (78 votes):

When you see a good move, look for a better one.

—Emanuel Lasker, world chess champion for 27 years

In my experience, the biggest driver of accidental complexity is programmers sticking with the first draft, just because it happens to work. This is something we can learn from our English composition classes. They build in time to go through several drafts in their assignments, incorporating teacher feedback. Programming classes, for some reason, don’t.

There are books full of concrete and objective ways to recognize, articulate, and fix suboptimal code: Clean Code, Working Effectively with Legacy Code, and many others. Many programmers are familiar with these techniques, but don’t always take the time to apply them. They are perfectly capable of reducing accidental complexity; they just haven’t made it a habit to try.

Part of the problem is we don’t often see the intermediate complexity of other people’s code, unless it has gone through peer review at an early stage. Clean code looks like it was easy to write, when in fact it usually involves several drafts. You write the best way that comes into your head at first, notice unnecessary complexities the code introduces, then “look for a better move” and refactor to remove those complexities. Then you keep on “looking for a better move” until you are unable to find one.
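As a hypothetical sketch of that drafting loop (the function and its task are invented for illustration), here is a first draft that works but carries accidental complexity, followed by the revision that “looking for a better move” might produce:

```python
# First draft: correct, but with accidental complexity --
# manual index bookkeeping and a redundant "found" flag.
def longest_word(words):
    longest = ""
    found = False
    i = 0
    while i < len(words):
        if len(words[i]) > len(longest):
            longest = words[i]
            found = True
        i = i + 1
    if found:
        return longest
    return None

# Second draft: the flag and the index bookkeeping disappear,
# and the intent is stated directly. Same behavior, less cruft.
def longest_word_revised(words):
    return max(words, key=len, default=None)

print(longest_word(["ant", "beetle", "bee"]))          # beetle
print(longest_word_revised(["ant", "beetle", "bee"]))  # beetle
```

Nothing about the first draft is wrong; the point is that the second draft only appears once you take the time to look for it.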

However, you don’t put the code out for review until after all that churn, so externally it looks like it may as well have been a Feynman-like process. You have a tendency to think you can’t write it all in one chunk like that, so you don’t bother trying. But the truth is the author of that beautifully simple code you just read usually can’t write it all in one chunk either — or if they can, it’s only because they have written similar code many times before and can now see the pattern without the intermediate stages. Either way, you can’t avoid the drafts.

Related: “How do you manage a complexity jump?”

Keep practicing

Kilian Foth answers (39 votes):

“Software architecture skill cannot be taught” is a widespread fallacy.

It is easy to understand why many people believe it: those who are good at it want to believe they’re mystically special, and those who aren’t want to believe that it’s not their fault that they aren’t. It is nevertheless wrong; the skill is just somewhat more practice-intensive than other software skills (e.g., understanding loops, dealing with pointers, etc.)

I firmly believe that constructing large systems is susceptible to repeated practice and learning from experience in the same way that becoming a great musician or public speaker is: a minimum amount of talent is a precondition, but it’s not a depressingly huge minimum that is out of reach of most practitioners.

Dealing with complexity is a skill you acquire largely by trying and failing a few times. It’s just that the many general guidelines that the community has discovered for programming in the large (use layers, fight duplication wherever it rears its head, adhere religiously to 0/1/infinity…) are not as obviously correct and necessary to a beginner until they actually do program something that is large. Until you have actually been bitten by duplication that caused problems only months later, you simply cannot understand the importance of such principles.
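A minimal, hypothetical sketch of the duplication principle (the handler names and the normalization rule are invented for illustration): the same rule pasted into two places is exactly the kind of thing that bites only months later, when one copy is changed and the other is forgotten.

```python
# Before: the same normalization rule copy-pasted into two handlers.
# Change it in one place but not the other, and the bug surfaces
# much later, far from where it was introduced.
def handle_signup(raw_email):
    email = raw_email.strip().lower()
    return {"action": "signup", "email": email}

def handle_login(raw_email):
    email = raw_email.strip().lower()
    return {"action": "login", "email": email}

# After: one named home for the rule -- "fight duplication wherever
# it rears its head." A future change now happens exactly once.
def normalize_email(raw_email):
    return raw_email.strip().lower()

def handle_signup_v2(raw_email):
    return {"action": "signup", "email": normalize_email(raw_email)}

def handle_login_v2(raw_email):
    return {"action": "login", "email": normalize_email(raw_email)}
```

The guideline reads as obvious here precisely because the example is tiny; in a large program the two copies sit in different files, which is why the principle only sinks in after it has bitten you.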

View the original post. See more Q&A like this at Programmers, a question and answer site for professional programmers interested in conceptual questions about software development. If you’ve got your own programming problem that requires a solution, log in to Programmers and ask a question (it’s free).