Codosaurus 🦖

How can we evolve you today?

Blog: ACRUMEN Efficiency

Despite how much we developers tend to worship it, Efficiency is in fact the least important aspect, out of the ones in the ACRUMEN definition!  It used to be much more important, back when CPU speeds were measured in kilohertz.  (Most of you probably don’t even remember speeds lower than hundreds of megahertz, and today, if your CPU isn’t running at several gigahertz, plus having at least two cores, it’s considered obsolete.)

So what does it mean, in ACRUMEN terms, for a piece of software to be Efficient?

The short answer is that it’s easy on resources.  So what kind of “resources” are we talking about?  What leaps immediately to most people’s minds, whether developers or users or whatever, would be the CPU.  (Okay, non-technical users might not say CPU, but it’s what they usually mean, or at least think they mean, when they speak of speed.)  But there are many other technical resources we should go easy on.  Most developers would also be aware of things such as memory, I/O bandwidth, and disk space.  Another one annoyingly wasted lately, in the world of web-apps, is screen space.  But there are many non-technical resources we should be aware of as well, such as the user’s brainpower and stamina, and the company’s money.  (Yes, this overlaps quite a lot with Usability, and other concerns beyond the scope of ACRUMEN.)

So how can we ensure that our software is Efficient?  There are many different ways, appropriate for different resources.  Most of the time, though, when we complain that software is inefficient, we mean “it’s slow”.  The cause of slowness is usually either something architectural, which is more complex than I want to delve into in this post, or an inefficient algorithm.

So, more specifically, how can we ensure that our algorithms are Efficient?  The usual approach is to stare at the code, spot where we think it’s inefficient, spend a lot of time optimizing the proverbial snot out of that piece, run the program again, and find that the program is . . . still slow.

Don’t do that!  Measure it instead!  Humans aren’t really very good at spotting the inefficiencies, but there are profilers and packet analyzers and so on, that will tell us exactly where (or at least when) we’re using too much CPU, RAM, bandwidth, etc.
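For instance, in Python, the standard library’s cProfile module will do this measuring for us.  Here’s a minimal sketch (the build_report function is a hypothetical stand-in for whatever code we suspect is slow, not anything from a real project):

```python
import cProfile
import pstats

# Hypothetical stand-in for the code we suspect is slow.
def build_report(n):
    lines = []
    for i in range(n):
        lines.append(str(i) * 10)
    return "\n".join(lines)

profiler = cProfile.Profile()
profiler.enable()
build_report(100_000)
profiler.disable()

# Show the five functions where the most total time was spent.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)
```

The printout tells us exactly which functions ate the time, and how many times each was called, so we can aim our optimization effort where it will actually pay off.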

Once we find where (or when) it’s slow, there’s still the question of why.  Different kinds of systems are susceptible to different causes of slowness.  For instance, a distributed system may be doing too much communication (too many messages, or they’re too large), or using a slow network.  A database-driven system may have an inefficient data model or query.  And so on.  But usually it boils down to a bad algorithm.  There’s no simple easy fix, but it really helps if we get familiar with the common basic data structures and algorithms, so that we can recognize which ones fit the problem at hand.
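To make the “bad algorithm” case concrete, here’s a classic illustration (my example, not one from any particular project): checking a collection for duplicates.  The two versions below give the same answer, but the first does quadratic work while the second is linear, thanks to picking a better data structure:

```python
def has_duplicates_slow(items):
    # O(n^2): for each element, scan the rest of the list.
    for i, x in enumerate(items):
        if x in items[i + 1:]:
            return True
    return False

def has_duplicates_fast(items):
    # O(n): set membership checks are (amortized) constant time.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

On a list of a few elements the difference is invisible; on a list of a million, it’s the difference between milliseconds and hours.  That’s why knowing the basic data structures matters more than micro-tweaking any one line.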

If we’re lucky, we can select one that has a ready-made implementation in some library or framework, one that is known to be efficient enough, has stood the test of time, and might even be well-tested in the software sense.  Then we don’t have to waste the resource of development time (and thus the company’s money) reinventing that wheel.
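For example, in Python, rather than hand-rolling a priority queue, we can lean on the standard library’s heapq module, which provides a well-tested binary heap (the task list here is just illustrative data):

```python
import heapq

# Illustrative (priority, name) pairs; heapq orders by the first element.
tasks = [(3, "low"), (1, "urgent"), (2, "normal")]

heapq.heapify(tasks)          # O(n) heap construction, in place
first = heapq.heappop(tasks)  # O(log n) removal of the smallest priority
```

One import and two calls buy us the right asymptotic behavior, zero debugging of our own heap code, and development time saved for problems that actually need it.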