Actually, you should optimize, just not in the wrong places or for the wrong reasons. I’ll get back to that in a minute.
I recently released a small XNA-based game together with my friends at Badgerpunch Games, and I have been following the indie game development community through forums and Twitter. Game developers are very concerned about performance, mostly for good reason: no one wants a game with a choppy frame rate. Because of all this worrying, an array of optimization tips and articles gets passed around to alleviate the problem. The majority of them are informative and useful, but hardly any touch on the main issue of optimization: when not to optimize, and why not.
The thing about optimization is that you can almost always optimize code further, but the ratio of time spent to pay-off quickly goes bad. I remember when I was active in the Amiga demo scene back in the early nineties. I had a 3D rotation assembly routine that I spent probably half a year optimizing, and it ended up about as optimized as was possible. The first few weeks I was cutting away CPU cycles at a fantastic rate! The last couple of months I didn’t really save any cycles, and in the end I gave up. My routine was super-fast, but still, other coders had 3D graphics that were faster, and I couldn’t understand how that was possible.
I found out how it was possible a few years later, when learning about matrices in college. My routine, with 9 multiplications per 3D point, was an unoptimized matrix multiply, and could be shortened to 6 multiplications and two adds, saving hundreds of cycles per point. Man, that pissed me off!
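The original routine was Amiga assembly and is long gone, but as a sketch of the arithmetic involved, here is the straightforward 3×3 matrix form in Java (the class and method names are mine, not from the original): 9 multiplications and 6 additions per point, exactly the count mentioned above.

```java
// Rotating a 3D point with a 3x3 matrix the straightforward way:
// 9 multiplications and 6 additions per point.
public class Rotate3D {
    // Applies a 3x3 rotation matrix m (row-major) to point p.
    static double[] rotate(double[][] m, double[] p) {
        return new double[] {
            m[0][0] * p[0] + m[0][1] * p[1] + m[0][2] * p[2],
            m[1][0] * p[0] + m[1][1] * p[1] + m[1][2] * p[2],
            m[2][0] * p[0] + m[2][1] * p[1] + m[2][2] * p[2],
        };
    }

    // Builds a rotation of `angle` radians around the Z axis.
    static double[][] rotZ(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[][] {
            { c, -s, 0 },
            { s,  c, 0 },
            { 0,  0, 1 },
        };
    }
}
```

The point of the story is not this particular code, but that the better algebra beats the better hand-tuning.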
The moral of that story? You can optimize your code till it shines like a star, but if someone else has a better algorithm that does the same job faster, then you still lose.
Or do you lose? Only if it matters. In the story above, optimizing 3D rotation on a limited 16-bit machine where the fastest routine makes the most leet programmer, it mattered a lot. 😉
Which brings us back to the beginning. Don’t optimize – if it doesn’t matter. The important thing is keeping your code simple, easy to read and easy to change! When your code has these three properties it doesn’t matter that it isn’t optimized.
If the code is too slow, pick up a profiler and find out where you should optimize. Sometimes you don’t need a profiler, but you *always* need to base optimization on real data. When you have identified the problem areas, fix them in the simplest way possible and measure the effect. Eventually you should reach the point where the code does what it should with acceptable performance. And if it doesn’t, you might need to change the algorithm your code is based on. That is one reason to keep code simple and easy to change.
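Sometimes a crude measurement is enough to supply that real data. A minimal sketch in Java, with made-up names; a real profiler gives far better information, but even this beats guessing:

```java
// A crude way to get real numbers before optimizing: time the code you
// suspect. Average over several runs to smooth out noise.
public class Timing {
    // Runs `work` repeatedly and returns the average time per run in nanoseconds.
    static long averageNanos(Runnable work, int runs) {
        work.run(); // warm-up run so the JIT has seen the code once
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            work.run();
        }
        return (System.nanoTime() - start) / runs;
    }
}
```

Measure, change one thing, measure again; never trust a hunch about where the time goes.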
The main reason to keep the code simple, easy to read and easy to change is that finding bugs is an act of reading and changing code. The easier it is to read, the easier it is to fix. It’s a no-brainer, and still people insist on making things as complicated as possible just to stroke their own egos! I once saw a piece of Java code with recursive multi-layered inline if-statements, one of the worst examples of unintended code-mangling I’ve seen. And of course it was full of bugs… it almost made me cry.
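As an illustration of the difference, assuming the “inline if-statements” were nested ternary expressions (the sign() example here is made up, not the code from that Java project):

```java
// The same decision written twice: once as nested inline ifs (ternaries),
// once as plain if/else. Both return the same answer; only one of them
// is pleasant to read and debug.
public class Readability {
    // Hard to read: nested ternary expressions.
    static int signTerse(int n) {
        return n < 0 ? -1 : n > 0 ? 1 : 0;
    }

    // Easy to read: the same logic spelled out.
    static int signClear(int n) {
        if (n < 0) {
            return -1;
        } else if (n > 0) {
            return 1;
        }
        return 0;
    }
}
```

One nesting level is survivable; the code I saw kept going, and every added layer multiplies the places a bug can hide.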
Another reason to keep code simple is that it tells the compiler, in the simplest way, what your intent is. A compiler has a better chance of optimizing simple code. And if you are running under a virtual machine with a JIT compiler, it is even more important. With virtual machines and on-the-fly compilation, your code might run on a plethora of different versions of the VM. Most likely, the newer the VM and the simpler your code, the better the chance that it will be optimized when run.
The early Java Virtual Machine versions did little compiling and no optimization, so tricks like backwards-counting for-loops were an early way to save a few cycles. But the latest versions compile and optimize on the fly, and they optimize the most common for-loop variations. If you mangled your for-loop to get it running a few cycles faster on an early VM, your code might actually run slower today than the common variant would. Performance is a moving target on virtual machines, and code trickery might not pay off in the long run.
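For the curious, the trick looked something like this (the summing code is illustrative, not from any particular codebase): counting down to zero so the loop test becomes a cheaper compare-against-zero. On a modern JIT, both variants are recognized and optimized, so the readable one is the safe bet.

```java
// Forward vs backwards-counting loops: identical results, and on a
// modern JVM typically identical performance.
public class Loops {
    // The common, readable variant.
    static long sumForward(int[] a) {
        long total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
        }
        return total;
    }

    // The "optimized" backwards variant from the early-JVM days.
    static long sumBackward(int[] a) {
        long total = 0;
        for (int i = a.length - 1; i >= 0; i--) {
            total += a[i];
        }
        return total;
    }
}
```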
It all boils down to this: don’t optimize until you know what to optimize. That does not mean you shouldn’t think about performance. Performance should be a consideration when choosing algorithms, designing and implementing, but your main focus should be keeping the code simple, easy to read, and easy to change.
Interesting read, reminds me of when I started to program a pong clone a few years ago, making the ball move pixel by pixel by adding and subtracting. Did you try optimizing anything in your code?
I get so tired of these posts.
You went from optimization to “keep code simple, easy to use, and easy to change”. The very act of optimization is making assumptions based on usage patterns and coupling your code tightly to the hardware. Optimized code is not simple, easy to use, or easy to change. If it were really possible to do both very often, the entire point of your blog post would be invalid. In other words, if you could do all those things and optimize, then you would just optimize.
But you can’t, and so your blog post was created.
@Anon:
I agree that optimizing often makes code more complicated. That is why I am advocating keeping the code as simple and clean as possible, and then optimizing where it matters.