Saturday, June 10, 2006

Empirical Performance

Everyone tells you to do performance tuning last. Why?

Because you can make performance tuning empirical. If you've written your tests first, you can change the code to make it faster without breaking it.

This process works for me:

0) Prioritize and Instrument

Record how long things take in your app. This can be as simple as paper and pencil, or as sophisticated as an automated performance test suite.

Look at what is slow. Where would you get your biggest bang for the buck? What would make your users the happiest? Where will the load be in your system (back of envelope math works well here)?

Are your users having trouble with the GUI, or are the calculations taking too long? Which page? Help them.
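Sketched in code, this kind of lightweight recording might look like the following. The `TimingLog` class and `record` helper are names I've made up for illustration; they're not from any library.

```java
// Minimal instrumentation sketch: record how long named operations take.
// TimingLog and record() are hypothetical names, not a real library.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TimingLog {
    private static final Map<String, Long> totals = new LinkedHashMap<>();

    // Run an operation, accumulate its wall-clock time under a name.
    public static <T> T record(String name, Supplier<T> op) {
        long start = System.nanoTime();
        try {
            return op.get();
        } finally {
            totals.merge(name, System.nanoTime() - start, Long::sum);
        }
    }

    // Print the accumulated timings, one line per named operation.
    public static void dump() {
        totals.forEach((name, nanos) ->
            System.out.printf("%-20s %8.2f ms%n", name, nanos / 1_000_000.0));
    }

    public static void main(String[] args) {
        int sum = record("calculation", () -> {
            int s = 0;
            for (int i = 0; i < 1_000_000; i++) s += i;
            return s;
        });
        dump();
        System.out.println("result = " + sum);
    }
}
```

Even something this crude tells you where the time goes, which is all step 0 needs.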

1) Investigation and Hypothesis

You need to understand what the application is doing and how it does it. Use a profiler such as JProfiler; Eclipse has a free but slow one (TPTP).

Look at time spent in methods - can you reduce the time?
Look at who is allocating objects - can you reduce the number of objects?
Look at number of calls to a method - does it make sense?

Hypothesize where you can save time, and make a rough guess at the proportion of time it will save.

Example: Alan and I found a method was called 20x more often than we expected. It was a bug in the code that was doing unnecessary calculations.
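The call-count check can be sketched with a simple counter. The names here are illustrative (a profiler reports call counts for free), but the idea is the same: compare how often a method actually runs against how often you expected it to.

```java
// Hypothetical call counter for sanity-checking how often a method runs.
// expensiveCalculation is a stand-in for the real work.
import java.util.concurrent.atomic.AtomicLong;

public class CallCounter {
    private static final AtomicLong calls = new AtomicLong();

    static double expensiveCalculation(double x) {
        calls.incrementAndGet();
        return Math.sqrt(x);  // stand-in for the real calculation
    }

    static long count() {
        return calls.get();
    }

    public static void main(String[] args) {
        double total = 0;
        for (int i = 0; i < 100; i++) {
            total += expensiveCalculation(i);
        }
        // If you expected ~5 calls and see 100, you've found a bug worth chasing.
        System.out.println("calls = " + count() + ", total = " + total);
    }
}
```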

2) Experiment

Do actual timings on the code you are running (I use println-like utilities). You can't use a profiler for this because of dilation effects (e.g. the OS and database are not being slowed down, only your Java code, so the profiler may be lying to you).
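A minimal println-style timing harness along these lines (the `timeMillis` name is mine, not the post's) measures the real code path, OS and database included, with plain wall-clock time:

```java
// Println-style wall-clock timing: measure the real code path rather than
// trusting profiler numbers, which instrumentation overhead can distort.
public class WallClockTiming {
    // Time a block of code with System.nanoTime and print the result.
    static long timeMillis(String label, Runnable code) {
        long start = System.nanoTime();
        code.run();
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + ": " + ms + " ms");
        return ms;
    }

    public static void main(String[] args) {
        timeMillis("busy loop", () -> {
            double acc = 0;
            for (int i = 1; i <= 2_000_000; i++) acc += 1.0 / i;
        });
    }
}
```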

Do back of envelope calculations to see if what you're doing makes sense.

Calculate: Theoretical maximum you can save, and likely amount you can save.

For example: assume you find a method you can double in speed. Wow, that seems like a lot of savings. So let's say this method takes 10 seconds out of a 100-second process.

Now let's assume you speed it up so it takes zero time:

Theoretical maximum: 10/100 = 10%. Will that satisfy your users?
Likely savings (doubling the method's speed): 5/100 = 5%. Will your users notice?
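The arithmetic above can be captured in a small helper. `savedFraction` is a hypothetical name; it just computes what fraction of the total you save when one part gets faster:

```java
// Back-of-envelope savings math from the example: a 10 s method in a
// 100 s process, doubled in speed (likely) or reduced to zero (the limit).
public class SavingsEstimate {
    // Fraction of total time saved when a part taking partSecs out of
    // totalSecs is sped up by factor (factor = infinity -> part drops to 0).
    static double savedFraction(double totalSecs, double partSecs, double factor) {
        double newPart = partSecs / factor;
        return (partSecs - newPart) / totalSecs;
    }

    public static void main(String[] args) {
        double theoreticalMax = savedFraction(100, 10, Double.POSITIVE_INFINITY);
        double likely = savedFraction(100, 10, 2);
        System.out.printf("theoretical max: %.0f%%, likely: %.0f%%%n",
                theoreticalMax * 100, likely * 100);
    }
}
```

This is Amdahl's law in miniature: the part's share of the total caps what any local speedup can buy you.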

3) Conclusion

Is it worth doing?

If you know your users need a lot more speed, 5% won't cut it; you'll have to go back to investigating and finding where to improve.

Otherwise implement the change and then measure your actual speed gain.

Empirical performance improvements: I find them worth the wait.


Blogger Vladimir Levin said...

Does everyone really say that performance tuning should be saved for last? I understand this argument to a certain extent: when people try to optimize every line, it leads to much more complicated code, and it may not provide much of a performance benefit in the long run. I can see how it could even hurt performance in some ways, since a bad algorithm or design can be obscured by all sorts of minor localized tweaks. However, leaving performance tuning until late in the project strikes me as being similar to doing all of the data conversion after the project has been "completed." I've been thinking about the idea of *continuous* performance monitoring. Set up some performance tests and whenever they fail, take the time to find out why and to boost the performance back to acceptable levels. That way there is never a situation where the performance is so awful that the users complain about it. Also, it prevents design problems from being pervasive to the point where it's very hard to improve the performance.

2:05 p.m.  
