Profiling to Improve Application Performance
Guessing at what makes a program slow is hit or miss at best. Because it can be difficult to understand what makes software applications slow, developers invented profiling tools.
Assist's profiling tool is an extension of the debugger. It measures the time taken to execute each line of code in a program.
When to Optimize
It is a common mistake to try to build performance into a program up front. Some kinds of software may seem to require this, especially machine control, games, and other time-sensitive applications. In practice it is usually better to write code in as clear and understandable a style as possible and to optimize later, because optimizations usually make the code less clear and harder to maintain.
"Make it work. Make it right. Make it fast"
Make it work - First make the software work. What else could be more important? If it doesn't work, the second and third points are meaningless, right?
Make it right - After it works, take a good hard look at it and clean it up so that it is easy to read. This will help you maintain the program months or years later, and if it is clear and easy to read it will also be easier to add new features without accidentally breaking things.
Make it fast - Once the software is properly built you will have a better idea which parts really need to be fast. Many optimizations make code uglier and more complicated, so don't optimize every single thing in your programs, only the things which really are too slow. In some programs you won't need to optimize anything; on the flip side, some programs will require you to fight for every ounce of speed you can get.
A Profiling Example
Let's try the profiler out on SIEVE2.BAS, one of the small example programs that comes with Liberty BASIC.
Start Liberty BASIC and open SIEVE2.BAS as shown.
The SIEVE2.BAS example
First run the program. It has code to show how long its calculations take to run. Take note of the result. Over several runs on a Pentium 3 PC running at 500MHz, the time averaged approximately 810 milliseconds.
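The timing code in SIEVE2.BAS relies on Liberty BASIC's time$("ms") form, which returns the number of milliseconds elapsed since midnight. A minimal sketch of this kind of measurement (the variable names here are illustrative, not taken from SIEVE2.BAS) looks like this:

```
    ' record the start time in milliseconds since midnight
    startTime = time$("ms")

    ' ... run the calculation being measured ...

    ' subtract to get the elapsed time
    elapsed = time$("ms") - startTime
    print "Elapsed time: "; elapsed; " milliseconds"
```

Note that because the value wraps around at midnight, a measurement that spans midnight would report a misleading result.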
Okay, now let's start the debugger using the Run+Debug menu (or type Alt+F5). By default the debugger will not profile code, so we need to click on the Profiler checkbox as shown.
Click on the Profiler checkbox
Activating the profiler makes some room on the left for elapsed time totals for each line. Now let's get some time measurements by clicking on the Resume button
and letting the program run to completion. This will take a little longer than running the program normally, since the profiling process slows things down while it collects profiling information.
Done collecting execution profile information
Now we see some numbers to the left of the program code. These are measurements in milliseconds (thousandths of a second). Lines with 0 next to them do take some time to execute, but the total time spent on each was less than a millisecond. The total shown here amounts to only a few milliseconds, so it's clear that the real time expense is incurred further down in the program.
Now we'll scroll down to where the action takes place. The program is very short, so it's hard to miss.
The sieve() function
Now we're getting somewhere. To target the slow spots we need to look for the biggest numbers. What jumps out right away is that this loop is the slowest part:
    while k <= size
        flags(k) = 1
        k = k + prime
    wend
Together, the WHILE and WEND statements account for 334 milliseconds, which is close to half the time spent executing this program. We can substitute a loop built from IF/THEN statements and branch labels for the WHILE/WEND loop, and this should save us some time. We need to rewrite the function and rerun the profiler to see if our theory is correct. Let's close the debugger and edit the function so that it looks like this:
    for i = 0 to size
        if flags(i) = 0 then
            prime = i + i + 3
            k = i + prime
            if k > size then [skip]
[loopBack]
            flags(k) = 1
            k = k + prime
            if k <= size then [loopBack]
[skip]
            sieve = sieve + 1
        end if
    next i
New execution numbers
It looks like we got a pretty good result from our code change. The code for the WHILE/WEND loop consumed 334 milliseconds, but the two IF/THEN statements we substituted consumed 9 milliseconds and 118 milliseconds respectively, for a total of 127 milliseconds. That shaves off more than 200 milliseconds!
Running our modified program straight from the Liberty BASIC editor, we get a reported execution time of 611 milliseconds versus 810 milliseconds for our unoptimized version. This is nearly a 25% reduction in running time!
We gained in performance, but look at the code. First of all, there's more code now. The longer a program gets, the harder it will be for us to understand and maintain later. Also, more complicated code needs more comments to explain what it is doing, which makes it longer still.
So was this optimization worth it? We'll have to make that judgement case by case. For example, it looks like we could probably gain a little more speed by replacing the FOR/NEXT loop with a similar IF/THEN optimization, but the gain would be small. If the only way to measure the difference in performance is the user's perception of speed, then this next optimization probably isn't worth doing.
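If we did pursue that further optimization, the idea would be the same as before: replace the loop construct with IF/THEN tests and branch labels. A hypothetical sketch (the label name is made up for illustration, and the loop body is elided):

```
    i = 0
[nextCandidate]
    ' ... body of the former FOR/NEXT loop goes here ...
    i = i + 1
    if i <= size then [nextCandidate]
```

Only profiling would tell us whether this version actually beats FOR/NEXT by enough to justify the extra label and counter bookkeeping.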
Sometimes a program must absolutely run as fast as possible and so every optimization must be pursued. In this case, do not forget to test the effect of the optimization using the profiler to make sure it really does make the software run faster.
Also, whenever we optimize a program we need to be careful that our new code doesn't break things that we carefully made to work. A program that runs faster but incorrectly is not the desired result.
Profile more than once - Variations in runtime performance are expected when using software on multitasking systems and when using languages that have automatic memory management (like most versions of BASIC). Because of this, if an optimization seems to produce only a very small difference in performance, it can help to run it in the profiler two or three times and compare the results of each run. This may confirm whether the optimization is really accomplishing anything. If it isn't, then the best policy is to stick with the most readable code.
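The same idea applies to a program's own timing code: running the measurement several times and averaging smooths out run-to-run variation. A minimal sketch in Liberty BASIC (variable names are illustrative, and time$("ms") is assumed to return milliseconds since midnight):

```
    total = 0
    runs = 3
    for r = 1 to runs
        startTime = time$("ms")
        ' ... run the calculation being measured ...
        total = total + (time$("ms") - startTime)
    next r
    print "Average time: "; total / runs; " milliseconds"
```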
Activate profiling later - In some cases the profiler makes a program run so slowly that it takes a long time to reach the code section in question. Here it can help to start in the debugger without activating the Profiler checkbox until just before the code you want to profile runs. To do this it can help to put a breakpoint in your program right before that code. Some program features don't need the breakpoint because they can be triggered from a menu or other user action.
Try the program on a slow machine - When software is destined for use on many kinds of computers both new and old, it can be useful to have an older, slower machine to try it on. This way it will be easier to detect if something in the software is slow. Only using the software on a new, fast computer may hide performance issues.
Copyright 1992-2009 Shoptalk Systems