...even if they don’t seem to consume too much time. I have finally found some ‘time slots’ to assign to code profiling activities, so I decided to take advantage of the dotTrace license that was kindly awarded to me by JetBrains (thanks again to JetBrains for supporting DotNetMarche and our workshops). I’m a newbie when it comes to profiling, so I’ll just share my experience and show how everyone can take advantage of these tools to improve the performance, and sometimes the quality, of their code.

In this post I’ll show you a very basic usage of the dotTrace profiler, but even this basic usage helped me a lot in finding some bad spots in my code.

Let’s kick off by starting the profiler. I’m lazy, so I’ll just use the Visual Studio integration and the menu ‘dotTrace -> Profile the Startup Project’; it will rebuild the project and show you the following options dialog:


Figure 1 - dotTrace startup dialog.

I’m crazy and I want very high precision and accuracy, so I set the profiling type to ‘Line-By-Line’ and asked it to start profiling immediately. Note that this profiling type will make your application painfully slow, but it will give you the most accurate results. Play with your application a bit, exercising the forms and the functions you want to profile; when you are ready, take a snapshot of the application using the dotTrace control window.

This will open up the profiler’s main window, which offers different views of the data it gathered.

For a first-pass analysis I’m just interested in looking at the most called functions; dotTrace can show me a plain list of all the function calls, which I can sort and group by class name or namespace. You can think of this particular view as ‘having a look at the most active functions, classes or namespaces in your application’. Here’s what I got on my first attempt; I like to have the results grouped by namespace and sorted by the number of calls:


Figure 2 - dotTrace Plain List view grouped by namespace.

As you can see, the top places in the list are all taken by NHibernate and Castle functions; at this point I’m not interested in those...but you can also see an rgmComponents namespace that ‘is making’ a lot of calls (even if the time consumed is small). I expanded it to see the list of functions in detail, and you can see that a single color conversion function was called loads of times (18k calls!) in a very short run of the application.

This application shouldn’t be so graphically intensive...so it rang a warning bell for me. Such a high number of calls must come from an incorrect use of the function itself, or from some side effect that makes it run when it shouldn’t be needed. What you can do is right-click the most called function and choose ‘open in New Tab’ to dig into it even more; here you get different views of the related data. One of the most interesting is the ‘Back Tracking’ view, which shows you who is calling the function you are looking at (and that’s exactly the information I’m after):


Figure 3 - Back Trace of a function.

Looking at this data you can see that all the 18k+ calls to this function originated from the setter of the ‘SecondHeaderColor’ property of the XpTaskBox control: it’s now time to take a look at the control’s code, because it really smells.
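I won’t paste the original control source here, but the pattern the back traces point at looks roughly like the sketch below. This is a reconstruction of mine, not the actual code: RecolorIcon, baseIcon and the cached icon fields are placeholder names I made up to illustrate the smell, where every single assignment to SecondHeaderColor eagerly rebuilds the recolored icon copies.

```csharp
using System.Drawing;
using System.Windows.Forms;

public class XpTaskBox : Control
{
    private Color secondHeaderColor;
    private readonly Bitmap baseIcon = new Bitmap(16, 16); // the expand/collapse glyph
    private Bitmap enabledIcon;   // cached recolored copies,
    private Bitmap inactiveIcon;  // rebuilt on every color change

    public Color SecondHeaderColor
    {
        get { return secondHeaderColor; }
        set
        {
            secondHeaderColor = value;
            // RecolorIcon stands in for the color conversion routine that
            // racked up 18k+ calls in the snapshot: it runs on every
            // assignment, whether or not the icon will ever be painted.
            enabledIcon = RecolorIcon(baseIcon, value);
            inactiveIcon = RecolorIcon(baseIcon, ControlPaint.Light(value));
            Invalidate();
        }
    }

    private static Bitmap RecolorIcon(Bitmap source, Color target)
    {
        // placeholder for the per-pixel color conversion
        return new Bitmap(source);
    }
}
```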

I will skip the full code analysis here, but looking at the code it was clear that the control was generating cached copies of an icon, changing the colors to represent different states (enabled, inactive, etc...), and it was generating them even if the icon (an expand/collapse glyph) wasn’t actually rendered or used. Using the profiler in this way helped me find some poorly designed code: I changed the implementation to use lazy initialization for those images and compute them only if really needed...this cut the number of calls to this function from 18k down to about 5k, which in turn led to faster rendering of the whole UI and better performance in the long run (no more computations when they aren’t needed).
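Continuing the sketch above (again with my own placeholder names, not the actual implementation), the lazy version looks roughly like this: the setter only invalidates the cache, and the recolored icon is built on first use inside the paint path.

```csharp
public Color SecondHeaderColor
{
    get { return secondHeaderColor; }
    set
    {
        secondHeaderColor = value;
        // Just drop the cached copies; don't recompute anything yet.
        enabledIcon = null;
        inactiveIcon = null;
        Invalidate();
    }
}

// Called from OnPaint only when the expand/collapse icon really has to be drawn.
private Bitmap GetEnabledIcon()
{
    if (enabledIcon == null)
        enabledIcon = RecolorIcon(baseIcon, secondHeaderColor);
    return enabledIcon;
}
```

The design choice is simple: pay the cost of the color conversion only when (and if) someone actually needs the rendered icon, instead of on every property assignment.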

So a profiler not only helps you find weak spots in your algorithms by showing you the most time-consuming functions; it can also be used to find weak spots in your code design.
