Archive for January, 2008

Measure Twice, Code Once

I have had the pleasure of spending the last few days improving application performance. Specifically, my job is to improve the ‘speed’ dimension of the application. In doing this, I’ve been getting reacquainted with some well-used tools: SQL Query Analyzer and dotTrace Profiler. (For those of you who remember when SQL Query Analyzer was last available under that name: yes, I’m working with a completely functional, happy SQL Server 2000 installation.) It’s been quite a few months since I last did this. Given that not many people get a chance to do performance analysis and improvement on a regular basis, I think now is a good time to rehash some common mistakes and the ways to fix them.

Mistake 1: Lack of Focus

When asked to work on performance, a developer will often just be told to make the product faster. This typically leads the developer to work on the areas of the application they know best. Even then, the focus may be on features that don’t get used very often. At the end of the day, some obscure button click may work a bit faster, and no user, manager, or customer gets a better product. So how do you get focus?

To find out where to make things faster, I spend some time finding out which features are used most frequently. I then find out which of those features seem to be slow in responding. Next, I work with a user representative to rank those features in order of importance. Finally, I do some analysis to see which features share common code paths. If five of the top ten items on the list all run through a common code path, I can deliver a big improvement with a smaller effort. At this point, I have some focus, and I’m ready to make mistake 2.
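To make that prioritization concrete, here is a minimal sketch of the kind of tally I end up doing. The feature names, usage numbers, and code path labels are all hypothetical; the point is simply to weight features by usage and importance and then group them by the code path they share.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical feature record: how often users hit it, how important the
// user representative says it is, and the code path it ultimately exercises.
class Feature
{
    public string Name;
    public int UsesPerDay;        // from usage logs or user interviews
    public double UserRank;       // 1 = most important, per the user representative
    public string SharedCodePath; // e.g., "OrderSearch", "ReportRender"
}

class FocusFinder
{
    static void Main()
    {
        var features = new List<Feature>
        {
            new Feature { Name = "Open order",    UsesPerDay = 500, UserRank = 1, SharedCodePath = "OrderSearch" },
            new Feature { Name = "Find customer", UsesPerDay = 350, UserRank = 2, SharedCodePath = "OrderSearch" },
            new Feature { Name = "Month report",  UsesPerDay = 10,  UserRank = 3, SharedCodePath = "ReportRender" },
        };

        // Group by shared code path and weight by usage and rank: the path with
        // the biggest weighted total is where a single fix pays off the most.
        var targets = features
            .GroupBy(f => f.SharedCodePath)
            .Select(g => new { Path = g.Key, Weight = g.Sum(f => f.UsesPerDay / f.UserRank) })
            .OrderByDescending(t => t.Weight);

        foreach (var t in targets)
            Console.WriteLine("{0}: {1:F0}", t.Path, t.Weight);
    }
}
```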

Mistake 2: Failure to Measure

A lot of times, a developer won’t have the tools, or won’t know how to use them, to measure their code. Instead, they will single-step through the code looking for lines that seem to take “a while” to execute. The developer then takes pains to make the associated code take “less time” to execute. For what it’s worth, I’m not exaggerating here: “a while” and “less time” are actual measures I see folks use. When I hear a co-worker using this type of measurement, I know who I’ll be mentoring for the next few days. (Over the last ten or so years, I’ve discovered that cursing people and calling them ‘morons’ doesn’t make me feel any better and doesn’t improve the situation. Mentoring builds relationships and improves the strength of the team. “Be part of the solution” and all that.) To fix this, we have to address the reason people don’t measure well in the first place: they don’t know which measurement will give them the right answer. So, what measurement is that?

I will start out by saying that I learned the fine art of measurement from the very patient and capable performance team on WCF v3.0. The first thing they taught me was to spread as wide a net as is reasonable and collect lots of data. We can measure working set, execution time, or both. Working set measures how many resources, typically memory, the application consumes. Execution time measures the clock time needed to complete a task. On desktop and enterprise/web applications, the focus is typically on execution time. So, where do you spread the net?
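As a quick illustration of the two measurements, here is a minimal C# sketch. The DoSlowFeature method is a hypothetical stand-in for whatever code path you care about; Stopwatch gives you execution time and Process.WorkingSet64 gives you the working set.

```csharp
using System;
using System.Diagnostics;

class Measurements
{
    static void Main()
    {
        // Execution time: wall-clock time for the task to complete.
        Stopwatch watch = Stopwatch.StartNew();
        DoSlowFeature(); // stand-in for the code path under test
        watch.Stop();
        Console.WriteLine("Execution time: {0} ms", watch.ElapsedMilliseconds);

        // Working set: physical memory the process is currently consuming.
        long workingSet = Process.GetCurrentProcess().WorkingSet64;
        Console.WriteLine("Working set: {0:N0} bytes", workingSet);
    }

    static void DoSlowFeature()
    {
        // Placeholder for the feature being measured.
        System.Threading.Thread.Sleep(250);
    }
}
```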

I fire up a tool like dotTrace Profiler, NProf, or, back when I was a Microsoftie, OffProf (aka Office Profiler; Microsoft, please release this tool into the wild as part of the Debugging Tools for Windows!). All of these tools collect data about method execution times, time relative to the rest of the call tree, and so on. With the tool attached to your target, cause the application to follow one of the ‘slow paths’ and then stop data capture. You’ve just measured and identified the parts of the code that are interesting. For example, these tools will show when a particular method consumes an inordinate amount of processing time in the code path. Following the code path, you may see some of the following behaviors:

  1. Method gets called an inordinately large number of times.
  2. Method consumes a large percentage of overall processing time.
  3. Method is waiting a long time for some synchronous item to return. (Think database, web service, or other out of process call.)

All of the tools I mention will show processing times, the percentage of the data collection window spent in a method, and the number of times a method gets called. Because the tools gather data, they make things take longer to execute. That said, they make everything except out-of-process calls take proportionately longer (every call pays roughly the same n% penalty), so the relative numbers still point you at the right code. Armed with this information, you can go make the application faster.
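To make item 1 in the list above concrete, here is the kind of pattern a profiler often surfaces, along with the usual fix. The Customers table, the method names, and the report scenario are hypothetical; the point is that a method called thousands of times, each paying an out-of-process cost, usually collapses into a single batched call.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

class OrderReport
{
    // Before: the profiler shows GetCustomerName called once per order,
    // so a 10,000-order report makes 10,000 round trips to the database.
    static string GetCustomerName(SqlConnection conn, int customerId)
    {
        using (var cmd = new SqlCommand(
            "SELECT Name FROM Customers WHERE CustomerId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            return (string)cmd.ExecuteScalar();
        }
    }

    // After: one query loads every name the report needs; the per-order
    // lookup becomes a dictionary hit instead of an out-of-process call.
    static Dictionary<int, string> GetAllCustomerNames(SqlConnection conn)
    {
        var names = new Dictionary<int, string>();
        using (var cmd = new SqlCommand(
            "SELECT CustomerId, Name FROM Customers", conn))
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                names[reader.GetInt32(0)] = reader.GetString(1);
        }
        return names;
    }
}
```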

Mistake 3: Fearing Database Analysis

Once people measure their code, they may see that a database call is causing the execution speed issues. At this point, they start looking at the query. Most queries are under 30 lines of SQL, so the developer falls back on the same tactic as before: look at the code and guess where the issues lie. The thing is, SQL typically performs set-based operations over thousands or millions of records. Reading the SQL will not tell you that the query requires a table scan, a bookmark lookup, or an unusually large hash table to compute the result set. In this case, you need to take out a tool like SQL Query Analyzer for SQL Server 2000 or the Database Engine Tuning Advisor for SQL Server 2005/2008. In Query Analyzer, turn on ‘Show Execution Plan’ and then execute the queries that are running slowly. Look for the plan steps that take up a significant part of the execution time and then tune those items: add new indexes, precompute values, and so on. As with your source code, your measurements will tell you what to address. The Database Engine Tuning Advisor goes a few steps further and suggests the changes you should make.
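As a sketch of what ‘add new indexes’ can look like once the execution plan has pointed at a table scan, here is a minimal example. The Orders table, its columns, and the connection string are hypothetical, and you could just as easily run the same DDL straight from Query Analyzer.

```csharp
using System.Data.SqlClient;

class IndexFix
{
    static void Main()
    {
        // Connection string is a placeholder; point it at your own database.
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Sales;Integrated Security=SSPI"))
        {
            conn.Open();

            // The execution plan showed a table scan on Orders when filtering
            // by CustomerId and OrderDate, so cover that query with an index.
            const string ddl =
                "CREATE INDEX IX_Orders_CustomerId_OrderDate " +
                "ON Orders (CustomerId, OrderDate)";

            using (var cmd = new SqlCommand(ddl, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```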

Do you have any tips you’d like to share?

What does a technical writer do when book sales are down?

I have a sickness. My illness causes me to skip sleep, to skip meals, and to allocate no time to play video games. Occasionally, enablers have given me money to encourage me to indulge in this illness. I have no interest in getting ‘help’. I am an author and I write about technology topics. In doing so, I’ve discovered that IT books, in general, sell poorly. When one signs a contract to write a book, the publisher and author typically agree to something like the following:

· Author gives all publishing rights to the content to the publisher.
· Author commits to have the book done by some date, typically six months from starting the project.
· Publisher commits to give the author an advance on royalties. About $5000 was normal for my three books.
· Publisher commits to provide editorial, marketing, sales, and other resources to sell the book.
· Publisher promises some royalty based on the number of books sold. The royalty rate increases as sales volumes increase, with better royalties once the title cracks the 10,000-copy mark. If things are set up well, a book with a cover price of $50 will put around $2.50 in the author’s pocket. (Note: this means the book needs to sell 2000 copies just to cover a $5000 advance.)

Over the last 10 years, the quality of material on the Internet has put a serious dent in book sales. My understanding is that selling 5000 copies of a title means you have done particularly well. This means that, on average, an author can expect about $12,500 for the effort of writing a book. Of the three books I worked on, I wrote 100% of two and 75% of the third. For the 4200 hours I put in on these projects, I averaged about $7 per hour. Writing books isn’t something one does to get rich. One writes to share information. Unfortunately, when a person chooses to write a book, they are choosing to limit how many people will be able to access their words. Because the pay is so poor, I’m going to increase the reach of what I write by publishing to the web. For me, this has the following advantages:

  1. No editors asking me where my content is.
  2. I can reach all English-speaking people around the globe (this maps well to the population of developers in the world, allowing me to reach well over 50% of the potential audience).
  3. I control how my work is distributed. That is, I own all of the work and can choose to share it with the world.
  4. I can put ads on the pages. If the content is any good, advertisers will pay for my hosting costs.

I plan on spending the foreseeable future writing articles on ‘How It Works’ type topics, hitting things that I feel haven’t been explained well enough. The first area I want to hit is System.Configuration. I spent a lot of time learning this namespace when I wrote the initial versions of the System.ServiceModel.Configuration.*, System.Net.Configuration.*, System.Transactions.Configuration.*, System.Web.Services.Configuration.*, and System.Runtime.Serialization.Configuration.* classes for Microsoft. I need to dump all that information out and share it. I’ll be posting these items in article format. The plan in my head is for each article to run five to twenty-five pages in length (as measured by Microsoft Word). In other words, I’ll type away, and when I hit twenty pages, I’ll look for a way to wrap it up and push something out. My guess is that I have somewhere around 150 pages of content on System.Configuration and how it works.
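As a small preview of the sort of thing those articles will cover, here is a minimal, hypothetical custom section built on System.Configuration. The section name, properties, and assembly name are made up for illustration; the pattern of deriving from ConfigurationSection and reading it back through ConfigurationManager.GetSection is the part that matters.

```csharp
// Requires a reference to System.Configuration.dll.
using System.Configuration;

// Hypothetical app.config wiring for this sketch:
// <configSections>
//   <section name="ping" type="Samples.PingSection, Samples" />
// </configSections>
// <ping url="http://example.com/" timeoutInSeconds="15" />
namespace Samples
{
    public class PingSection : ConfigurationSection
    {
        [ConfigurationProperty("url", IsRequired = true)]
        public string Url
        {
            get { return (string)this["url"]; }
            set { this["url"] = value; }
        }

        [ConfigurationProperty("timeoutInSeconds", DefaultValue = 30)]
        public int TimeoutInSeconds
        {
            get { return (int)this["timeoutInSeconds"]; }
            set { this["timeoutInSeconds"] = value; }
        }
    }

    class Program
    {
        static void Main()
        {
            // Reads the <ping> element from app.config, if it is present.
            var ping = (PingSection)ConfigurationManager.GetSection("ping");
            if (ping != null)
                System.Console.WriteLine("{0} ({1}s)", ping.Url, ping.TimeoutInSeconds);
        }
    }
}
```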

SOA Projects

Tony Baer has an interesting post at http://www.onstrategies.com/blog/?p=251, ‘SOA in a Recession?’. The question is: what will SOA investments look like during a recession? Having been part of the big client/server moves of the mid-to-late 1990s and the web deployments from 1997 to the present, and having done SOA training across the country for Wintellect, I have to say that SOA feels different from the previous two shifts. For client/server, we had to start thinking about our applications differently. Bits of the application lived in different processes on different machines, so we had to rearchitect applications to deal with a new security model and with the greater latencies involved in method and database calls.

Web-based applications extended those latencies all the way out to the user interface and brought about new design issues: web designers have to worry about browser compatibility, screen refreshes, and so on. Still, this change was easier than client/server because we already knew how to handle latencies. The web only required rewrites of the user interfaces; the back-end tiers could be left alone.

SOA is even gentler. The SOA experiments in flight at this time all revolve around exposing existing client/server deployments to new platforms: select subsystems are being exposed to new development via Web services. The discussion folks are having is more like ‘let’s put a Web service wrapper around system X’ than ‘let’s rewrite for SOA.’ I see SOA as an enabler for continued development, even in the face of a recession, since it allows one to reuse existing code investments with minimal effort. On my current project, we regularly expose data between the Java and .NET worlds via Web services; coding, testing, and deploying a Web service is usually a sub-one-hour task.
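For the curious, here is a minimal sketch of what ‘put a Web service wrapper around system X’ tends to look like in WCF. The InventorySystem class, the contract, and the addresses are all hypothetical; the existing code stays untouched and a thin service contract simply delegates to it.

```csharp
using System.ServiceModel;

// The existing client/server code (InventorySystem here) stays as-is; a thin
// WCF contract exposes the one operation other platforms need.
[ServiceContract(Namespace = "http://example.com/inventory")]
public interface IInventoryService
{
    [OperationContract]
    int GetQuantityOnHand(string sku);
}

public class InventoryService : IInventoryService
{
    public int GetQuantityOnHand(string sku)
    {
        // Delegate to the existing subsystem; no rewrite required.
        return InventorySystem.LookupQuantity(sku);
    }
}

// Stand-in for the existing code being wrapped.
public static class InventorySystem
{
    public static int LookupQuantity(string sku)
    {
        return 42; // placeholder
    }
}

// Hosting (in a console app or Windows service) is a few more lines:
//   var host = new ServiceHost(typeof(InventoryService),
//       new System.Uri("http://localhost:8000/inventory"));
//   host.AddServiceEndpoint(typeof(IInventoryService),
//       new BasicHttpBinding(), "");
//   host.Open();
```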
