
Make your web application run faster

Introduction

It is easy to develop your own ASP.NET web application. Making it do something useful for your users while keeping the design simple and elegant is not so easy. If you are lucky, your web application will be used by more than a handful of users, and in that case performance becomes important. For some of the web applications I have worked on, performance is vital: the company loses money if users get frustrated with a slow response.

There are many factors that can result in bad performance; the number of users is just one of them. As a developer in a big corporation, you usually don't get a chance to mess with the real production servers. However, I think it is very helpful for developers to take a look at the servers that host their applications.

Your server spends most of its time waiting

Production servers usually host many applications. One of our web applications was not performing well, and I suspected that other applications running on the server were using memory and CPU resources that "should" have been devoted to our application. The admin allowed me to look at the server machine, and what I found was not what I expected: the server had plenty of unused memory and the CPU usage was pretty low, too. It seemed the server was idle most of the time.

That means if we design the application differently, we may be able to trade CPU and memory resources for better performance.

Application dependencies

It is typical for web applications to depend on many services running on remote servers. A slow response from those remote servers is often the real cause of a web application's bad performance. For example, one of our web applications needs to request data from a remote server, and a single request alone takes about 3 to 5 seconds. If my application has to make 5 to 7 different requests to remote servers in order to display a web page, the performance will not be good even if only one user is using the application!

My approach to the performance problem was to design the application so that each page makes as few requests to remote services as possible. That means the application does not make a remote request to a backend server until the data is really needed, and once the data is retrieved, it is cached within the application so that the same data never has to be requested more than once.
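
Here is a minimal sketch of that fetch-on-demand-then-cache pattern. The class and method names are hypothetical (the real application used its own service wrappers), and a production ASP.NET application might use the built-in cache instead of a plain dictionary:

    using System;
    using System.Collections.Concurrent;

    public class RemoteDataCache
    {
        // Per-application cache; a real ASP.NET app might use HttpRuntime.Cache instead.
        private readonly ConcurrentDictionary<string, object> _cache =
            new ConcurrentDictionary<string, object>();

        // Return the cached value if we already have it; otherwise call the (slow)
        // remote service and remember the result so later requests can reuse it.
        public T GetOrFetch<T>(string key, Func<T> fetchFromRemoteService)
        {
            return (T)_cache.GetOrAdd(key, _ => fetchFromRemoteService());
        }
    }

This approach worked fine for us until ...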

The management decided to change to a new design that would kill our application

What they wanted was a more user-friendly interface. The first page would be designed so that as soon as a user lands on it, he/she sees a summary of all the important information right away. If more detail is desired, the user can click tabs, links, or buttons on that page to display more data.

The problem is, information requested on the first page can only be extracted from data items returned by various remote service calls. There is no single service that can give us such a "summary" of the data.

So there was no choice but to retrieve all the data items from remote servers before displaying the first page. The performance became so bad that even the developers hated to use the application.

The Solution

Fortunately, our server has extra power to spare, and the remote services we need do not depend on each other. After some research, I devised a new way to retrieve data from remote services. Previously, the sequence of steps to get data was as follows (a code sketch follows the list):

  • Step 1. If data item 1 is not in cache already, retrieve it by calling service 1 synchronously
  • Step 2. If data item 2 is not in cache already, retrieve it by calling service 2 synchronously
  • Step 3. If data item 3 is not in cache already, retrieve it by calling service 3 synchronously
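
Using the hypothetical RemoteDataCache sketched above, the old flow looks roughly like this; the remote calls are simulated with sleeps so the sketch is self-contained:

    using System;
    using System.Threading;

    public static class OldApproachDemo
    {
        // Stand-ins for the real remote services; each sleep simulates network latency.
        static string CallService1() { Thread.Sleep(5000); return "data item 1"; }
        static string CallService2() { Thread.Sleep(2000); return "data item 2"; }
        static string CallService3() { Thread.Sleep(3000); return "data item 3"; }

        public static void LoadPage(RemoteDataCache cache)
        {
            // Each call blocks until the remote service answers, so the waits add up.
            string item1 = cache.GetOrFetch("item1", CallService1);   // ~5 seconds
            string item2 = cache.GetOrFetch("item2", CallService2);   // ~2 seconds
            string item3 = cache.GetOrFetch("item3", CallService3);   // ~3 seconds
            Console.WriteLine(item1 + ", " + item2 + ", " + item3);   // total is roughly 10 seconds
        }
    }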

My idea is this: in step 1, while the application is retrieving data item 1, we also let it retrieve the other data items in the background, asynchronously (and cache those data items as soon as they are received). By the time the application moves on to step 2 and step 3, the data items will already be available in the cache. Here is the new approach (see the sketch after the list):

  • Step 1. In this first step we do multiple things:
    • If data item 1 is not in cache already, retrieve it by calling service 1 synchronously
    • In addition, the service calls for data items 2 and 3 are issued simultaneously and asynchronously if those items are not in cache already
    • Data retrieved by these asynchronous requests will be cached
  • Step 2. If data item 2 is not in cache already, retrieve it by calling service 2 synchronously
  • Step 3. If data item 3 is not in cache already, retrieve it by calling service 3 synchronously
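
Here is a rough sketch of the new flow, again using the hypothetical RemoteDataCache and simulated services from the earlier sketch. I use Task.Run for the background calls; the original implementation (linked at the end of this post) may use a different asynchronous mechanism:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public static class NewApproachDemo
    {
        // The same simulated remote services as in the old-approach sketch.
        static string CallService1() { Thread.Sleep(5000); return "data item 1"; }
        static string CallService2() { Thread.Sleep(2000); return "data item 2"; }
        static string CallService3() { Thread.Sleep(3000); return "data item 3"; }

        public static void LoadPage(RemoteDataCache cache)
        {
            // Step 1: kick off the calls for items 2 and 3 in the background (the
            // started Task is what gets cached), then fetch item 1 synchronously.
            Task<string> task2 = cache.GetOrFetch("item2", () => Task.Run(CallService2));
            Task<string> task3 = cache.GetOrFetch("item3", () => Task.Run(CallService3));
            string item1 = cache.GetOrFetch("item1", CallService1);   // ~5 seconds

            // Steps 2 and 3: the background calls have usually finished by now,
            // so these lines simply pick up the results without a new remote call.
            string item2 = task2.Result;
            string item3 = task3.Result;
            Console.WriteLine(item1 + ", " + item2 + ", " + item3);   // total is roughly 5 seconds
        }
    }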

Now, let's see the potential difference in performance. With the old approach, suppose it takes 5, 2, and 3 seconds to retrieve data items 1, 2, and 3, respectively; the total time will be at least 5 + 2 + 3 = 10 seconds. With the new approach, since we assume extra server power is available and the remote services are independent of each other, the ideal total time will be a little more than the longest single request, which is 5 seconds in this example. So we can reduce the response time by almost 50%!

Let me explain the idea again in plain English (no plain-English compiler needed). Let's say you are ordering three dishes in a restaurant.

  • The old way: You order from the same waitress three times; each time, the waitress brings back one dish from the kitchen and puts it on your table.
  • The new way: You place your order with three waitresses at once. They work simultaneously to bring three dishes from the kitchen, putting the first dish on your table and the other two on the table next to you. When you want the second and third dishes, a waitress retrieves them from the next table and puts them on your table; there is no need to go back into the kitchen again.

Assuming the trip back to the kitchen is the most time-consuming part, we can save a significant amount of time with the new approach.

To see the implementation, follow this link:
http://www.codeproject.com/aspnet/PEDLL.asp
