Beginning ASP.NET 2.0

Performance

Now you have a great situation, because subsequent requests for the page simply get the HTML from the cache. All areas of compilation are avoided, as illustrated in Figure 14-7.

Figure 14-7 [diagram: the Disk, ASP.NET page, Compiler, .NET CLR Runtime, and Intermediate Code stages are all bypassed when the page is served from the cache]

Now take a look at how you can implement this type of caching.

Output Caching

Output caching is the simplest of the page caching techniques shown in the previous section, whereby the entire page is cached. Give this a go in the next Try It Out to see how easy it is.

Try It Out

Output Caching

1. In VWD, open History.aspx.

2. As the first line in the content, above the existing h1 element, add the following:

<div class="boxFloatRight"><%=DateTime.Now%></div>

3. Save the file and run the application, navigating to the History page under the About menu. Notice the time to the right of the title, and press F5 to refresh the page. Press F5 several times, noting that the time is updated.

4. Close the browser, and back in VWD edit the History.aspx page again, adding the following line:

<%@ OutputCache Duration="30" VaryByParam="none" %>


5. Save the file and re-run the application. Try pressing F5 several times and notice that now the time isn't updated. Wait at least 30 seconds (or go make some coffee/tea/favorite beverage — you deserve it) and refresh the page again; notice that the time has changed.

How It Works

Output caching works in exactly the way shown in Figures 14-4 through 14-7. The OutputCache directive tells ASP.NET that once the output of the page has been created it is to be stored in the cache. The Duration is the time in seconds that the page is to remain in the cache; any requests within those 30 seconds are served from the cache. Once the 30 seconds are up, the cached page expires and is removed from the cache. The next request re-executes the page, whereupon it is placed into the cache once more. The VaryByParam attribute is discussed in just a bit.

The div displaying the current time shows the time the page was executed. So the first time you request the page the current time is shown, because this is when the HTML is generated. But because the HTML is cached, subsequent views of the page receive the cached HTML, which of course has the time fixed.

It’s only when the page is removed from the cache that it’s re-executed and the time updated.

The advantage of this system is that pages that are used frequently will be often cached, whereas pages that aren’t used often get discarded from the cache.

The VaryByParam attribute dictates whether any external influences affect how the page is cached. Setting the value to none means that nothing affects the caching; only one copy will be cached. However, what about pages that allow selections and fetch data based upon those selections? One example is the Fixtures page, which can show either past fixtures or future fixtures, depending on a value passed in as part of the querystring. If VaryByParam="none" is used, only one copy of the page would be cached, so the first Fixtures page requested would be cached. Say two people view the fixtures one after another, and the first view is for future fixtures, which would be cached. If the second person requests past fixtures, he or she would be returned the cached page, which was for future fixtures. In fact, with this model the past fixtures wouldn't be viewable until the first page was evicted from the cache.

To get around this VaryByParam can be set to the name of the querystring variable, which means that a copy of the page would be cached for each different value of the variable. This could be implemented in the Fixtures page simply by adding the following cache directive:

<%@ OutputCache Duration="30" VaryByParam="type" %>

The two menu entries for fixtures have the following as their URLs:

Fixtures.aspx?type=future

Fixtures.aspx?type=past

Now when two people request the different fixture pages, two copies are stored in the cache, one for each type.
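VaryByParam is not limited to a single name. As a sketch of the other forms the attribute accepts (the Duration value and the second parameter name here are illustrative, not taken from the Wrox United site):

```aspx
<%-- Cache one copy per combination of the "type" and a hypothetical "page"
     querystring value; multiple parameter names are separated by semicolons --%>
<%@ OutputCache Duration="30" VaryByParam="type;page" %>

<%-- Or vary by every querystring and form parameter; use this with care,
     because a busy page can flood the cache with variants --%>
<%@ OutputCache Duration="30" VaryByParam="*" %>
```

The semicolon-separated form is the one to reach for when a page depends on a known, small set of parameters; the wildcard trades memory for convenience.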


Fragment Caching

Fragment caching allows portions of a page to be cached while the rest of the page remains dynamic. This is ideal for those pages that contain a mixture of static and dynamic content, and is achieved by having the content that is to be cached contained within a user control. The user control then uses the OutputCache directive to dictate how it should be cached, in the same way that pages use it:

<%@ Control Language="VB" %>
<%@ OutputCache Duration="30" VaryByParam="none" %>

When the user control is placed on the page, only the content for the user control will be cached, allowing the rest of the page to remain dynamic.

This is useful for pages where the data is very dynamic and changing often and you don’t want the page cached, but there is some content (perhaps from a database) that doesn’t change often and can be cached.
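As a minimal sketch of how the pieces fit together (the control name, file name, and tag prefix here are invented for illustration, and the duration is arbitrary):

```aspx
<%-- NewsPanel.ascx: the OutputCache directive inside the user control
     caches only this control's rendered output --%>
<%@ Control Language="VB" ClassName="NewsPanel" %>
<%@ OutputCache Duration="300" VaryByParam="none" %>
<p>Headlines fetched at <%=DateTime.Now%></p>
```

```aspx
<%-- Hosting page: the control's output is served from the cache, while the
     rest of the page executes on every request --%>
<%@ Register TagPrefix="wu" TagName="NewsPanel" Src="~/NewsPanel.ascx" %>
<wu:NewsPanel id="news1" runat="server" />
```

The timestamp inside the control freezes for the cache duration, while a timestamp placed elsewhere on the hosting page keeps updating.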

Post Cache Substitution

Post cache substitution is the opposite of fragment caching, where the page is cached but a portion of the page is dynamic. This is achieved by use of the Substitution control, which works differently from other forms of caching. You still use the OutputCache directive on the page to determine the caching, but the Substitution control is simply a placeholder into which the content is to be placed, and this content has to be created manually. The Substitution control has a property called MethodName that indicates a function that will return a string of data to be substituted into the cached page. For example the page would be as follows:

<%@ Page CodeFile="PCS.aspx.vb" Inherits="PCS" %>
<%@ OutputCache Duration="30" VaryByParam="none" %>
<html>
<form runat="server">
  <div class="boxFloatRight"><%=DateTime.Now%></div>
  <asp:Substitution id="sub1" runat="server" MethodName="Substitute" />
</form>
</html>

The code-behind would be as follows:

Partial Class PCS
    Inherits System.Web.UI.Page

    Shared Function Substitute(ByVal myContext As HttpContext) As String
        Return "Date fetched on " & DateTime.Now.ToString()
    End Function
End Class

When this page is run it will be cached so the div with the date and time will remain the same, but each time the page is requested the Substitute method will be called, so the data that it returns will not be cached. Here’s how it works.

The first time the page is requested by a user, the normal caching regime is applied: the page is placed into the output cache. However, because there is a Substitution control on the page, the output cache keeps a note of the MethodName (for each Substitution control, if there is more than one on the page). When the page is next requested, instead of the page being returned directly from the cache, the method detailed in MethodName is called, which generates the contents of the Substitution control, at which point it is integrated with the copy of the page in the output cache and returned to the user. In this case, you get the benefit of output caching but with the flexibility of dynamic content.

What’s interesting about post cache substitution is that it’s available to you if you write custom controls, meaning you can drop controls onto a page that automatically display up-to-date content irrespective of whether or not the page is cached. The AdRotator control actually works like this, so if you place an AdRotator onto a page, the ads will always be updated even if the page is cached. This works because within the AdRotator code there is a post cache substitution that is dependent on the file that stores the ads. When the AdRotator is on a page and that page is rendered, the AdRotator always checks the cache because that’s what the post cache substitution does. If the ad’s file hasn’t changed, the cache entry will still be present, but if the file has changed the cache entry will have been removed, so the ad’s file is re-read. With the AdRotator you get the benefits of caching but without the problem of stale ads, an important consideration if customers are paying for ad space.

Designing for Performance

This is the vaguest area of the book, because so much depends on the particular application. While there are general techniques that can be applied, some decisions depend on your specific site. For example, you wouldn't want to go to extreme lengths to extract the last ounce of performance from an intranet site with only 50 users. Sure, make it perform as well as you can, but sometimes the tradeoffs aren't really justified. Is it worth spending a week of your time to gain an extra couple of seconds? Possibly, but not always.

When designing for performance you have to think ahead and perhaps answer the following questions:

How many people are going to use the site? For an intranet site you’ll have a fixed number of users — the employees in the company. For an Internet site the user base is unlimited. More users mean more resources will be required on the web server.

How much page interaction is there going to be? Do the pages just fetch and display data, or will the user be answering questions, making selections, and so on? You obviously want your pages to respond quickly whichever the interaction type, but for the latter, shaving off those extra seconds might be worthwhile.

Can any of the pages be pure HTML? If there is no reason for having an ASP.NET page, then don’t. HTML and ASP.NET pages can interact, and HTML pages are faster because they don’t require processing by ASP.NET. Also, when using Internet Information Server 6 on Windows Server 2003, HTML pages are particularly fast because of the way in which they are stored and processed by the web server.

How much database access will there be? This may affect how you configure some of the caching options shown earlier.

One thing you have to be aware of is what ASP.NET does and what the web server does. For example, in Chapter 4 you looked at security and saw how to protect sites and pages so that only certain users can access them. This security only works for ASP.NET pages and not HTML pages, so you have to weigh any performance improvement you might gain from HTML pages against the security implications. If you have pages that need to be secure, then they need to be ASP.NET pages, even if they don't contain any dynamic content. Using ASP.NET pages also allows you to take advantage of other aspects of ASP.NET, such as Master pages, themes, and navigation. Alternatively, you can configure Internet Information Server so that HTML pages are processed by ASP.NET as well; this brings the addition of security, but at the loss of performance.

By and large you should use the techniques shown in this chapter, such as stored procedures and caching. There are other things that you shouldn't do, though, such as allowing users to free-search on a large database; it's inevitable that users will at some stage try to query for more data than they really need, causing a slowdown on the database and a slow-loading page, which they'll complain about. You may think it's their fault for running that query, but in reality the fault lies with the designer of the application (which may not be you). You can avoid this situation by designing the application so this sort of long-running query isn't possible, or, where such queries are necessary, by ensuring your database tables are indexed adequately.

Web Server Hardware and Software

This is always a controversial topic, but the physical platform shouldn’t be ignored. Though it’s easy to throw more hardware at the problem of poor performance, spending money is never the entire solution, nor is it always possible; the bean counters who control the money often can’t see the justification or just don’t have the money. But, that shouldn’t mean it’s not a valuable topic to think about, and act on if possible. You don’t necessarily have to spend a lot on hardware to benefit. Swapping your processor for a faster one is a choice many people would consider, but could be a waste; not only is it relatively expensive (a new motherboard might also be required, as well as the expense of the processor), but it also might not be required. Memory, on the other hand, is nearly always a cost-effective upgrade. Memory is cheap, and a web server with lots of memory will perform better than one with little memory; there’s more memory for the web server to work with plus more available for caching.

The software also shouldn’t be ignored. Windows Server 2003 uses Internet Information Server 6 and in a like-for-like situation will generally perform better than version 5 (on Windows Server 2000). There are obvious licensing costs involved, but if you have the option IIS6 is a worthwhile upgrade.

Testing Performance

So how do you go about testing the performance of applications? Perhaps you want to test a new application, or one you've inherited, to see whether it's performing to its best capability. A number of techniques and tools are available, not all of them computer-based. One of them is just natural common sense, which involves being realistic about your expectations and the expectations of the web site's users. For example, if you were creating an application that showed a picture gallery, you'd expect some delay, because pictures can be large and take time to load. But the aforementioned free database query is another matter; while you understand the complexities involved, your users may not, and they'll expect all queries to return with the same speed.

One of the first things you need to do is establish a baseline; this is the set of expectations and numbers that detail how the site performs currently: how fast pages are returned to the user, the number of pages served within a time period, and so on. This is important, because without any numbers you have no accuracy; you’re in the realms of guesswork. With numbers you can make changes and see whether they improve performance (more pages served over a given timeframe means more users). The following sections look at some simple ways of getting these numbers.


Tracing

The simplest method is to use ASP.NET tracing, and though not designed as a performance tool it can be used to gain a great deal of understanding into how your pages are performing. One of the primary uses for tracing is to help with debugging, and you’ll look at that in detail in the next chapter, but for now you can concentrate on its use for analyzing performance.

Tracing works by a simple configuration change, either in the page or in Web.config, which instructs ASP.NET to output additional information at the bottom of the page. You do this in the following Try It Out.

Try It Out

Tracing

1. From the Wrox United web site in VWD, open the Shop.aspx file; it's in the main directory.

2. Add the Trace attribute to the Page directive. It doesn't matter where in the Page directive it goes:

<%@ Page Language="VB" Trace="True" MasterPageFile="..."

3. Save the file and run the application. You'll see something resembling Figure 14-8; you may see more or less depending on your screen resolution:

Figure 14-8


How It Works

It's not so much how it works as what it does, which is to add several pages' worth of information to the end of the page. Chapter 15 looks at some of these sections in more detail, but for now let's concentrate on the Trace Information and the Control Tree.

The Trace Information shows four columns:

Category, which is the category of the message. All messages generated by ASP.NET for the page have the page name as the category.

Message, which is an individual trace message. By default some of the ASP.NET events are shown, with two messages for each, one for the start of the event and one for the end.

From First(s) is the time in seconds since the first message was displayed.

From Last(s) is the time in seconds since the last message was displayed.

Immediately you can see that you have some rough performance information, because the total time taken to render the page is shown in the From First column for the End Render method (shown in the Message column). In general this isn't a great guide to performance, but the trace information can help you narrow down slow-running areas once you realize you can add your own messages. This is achieved with the Write method of the Trace class, for example:

Trace.Write("My Category", "My Message")

This means you can wrap sections of code within trace statements to see which perform poorly. Chapter 15 looks at the tracing in more detail.
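For example, here is a hedged sketch of timing a suspect data-access call; the category name and the LoadFixtures routine are invented for illustration:

```vb
' Bracket the code under suspicion with trace messages; the From Last(s)
' column in the trace output then shows how long the section took
Trace.Write("Data", "Begin loading fixtures")
LoadFixtures()   ' hypothetical slow routine
Trace.Write("Data", "End loading fixtures")
```

Because the messages only appear when tracing is enabled, they can safely be left in place while you investigate.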

The second section that is of use for performance analysis is the Control Tree, which shows all of the controls on the page as a hierarchical list. Earlier in the chapter we talked about view state and how minimizing it meant that less was being sent to the browser, and it’s this section that allows you to see the view state. There are five columns:

Control UniqueID is the unique ID of the control on the page. This will differ from the ID you’ve given a control, because it is a combination of the supplied ID plus the IDs of parent controls.

Type is the data type of the control. You can clearly see from this that all content, even straight text, is converted to a control.

Render Size Bytes (including children) is the size, in bytes, of the control. For the entire page, which shows as __Page, the first control in the Control Tree section, this tells you the size of the page.

ViewState Size Bytes (excluding children) is the size, in bytes, of the view state for this control. Note that this doesn't include child controls.

ControlState Size Bytes (excluding children) is the size, in bytes, of the control state for this control. Note that this doesn’t include child controls.


For performance purposes it’s the numbers that are useful. The render size indicates the size of the controls, so you can see if controls are outputting more code than necessary. The view state can be turned off in many areas, further reducing the size of the page (which in turn reduces the time taken to render it). The control state can’t be turned off, but in general, controls use very little control state so this shouldn’t be taken as a performance issue.
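As a sketch, view state can be turned off for an individual control or for a whole page; the Label and its ID here are illustrative, not from the Wrox United site:

```aspx
<%-- Stop a single control contributing to the __VIEWSTATE field --%>
<asp:Label id="lblHeading" runat="server" Text="Fixtures"
    EnableViewState="False" />

<%-- Or disable view state for the entire page --%>
<%@ Page Language="VB" EnableViewState="False" %>
```

After making a change like this, the ViewState Size Bytes column in the Control Tree is where you confirm the saving.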

Tracing only gives rudimentary information about the performance of a site, but it’s a useful starting point for analyzing pages. What it doesn’t give is how well a site performs as a whole, or when multiple people access it. For that you need specialist tools.

Stress Testing Tools

Stress testing is the term given to running an application under high load, with lots of users. You could try to persuade lots of people to access the site, but that’s not generally practical, so tools are used to simulate multiple users. Stress testing tools work by accessing pages continuously for a defined period of time, and recording the statistics for those pages. This gives accurate data on how a site performs under stress, and because the tools can take into account things such as certain pages being accessed more than others, the data is very accurate. Another great feature is that some of these tools can read existing web log files and build a stress test from them, which means they are using real-life data to perform testing.

A detailed look at stress testing tools is outside the scope of this book, but if you want to look into this in more detail, Visual Studio 2005 comes with a stress testing tool. If you're working in a corporate environment you might already have all of the tools you need. You can find a cut-down version of the Visual Studio tool on the Microsoft Download site at www.microsoft.com/downloads: search for "Web Application Stress Tool." The related resources on the download page point to documents showing how this tool works.

One very important change to make before running any stress tests is to turn off debugging. So make sure that the debug attribute is either removed from a page or is set to False:

<%@ Page debug="False" ... %>

In the Web.config file you should also ensure that the debug attribute of the compilation element is set to false:

<compilation debug="false" ... />

The importance of this cannot be over-emphasized, because an application with debugging enabled will perform more slowly than one without. With debugging enabled, the .NET runtime tracks debug code, ASP.NET doesn't batch-compile pages, and additional temporary files are created. All of these can have a significant impact on performance tests.


Performance Monitor

Another area that's really too detailed to go into here is that of performance counters, which can be viewed from the Performance Monitor tool in the Windows Administrative Tools folder. This allows you to measure all sorts of statistics about the workings of your computer, including the CPU usage, the amount of memory being used, and so on. There are also groups and counters for ASP.NET so you can see how well ASP.NET is performing. For more details on performance monitoring, see the Performance Counters for ASP.NET topic in the ASP.NET documentation.

Summary

This chapter looked at a variety of topics that will help your web sites perform to their best capability. It started by looking at some SQL issues, such as database connections and stored procedures, and showed how they not only will help with performance, but also make your code easier to read and maintain. The latter of these is worthwhile achieving on its own, so these techniques really are useful. Additionally, this chapter covered the following topics:

Generic collections, which you won't encounter too many times as a beginner, but which you should strive to use as you grow in expertise. Generics, as a specific area of programming, offer much more than just the collections, but these alone bring great benefits such as readability and improved performance.

Session state and view state, showing how both can be turned off to reduce the amount of processing that ASP.NET needs to do when running pages.

Binding, object references, concatenation, and collections. All of these are topics that will make you think about your code as you design and write it; stepping back and thinking about code is often a good exercise because it makes you think about the site as a whole.

Caching, which is an easy thing to implement, yet brings great performance improvements. Caching can happen at many levels, from ASP.NET to the database, and at every level it reduces the amount of work required to create pages; it should be used wherever data isn't changing frequently.

Finally, you took a brief look at design and testing, to see what you have to think about and how you go about seeing if your site can perform faster.

The next chapter looks at what happens when things go wrong, examining things like debugging and error handling.

Exercises

1. Convert the two shop pages, Shop.aspx and ShopItem.aspx, from using SQL statements to stored procedures.

2. Add caching to the ShopItem.aspx page, so that the page is cached. Note that you need to take into account that this page shows different products, so the cache needs to be varied by the product being shown.
