
Beginning ASP.NET 2.0 With CSharp (2006) [eng]


Chapter 14

What’s interesting about post cache substitution is that it’s available to you if you write custom controls, meaning you can drop controls onto a page that automatically display up-to-date content irrespective of whether or not the page is cached. The AdRotator control works like this, so if you place an AdRotator onto a page, the ads will always be updated even if the page is cached. This works because within the AdRotator code there is a post cache substitution that is dependent on the file that stores the ads. When the AdRotator is on a page and that page is rendered, the AdRotator always checks the cache because that’s what the post cache substitution does. If the ad’s file hasn’t changed, the cache entry will still be present, but if the file has changed, the cache entry will have been removed, so the ad’s file is re-read. With the AdRotator you get the benefits of caching but without the problem of stale ads — an important consideration if customers are paying for ad space.
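You can get the same behavior in your own pages with the Substitution control, which takes the name of a static method to call on every request. The following is a minimal sketch (the method name GetTimeStamp and the page content are invented for illustration): the OutputCache directive caches the page for 60 seconds, but the substituted region is re-evaluated every time the page is served.

```aspx
<%@ Page Language="C#" %>
<%@ OutputCache Duration="60" VaryByParam="none" %>

<script runat="server">
    // The callback must be static and match the
    // HttpResponseSubstitutionCallback signature: string Method(HttpContext)
    public static string GetTimeStamp(HttpContext context)
    {
        return DateTime.Now.ToString();
    }
</script>

<html>
<body>
    <form runat="server">
        Page cached at: <%= DateTime.Now %><br />
        <!-- This region bypasses the cache on every request -->
        Time now: <asp:Substitution ID="TimeNow" runat="server"
                                    MethodName="GetTimeStamp" />
    </form>
</body>
</html>
```

If you run a page like this and refresh it, the cached timestamp stays fixed for the cache duration while the substituted one updates on every request.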

Designing for Performance

This is the vaguest area of the book, because much of the topic depends on the particular application. There are general techniques that can be used, but some specifics depend on your site. For example, you wouldn’t want to go to extreme lengths to extract the last ounce of performance from an intranet site with only 50 users. Sure, make it perform as well as you can, but sometimes there are tradeoffs that aren’t really justified. Is it worth spending a week of your time to gain an extra couple of seconds? Possibly, but not always.

When designing for performance, you have to think ahead and answer the following questions:

How many people are going to use the site? For an intranet site, you’ll have a fixed number of users — the employees in the company. For an Internet site, the user base is unlimited. More users mean more resources will be required on the web server.

How much page interaction is there going to be? Do the pages just fetch and display data, or will the user be answering questions, making selections, and so on? You obviously want your pages to respond quickly whatever the interaction type, but for the latter, shaving off those extra seconds might be worthwhile.

Can any of the pages be pure HTML? If there is no reason for a page to be an ASP.NET page, don’t make it one. HTML and ASP.NET pages can interact, and HTML pages are faster because they don’t require processing by ASP.NET. Also, when you’re using Internet Information Server 6 on Windows Server 2003, HTML pages are particularly fast because of the way in which they are stored and processed by the web server.

How much database access will there be? This may affect how you configure some of the caching options shown earlier.

One thing you have to be aware of is what ASP.NET does and what the web server does. For example, in Chapter 4 you looked at security and saw how to protect sites and pages so that only certain users can access them. This security only works for ASP.NET pages and not HTML pages, so you have to weigh any performance improvements you might gain from HTML pages against the security implications. If you have pages that need to be secure, then they need to be ASP.NET pages, even if they don’t contain any dynamic content. Using ASP.NET pages also allows you to take advantage of other aspects of ASP.NET, such as Master pages, themes, and navigation. Alternatively, you can configure Internet Information Server so that HTML pages are processed by ASP.NET as well as ASP.NET pages. This adds security, but at the cost of performance.
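If you do decide to route HTML pages through ASP.NET on IIS 6, you first map the .html extension to aspnet_isapi.dll in the web site’s properties in IIS Manager, and then tell ASP.NET which handler should serve those files. A minimal Web.config fragment for the second step might look like the following; treat it as a sketch rather than a drop-in configuration:

```xml
<configuration>
  <system.web>
    <httpHandlers>
      <!-- Serve .html files with the same handler as .aspx pages, so the
           authorization rules in Web.config apply to them as well -->
      <add verb="*" path="*.html" type="System.Web.UI.PageHandlerFactory" />
    </httpHandlers>
  </system.web>
</configuration>
```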


Generally, you should use the topics shown in this chapter, such as stored procedures and caching. There are other things that you shouldn’t do, though, such as allowing users to free-search on a large database — it’s inevitable that users will at some stage try to query for more data than they really need, causing a slowdown on the database and a slow-loading page, which they’ll then complain about. You may think it’s their fault for running that query, but in reality the fault lies with the designer of the application (who may not be you). You can avoid this situation by designing the application so that this sort of long-running query isn’t possible, or, if it is necessary, by ensuring your database tables are adequately indexed.

Web Server Hardware and Software

This is always a controversial topic, but the physical platform shouldn’t be ignored. Though it’s easy to throw more hardware at the problem of poor performance, spending money is never the entire solution, nor is it always possible — the bean counters who control the money often can’t see the justification or just don’t have the money. But that shouldn’t mean it’s not a valuable topic to think about, and act on if possible. You don’t necessarily have to spend a lot on hardware to benefit. Swapping your processor for a faster one is a choice many people would consider, but it could be a waste — not only is it relatively expensive (a new motherboard might also be required, as well as the expense of the processor), but it also might not be required. Memory, on the other hand, is nearly always a cost-effective upgrade. Memory is cheap, and a web server with lots of memory will perform better than one with little memory — there’s more memory for the web server to work with plus more available for caching.

The software also shouldn’t be ignored. Windows Server 2003 uses Internet Information Server 6 and in a like-for-like situation will generally perform better than version 5 (on Windows Server 2000). There are obvious licensing costs involved, but if you have the option, IIS6 is a worthwhile upgrade.

Testing Performance

How do you go about testing the performance of applications? Perhaps you want to test a new application or an application you’ve inherited, and you want to see if it’s performing to its best ability. A number of techniques and tools are available that you can use, not all of them computer-based. One of them is just natural common sense, and this involves being realistic about your expectations, and the expectations of the web site users. For example, if you were creating an application that showed a picture gallery, you’d expect there to be some delay because pictures can be large and take time to load. But the aforementioned free database query is another matter — you understand the complexities involved, but your users may not, and they’ll expect all queries to return with the same speed.

One of the first things you need to do is establish a baseline. This is the set of expectations and numbers that detail how the site performs currently: how fast pages are returned to the user, the number of pages served within a time period, and so on. This is important, because without any numbers you have no accuracy — you’re in the realms of guesswork. With numbers, you can make changes and see whether they improve performance (more pages served over a given timeframe means more users). The following sections look at some simple ways of getting these numbers.

Tracing

The simplest method is to use ASP.NET tracing, and though it is not designed as a performance tool, it can be used to gain a great deal of insight into how your pages are performing. One of the primary uses for tracing is to help with debugging, and you’ll look at that in detail in the next chapter, but for now you can concentrate on its use for analyzing performance.

Tracing works by a simple configuration change, either in the page or in Web.config, which instructs ASP.NET to output additional information at the bottom of the page. You do this in the following Try It Out.

Try It Out: Tracing

1. From the Wrox United web site in VWD, open the Shop.aspx file (it’s in the main directory).

2. Add the Trace attribute to the Page directive (it doesn’t matter where in the Page directive it goes):

<%@ Page Language="C#" Trace="True" MasterPageFile="..."

3. Save the file and run the application. You’ll see something resembling Figure 14-8. You may see more or less depending on your screen resolution.

Figure 14-8


How It Works

It’s not so much how it works as what it does, which is to add several pages’ worth of information to the end of the page. Chapter 15 looks at some of these sections in more detail, but for now let’s concentrate on the Trace Information and the Control Tree.

The Trace Information shows four columns:

Category: The category of the message. All messages generated by ASP.NET for the page have the page name as the category.

Message: An individual trace message. By default, some of the ASP.NET events are shown, with two messages for each, one for the start of the event and one for the end.

From First(s): The time in seconds since the first message was displayed.

From Last(s): The time in seconds since the last message was displayed.

Immediately you can see that you have some rough performance information, because the total time taken to render the page is shown in the From First column for the End Render method (shown in the Message column). In general, this isn’t a great guide to performance, but the trace information can help you narrow down slow-running areas when you realize you can add your own messages. This is achieved with the Write method of the Trace class. For example:

Trace.Write("My Category", "My Message");

This means you can wrap sections of code within trace statements to see which perform poorly. Chapter 15 looks at the tracing in more detail.
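For example, to find out how long a particular section of a page takes, you could bracket it with your own messages in the code-behind; the category and message text here are just for illustration:

```csharp
Trace.Write("Timing", "Product load starting");

// ... the code you suspect is slow, such as binding a grid to the database ...

Trace.Write("Timing", "Product load finished");
```

The From Last(s) column for the second message then tells you how long the bracketed code took to run.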

The second section that is of use for performance analysis is the Control Tree, which shows all of the controls on the page as a hierarchical list. Earlier in the chapter, we talked about view state and how minimizing it meant that less was being sent to the browser, and it’s this section that allows you to see the view state. There are five columns:

Control UniqueID: The unique ID of the control on the page. This will differ from the ID you’ve given a control, because it is a combination of the supplied ID plus the IDs of parent controls.

Type: The data type of the control. You can clearly see from this that all content, even straight text, is converted to a control.

Render Size Bytes (including children): The size, in bytes, of the control. For the entire page, which shows as __Page, the first control in the Control Tree section, this tells you the size of the page.

ViewState Size Bytes (excluding children): The size, in bytes, of the view state for this control. Note that this doesn’t include child controls.

ControlState Size Bytes (excluding children): The size, in bytes, of the control state for this control. Note that this doesn’t include child controls.

For performance purposes, it’s the numbers that are useful. The render size indicates the size of the controls, so you can see if controls are outputting more code than necessary. The view state can be turned off in many areas, further reducing the size of the page (which in turn reduces the time taken to render it).
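For example, a control whose content is rebuilt on every request gains nothing from view state, so you can switch it off declaratively; the control and page names here are invented for illustration:

```aspx
<%-- View state disabled for a single control that is rebound on every request --%>
<asp:Label ID="StatusLabel" runat="server" EnableViewState="false" />

<%-- Or disabled for an entire page, in its Page directive --%>
<%@ Page Language="C#" EnableViewState="false" %>
```

If you re-run the page with tracing after a change like this, the ViewState Size Bytes column should drop to 0 for the affected controls.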


The control state can’t be turned off, but in general, controls use very little control state so this shouldn’t be taken as a performance issue.

Tracing only gives rudimentary information about the performance of a site, but it’s a useful starting point for analyzing pages. What it doesn’t tell you is how well a site performs as a whole, or when multiple people access it. For that you need specialist tools.

Stress Testing Tools

Stress testing is the term given to running an application under high load, with lots of users. You could try to persuade lots of people to access the site, but that’s not generally practical, so tools are used to simulate multiple users. Stress testing tools work by accessing pages continuously for a defined period of time, and recording the statistics for those pages. This gives accurate data on how a site performs under stress, and because the tools can take into account things such as certain pages being accessed more than others, the data is very accurate. Another great feature is that some of these tools can read existing web log files and build a stress test from them, which means they are using real-life data to perform testing.

A detailed look at stress testing tools is outside the scope of this book, but if you want to look into this in more detail, Visual Studio 2005 comes with a stress testing tool. If you’re working in a corporate environment, you might already have all of the tools you need. You can find a scaled-down version of the Visual Studio tool from the Microsoft Download site at www.microsoft.com/downloads. Search for “Web Application Stress Tool.” The related resources on the download page point to documents showing how this tool works.

One very important change to make before running any stress tests is to turn off debugging, so make sure that the debug attribute is either removed from a page or is set to False:

<%@ Page debug="False" ... %>

In the Web.config file, you should also ensure that the debug attribute of the compilation element is set to false:

<compilation debug="false" ... />

The importance of this cannot be over-emphasized, because an application with debugging enabled will perform more slowly than one without. This is because the .NET runtime tracks debug code, and ASP.NET doesn’t batch compile debug pages. It also creates additional temporary files. All of these can have a significant impact on performance tests.

Performance Monitor

Another area that’s really too detailed to go into here is that of performance counters, which can be viewed from the Performance Monitor tool in the Windows Administrative Tools folder. These enable you to measure all sorts of statistics about the workings of your computer, including the CPU usage, the amount of memory being used, and so on. There are also groups and counters for ASP.NET, so you can see how well ASP.NET is performing. For more details on performance monitoring, see the “Performance Counters for ASP.NET” topic in the ASP.NET documentation.


Summary

This chapter looked at a variety of topics that will help your web sites perform to their best ability. It started by looking at some SQL issues, such as database connections and stored procedures, and showed how they not only help with performance, but also make your code easier to read and maintain. The latter is worth achieving on its own, so these techniques really are useful. Additionally, this chapter covered the following topics:

Generic collections, which you won’t encounter too many times as a beginner, but should certainly strive to use as you grow in expertise. Generics, as a specific area of programming, offer much more than just the collections, but these alone bring great benefits such as readability and improved performance.

Session state and view state, showing how both can be turned off to reduce the amount of processing that ASP.NET needs to do when running pages.

Binding, object references, concatenation, and collections. All of these are topics that will make you think about your code as you design and write it. Stepping back and thinking about code is often a good exercise because it makes you think about the site as a whole.

Caching, which is an easy thing to implement, yet brings great performance improvements. Caching can happen at many levels, from ASP.NET to the database, and all reduce the amount of work required to create pages. Caching should be used wherever data isn’t changing frequently.

A brief look at design and testing, to see what you have to think about and how you go about seeing if your site can perform faster.

The next chapter looks at what happens when things go wrong, examining things like debugging and error handling.

Exercises

1. Convert the two shop pages, Shop.aspx and ShopItem.aspx, from using SQL statements to stored procedures.

2. Add caching to the ShopItem.aspx page, so that the page is cached. Note that you need to take into account that this page shows different products, so the cache needs to be varied by the product being shown.


Chapter 15

Dealing with Errors

This chapter covers another topic that you need to think about during the whole of site construction — how to deal with errors. In many ways, this chapter could fit at the beginning of the book, because it’s highly likely you’ll get errors as you work through the book. What this chapter covers could be useful, but some of what’s discussed here uses code and depends on other chapters, so we’ve left it until now.

It’s a fact that you will get errors when creating applications, and that’s okay. We all make mistakes, so this is nothing to be ashamed of or worried about. Some will be simple typing mistakes and some will be more complex, maybe due to lack of practice, but these go away with time. So this chapter looks at a variety of topics covering all aspects of handling errors. In particular, it examines the following:

How to write code so that it is error proof

What exceptions are and how they can be handled

How to centrally handle exceptions

How to use debugging and tracing to work out where errors are occurring

The first section looks at how to bulletproof code.

Defensive Coding

Defensive coding is all about anticipation — working out what could possibly go wrong and coding to prevent it. One of the precepts of defensive coding is that you should never assume anything, especially if user input is involved. Most users will be quite happy to use the site as intended, but hackers will search for ways to break into sites, so you have to do anything you can to minimize this risk.

Being hacked isn’t the only reason to code defensively. A coding bug or vulnerability may not be found by a user, but by yourself or a tester. Fixing this bug, then, involves resources — perhaps a project manager, a developer to fix the bug, or a tester to retest the application, all of which take time and money. Also, any change to the code leads to potential bugs — there may not be any, but there’s always the possibility. As you add code to correct bugs, the original code becomes more complex, and you occasionally end up with imperfect solutions because you had to code around existing code.

So what can you do to protect your code? Well, there are several techniques to defensive coding.

Parameter Checking

The first of the defensive coding techniques is checking the parameters of methods. When writing subroutines or functions, you should never assume that a parameter has a valid value — you should check it yourself, especially if the content originates from outside of your code. Take the code for the shopping cart as an example — it’s in App_Code\Shopping.cs. One of the methods of the cart allows an item to be updated, like so:

public void Update(int RowID, int ProductID, int Quantity, double Price)
{
    CartItem Item = _items[RowID];
    Item.ProductID = ProductID;
    Item.Quantity = Quantity;
    Item.Price = Price;
    _lastUpdate = DateTime.Now;
}

This is fairly simple code, but it does no checking on the parameters that are passed in. The reason is that the calling code gets the parameter values from the database before passing them in. So although you can say the code is okay as it stands, what happens if it is reused in another project? What happens if someone circumvents the calling code to pass in an incorrect price, one lower than that stored in the database? Or more simply, what if the RowID passed in is invalid? With this as an example, you could modify the code like so:

public void Update(int RowID, int ProductID, int Quantity, double Price)
{
    if (RowID < _items.Count)
    {
        CartItem Item = _items[RowID];
        Item.ProductID = ProductID;
        Item.Quantity = Quantity;
        Item.Price = Price;
        _lastUpdate = DateTime.Now;
    }
}

You’ve now protected this against an incorrect RowID, so no errors will occur when this method is called. It’s a simple check, ensuring that the ID of the row to be updated isn’t larger than the number of rows.
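Taking this a step further, you could validate the other parameters too, and raise a meaningful exception rather than failing silently. The following sketch uses a simplified CartItem and cart so that it stands alone; the member names match the book’s code, but the extra checks are one possible approach, not the book’s actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the Wrox United classes, so the sketch is
// self-contained; the real CartItem and cart have more members.
public class CartItem
{
    public int ProductID;
    public int Quantity;
    public double Price;
}

public class ShoppingCart
{
    private List<CartItem> _items = new List<CartItem>();
    private DateTime _lastUpdate;

    public void Add(CartItem item)
    {
        _items.Add(item);
    }

    public void Update(int RowID, int ProductID, int Quantity, double Price)
    {
        // Reject an out-of-range row explicitly, rather than letting the
        // list indexer throw a less descriptive exception
        if (RowID < 0 || RowID >= _items.Count)
            throw new ArgumentOutOfRangeException("RowID");

        // A negative quantity or price is almost certainly bad input
        if (Quantity < 0)
            throw new ArgumentOutOfRangeException("Quantity");
        if (Price < 0.0)
            throw new ArgumentOutOfRangeException("Price");

        CartItem Item = _items[RowID];
        Item.ProductID = ProductID;
        Item.Quantity = Quantity;
        Item.Price = Price;
        _lastUpdate = DateTime.Now;
    }
}
```

Whether you throw an exception or silently ignore the bad call is a design decision; throwing makes the failure visible to the calling code, which is usually preferable during development.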

In general, you should always check incoming parameters if the method is a public one — that is, if it is called from outside the class. If it’s a method that isn’t accessible from outside of the class (private or protected), then this is less important because you’re probably supplying those parameters yourself, although this doesn’t necessarily mean the parameters will be correct — you might get the values from elsewhere before passing them into the method.

Avoiding Assumptions

In addition to checking parameters, you should avoid assumptions in your code. For example, consider a problem I actually had just now while formatting this document, where there was an error in my own code (yes, shocking, isn’t it?). I have VBA macros in Word to perform formatting for the styles of the book, one of which is for the grey code block. It’s a generic routine that accepts three strings, which are the Word styles to use for the first line, the intermediate lines, and the last line, as shown here:

Private Sub FormatParas(First As String, Middle As String, Last As String)
    Dim iRow As Integer
    Selection.Paragraphs(1).Style = First
    For iRow = 2 To Selection.Paragraphs.Count - 1
        Selection.Paragraphs(iRow).Style = Middle
    Next
    Selection.Paragraphs(iRow).Style = Last
End Sub

This code uses Word objects to format paragraphs and assumes that there will be more than one paragraph, which seems sensible because each line is a separate paragraph and there is a separate macro to format code that is only a single line. The code works by setting the first paragraph in the selection to the style defined by First. For subsequent lines (line two downward), the Middle style is used, and for the final line, Last is used as the style. However, if there is only one paragraph in the selection, the For...Next loop doesn’t run (Selection.Paragraphs.Count being 1), but it does set iRow to 2. The final line then fails because iRow is 2 and there is only one paragraph in the selection. I failed in this code because I didn’t defend it against user failure (me being the user in this case), where the selection was only a single paragraph.

The corrected code is as follows:

Private Sub FormatParas(First As String, Middle As String, Last As String)
    If Selection.Paragraphs.Count = 1 Then
        Selection.Paragraphs(1).Style = Last
        Exit Sub
    End If
    Dim iRow As Integer
    Selection.Paragraphs(1).Style = First
    For iRow = 2 To Selection.Paragraphs.Count - 1
        Selection.Paragraphs(iRow).Style = Middle
    Next
    Selection.Paragraphs(iRow).Style = Last
End Sub

557