
Chapter 8
Many of the controls in ASP.NET 2.0 (both new and old) make use of event validation. A partial list of the ASP.NET controls that make use of event validation is:
Button
CatalogZoneBase
Checkbox
DetailsView
FormView
GridView
HiddenField
ImageButton
LinkButton
ListBox
Menu
RadioButton
TextBox
TreeView
WebPartZoneBase
Because the ClientScriptManager APIs for event validation are all public, if you author custom controls (both web controls and user controls), you can also make use of event validation. Just follow the general registration flow described earlier: register your control’s event data for validation when your control is setting up postback event references. In the methods where your control processes postback events, first call ValidateEvent to ensure that the postback is valid prior to carrying out the rest of your control’s event processing.
Also note that even though this discussion has been about event validation for postbacks, the event validation mechanism in ASP.NET 2.0 also works for callbacks. In fact, ASP.NET controls that support callbacks like the TreeView control make use of event validation for both postbacks and callbacks.
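To make the registration flow concrete, the following is a minimal sketch of a custom control that participates in event validation. The control name and the “DoWork” argument are invented for illustration; the ClientScriptManager calls are the real APIs discussed above.

```csharp
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical control: renders a link that posts back with the
// argument "DoWork" and validates that argument during postback.
public class MyCommandControl : WebControl, IPostBackEventHandler
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Passing true as the second parameter registers the
        // (UniqueID, argument) pair for event validation.
        PostBackOptions options = new PostBackOptions(this, "DoWork");
        string href = Page.ClientScript.GetPostBackEventReference(options, true);

        writer.AddAttribute(HtmlTextWriterAttribute.Href, href);
        writer.RenderBeginTag(HtmlTextWriterTag.A);
        writer.Write("Do work");
        writer.RenderEndTag();
    }

    public void RaisePostBackEvent(string eventArgument)
    {
        // Validate first; this throws if the event data was never
        // registered during rendering (that is, a forged postback).
        Page.ClientScript.ValidateEvent(UniqueID, eventArgument);

        // ...the rest of the control's event processing...
    }
}
```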
Site Navigation Security
ASP.NET 2.0 includes a new set of navigation controls such as Menu and TreeView that work with navigation data. One source of this navigation data is the new Site Navigation feature, which makes use of SiteMapProvider(s). There is one concrete implementation of a SiteMapProvider included in ASP.NET called the XmlSiteMapProvider. Its purpose is to parse the XML in a .sitemap file and return this information as a linked set of SiteMapNode instances that controls like the Menu control can then render. The interesting aspect of the Site Navigation feature from a security perspective is that you will likely define navigation data in a .sitemap file that closely mirrors the navigation hierarchy of your site. A potential security mismatch can occur if your navigation UI renders links to pages that normally would be inaccessible to a user. Even though an unauthorized user won’t be able to actually run such pages, you may not want to even display inaccessible links in the first place.

Security for Pages and Compilation
The base SiteMapProvider class has support for a feature called security trimming. If security trimming is turned on for a SiteMapProvider, prior to returning a SiteMapNode from a provider method, the SiteMapProvider first checks to see if the URL represented by the SiteMapNode is actually accessible to the current user. You enable security trimming with the securityTrimmingEnabled attribute as shown in the following sample provider definition:
<siteMap>
  <providers>
    <clear />
    <add name="AspNetXmlSiteMapProvider"
         type="System.Web.XmlSiteMapProvider, ..."
         siteMapFile="web.sitemap"
         securityTrimmingEnabled="true" />
  </providers>
</siteMap>
When security trimming is enabled, the XmlSiteMapProvider, its immediate base class (StaticSiteMapProvider), and the base SiteMapProvider class all call into SiteMapProvider.IsAccessibleToUser to determine whether a node is considered accessible. If the URL is not accessible by the current user, then the corresponding SiteMapNode is skipped and is not returned to the user. In some cases, this means a null value is returned to the calling code; in other cases, it means that the node is not included in a SiteMapNodeCollection returned to the user; and in still other cases, it means that node traversal of site map data is halted when an inaccessible node is reached. If you author a custom SiteMapProvider, you can make use of IsAccessibleToUser as well to perform authorization checks for your own node instances.
By default, security trimming is not turned on for the default XmlSiteMapProvider configured in the <sitemap /> configuration element. This means that even if you have authorization rules set up in web.config for your site, your navigation controls will render links to all of the URLs defined in a sitemap, even the ones the current user cannot access. Even though it would technically be more secure to turn security trimming on by default, developers would probably see nodes appearing and disappearing each time they edited the authorization rules in web.config. Without understanding that Site Navigation performs security trimming, many developers would conclude that the navigation feature was broken.
The logic inside of the IsAccessibleToUser method uses the authorization logic contained in both UrlAuthorizationModule and FileAuthorizationModule. It also works with optional role information defined using the roles attribute of a site map node in a .sitemap file. Because the authorization rules in the <authorization /> configuration element can apply only to pages inside of a web application, the SiteMapNode class allows you to define additional role information about a specific URL. For example, if your .sitemap file had a node definition that pointed at www.microsoft.com, there is no way for URL authorization to decide whether a user is authorized to this URL because it lies outside the scope of your web application. To deal with these types of URLs, or simply to define additional role information for an application’s URLs, you can put a semicolon- or comma-delimited set of roles in the roles attribute of a <siteMapNode /> element in a .sitemap file.
<siteMapNode url="http://www.microsoft.com" title="External Link" roles="Regular Users, Power Users" />
Another reason that the Site Navigation feature allows for defining roles on a <siteMapNode /> is that not all nodes represent navigable content. For example, if your navigation structure includes menu headers, these headers are only intended for organizing the display of navigation UI.
<siteMapNode title="Administrative Pages" roles="Administrator" >
  <siteMapNode url="ManageUsers.aspx" title="Manage Users" roles="Administrator" />
  <siteMapNode url="ManageRoles.aspx" title="Manage Roles" roles="Administrator" />
</siteMapNode>
In this example, the first node is just being used to create a menu entry that a user can hover over. However, the entry is itself not navigable; instead, you would select either Manage Users or Manage Roles in a pop-up menu to navigate to a specific page. Because no URL is associated with the first node, the only way to have the SiteMapProvider determine whether a user should even see the node in navigation UI is by attributing it with the roles attribute. If you write a custom provider that loads its navigation data from somewhere else, you can also supply role information for this type of node by supplying a collection of role strings in the SiteMapNode constructor.
Also note that the role information is repeated in the two child nodes for managing users and roles. The Site Navigation feature does not have the concept of role inheritance. So, even though a role definition was added to the Administrative Pages node, you still need to mirror the role information in all of the child nodes. If you don’t do this, a piece of code that accesses one of the child nodes directly with a call to FindSiteMapNode would succeed, while node traversal starting at the parent node would fail. As a result, if you don’t copy the role definitions to the children, you end up with inconsistent results returned from the provider, depending on what methods you are calling.
This behavior means that the IsAccessibleToUser method potentially has three different sets of authorization information that it can reference when deciding whether a SiteMapNode’s URL is accessible to the current user. IsAccessibleToUser goes through the following evaluation sequence to determine whether a user is authorized to the URL of a SiteMapNode:
1. If the roles attribute was defined in the .sitemap file for a <siteMapNode /> element, then the provider calls HttpContext.Current.User.IsInRole for each role in the roles attribute. If the current user is in at least one of the defined roles, the provider will return the SiteMapNode. This means that the roles attribute of a <siteMapNode /> expands access beyond the authorization rules defined in an <authorization /> tag. As long as there is at least one match between the current user’s roles and the roles in the roles attribute, SiteMapProvider considers a SiteMapNode to be visible to the user.
2. If the roles attribute is set to * (that is, roles="*"), all users are allowed to see the node, and thus the provider returns the node.
3. If the site map node has no URL, and no match was found in the roles attribute for the current user’s roles, then the current user is considered to not have rights to the node. Depending on the provider method that was called, this means either a null value is returned, or the provider skips the node and does not include it in the results. This behavior is important to keep in mind if your sitemap contains spacer or header nodes such as the Administrative Pages node shown earlier. Without a roles attribute defining at least one role on these types of nodes, no user will have rights to view the node when security trimming is enabled.
4. If no match is found in the roles attribute or the roles attribute does not exist, and the node has a URL, the provider will call into FileAuthorizationModule if Windows authentication is enabled for the website. With Windows authentication enabled, there will be a WindowsIdentity on the context, and as a result the provider can call an internal method on the FileAuthorizationModule that performs authorization checks against the physical file associated with the SiteMapNode. If the authorization check succeeds, then the SiteMapNode is returned to the caller.
5. If the file authorization check fails, or if Windows authentication is not enabled on the site, the provider calls an internal method on the UrlAuthorizationModule, passing it the URL from the SiteMapNode. This authorization check mirrors the behavior you get from the <authorization /> section in your web.config. If the check succeeds, then the SiteMapNode is returned to the caller.
6. If all of the previous checks fail, the user is considered to not have the rights to view the SiteMapNode, and either a null value will be returned by the provider or the provider will stop walking through SiteMapNode(s). For example, if FindSiteMapNode was called, a null would be returned; if GetChildNodes was called and the current user did not have access to some of the children of the specified node, then those child nodes would not be included in the returned SiteMapNodeCollection.
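The six steps above can be condensed into the following sketch. This is a simplification for illustration only, not the actual Framework source; the CheckFileAuthorization and CheckUrlAuthorization helpers are hypothetical stand-ins for the internal methods on FileAuthorizationModule and UrlAuthorizationModule.

```csharp
// Simplified sketch of the default IsAccessibleToUser evaluation order.
bool IsNodeAccessible(HttpContext context, SiteMapNode node)
{
    // Steps 1 and 2: the roles attribute can only expand access.
    if (node.Roles != null && node.Roles.Count > 0)
    {
        foreach (string role in node.Roles)
        {
            if (role == "*" || context.User.IsInRole(role))
                return true;
        }
    }

    // Step 3: a URL-less node with no role match is inaccessible.
    if (String.IsNullOrEmpty(node.Url))
        return false;

    // Step 4: file authorization (only when Windows authentication
    // is enabled), then step 5: URL authorization.
    if (CheckFileAuthorization(context, node.Url))
        return true;
    if (CheckUrlAuthorization(context, node.Url))
        return true;

    // Step 6: all checks failed.
    return false;
}
```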
One point of confusion that some developers run into with the security trimming behavior is that they expect the roles attribute to be the exclusive source of authorization information for their nodes. You can be surprised to see nodes still being rendered in your UI even though your roles attributes seem to indicate that a user should not see them. What is happening in this case is that the provider falls through the roles attribute check, continues to the file and URL authorization checks, and one of those two checks succeeds.
One side effect of all of this processing is that iterating through a sitemap with security trimming turned on performs substantially worse than with it turned off. Because file authorization and URL authorization were really intended for authorization checks on a single page, they tend to be rather inefficient when a feature like Site Navigation comes along and starts asking for hundreds of authorization checks on a single page request. You can run a sitemap with 150–300 nodes in it with security trimming turned on, and other than increased CPU utilization you shouldn’t see any effect on your application’s performance. However, if you plan to create a sitemap with thousands of nodes, the default security trimming behavior will probably be too expensive for your application.
Another issue you might run into when you turn on security trimming is that all of your navigation UI may suddenly disappear, depending on the kind of navigation structure you have in your .sitemap. If your structure has a root node that you don’t ever intend to display (that is, you set up your SiteMapDataSource to skip this node), you still need to put a roles="*" attribute on the root node as shown here:
<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
  <siteMapNode title="hidden root" roles="*">
    <siteMapNode title="Administrator Pages" roles="Administrator">
      <siteMapNode url="ManageUsers.aspx" title="Manage Users" roles="Administrator" />
      <siteMapNode url="ManageRoles.aspx" title="Manage Roles" roles="Administrator" />
    </siteMapNode>
    <siteMapNode title="Regular Pages" roles="*">
      <siteMapNode url="http://www.microsoft.com" title="External link" roles="*" />
      <siteMapNode url="Default.aspx" title="Home Page" roles="*" />
    </siteMapNode>
  </siteMapNode>
</siteMap>
Without the roles="*" definition on the hidden root node, any attempt to render the full sitemap will result in no nodes being returned. Because the root node has no URL, the provider has only the roles attribute to go against for authorization information. As a result, if you leave out the roles attribute, the provider will conclude that no one is authorized to that node, and node traversal through the rest of the sitemap will stop.
If you want the XmlSiteMapProvider that ships with ASP.NET 2.0 to rely only on the information contained in the roles attribute, you can derive from the provider and implement custom logic in an override of the IsAccessibleToUser method.
public class CustomAuthorization : XmlSiteMapProvider
{
    public override bool IsAccessibleToUser(HttpContext context, SiteMapNode node)
    {
        if (node == null)
        {
            throw new ArgumentNullException("node", "You must specify a node.");
        }
        if (context == null)
        {
            throw new ArgumentNullException("context", "The supplied context cannot be null.");
        }

        if (!SecurityTrimmingEnabled)
        {
            return true;
        }

        if (node.Roles != null && node.Roles.Count > 0)
        {
            foreach (string role in node.Roles)
            {
                // Grant access if one of the roles is a "*".
                if (String.Equals(role, "*",
                    StringComparison.InvariantCultureIgnoreCase))
                {
                    return true;
                }
                else if (context.User != null && context.User.IsInRole(role))
                {
                    return true;
                }
            }
        }

        // If you make it this far, the user is not authorized.
        return false;
    }
}
This code mirrors the logic inside of SiteMapProvider.IsAccessibleToUser, but instead of attempting other checks at the end of the method, this custom provider looks only at the information in the roles attribute. If you use this custom provider in your site, you will see that the roles attribute is now the only thing controlling whether a SiteMapNode is returned to calling code. A nice side benefit of this approach is that bypassing the file and URL authorization checks substantially improves the performance of security trimming. With the preceding code, you could realistically accommodate a 1,000-node sitemap.
This custom code brings up a very important security point, though. Don’t be fooled into thinking that security trimming with the previous custom code makes your site secure. The only thing the custom code does is give you the ability to precisely control authorization of your sitemap information independently of the authorization rules you have defined either in web.config or through NTFS ACLs. Just because Site Navigation now hides nodes based exclusively on the sitemap’s role information doesn’t mean that your pages are secure. A user who knows the correct URL for a page can always attempt to access it by typing it into a browser. As a result, if you use an approach like the custom provider, you must still ensure that you have correctly secured your pages and directories with URL authorization and file authorization.
Summary
Since ASP.NET 1.0, page developers have benefited from the ability to hash and encrypt viewstate. Although not widely known, you could also make viewstate information unique to a specific user with the ViewStateUserKey property. With the introduction of the new viewstate encryption mode feature in ASP.NET 2.0, control developers now have the option of automatically turning on viewstate encryption when they know their controls store potentially sensitive data in viewstate.
When data is submitted to an ASP.NET page, all input should initially be considered untrusted. Although the majority of the work involved in scrubbing input data lies with the developer, ASP.NET does have some protections that work on your behalf. Since ASP.NET 1.1, the runtime validates form data, query-string values and cookie values for suspicious string sequences. Although this type of check is not exhaustive, it does cover the most likely forms of malicious input. ASP.NET 2.0 introduces new logic to protect against fraudulent postbacks. Because postbacks can be easily triggered with a few lines of JavaScript, it is possible to forge postback data to controls and events that were not rendered on the page. By default, ASP.NET 2.0 now checks for this situation and will not trigger server-side events for nonvisible or disabled controls and events that were never rendered on the client.
For more secure sites, the compilation model in ASP.NET whereby dynamically compiled pages are all placed within the common Temporary ASP.NET Files directory may not be desirable. You can change the location of this temporary folder on a per-application basis using the <compilation /> element. Secure sites that signed their code-behind assemblies in ASP.NET 1.1 for use with custom CAS policies can still follow a similar approach in ASP.NET 2.0. The precompilation feature in ASP.NET 2.0 allows you to precreate all of the assemblies needed for a site and to then sign these assemblies.
The new Site Navigation feature in ASP.NET 2.0 makes it possible to quickly and easily create rich navigation UI. However, the navigation UI can represent an alternate view of an application’s directory and page structure, which can lead to two parallel authorization approaches being used. Because it can be difficult to keep authorization rules for UI elements in sync with the authorization rules enforced for individual pages, you can enable the security trimming feature for Site Navigation providers. When security trimming is turned on, a SiteMapProvider will enforce an application’s file authorization rules and URL authorization rules against the node data that is returned from the provider.


The Provider Model
Many of the new features in ASP.NET 2.0, including the Membership and Role Manager features, are built using the provider model. The provider model is not just an architectural model limited to ASP.NET 2.0 features; the base classes are available for you to build your own provider-based features.
This chapter covers the theory and intent behind the provider model so that you have a good idea of the patterns used by provider-based features. You will be introduced to the base provider classes, the services they provide, and the general assumptions around the ASP.NET provider model. Last, you will see some examples of how you can create your own custom feature using the provider model.
This chapter will cover the following topics:
Why have providers?
Patterns found in the Provider model
Core provider classes
Building a provider-based feature
Why Have Providers?
Traditionally, when a software vendor creates a programming framework or a software platform, a good deal of the framework logic is baked into the actual binaries. If extensibility is required, then a product like an operating system incorporates a device driver model that allows third parties to extend it. For something like the .NET Framework, extensibility is usually accomplished by deriving from certain base classes and implementing the expected functionality.
Chapter 9

The device driver model and the derivation model are two ends of the extensibility spectrum. With device drivers, higher-level functionality, like a word processor, is insulated from the specifics of how abstract commands are actually carried out. Clearly, modern-day word processors are oblivious to the technical details of how any specific graphics card displays pixels or how any vendor’s printer renders fonts.
Writing software that derives from base classes defined in a framework or software development kit (SDK) usually implies another piece of code that knows about the custom classes you’re writing. For example, if you implement a custom collection class, somewhere else you have code that references the assembly containing your custom collection class and that code also contains explicit references to the custom collection class.
What happens though if you want to have the best of both worlds? How do you get the separation of functionality afforded by the device driver model, while still retaining the ability to write custom code that extends or replaces core functionality in the .NET Framework? The answer in the 2.0 Framework is the provider model that ASP.NET 2.0 relies heavily upon. The provider model allows you to swap custom logic into your application in much the same way you would install device drivers for a new graphics card. And you can swap in this custom logic in such a way that none of your existing code needs to be touched or recompiled.
Simultaneously though, there are well-defined provider APIs that you can code against to create your own custom business logic and business rules. If you choose, you can write applications to take a direct dependency on your custom code — but this is definitely not a requirement. Well-written providers can literally be transparently “snapped into” an application.
To accomplish this, the 2.0 Framework includes some base classes and helper methods that provide the basic programming structure for the provider model. Specific features within the Framework extend these base classes and build feature-specific providers. To make this all a bit more concrete, you can use the Membership feature as a sort of canonical example of a provider-based feature.
The Membership feature of course deals with the problem of creating user credentials, managing these credentials, and verifying credentials provided by applications. When the Membership feature was first designed, a number of different design options were available:
Write a set of Membership related classes that contained all of the business logic and data storage functionality as designed by the ASP.NET team. This option is the “black-box” option; you would end up with functional APIs, and zero extensibility.
Keep the same set of classes from option 1, but add protected virtual methods and/or event-based extensibility hooks. This model would be more akin to the control development model in ASP.NET. With this model, you start out with either an ASP.NET control or a third-party control, and through event hookups or derivations you modify the behavior of a control to better suit your needs.
Separate the intent of the Membership feature from the actual business logic and data storage functionality necessary to get a functional Membership feature. This approach involves defining one set of classes that all developers can use, but having concrete implementations of other classes (the provider base classes) that contain very specific functionality. Along with this separation the design requires the ability to swap out concrete provider implementations without impacting the common set of classes that all developers rely upon.
Now, of course, because this book isn’t a mystery story, you know the outcome of these various design decisions. The 2.0 Framework, and ASP.NET 2.0 in particular, went with the third option: providing a common set of Membership classes for everyone to use, while compartmentalizing most of the business logic and data storage rules inside of various Membership providers.
It is pretty clear why you wouldn’t want the first option. Creating useful APIs and great functionality inside of black boxes is nice until about 60 seconds after the first developer lays eyes on it and determines that for their needs they require some different logic. The second design option is actually not all that unreasonable. Clearly ASP.NET developers are comfortable with the event-based extensibility that has been around since ASP.NET 1.0 (and for that matter all the way back to earlier versions of Visual Basic).
However, event-driven extensibility and protected virtual methods have the shortcoming that if an application wants different behavior than what is built into the Framework, then some other piece of code needs to be explicitly linked or referenced. For example, using the second design approach, what happens if you want to create users somewhere other than the default SQL Server schema that ships in ASP.NET 2.0? If creating users raised some kind of event where you could create the actual MembershipUser in a back-end data store, you could hook this event and then return the new object, probably as a property on an event argument.
The shortcoming here is that now in every application where you want to use your custom data store you also need to include code that explicitly wires up the event hookups. If the extensibility mechanism used a protected virtual method instead, then each of your applications would need code that explicitly created the custom implementations of the various Membership classes. For both cases, you effectively have a compile-time dependency on your custom code. If you ever want to choose a different custom implementation of Membership, you have the hassle of recompiling each of your applications to reference the new code.
The third option — the provider-based design approach — breaks the compilation dependency. With the 2.0 Framework, you can write code against a common set of classes (that is, Membership, MembershipUser, and MembershipUserCollection). Nowhere in your code-base do you need a compile-time reference to your custom implementation of a MembershipProvider. If you wake up tomorrow and decide to throw out your custom MembershipProvider, there is no problem; you drop a different assembly onto your desktops or servers, tweak a configuration setting, and the rest of your applications continue to work. Sounds a lot like swapping out graphics cards and device drivers without the “excitement” that such upgrades usually entail.
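For example, swapping in a custom Membership provider is purely a configuration change. The provider name, type, and connection string name below are hypothetical; only the <membership /> element structure is the real configuration schema:

```xml
<membership defaultProvider="MyCustomProvider">
  <providers>
    <add name="MyCustomProvider"
         type="MyCompany.Security.CustomMembershipProvider, MyCompany.Security"
         connectionStringName="CustomStore" />
  </providers>
</membership>
```

Changing defaultProvider to point at a different registered provider redirects every Membership API call in the application, with no recompilation of application code.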
Of course, the ability to tweak some settings in configuration requires that the Membership feature use some kind of dynamic type loading mechanism. Underneath the hood, this mechanism allows a feature to convert a configuration setting into a reference to a concrete provider class. And, of course, a dynamic type loading mechanism also requires at least a basic programming contract that defines the type signature that the Membership feature expects to dynamically load.
So, a provider-based feature in short has the following characteristics:
A well-defined set of public APIs that most application code is expected to code against.
A well-defined set of one or more interfaces or class definitions that define the extensible set of classes for the feature. In the 2.0 Framework, these are the provider base classes.
A configuration mechanism that can generically associate concrete provider implementations with each feature.
A type-loading mechanism that can read configuration and create concrete instances of the providers to hand back to the feature APIs.
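A rough sketch of that type-loading step follows. ASP.NET exposes a similar helper as System.Web.Configuration.ProvidersHelper.InstantiateProvider; the code below is an illustrative approximation under those assumptions, not the Framework implementation:

```csharp
using System;
using System.Configuration;
using System.Configuration.Provider;

public static class ProviderLoader
{
    // Create a concrete provider instance from its configuration entry.
    public static ProviderBase InstantiateProvider(
        ProviderSettings settings, Type expectedBaseType)
    {
        // Resolve the type name stored in configuration.
        Type t = Type.GetType(settings.Type, true);
        if (!expectedBaseType.IsAssignableFrom(t))
            throw new ConfigurationErrorsException(
                "Type '" + settings.Type + "' is not a valid provider.");

        // Instantiate and hand the provider its configured attributes.
        ProviderBase provider = (ProviderBase)Activator.CreateInstance(t);
        provider.Initialize(settings.Name, settings.Parameters);
        return provider;
    }
}
```

Because the feature code works only against the ProviderBase-derived contract, the concrete type named in configuration can be swapped without touching or recompiling the feature or the applications that use it.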