
However, using manual thread management is preferred in some cases (a short sketch follows this list), for example:

If you require foreground threads or must set the thread priority. Pooled threads are always background threads with default priority (ThreadPriority.Normal).

If you require a thread with a fixed identity in order to abort it, suspend it, or discover it by name.
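To make these two cases concrete, here is a minimal sketch (separate from the ThreadPoolApp sample; the type and method names are hypothetical) of creating a thread manually so that you can give it a name, keep it in the foreground, and adjust its priority:

using System;
using System.Threading;

class ManualThreadSketch
{
  static void Main()
  {
    // Configure the thread by hand; pooled threads do not let you set these properties.
    Thread worker = new Thread(DoLengthyWork);
    worker.Name = "DataCruncher";                  // Fixed identity, discoverable by name.
    worker.IsBackground = false;                   // Foreground thread (the default; shown for clarity).
    worker.Priority = ThreadPriority.AboveNormal;  // Custom priority.
    worker.Start();

    // Because we hold a reference with a known identity, we can wait on it (or abort it).
    worker.Join();
  }

  static void DoLengthyWork()
  {
    Console.WriteLine("Running on thread: {0}", Thread.CurrentThread.Name);
  }
}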

Source Code The ThreadPoolApp project is included under the Chapter 18 subdirectory.

The Role of the BackgroundWorker Component

The final threading type we will examine here is BackgroundWorker, defined in the System.ComponentModel namespace (of System.dll). BackgroundWorker is a class that is very helpful when you are building a graphical Windows Forms desktop application and need to execute a long-running task (invoking a remote web service, performing a database transaction, downloading a large file, etc.) on a thread other than your application’s main UI thread.

While you are most certainly able to build multithreaded GUI applications by making direct use of the System.Threading types as seen in this chapter, BackgroundWorker allows you to get the job done with much less fuss and bother. Thankfully, the programming model of this type leverages much of the same threading syntax we find with asynchronous delegates, so learning how to use this type is very straightforward.

To use a BackgroundWorker, you simply tell it what method to execute in the background and call RunWorkerAsync(). The calling thread (typically the primary thread) continues to run normally while the worker method runs asynchronously. When the time-consuming method has completed, the BackgroundWorker type informs the calling thread by firing the RunWorkerCompleted event. The associated event handler provides an incoming argument that allows you to obtain the results of the operation (if any exist).

Note The following example assumes you have some familiarity with GUI desktop development using Windows Forms. If this is not the case, you may wish to return to this section once you have completed reading Chapter 27.
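Before walking through the designer-based example, here is a minimal sketch of the overall pattern (the form name, result value, and helper method are hypothetical, and the events are wired in code rather than with the designer):

using System;
using System.ComponentModel;
using System.Windows.Forms;

public class WorkerSketchForm : Form
{
  private BackgroundWorker worker = new BackgroundWorker();

  public WorkerSketchForm()
  {
    // Tell the worker what to run on the secondary thread...
    worker.DoWork += delegate(object s, DoWorkEventArgs e)
    {
      e.Result = SomeLengthyCalculation();  // Runs on a worker thread.
    };

    // ...and what to do (back on the UI thread) once it has finished.
    worker.RunWorkerCompleted += delegate(object s, RunWorkerCompletedEventArgs e)
    {
      MessageBox.Show(e.Result.ToString());
    };
  }

  // Call this from a Button Click handler, for example.
  public void StartWork()
  {
    worker.RunWorkerAsync();  // Kick off the background operation.
  }

  private int SomeLengthyCalculation()
  {
    System.Threading.Thread.Sleep(5000);  // Simulate a lengthy task.
    return 42;
  }
}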

Working with the BackgroundWorker Type

To illustrate using this UI threading component, begin by creating a new Windows Forms application named WinFormsBackgroundWorkerThread. Staying true to the same numerical operation examples used here, construct a simple UI that allows the user to input two values to process (via TextBox types) and a Button type to begin the background operation. Be sure to give each UI element a fitting name using the Name property of the Properties window. Figure 18-12 shows one possible layout.


Figure 18-12. Layout of the Windows Forms UI application

After you have designed your UI layout, handle the Click event of the Button type by double-clicking the control on the form designer. This will result in a new event handler that we will implement in just a bit:

private void btnProcessData_Click(object sender, EventArgs e)
{
}

Now, open the Components region of your Toolbox, locate the BackgroundWorker component (see Figure 18-13), and drag an instance of this type onto your form designer.

Figure 18-13. The BackgroundWorker type


You will now see a variable of this type on the designer’s component tray. Using the Properties window, rename this component to ProcessNumbersBackgroundWorker. Now, switch to the Event pane of the Properties window (by clicking the “lightning bolt” icon) and handle the DoWork and RunWorkerCompleted events by double-clicking each event name. This will result in the following new handlers added to your initial Form-derived type:

private void ProcessNumbersBackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
}

private void ProcessNumbersBackgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
}

The DoWork event handler represents the method that will be called by the BackgroundWorker on the secondary thread of execution. Notice that the second parameter of the handler is a DoWorkEventArgs type, which will contain any arguments required by the secondary thread to complete its work. As you’ll see in just a moment, when you call the RunWorkerAsync() method to spawn this thread, you have the option of passing in this related data (quite similar to working with the ParameterizedThreadStart delegate type used previously in this chapter).

The RunWorkerCompleted event represents the method that the BackgroundWorker will invoke once the background operation has completed. Using the RunWorkerCompletedEventArgs type, you are able to scrape out any return value of the asynchronous operation.

Processing Our Data with the BackgroundWorker Type

At this point, we can flesh out the details of processing the user input. Recall that when you wish to inform the BackgroundWorker type to spin up a secondary thread of execution, you must call RunWorkerAsync(). When you do so, you have the option of passing in a System.Object to represent any data to pass to the method invoked by the DoWork event. Here, we will reuse the AddParams class we created in the ParameterizedThreadStart example:

class AddParams
{
  public int a, b;

  public AddParams(int numb1, int numb2)
  {
    a = numb1;
    b = numb2;
  }
}

With this helper class in place, we are now able to implement the Click event handler of our Button type as follows:

private void btnProcessData_Click(object sender, EventArgs e)
{
  try
  {
    // First get the user data (as numerical).
    int numbOne = int.Parse(txtFirstNumber.Text);
    int numbTwo = int.Parse(this.txtSecondNumber.Text);
    AddParams args = new AddParams(numbOne, numbTwo);

    // Now spin up the new thread and pass the args variable.
    ProcessNumbersBackgroundWorker.RunWorkerAsync(args);
  }
  catch (Exception ex)
  {
    MessageBox.Show(ex.Message);
  }
}

As soon as you call RunWorkerAsync(), the DoWork event fires, which will be captured by your handler. Implement this handler to scrape out the AddParams object using the Argument property of the incoming DoWorkEventArgs. Again, to simulate a lengthy operation, we will put the current thread to sleep for five seconds. After this point, we will return the value using the Result property of the DoWorkEventArgs type:

private void ProcessNumbersBackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
  // Get the incoming AddParams object.
  AddParams args = (AddParams)e.Argument;

  // Artificial lag.
  System.Threading.Thread.Sleep(5000);

  // Return the value.
  e.Result = args.a + args.b;
}

Finally, once the BackgroundWorker type has exited the scope of the DoWork handler, the RunWorkerCompleted event will fire. Our registered handler will simply display the result of the operation using the RunWorkerCompletedEventArgs.Result property:

private void ProcessNumbersBackgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
  MessageBox.Show(e.Result.ToString(), "Your result is");
}

If you now run your application, you will find that while the data is being processed, the thread hosting the UI remains completely responsive (for example, the window can be resized, moved, minimized, etc.). To accentuate this point, you might add a new TextBox to the form and verify that you can enter data within the UI area while the five-second addition operation runs asynchronously in the background.

Source Code The WinFormsBackgroundWorkerThread project is included under the Chapter 18 subdirectory.

That wraps up our examination of multithreaded programming under .NET. To be sure, the System.Threading namespace defines numerous types beyond what I had the space to cover in this chapter. Nevertheless, at this point you should have a solid foundation to build on.


Summary

This chapter began by examining how .NET delegate types can be configured to execute a method in an asynchronous manner. As you have seen, the BeginInvoke() and EndInvoke() methods allow you to indirectly manipulate a background thread with minimum fuss and bother. During this discussion, you were also introduced to the IAsyncResult interface and AsyncResult class type. As you learned, these types provide various ways to synchronize the calling thread and obtain possible method return values.
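By way of a brief recap (a sketch only; the BinaryOp delegate and Add() method are hypothetical stand-ins for the chapter’s earlier examples), the asynchronous delegate pattern boils down to the following:

using System;

public delegate int BinaryOp(int x, int y);

class AsyncDelegateRecap
{
  static int Add(int x, int y)
  {
    System.Threading.Thread.Sleep(3000);  // Simulate a lengthy task.
    return x + y;
  }

  static void Main()
  {
    BinaryOp op = new BinaryOp(Add);

    // Start the call on a thread-pool thread.
    IAsyncResult ar = op.BeginInvoke(10, 10, null, null);

    // The calling thread is free to do other work in the meantime...
    Console.WriteLine("Working on the primary thread...");

    // Block until the call completes and harvest the return value.
    int answer = op.EndInvoke(ar);
    Console.WriteLine("10 + 10 = {0}", answer);
  }
}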

The remainder of this chapter examined the role of the System.Threading namespace. As you learned, when an application creates additional threads of execution, the result is that the program in question is able to carry out numerous tasks at (what appears to be) the same time. You also examined several manners in which you can protect thread-sensitive blocks of code to ensure that shared resources do not become unusable units of bogus data.
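For instance (again, only a recap sketch with hypothetical names), the C# lock keyword is one such way to protect a thread-sensitive block of code:

class Printer
{
  private object threadLock = new object();

  public void PrintNumbers()
  {
    // Only one thread at a time may enter this block, so the
    // shared console output cannot be interleaved by other threads.
    lock (threadLock)
    {
      for (int i = 0; i < 10; i++)
        System.Console.Write("{0}, ", i);
      System.Console.WriteLine();
    }
  }
}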

This chapter also pointed out that the CLR maintains an internal pool of threads for the purposes of performance and convenience. Last but not least, you examined the use of the BackgroundWorker type, which allows you to easily spin up new threads of execution within a GUI-based application.

CHAPTER 19

Understanding CIL and the Role of Dynamic Assemblies

The goal of this chapter is twofold. In the first half, you will have a chance to examine the syntax and semantics of the common intermediate language (CIL) in much greater detail than in previous chapters. Now, to be perfectly honest, you are able to live a happy and productive life as a .NET programmer without concerning yourself too much with the details of CIL code. However, once you learn the basics of CIL, you will gain a much deeper understanding of how some of the “magical” aspects of .NET (such as cross-language inheritance) actually work.

In the remainder of this chapter, you will examine the role of the System.Reflection.Emit namespace. Using these types, you are able to build software that is capable of generating .NET assemblies in memory at runtime. Formally speaking, assemblies defined and executed in memory are termed dynamic assemblies. As you might guess, this particular aspect of .NET development requires you to speak the language of CIL, given that you will be required to specify the CIL instruction set that will be used during the assembly’s construction.

Reflecting on the Nature of CIL Programming

CIL is the true mother tongue of the .NET platform. When you build a .NET assembly using your managed language of choice (C#, VB, COBOL.NET, etc.), the associated compiler translates your source code into terms of CIL. Like any programming language, CIL provides numerous structural and implementation-centric tokens. Given that CIL is just another .NET programming language, it should come as no surprise that it is possible to build your .NET assemblies directly using CIL and the CIL compiler (ilasm.exe) that ships with the .NET Framework 3.5 SDK.

Now while it is true that few programmers would choose to build an entire .NET application directly with CIL, CIL is still an extremely interesting intellectual pursuit. Simply put, the more you understand the grammar of CIL, the better able you are to move into the realm of advanced .NET development. By way of some concrete examples, individuals who possess an understanding of CIL are capable of the following:

Talking intelligently about how different .NET programming languages map their respective keywords to CIL tokens.

Disassembling an existing .NET assembly, editing the CIL code, and recompiling the updated code base into a modified .NET binary.

Building dynamic assemblies using the System.Reflection.Emit namespace.


Leveraging aspects of the CTS that are not supported by higher-level managed languages, but do exist at the level of CIL. To be sure, CIL is the only .NET language that allows you to access each and every aspect of the CTS. For example, using raw CIL, you are able to define global-level members and fields (which are not permissible in C#).

Again, to be perfectly clear, if you choose not to concern yourself with the details of CIL code, you are absolutely able to gain mastery of C# and the .NET base class libraries. In many ways, knowledge of CIL is analogous to a C(++) programmer’s understanding of assembly language. Those who know the ins and outs of the low-level “goo” are able to create rather advanced solutions for the task at hand and gain a deeper understanding of the underlying programming (and runtime) environment. So, if you are up for the challenge, let’s begin to examine the details of CIL.

Note Understand that this chapter is not intended to be a comprehensive treatment of the syntax and semantics of CIL. If you require a full examination of the topic, check out CIL Programming: Under the Hood of .NET by Jason Bock (Apress, 2002).

Examining CIL Directives, Attributes, and Opcodes

When you begin to investigate low-level languages such as CIL, you are guaranteed to find new (and often intimidating-sounding) names for very familiar concepts. For example, at this point in the text, if you were shown the following set of items:

{new, public, this, base, get, set, explicit, unsafe, enum, operator, partial}

you would most certainly understand them to be keywords of the C# language (which is correct). However, if you look more closely at the members of this set, you may be able to see that while each item is indeed a C# keyword, it has radically different semantics. For example, the enum keyword defines a System.Enum-derived type, while the this and base keywords allow you to reference the current object or the object’s parent class, respectively. The unsafe keyword is used to establish a block of code that cannot be directly monitored by the CLR, while the operator keyword allows you to build a hidden (specially named) method that will be called when you apply a specific C# operator (such as the plus sign).
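As a quick illustration of that last point (a minimal sketch; the Point type is hypothetical), the operator keyword below causes the compiler to emit a specially named static method (op_Addition in CIL terms) that is invoked whenever + is applied to two Point values:

public struct Point
{
  public int X, Y;

  public Point(int x, int y)
  {
    X = x;
    Y = y;
  }

  // The operator keyword builds a hidden, specially named static method
  // (emitted in CIL as op_Addition) that is called when + is applied.
  public static Point operator +(Point a, Point b)
  {
    return new Point(a.X + b.X, a.Y + b.Y);
  }
}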

In stark contrast to a higher-level language such as C#, CIL does not simply define a generic set of keywords. Rather, the token set understood by the CIL compiler is subdivided into three broad categories based on semantics:

CIL directives

CIL attributes

CIL operation codes (opcodes)

Each category of CIL token is expressed using a particular syntax, and the tokens are combined to build a valid .NET assembly.

The Role of CIL Directives

First up, we have a set of well-known CIL tokens that are used to describe the overall structure of a .NET assembly. These tokens are called directives. CIL directives are used to inform the CIL compiler how to define the namespace(s), type(s), and member(s) that will populate an assembly.

Directives are represented syntactically using a single dot (.) prefix (e.g., .namespace, .class, .publickeytoken, .method, .assembly, etc.). Thus, if your *.il file (the conventional extension for a file containing CIL code) has a single .namespace directive and three .class directives, the CIL compiler will generate an assembly that defines a single .NET namespace containing three .NET class types.

The Role of CIL Attributes

In many cases, CIL directives in and of themselves are not descriptive enough to fully express the definition of a given .NET type or type member. Given this fact, many CIL directives can be further specified with various CIL attributes to qualify how a directive should be processed. For example, the .class directive can be adorned with the public attribute (to establish the type visibility), the extends attribute (to explicitly specify the type’s base class), and the implements attribute (to list the set of interfaces supported by the type).

The Role of CIL Opcodes

Once a .NET assembly, namespace, and type set have been defined in terms of CIL using various directives and related attributes, the final remaining task is to provide the type’s implementation logic. This is a job for operation codes, or simply opcodes. In the tradition of other low-level languages, many CIL opcodes tend to be cryptic and completely unpronounceable by us mere humans. For example, if you need to define a string variable, you don’t use a friendly opcode named LoadString, but rather ldstr.

Now, to be fair, some CIL opcodes do map quite naturally to their C# counterparts (e.g., box, unbox, throw, and sizeof). As you will see, the opcodes of CIL are always used within the scope of a member’s implementation, and unlike CIL directives, they are never written with a dot prefix.

The CIL Opcode/CIL Mnemonic Distinction

As just explained, opcodes such as ldstr are used to implement the members of a given type. In reality, however, tokens such as ldstr are CIL mnemonics for the actual binary CIL opcodes. To clarify the distinction, assume you have authored the following method in C#:

static int Add(int x, int y)
{
  return x + y;
}

The act of adding two numbers is expressed in terms of the CIL opcode 0x58. In a similar vein, subtracting two numbers is expressed using the opcode 0x59, and the act of allocating a new object on the managed heap is achieved using the 0x73 opcode. Given this reality, understand that the “CIL code” processed by a JIT compiler is actually nothing more than blobs of binary data.

Thankfully, for each binary opcode of CIL, there is a corresponding mnemonic. For example, the add mnemonic can be used rather than 0x58, sub rather than 0x59, and newobj rather than 0x73. Given this opcode/mnemonic distinction, realize that CIL decompilers such as ildasm.exe translate an assembly’s binary opcodes into their corresponding CIL mnemonics. For example, here would be the CIL presented by ildasm.exe for the previous C# Add() method:

.method private hidebysig static int32 Add(int32 x, int32 y) cil managed
{
  // Code size       9 (0x9)
  .maxstack 2
  .locals init ([0] int32 CS$1$0000)
  IL_0000:  nop
  IL_0001:  ldarg.0
  IL_0002:  ldarg.1
  IL_0003:  add
  IL_0004:  stloc.0
  IL_0005:  br.s       IL_0007
  IL_0007:  ldloc.0
  IL_0008:  ret
} // end of method MathStuff::Add

Unless you’re building some extremely low-level .NET software (such as a custom managed compiler), you’ll never need to concern yourself with the literal numeric binary opcodes of CIL. For all practical purposes, when .NET programmers speak about “CIL opcodes” they’re referring to the set of friendly string token mnemonics (as I’ve done within this text, and will do for the remainder of this chapter) rather than the underlying numerical values.

Pushing and Popping: The Stack-Based Nature of CIL

Higher-level .NET languages (such as C#) attempt to hide low-level CIL grunge from view as much as possible. One aspect of .NET development that is particularly well hidden is the fact that CIL is a stack-based programming language. Recall from our examination of the collection namespaces (see Chapter 10) that the System.Collections.Stack type can be used to push a value onto a stack as well as pop the topmost value off of the stack for use. Of course, CIL developers do not literally use an object of type System.Collections.Stack to load and unload the values to be evaluated; however, the same pushing and popping mind-set still applies.

Formally speaking, the entity used to hold a set of values to be evaluated is termed the virtual execution stack. As you will see, CIL provides a number of opcodes that are used to push a value onto the stack; this process is termed loading. As well, CIL defines a number of additional opcodes that transfer the topmost value on the stack into memory (such as a local variable) using a process termed storing.

In the world of CIL, it is impossible to access a point of data directly, including locally defined variables, incoming method arguments, or field data of a type. Rather, you are required to explicitly load the item onto the stack, only to then pop it off for later use (keep this point in mind, as it will help explain why a given block of CIL code can look a bit redundant).

Note Recall that CIL is not directly executed, but compiled on demand. During the compilation of CIL code, many of these implementation redundancies are optimized away. Furthermore, if you enable the code optimization option for your current project (using the Build tab of the Visual Studio Project Properties window), the compiler will also remove various CIL redundancies.

To understand how CIL leverages a stack-based processing model, consider a simple C# method, PrintMessage(), which takes no arguments and returns void. Within the implementation of this method, you will simply print out the value of a local string variable to the standard output stream:

public void PrintMessage()
{
  string myMessage = "Hello.";
  Console.WriteLine(myMessage);
}


If you were to examine how the C# compiler translates this method in terms of CIL, you would first find that the PrintMessage() method defines a storage slot for a local variable using the .locals directive. The local string is then loaded and stored in this local variable using the ldstr (load string) and stloc.0 opcodes (which can be read as “store the current value in a local variable at index zero”).

The value (again, at index 0) is then loaded back onto the stack using the ldloc.0 (“load the local variable at index 0”) opcode for use by the System.Console.WriteLine() method invocation (specified using the call opcode). Finally, the function returns via the ret opcode. Here is the (annotated) CIL code for the PrintMessage() method:

.method public hidebysig instance void PrintMessage() cil managed
{
  .maxstack 1

  // Define a local string variable (at index 0).
  .locals init ([0] string myMessage)

  // Load a string onto the stack with the value "Hello."
  ldstr "Hello."

  // Store the string value on the stack in the local variable.
  stloc.0

  // Load the value at index 0.
  ldloc.0

  // Call method with current value.
  call void [mscorlib]System.Console::WriteLine(string)
  ret
}

Note As you can see, CIL supports code comments using the double-slash syntax (as well as the /*...*/ syntax, for that matter). As in C#, code comments are completely ignored by the CIL compiler.

Now that you have the basics of CIL in your mind, let’s see a practical use of CIL programming, beginning with the topic of “round-trip engineering.”

Understanding Round-Trip Engineering

You are aware of how to use ildasm.exe to view the CIL code generated by the C# compiler (see Chapter 1). What you may not know, however, is that ildasm.exe allows you to dump the CIL contained within an assembly loaded into ildasm.exe to an external file. Once you have the CIL code at your disposal, you are free to edit and recompile the code base using the CIL compiler, ilasm.exe.

Note Also recall that reflector.exe can be used to view the CIL code of a given assembly, as well as to translate the CIL code into an approximate C# code base. However, if an assembly contains CIL constructs that do not have a C# equivalent, you will need to fall back on the use of ildasm.exe.

Formally speaking, this technique is termed round-trip engineering, and it can be useful under a number of circumstances:

You need to modify an assembly for which you no longer have the source code.

You are working with a less-than-perfect .NET language compiler that has emitted ineffective (or flat-out incorrect) CIL code, and you wish to modify the code base.