
ADVANCED TOPICS

35. Algorithm Machines

The algorithms that we have studied are, for the most part, remarkably robust in their applicability. Most of the methods that we have seen are a decade or more old and have survived many quite radical changes in computer hardware and software. New hardware designs and new software capabilities certainly can have a significant impact on specific algorithms, but good algorithms on old machines are, for the most part, good algorithms on new machines.

One reason for this is that the fundamental design of “conventional” computers has changed little over the years. The design of the vast majority of computing systems is guided by the same underlying principle, which was developed by the mathematician J. von Neumann in the early days of modern computing. When we speak of the von Neumann model of computation, we refer to a view of computing in which instructions and data are stored in the same memory and a single processor fetches instructions from the memory and executes them (perhaps operating on the data), one by one. Elaborate mechanisms have been developed to make computers cheaper, faster, smaller (physically), and larger (logically), but the architecture of most computer systems can be viewed as variations on the von Neumann theme.
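To make the model concrete, here is a toy fetch-execute loop in C. Everything about it is invented for illustration (the opcodes, the word encoding, the memory size); the point is only that instructions and data sit in the same memory and a single processor consumes one instruction per step.

    /* Toy von Neumann machine: instructions and data share one memory.
       Invented encoding: each word packs (opcode, address) as op*1000+addr.
       Opcodes: 0 = halt, 1 = load, 2 = add, 3 = store. */
    #include <stdio.h>

    int main(void) {
        int mem[8] = {
            1005,       /* 0: load  mem[5] into the accumulator */
            2006,       /* 1: add   mem[6]                      */
            3007,       /* 2: store accumulator into mem[7]     */
            0,          /* 3: halt                              */
            0,
            17, 25, 0   /* 5, 6: data; 7: result                */
        };
        int pc = 0, acc = 0;   /* one program counter, one accumulator */
        for (;;) {             /* the fetch-execute cycle, one instruction at a time */
            int inst = mem[pc++];
            int op = inst / 1000, addr = inst % 1000;
            if (op == 0) break;
            if (op == 1) acc = mem[addr];
            if (op == 2) acc += mem[addr];
            if (op == 3) mem[addr] = acc;
        }
        printf("mem[7] = %d\n", mem[7]);   /* prints 42 */
        return 0;
    }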

Recently, however, radical changes in the cost of computing components have made it plausible to consider radically different types of machines, ones in which a large number of instructions can be executed at each time instant or in which the instructions are “wired in” to make special-purpose machines capable of solving only one problem or in which a large number of smaller machines can cooperate to solve the same problem. In short, rather than having a machine execute just one instruction at each time instant, we can think about having a large number of actions being performed simultaneously.

In this chapter, we shall consider the potential effect of such ideas on some of the problems and algorithms we have been considering.


General Approaches

Certain fundamental algorithms are used so frequently or for such large problems that there is always pressure to run them on bigger and faster computers. One result of this has been a series of “supercomputers” which embody the latest technology; they make some concessions to the fundamental von Neumann concept but still are designed to be general-purpose and useful for all programs. The common approach to using such a machine for the type of problem we have been studying is to start with the algorithms that are best on conventional machines and adapt them to the particular features of the new machine. This approach encourages the persistence of old algorithms and old architecture in new machines.

Microprocessors with significant computing capabilities have become quite inexpensive. An obvious approach is to try to use a large number of processors together to solve a large problem. Some algorithms can adapt well to being “distributed” in this way; others simply are not appropriate for this kind of implementation.
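Summing N numbers is an example that distributes well: each of M processors can total N/M of the numbers independently, and the partial results are combined at the end. The sketch below uses POSIX threads to stand in for the M processors; the values N = 1000 and M = 4 are arbitrary choices for illustration. By contrast, a computation in which each step needs the result of the previous one offers no such independent pieces to hand out.

    /* Sketch: a sum distributed over M workers (threads standing in
       for processors); each worker totals its own slice of the array. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000
    #define M 4                      /* number of "processors" */

    static int a[N];
    static long part[M];             /* one partial sum per worker */

    static void *worker(void *arg) {
        int id = *(int *)arg;        /* which slice is ours */
        long s = 0;
        for (int i = id * (N / M); i < (id + 1) * (N / M); i++)
            s += a[i];
        part[id] = s;
        return NULL;
    }

    int main(void) {
        pthread_t t[M];
        int id[M];
        long total = 0;
        for (int i = 0; i < N; i++) a[i] = i + 1;   /* 1 + 2 + ... + N */
        for (int i = 0; i < M; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, worker, &id[i]);
        }
        for (int i = 0; i < M; i++) {
            pthread_join(t[i], NULL);
            total += part[i];        /* combine the partial results */
        }
        printf("sum = %ld\n", total);   /* 500500 */
        return 0;
    }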

The development of inexpensive, relatively powerful processors has been accompanied by the appearance of general-purpose tools for designing and building new processors. This has led to increased activity in the development of special-purpose machines for particular problems. If no machine is particularly well-suited to execute some important algorithm, then we can design and build one that is! For many problems, an appropriate machine can be designed and built that fits on one (very-large-scale) integrated circuit chip.

A common thread in all of these approaches is parallelism: we try to have as many different things as possible happening at any instant. This can lead to chaos if it is not done in an orderly manner. Below, we’ll consider two examples which illustrate some techniques for achieving a high degree of parallelism for some specific classes of problems. The idea is to assume that we have not just one but M processors on which our program can run. Thus, if things work out well, we can hope to have our program run M times faster than before.
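The M-fold figure is a best case. A standard accounting (Amdahl's law, not part of this chapter's development) makes the limit precise: if a fraction s of the program must run sequentially, then M processors need at least time s + (1 - s)/M relative to one processor, so the speedup can be at most

    1 / (s + (1 - s)/M)  <=  1/s.

For example, with M = 10 and s = 0.1, the speedup is bounded by about 5.3, not 10.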

There are several immediate problems involved in getting M processors to work together to solve the same problem. The most important is that they must communicate in some way: there must be wires interconnecting them and specific mechanisms for sending data back and forth along those wires. Furthermore, there are physical limitations on the type of interconnection allowed. For example, suppose that our "processors" are integrated circuit chips (these can now contain more circuitry than small computers of the past) which have, say, 32 pins to be used for interconnection. Even if we had 1000 such processors, we could connect each to at most 32 others. The choice of how to interconnect the processors is fundamental in parallel computing.
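One classical way to respect such a pin budget (offered here only as a generic illustration, not as the interconnection pattern this chapter develops) is the hypercube: P = 2^d processors are wired so that two are connected exactly when their numbers differ in one bit. Each of 1024 processors then needs only 10 connections, comfortably under a 32-pin limit, yet any processor can reach any other in at most 10 hops by fixing one differing bit per hop.

    /* Sketch: a bounded-degree interconnection.  P = 2^d processors
       form a d-dimensional hypercube; i and j are wired together
       exactly when i ^ j has a single one-bit. */
    #include <stdio.h>

    #define D 10            /* dimension */
    #define P (1 << D)      /* 1024 processors */

    int main(void) {
        int i = 37;         /* any processor number, 0 .. P-1 */
        printf("processor %d connects to:", i);
        for (int b = 0; b < D; b++)       /* one neighbor per bit position */
            printf(" %d", i ^ (1 << b));
        printf("\n");       /* exactly D = 10 neighbors */
        return 0;
    }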
