Thomas H. Cormen
Charles E. Leiserson
Ronald L. Rivest
Clifford Stein
Introduction to Algorithms
Third Edition
The MIT Press
Cambridge, Massachusetts London, England
27 Multithreaded Algorithms
The vast majority of algorithms in this book are serial algorithms suitable for
running on a uniprocessor computer in which only one instruction executes at a
time. In this chapter, we shall extend our algorithmic model to encompass parallel
algorithms, which can run on a multiprocessor computer that permits multiple
instructions to execute concurrently. In particular, we shall explore the elegant
model of dynamic multithreaded algorithms, which are amenable to algorithmic
design and analysis, as well as to efficient implementation in practice.
Parallel computers—computers with multiple processing units—have become
increasingly common, and they span a wide range of prices and performance. Rela-
tively inexpensive desktop and laptop chip multiprocessors contain a single multi-
core integrated-circuit chip that houses multiple processing “cores,” each of which
is a full-fledged processor that can access a common memory. At an intermedi-
ate price/performance point are clusters built from individual computers—often
simple PC-class machines—with a dedicated network interconnecting them. The
highest-priced machines are supercomputers, which often use a combination of
custom architectures and custom networks to deliver the highest performance in
terms of instructions executed per second.
Multiprocessor computers have been around, in one form or another, for
decades. Although the computing community settled on the random-access ma-
chine model for serial computing early on in the history of computer science, no
single model for parallel computing has gained as wide acceptance. A major rea-
son is that vendors have not agreed on a single architectural model for parallel
computers. For example, some parallel computers feature shared memory, where
each processor can directly access any location of memory. Other parallel com-
puters employ distributed memory, where each processor’s memory is private, and
an explicit message must be sent between processors in order for one processor to
access the memory of another. With the advent of multicore technology, however,
every new laptop and desktop machine is now a shared-memory parallel computer,
and the trend appears to be toward shared-memory multiprocessing. Although time
will tell, that is the approach we shall take in this chapter.
One common means of programming chip multiprocessors and other shared-
memory parallel computers is by using static threading, which provides a software
abstraction of “virtual processors,” or threads, sharing a common memory. Each
thread maintains an associated program counter and can execute code indepen-
dently of the other threads. The operating system loads a thread onto a processor
for execution and switches it out when another thread needs to run. Although the
operating system allows programmers to create and destroy threads, these opera-
tions are comparatively slow. Thus, for most applications, threads persist for the
duration of a computation, which is why we call them “static.”
Unfortunately, programming a shared-memory parallel computer directly using
static threads is difficult and error-prone. One reason is that dynamically parti-
tioning the work among the threads so that each thread receives approximately
the same load turns out to be a complicated undertaking. For any but the sim-
plest of applications, the programmer must use complex communication protocols
to implement a scheduler to load-balance the work. This state of affairs has led
toward the creation of concurrency platforms, which provide a layer of software
that coordinates, schedules, and manages the parallel-computing resources. Some
concurrency platforms are built as runtime libraries, but others provide full-fledged
parallel languages with compiler and runtime support.
Dynamic multithreaded programming
One important class of concurrency platform is dynamic multithreading, which is
the model we shall adopt in this chapter. Dynamic multithreading allows program-
mers to specify parallelism in applications without worrying about communication
protocols, load balancing, and other vagaries of static-thread programming. The
concurrency platform contains a scheduler, which load-balances the computation
automatically, thereby greatly simplifying the programmer’s chore. Although the
functionality of dynamic-multithreading environments is still evolving, almost all
support two features: nested parallelism and parallel loops. Nested parallelism
allows a subroutine to be “spawned,” allowing the caller to proceed while the
spawned subroutine is computing its result. A parallel loop is like an ordinary
for loop, except that the iterations of the loop can execute concurrently.
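As an illustration (ours, not the book's), a parallel loop can be mimicked in Python by mapping the loop body over the input with a thread pool; this sketches the semantics only, since CPython threads do not actually speed up CPU-bound work:

```python
from concurrent.futures import ThreadPoolExecutor

def square_all(a):
    """Parallel-loop sketch: each iteration may run concurrently,
    just as the iterations of a parallel for loop may."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda x: x * x, a))

print(square_all([1, 2, 3, 4]))  # [1, 4, 9, 16]
```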
These two features form the basis of the model for dynamic multithreading that
we shall study in this chapter. A key aspect of this model is that the programmer
needs to specify only the logical parallelism within a computation, and the threads
within the underlying concurrency platform schedule and load-balance the compu-
tation among themselves. We shall investigate multithreaded algorithms written for
this model, as well as how the underlying concurrency platform can schedule
computations efficiently.
Our model for dynamic multithreading offers several important advantages:
• It is a simple extension of our serial programming model. We can describe a
multithreaded algorithm by adding to our pseudocode just three “concurrency”
keywords: parallel, spawn, and sync. Moreover, if we delete these concur-
rency keywords from the multithreaded pseudocode, the resulting text is serial
pseudocode for the same problem, which we call the “serialization” of the mul-
tithreaded algorithm.
• It provides a theoretically clean way to quantify parallelism based on the no-
tions of “work” and “span.”
• Many multithreaded algorithms involving nested parallelism follow naturally
from the divide-and-conquer paradigm. Moreover, just as serial divide-and-
conquer algorithms lend themselves to analysis by solving recurrences, so do
multithreaded algorithms.
• The model is faithful to how parallel-computing practice is evolving. A grow-
ing number of concurrency platforms support one variant or another of dynamic
multithreading, including Cilk [51, 118], Cilk++ [72], OpenMP [60], Task Par-
allel Library [230], and Threading Building Blocks [292].
Section 27.1 introduces the dynamic multithreading model and presents the met-
rics of work, span, and parallelism, which we shall use to analyze multithreaded
algorithms. Section 27.2 investigates how to multiply matrices with multithread-
ing, and Section 27.3 tackles the tougher problem of multithreading merge sort.
27.1 The basics of dynamic multithreading
We shall begin our exploration of dynamic multithreading using the example of
computing Fibonacci numbers recursively. Recall that the Fibonacci numbers are
defined by recurrence (3.22):
F_0 = 0 ,
F_1 = 1 ,
F_i = F_{i−1} + F_{i−2}   for i ≥ 2 .
Here is a simple, recursive, serial algorithm to compute the nth Fibonacci number:
[Figure 27.1 here depicts the tree of recursive procedure instances for FIB(6): FIB(6) at the root calls FIB(5) and FIB(4), and so on down to the leaves, which are instances of FIB(1) and FIB(0).]

Figure 27.1 The tree of recursive procedure instances when computing FIB(6). Each instance of FIB with the same argument does the same work to produce the same result, providing an inefficient but interesting way to compute Fibonacci numbers.
FIB(n)
1  if n ≤ 1
2      return n
3  else x = FIB(n − 1)
4      y = FIB(n − 2)
5      return x + y
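The pseudocode translates line for line into a runnable sketch; here in Python (our choice of language, since the book gives only pseudocode):

```python
def fib(n):
    """Serial recursive Fibonacci, mirroring FIB(n)."""
    if n <= 1:          # lines 1-2: base case
        return n
    x = fib(n - 1)      # line 3
    y = fib(n - 2)      # line 4
    return x + y        # line 5

print(fib(6))  # 8
```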
You would not really want to compute large Fibonacci numbers this way, be-
cause this computation does much repeated work. Figure 27.1 shows the tree of
recursive procedure instances that are created when computing F_6. For example,
a call to FIB(6) recursively calls FIB(5) and then FIB(4). But the call to FIB(5)
also results in a call to FIB(4). Both instances of FIB(4) return the same result
(F_4 = 3). Since the FIB procedure does not memoize, the second call to FIB(4)
replicates the work that the first call performs.
Let T(n) denote the running time of FIB(n). Since FIB(n) contains two recursive calls plus a constant amount of extra work, we obtain the recurrence

T(n) = T(n − 1) + T(n − 2) + Θ(1) .

This recurrence has solution T(n) = Θ(F_n), which we can show using the substitution method. For an inductive hypothesis, assume that T(n) ≤ aF_n − b, where a > 1 and b > 0 are constants. Substituting, we obtain
T(n) ≤ (aF_{n−1} − b) + (aF_{n−2} − b) + Θ(1)
     = a(F_{n−1} + F_{n−2}) − 2b + Θ(1)
     = aF_n − b − (b − Θ(1))
     ≤ aF_n − b

if we choose b large enough to dominate the constant in the Θ(1). We can then choose a large enough to satisfy the initial condition. The analytical bound

T(n) = Θ(φ^n) ,   (27.1)

where φ = (1 + √5)/2 is the golden ratio, now follows from equation (3.25). Since F_n grows exponentially in n, this procedure is a particularly slow way to compute Fibonacci numbers. (See Problem 31-3 for much faster ways.)
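The Θ(F_n) bound can be checked empirically by instrumenting FIB to count procedure instances. The count works out to exactly 2F_{n+1} − 1 (an observation of ours, provable by induction on the call-count recurrence C(n) = C(n−1) + C(n−2) + 1), which grows in lockstep with F_n:

```python
def fib_count(n):
    """Return (F_n, number of FIB instances invoked)."""
    if n <= 1:
        return n, 1
    x, cx = fib_count(n - 1)
    y, cy = fib_count(n - 2)
    return x + y, cx + cy + 1

def fib_iter(n):
    """Linear-time Fibonacci, used only to check the count."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The number of calls is exactly 2*F_{n+1} - 1, consistent with T(n) = Theta(F_n).
for n in range(20):
    _, calls = fib_count(n)
    assert calls == 2 * fib_iter(n + 1) - 1
```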
Although the FIB procedure is a poor way to compute Fibonacci numbers, it
makes a good example for illustrating key concepts in the analysis of multithreaded
algorithms. Observe that within FIB(n), the two recursive calls in lines 3 and 4 to
FIB(n − 1) and FIB(n − 2), respectively, are independent of each other: they could
be called in either order, and the computation performed by one in no way affects
the other. Therefore, the two recursive calls can run in parallel.
We augment our pseudocode to indicate parallelism by adding the concurrency
keywords spawn and sync. Here is how we can rewrite the FIB procedure to use
dynamic multithreading:
P-FIB(n)
1  if n ≤ 1
2      return n
3  else x = spawn P-FIB(n − 1)
4      y = P-FIB(n − 2)
5      sync
6      return x + y
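The spawn and sync keywords can be mimicked in Python with a thread pool, where submit plays the role of spawn and Future.result the role of sync. This is a sketch of the semantics only (our construction, not a real concurrency platform): CPython threads give no actual speedup here, and we stop spawning below a small depth so that the bounded pool cannot deadlock on nested blocking tasks, a restriction that work-stealing schedulers such as Cilk's do not need:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_SPAWN_DEPTH = 2                       # spawn only near the top of the tree
pool = ThreadPoolExecutor(max_workers=8)

def p_fib(n, depth=0):
    if n <= 1:
        return n
    if depth < MAX_SPAWN_DEPTH:
        fx = pool.submit(p_fib, n - 1, depth + 1)  # "spawn": parent continues
        y = p_fib(n - 2, depth + 1)                # runs alongside the child
        x = fx.result()                            # "sync": wait for the child
    else:
        x = p_fib(n - 1, depth + 1)                # serial below the cutoff
        y = p_fib(n - 2, depth + 1)
    return x + y

print(p_fib(10))  # 55
```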
Notice that if we delete the concurrency keywords spawn and sync from P-FIB,
the resulting pseudocode text is identical to FIB (other than renaming the procedure
in the header and in the two recursive calls). We define the serialization of a mul-
tithreaded algorithm to be the serial algorithm that results from deleting the multi-
threaded keywords: spawn, sync, and when we examine parallel loops, parallel.
Indeed, our multithreaded pseudocode has the nice property that a serialization is
always ordinary serial pseudocode to solve the same problem.
Nested parallelism occurs when the keyword spawn precedes a procedure call,
as in line 3. The semantics of a spawn differs from an ordinary procedure call in
that the procedure instance that executes the spawn—the parent—may continue
to execute in parallel with the spawned subroutine—its child—instead of waiting
for the child to complete, as would normally happen in a serial execution. In this
case, while the spawned child is computing P-FIB(n − 1), the parent may go on
to compute P-FIB(n − 2) in line 4 in parallel with the spawned child. Since the
P-FIB procedure is recursive, these two subroutine calls themselves create nested
parallelism, as do their children, thereby creating a potentially vast tree of subcom-
putations, all executing in parallel.
The keyword spawn does not say, however, that a procedure must execute con-
currently with its spawned children, only that it may. The concurrency keywords
express the logical parallelism of the computation, indicating which parts of the
computation may proceed in parallel. At runtime, it is up to a scheduler to deter-
mine which subcomputations actually run concurrently by assigning them to avail-
able processors as the computation unfolds. We shall discuss the theory behind
schedulers shortly.
A procedure cannot safely use the values returned by its spawned children until
after it executes a sync statement, as in line 5. The keyword sync indicates that
the procedure must wait as necessary for all its spawned children to complete be-
fore proceeding to the statement after the sync. In the P-FIB procedure, a sync
is required before the return statement in line 6 to avoid the anomaly that would
occur if x and y were summed before x was computed. In addition to explicit
synchronization provided by the sync statement, every procedure executes a sync
implicitly before it returns, thus ensuring that all its children terminate before it
does.
A model for multithreaded execution
It helps to think of a multithreaded computation—the set of runtime instruc-
tions executed by a processor on behalf of a multithreaded program—as a directed
acyclic graph G = (V, E), called a computation dag. As an example, Figure 27.2
shows the computation dag that results from computing P-FIB(4). Conceptually,
the vertices in V are instructions, and the edges in E represent dependencies be-
tween instructions, where (u, v) ∈ E means that instruction u must execute before
instruction v. For convenience, however, if a chain of instructions contains no
parallel control (no spawn, sync, or return from a spawn—via either an explicit
return statement or the return that happens implicitly upon reaching the end of
a procedure), we may group them into a single strand, each of which represents
one or more instructions. Instructions involving parallel control are not included
in strands, but are represented in the structure of the dag. For example, if a strand
has two successors, one of them must have been spawned, and a strand with mul-
tiple predecessors indicates the predecessors joined because of a sync statement.
Thus, in the general case, the set V forms the set of strands, and the set E of di-
rected edges represents dependencies between strands induced by parallel control.
[Figure 27.2 here depicts the computation dag for P-FIB(4), whose procedure instances are P-FIB(4), P-FIB(3), two instances of P-FIB(2), three of P-FIB(1), and two of P-FIB(0).]

Figure 27.2 A directed acyclic graph representing the computation of P-FIB(4). Each circle represents one strand, with black circles representing either base cases or the part of the procedure (instance) up to the spawn of P-FIB(n − 1) in line 3, shaded circles representing the part of the procedure that calls P-FIB(n − 2) in line 4 up to the sync in line 5, where it suspends until the spawn of P-FIB(n − 1) returns, and white circles representing the part of the procedure after the sync where it sums x and y up to the point where it returns the result. Each group of strands belonging to the same procedure is surrounded by a rounded rectangle, lightly shaded for spawned procedures and heavily shaded for called procedures. Spawn edges and call edges point downward, continuation edges point horizontally to the right, and return edges point upward. Assuming that each strand takes unit time, the work equals 17 time units, since there are 17 strands, and the span is 8 time units, since the critical path—shown with shaded edges—contains 8 strands.
If G has a directed path from strand u to strand v, we say that the two strands are
(logically) in series. Otherwise, strands u and v are (logically) in parallel.
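In a dag encoded as an adjacency structure, deciding whether two strands are in series is just a reachability test in each direction; a small sketch (the dag encoding and names are ours):

```python
def reachable(succ, u, v):
    """True if the dag (strand -> successor strands) has a path from u to v."""
    stack, seen = [u], set()
    while stack:
        w = stack.pop()
        if w == v:
            return True
        if w not in seen:
            seen.add(w)
            stack.extend(succ.get(w, ()))
    return False

def in_series(succ, u, v):
    """Strands are logically in series iff one reaches the other."""
    return reachable(succ, u, v) or reachable(succ, v, u)

# a spawns b and continues to c; b and c join at d after a sync.
succ = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
assert in_series(succ, 'a', 'd')      # path a -> b -> d
assert not in_series(succ, 'b', 'c')  # b and c are logically in parallel
```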
We can picture a multithreaded computation as a dag of strands embedded in a
tree of procedure instances. For example, Figure 27.1 shows the tree of procedure
instances for P-FIB(6) without the detailed structure showing strands. Figure 27.2
zooms in on a section of that tree, showing the strands that constitute each proce-
dure. All directed edges connecting strands run either within a procedure or along
undirected edges in the procedure tree.
We can classify the edges of a computation dag to indicate the kind of dependencies
between the various strands. A continuation edge (u, u′), drawn horizontally
in Figure 27.2, connects a strand u to its successor u′ within the same procedure
instance. When a strand u spawns a strand v, the dag contains a spawn edge (u, v),
which points downward in the figure. Call edges, representing normal procedure
calls, also point downward. Strand u spawning strand v differs from u calling v
in that a spawn induces a horizontal continuation edge from u to the strand u′
following u in its procedure, indicating that u′ is free to execute at the same time
as v, whereas a call induces no such edge. When a strand u returns to its calling
procedure and x is the strand immediately following the next sync in the calling
procedure, the computation dag contains return edge (u, x), which points upward.
A computation starts with a single initial strand—the black vertex in the procedure
labeled P-FIB(4) in Figure 27.2—and ends with a single final strand—the white
vertex in the procedure labeled P-FIB(4).
We shall study the execution of multithreaded algorithms on an ideal paral-
lel computer, which consists of a set of processors and a sequentially consistent
shared memory. Sequential consistency means that the shared memory, which may
in reality be performing many loads and stores from the processors at the same
time, produces the same results as if at each step, exactly one instruction from one
of the processors is executed. That is, the memory behaves as if the instructions
were executed sequentially according to some global linear order that preserves the
individual orders in which each processor issues its own instructions. For dynamic
multithreaded computations, which are scheduled onto processors automatically
by the concurrency platform, the shared memory behaves as if the multithreaded
computation’s instructions were interleaved to produce a linear order that preserves
the partial order of the computation dag. Depending on scheduling, the ordering
could differ from one run of the program to another, but the behavior of any exe-
cution can be understood by assuming that the instructions are executed in some
linear order consistent with the computation dag.
In addition to making assumptions about semantics, the ideal-parallel-computer
model makes some performance assumptions. Specifically, it assumes that each
processor in the machine has equal computing power, and it ignores the cost of
scheduling. Although this last assumption may sound optimistic, it turns out that
for algorithms with sufficient “parallelism” (a term we shall define precisely in a
moment), the overhead of scheduling is generally minimal in practice.
Performance measures
We can gauge the theoretical efficiency of a multithreaded algorithm by using two
metrics: “work” and “span.” The work of a multithreaded computation is the total
time to execute the entire computation on one processor. In other words, the work
is the sum of the times taken by each of the strands. For a computation dag in
which each strand takes unit time, the work is just the number of vertices in the
dag. The span is the longest time to execute the strands along any path in the dag.
Again, for a dag in which each strand takes unit time, the span equals the number of
vertices on a longest or critical path in the dag. (Recall from Section 24.2 that we
can find a critical path in a dag G = (V, E) in Θ(V + E) time.) For example, the
computation dag of Figure 27.2 has 17 vertices in all and 8 vertices on its critical
path, so that if each strand takes unit time, its work is 17 time units and its span
is 8 time units.
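For a dag whose strands all take unit time, computing both metrics is a single pass over a topological order: work is the vertex count, and span is the largest number of vertices on any path. A sketch (the dag encoding is ours; graphlib requires Python 3.9+):

```python
from graphlib import TopologicalSorter

def work_and_span(succ):
    """succ maps each unit-time strand to its successor strands.
    Returns (work, span) = (#vertices, #vertices on a critical path)."""
    pred = {v: [] for v in succ}
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)
    longest = {}  # longest[v] = vertices on the longest path ending at v
    for v in TopologicalSorter(pred).static_order():
        longest[v] = 1 + max((longest[u] for u in pred[v]), default=0)
    return len(succ), max(longest.values())

# Diamond-shaped dag: 4 strands, critical path a -> b -> d of 3 strands.
succ = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(work_and_span(succ))  # (4, 3)
```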
The actual running time of a multithreaded computation depends not only on
its work and its span, but also on how many processors are available and how
the scheduler allocates strands to processors. To denote the running time of a
multithreaded computation on P processors, we shall subscript by P. For example,
we might denote the running time of an algorithm on P processors by T_P. The
work is the runnin