MPI_PACK_SIZE(count, datatype, comm, size), with the count, datatype and comm arguments used in the MPI_BSEND call, returns an upper bound on the amount of space needed to buffer the message data (see Section 4.2). The MPI constant MPI_BSEND_OVERHEAD provides an upper bound on the additional space consumed by the entry (e.g., for pointers or envelope information).

• Find the next contiguous empty space of n bytes in buffer (space following queue tail, or space at start of buffer if queue tail is too close to end of buffer). If space is not found then raise buffer overflow error.

• Append to end of PME queue in contiguous space the new entry that contains request handle, next pointer and packed message data; MPI_PACK is used to pack data.

• Post nonblocking send (standard mode) for packed data.

• Return
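This sizing rule translates directly into code when attaching a user buffer for buffered sends. The following C sketch is illustrative, not part of the standard; the message count NMSGS and the 100-double payload shape are assumptions:

    #include <mpi.h>
    #include <stdlib.h>

    #define NMSGS 10   /* assumed number of pending buffered messages */

    void send_buffered(double payload[100], int dest, MPI_Comm comm)
    {
        int data_size, buf_size;
        void *buf;

        /* Upper bound on the packed size of one message's data. */
        MPI_Pack_size(100, MPI_DOUBLE, comm, &data_size);

        /* Each queue entry needs the data plus MPI_BSEND_OVERHEAD bytes. */
        buf_size = NMSGS * (data_size + MPI_BSEND_OVERHEAD);

        buf = malloc(buf_size);
        MPI_Buffer_attach(buf, buf_size);

        for (int i = 0; i < NMSGS; i++)
            MPI_Bsend(payload, 100, MPI_DOUBLE, dest, i, comm);

        /* Detach blocks until all buffered messages have been sent. */
        MPI_Buffer_detach(&buf, &buf_size);
        free(buf);
    }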
3.7 Nonblocking Communication
One can improve performance on many systems by overlapping communication and computation. This is especially true on systems where communication can be executed autonomously by an intelligent communication controller. Light-weight threads are one mechanism for achieving such overlap. An alternative mechanism that often leads to better performance is to use nonblocking communication. A nonblocking send start call initiates the send operation, but does not complete it. The send start call can return before the message was copied out of the send buffer. A separate send complete call is needed to complete the communication, i.e., to verify that the data has been copied out of the send buffer. With suitable hardware, the transfer of data out of the sender memory may proceed concurrently with computations done at the sender after the send was initiated and before it completed. Similarly, a nonblocking receive start call initiates the receive operation, but does not complete it. The call can return before a message is stored into the receive buffer. A separate receive complete call is needed to complete the receive operation and verify that the data has been received into the receive buffer. With suitable hardware, the transfer of data into the receiver memory may proceed concurrently with computations done after the receive was initiated and before it completed. The use of nonblocking receives may also avoid system buffering and memory-to-memory copying, as information is provided early on the location of the receive buffer.

Nonblocking send start calls can use the same four modes as blocking sends: standard, buffered, synchronous and ready. These carry the same meaning. Sends of all modes, ready excepted, can be started whether a matching receive has been posted or not; a nonblocking ready send can be started only if a matching receive is posted. In all cases, the send start call is local: it returns immediately, irrespective of the status of other processes. If the call causes some system resource to be exhausted, then it will fail and return an error code. Quality implementations of MPI should ensure that this happens only in "pathological" cases. That is, an MPI implementation should be able to support a large number of pending nonblocking operations.

The send-complete call returns when data has been copied out of the send buffer. It may carry additional meaning, depending on the send mode.
• If the send mode is synchronous, then the send can complete only if a matching receive has started. That is, a receive has been posted, and has been matched with the send. In this case, the send-complete call is non-local. Note that a synchronous, nonblocking send may complete, if matched by a nonblocking receive, before the receive complete call occurs. (It can complete as soon as the sender "knows" the transfer will complete, but before the receiver "knows" the transfer will complete.)

• If the send mode is buffered then the message must be buffered if there is no pending receive. In this case, the send-complete call is local, and must succeed irrespective of the status of a matching receive.

• If the send mode is standard then the send-complete call may return before a matching receive is posted, if the message is buffered. On the other hand, the send-complete may not complete until a matching receive is posted, and the message was copied into the receive buffer.

Nonblocking sends can be matched with blocking receives, and vice-versa.

Advice to users. The completion of a send operation may be delayed, for standard mode, and must be delayed, for synchronous mode, until a matching receive is posted. The use of nonblocking sends in these two cases allows the sender to proceed ahead of the receiver, so that the computation is more tolerant of fluctuations in the speeds of the two processes.

Nonblocking sends in the buffered and ready modes have a more limited impact, e.g., the blocking version of buffered send is capable of completing regardless of when a matching receive call is made. However, separating the start from the completion of these sends still gives some opportunity for optimization within the MPI library. For example, starting a buffered send gives an implementation more flexibility in determining if and how the message is buffered. There are also advantages for both nonblocking buffered and ready modes when data copying can be done concurrently with computation.

The message-passing model implies that communication is initiated by the sender. The communication will generally have lower overhead if a receive is already posted when the sender initiates the communication (data can be moved directly to the receive buffer, and there is no need to queue a pending send request). However, a receive operation can complete only after the matching send has occurred. The use of nonblocking receives allows one to achieve lower communication overheads without blocking the receiver while it waits for the send. (End of advice to users.)
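The overlap described above follows a simple pattern: start the send, compute on unrelated data, then complete the send. The following C sketch is illustrative only; the routine compute_on and the buffer shapes are assumptions, not part of the standard:

    #include <mpi.h>

    void compute_on(double *work, int m);   /* assumed user routine */

    void overlapped_send(double *sendbuf, int n, int dest,
                         double *workbuf, int m, MPI_Comm comm)
    {
        MPI_Request request;

        /* Start the send; the call returns before the data need be
           copied out of sendbuf. */
        MPI_Isend(sendbuf, n, MPI_DOUBLE, dest, 0, comm, &request);

        /* Compute on data disjoint from sendbuf; with suitable hardware
           the transfer proceeds concurrently. sendbuf must not be
           modified until the send completes. */
        compute_on(workbuf, m);

        /* Complete the send; afterwards sendbuf may be reused. */
        MPI_Wait(&request, MPI_STATUS_IGNORE);
    }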
3.7.1 Communication Request Objects
Nonblocking communications use opaque request objects to identify communication operations and match the operation that initiates the communication with the operation that terminates it. These are system objects that are accessed via a handle. A request object identifies various properties of a communication operation, such as the send mode, the communication buffer that is associated with it, its context, the tag and destination arguments to be used for a send, or the tag and source arguments to be used for a receive. In addition, this object stores information about the status of the pending communication operation.
3.7.2 Communication Initiation
We use the same naming conventions as for blocking communication: a prefix of B, S, or R is used for buffered, synchronous or ready mode. In addition a prefix of I (for immediate) indicates that the call is nonblocking.
MPI_ISEND(buf, count, datatype, dest, tag, comm, request)
IN    buf        initial address of send buffer (choice)
IN    count      number of elements in send buffer (non-negative integer)
IN    datatype   datatype of each send buffer element (handle)
IN    dest       rank of destination (integer)
IN    tag        message tag (integer)
IN    comm       communicator (handle)
OUT   request    communication request (handle)

int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm, MPI_Request *request)

MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

{MPI::Request MPI::Comm::Isend(const void* buf, int count, const
    MPI::Datatype& datatype, int dest, int tag) const (binding
    deprecated, see Section 15.2)}

Start a standard mode, nonblocking send.
MPI_IBSEND(buf, count, datatype, dest, tag, comm, request)

IN    buf        initial address of send buffer (choice)
IN    count      number of elements in send buffer (non-negative integer)
IN    datatype   datatype of each send buffer element (handle)
IN    dest       rank of destination (integer)
IN    tag        message tag (integer)
IN    comm       communicator (handle)
OUT   request    communication request (handle)

int MPI_Ibsend(void* buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request)

MPI_IBSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

{MPI::Request MPI::Comm::Ibsend(const void* buf, int count, const
    MPI::Datatype& datatype, int dest, int tag) const (binding
    deprecated, see Section 15.2)}

Start a buffered mode, nonblocking send.
MPI_ISSEND(buf, count, datatype, dest, tag, comm, request)
IN    buf        initial address of send buffer (choice)
IN    count      number of elements in send buffer (non-negative integer)
IN    datatype   datatype of each send buffer element (handle)
IN    dest       rank of destination (integer)
IN    tag        message tag (integer)
IN    comm       communicator (handle)
OUT   request    communication request (handle)

int MPI_Issend(void* buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request)

MPI_ISSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

{MPI::Request MPI::Comm::Issend(const void* buf, int count, const
    MPI::Datatype& datatype, int dest, int tag) const (binding
    deprecated, see Section 15.2)}

Start a synchronous mode, nonblocking send.
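Because a synchronous-mode send completes only once a matching receive has started, MPI_ISSEND followed by a wait can serve as a point-to-point handshake. A minimal sketch, not part of the standard; the function name and one-integer payload are illustrative:

    #include <mpi.h>

    void synchronized_notify(int *flag, int dest, MPI_Comm comm)
    {
        MPI_Request request;

        MPI_Issend(flag, 1, MPI_INT, dest, 0, comm, &request);
        /* ... work not involving *flag may proceed here ... */

        /* Because the mode is synchronous, this wait cannot return
           before a matching receive has been posted at dest. */
        MPI_Wait(&request, MPI_STATUS_IGNORE);
    }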
MPI_IRSEND(buf, count, datatype, dest, tag, comm, request)
IN    buf        initial address of send buffer (choice)
IN    count      number of elements in send buffer (non-negative integer)
IN    datatype   datatype of each send buffer element (handle)
IN    dest       rank of destination (integer)
IN    tag        message tag (integer)
IN    comm       communicator (handle)
OUT   request    communication request (handle)

int MPI_Irsend(void* buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request)

MPI_IRSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

{MPI::Request MPI::Comm::Irsend(const void* buf, int count, const
    MPI::Datatype& datatype, int dest, int tag) const (binding
    deprecated, see Section 15.2)}

Start a ready mode nonblocking send.
MPI_IRECV(buf, count, datatype, source, tag, comm, request)
OUT   buf        initial address of receive buffer (choice)
IN    count      number of elements in receive buffer (non-negative integer)
IN    datatype   datatype of each receive buffer element (handle)
IN    source     rank of source or MPI_ANY_SOURCE (integer)
IN    tag        message tag or MPI_ANY_TAG (integer)
IN    comm       communicator (handle)
OUT   request    communication request (handle)

int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request)

MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR

{MPI::Request MPI::Comm::Irecv(void* buf, int count, const
    MPI::Datatype& datatype, int source, int tag) const (binding
    deprecated, see Section 15.2)}

Start a nonblocking receive.
These calls allocate a communication request object and associate it with the request handle (the argument request). The request can be used later to query the status of the communication or wait for its completion.

A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called, until the send completes.

A nonblocking receive call indicates that the system may start writing data into the receive buffer. The receiver should not access any part of the receive buffer after a nonblocking receive operation is called, until the receive completes.
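These rules combine naturally with the fact, noted earlier, that nonblocking receives may be matched by blocking sends. A minimal sketch of a deadlock-free exchange between two processes, not part of the standard (the tag and buffer shapes are illustrative): each rank posts its receive early, sends, and only then touches the receive buffer:

    #include <mpi.h>

    void exchange(double *sendbuf, double *recvbuf, int n,
                  int peer, MPI_Comm comm)
    {
        MPI_Request request;

        /* Post the receive early so incoming data can be delivered
           directly into recvbuf, avoiding extra buffering. */
        MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, comm, &request);

        /* A nonblocking receive may be matched by a blocking send. */
        MPI_Send(sendbuf, n, MPI_DOUBLE, peer, 0, comm);

        /* recvbuf is not accessed until the receive completes. */
        MPI_Wait(&request, MPI_STATUS_IGNORE);
    }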
Advice to users. To prevent problems with the argument copying and register optimization done by Fortran compilers, please note the hints in subsections "Problems Due to Data Copying and Sequence Association," and "A Problem with Register Optimization" in Section 16.2.2 on pages 482 and 485. (End of advice to users.)