Blocking communication is done using `MPI_Send()` and `MPI_Recv()`. These functions do not return (i.e., they block) until the communication is finished. Simplifying somewhat, this means that the buffer passed to `MPI_Send()` can be reused, either because MPI saved it somewhere, or because it has been received by the destination. Similarly, `MPI_Recv()` returns when the receive buffer has been filled with valid data.
In contrast, non-blocking communication is done using `MPI_Isend()` and `MPI_Irecv()`. These functions return immediately (i.e., they do not block) even if the communication is not finished yet. You must call `MPI_Wait()` or `MPI_Test()` to see whether the communication has finished.
Blocking communication is used when it is sufficient, since it is somewhat easier to use. Non-blocking communication is used when necessary, for example: you may call `MPI_Isend()`, do some computations, then call `MPI_Wait()`. This allows computation and communication to overlap, which generally leads to improved performance.
Note that collective communication (e.g., all-reduce) is only available in its blocking form through MPI-2. MPI-3 introduced non-blocking collectives such as `MPI_Iallreduce()`.
A quick overview of MPI's send modes can be seen here.