Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

c - Sending columns of a matrix using MPI_Scatter

I'm trying to write a matrix-vector multiplication program using MPI. I'm sending columns of the matrix to separate processes and calculating the result locally. At the end I do an MPI_Reduce with the MPI_SUM operation.

Sending rows of a matrix is easy because C stores arrays in row-major order, but sending columns is not (unless you send them one by one). I read this question:

MPI_Scatter - sending columns of 2D array

Jonathan Dursi suggested using new MPI datatypes and here's what I've done by adapting his code to my own needs:

  double matrix[10][10];
  double mypart[10][10];
  int part_size; // stores how many cols a process needs to work on
  MPI_Datatype col, coltype;
  // ...
  MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);
  MPI_Type_commit(&col);
  MPI_Type_create_resized(col, 0, 1*sizeof(double), &coltype);
  MPI_Type_commit(&coltype);
  // ...
  MPI_Scatter(matrix, part_size, coltype,
              mypart, part_size, coltype,
              0, MPI_COMM_WORLD);

  // calculations...
  MPI_Reduce(local_result, global_result,
             N, MPI_DOUBLE,
             MPI_SUM,
             0, MPI_COMM_WORLD);

This works perfectly, but I can't say I really understand how it works.

  1. How is an MPI_Type_vector laid out in memory?
  2. How does MPI_Type_create_resized() work, and what exactly does it do?

Please bear in mind that I'm a total beginner in MPI. Thanks in advance.

1 Answer


There's a long description of this issue in my answer to this question: the fact that many people have these questions is proof that it's not obvious and the ideas take some getting used to.

The important thing to know is what memory layout the MPI datatype describes. The calling sequence to MPI_Type_vector is:

int MPI_Type_vector(int count,
                    int blocklength,
                    int stride,
                    MPI_Datatype old_type,
                    MPI_Datatype *newtype_p)

It creates a new type which describes a memory layout where, every stride items, there is a block of blocklength items, for a total of count such blocks. Items here are in units of whatever old_type was. So for instance, if you called (naming the parameters here, which you can't actually do in C, but for clarity):

 MPI_Type_vector(count=3, blocklength=2, stride=5, old_type=MPI_INT, &newtype);

Then newtype would describe a layout in memory like this:

   |<----->|  block length

   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   | X | X |   |   |   | X | X |   |   |   | X | X |   |   |   |
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+

   |<---- stride ----->|

   count = 3

where each square is one integer-sized chunk of memory, presumably 4 bytes. Note that the stride is the distance in integers from the start of one block to the start of the next, not the distance between blocks.

Ok, so in your case you called

  MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);

which will take count = N blocks, each of blocklength = 1 MPI_DOUBLEs, with a gap of stride = N MPI_DOUBLEs between the start of each block. In other words, it'll take every Nth double, a total of N times; perfect for extracting one column out of a (contiguously stored) NxN array of doubles. A handy check is to see how much data is being strided over (count*stride = N*N, which is the full size of the matrix, check) and how much data is actually included (count*blocklength = N, which is the size of a column, check).

If all you had to do was call MPI_Send and MPI_Recv to exchange individual columns, you'd be done; you could use this type to describe the layout of the column and you'd be fine. But there's one more thing.

You want to call MPI_Scatter, which sends the first coltype (say) to processor 0, the next coltype to processor 1, etc. If you're doing that with a simple 1d array, it's easy to figure out where the "next" data type is; if you're scattering 1 int to each processor, the "next" int starts immediately after the first int ends.

But your new coltype column has a total extent that goes from the start of the column to N*N MPI_DOUBLEs later -- if MPI_Scatter follows the same logic (it does), it would start looking for the "next" column entirely outside the matrix's memory, and so on with the next and the next. Not only would you not get the answer you wanted, the program would likely crash.

The way to fix this is to tell MPI that the "size" of this data type, for the purposes of calculating where the "next" one lies, is the size in memory between where one column starts and where the next column starts; that is, exactly one MPI_DOUBLE. This doesn't affect the amount of data sent, which is still one column's worth of data; it only affects the "next one in line" calculation. With columns (or rows) in an array, you can just set this size to be the appropriate step size in memory, and MPI will choose the correct next column to send. Without this resize operation, your program would likely crash.

When you have more complicated data layouts, like in the 2d-blocks-of-a-2d-array example linked to above, there is no single step size between "next" items; you still need to do the resizing trick to make the extent some useful unit, but then you need to use MPI_Scatterv rather than MPI_Scatter to explicitly specify the locations to send from.

