
c - OpenMP multiple threads update same array

I have the following code in my program and I want to accelerate it using OpenMP.

...
for(i=curr_index; i < curr_index + rx_size; i+=2){ 
    int64_t tgt = rcvq[i];
    int64_t src = rcvq[i+1];
    if (!TEST(tgt)) {
        pred[tgt] = src;
        newq[newq_count++] = tgt;
    }
} 

Currently, I have a version as follows:

...
chunk = rx_size / omp_nthreads;

#pragma omp parallel for num_threads(omp_nthreads)
for (ii = 0; ii < omp_nthreads; ii++) { 
    int start = curr_index + ii * chunk;
    for (index = start; index < start + chunk; index +=2) { 
        int64_t tgt = rcvq[index];
        int64_t src = rcvq[index+1];
        if (!TEST(tgt)) {
            pred[tgt] = src;

            #pragma omp critical 
            newq[newq_count++] = tgt;
        }
    }
}

When I run the OpenMP version, I see a big performance degradation compared to the original version. I suspect the problem is the "omp critical" section, which serializes part of the work. What can I improve in my code so that it performs better than the serial version? In the code, rx_size is always a multiple of omp_nthreads.



1 Answer


Yes, the critical section is limiting your performance. You should collect the results locally for each thread and then merge them.

// Needs <stdlib.h>, <assert.h> and <omp.h>.
size_t newq_offset = 0;
#pragma omp parallel
{
    // Figure out something clever here: an upper bound on results per thread.
    const size_t max_newq_per_thread = max_newq / omp_get_num_threads();
    int64_t* local_newq = malloc(max_newq_per_thread * sizeof(int64_t));
    size_t local_newq_count = 0;

    // Work-share the loop among the threads of the enclosing parallel region.
    #pragma omp for
    for (int64_t i = curr_index; i < curr_index + rx_size; i += 2) {
        int64_t tgt = rcvq[i];
        int64_t src = rcvq[i + 1];
        if (!TEST(tgt)) {
            pred[tgt] = src;
            assert(local_newq_count < max_newq_per_thread);
            local_newq[local_newq_count++] = tgt;
        }
    }

    // Reserve a contiguous slice of newq for this thread's results.
    size_t local_offset;
    #pragma omp atomic capture
    {
        local_offset = newq_offset;
        newq_offset += local_newq_count;
    }

    // Copy the thread-local results into the reserved slice.
    for (size_t i = 0; i < local_newq_count; i++) {
        newq[local_offset + i] = local_newq[i];
    }
    free(local_newq);
}

With this approach, all threads work on the merge in parallel, and there is only minimal contention on the atomic capture. Note that you can also make a simpler version using atomic capture alone; it is more efficient than the critical section, but will still become a bottleneck quickly:

size_t newq_count_local;
#pragma omp atomic capture
newq_count_local = newq_count++;
newq[newq_count_local] = tgt;
  • There is no guarantee about the order of elements within newq in any of these versions.
  • Always declare variables as locally as possible, especially when using OpenMP. The critical-version you posted is wrong, because index (defined in an outer scope) is implicitly shared among the threads.
  • All of this assumes that there are no duplicates within rcvq. Otherwise you get a race condition on pred[tgt] = src;.
  • Your approach of slicing up the loop manually is unnecessarily complicated. There is no need for two loops; just use one #pragma omp for loop, as in the sketch after this list.
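
For illustration, here is a minimal sketch (reusing the variable names from the question and the atomic-capture idea shown above) of the same loop with a single worksharing construct instead of manual chunking:

#pragma omp parallel for
for (int64_t i = curr_index; i < curr_index + rx_size; i += 2) {
    int64_t tgt = rcvq[i];
    int64_t src = rcvq[i + 1];
    if (!TEST(tgt)) {
        pred[tgt] = src;
        size_t newq_count_local;
        #pragma omp atomic capture
        newq_count_local = newq_count++;   /* claim the next free slot in newq */
        newq[newq_count_local] = tgt;
    }
}

OpenMP splits the iterations among the threads for you, so there is no need to compute chunk and start by hand.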

The other answer gets the idea right. However, it is C++, not C as tagged. There is also a subtle but significant performance issue with using std::vector<std::vector<>>. A vector is usually implemented with three pointers, 24 bytes in total. On push_back, one of those pointers is incremented. This means that (a) the pointers of the local vectors from multiple threads reside on the same cache line, and (b) on every successful TEST, push_back reads and writes a cache line that is also used by other threads. That cache line has to move back and forth between cores all the time, which greatly limits the scalability of this approach. This is called false sharing.
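
To make the countermeasure concrete, here is a minimal, hypothetical C sketch of the usual fix for this kind of false sharing: pad each thread's hot counter to a full cache line so that no two threads write to the same line (CACHE_LINE, MAX_THREADS and count_positive are illustrative names, and 64 bytes is an assumed line size):

#include <stddef.h>
#include <stdint.h>
#include <omp.h>

#define CACHE_LINE 64        /* assumed cache-line size; adjust for your CPU */
#define MAX_THREADS 256      /* hypothetical upper bound for this sketch */

/* Pad each per-thread counter so that no two counters share a cache line. */
typedef struct {
    size_t count;
    char pad[CACHE_LINE - sizeof(size_t)];
} padded_counter;

static padded_counter counters[MAX_THREADS];

void count_positive(const int64_t *data, size_t n)
{
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        #pragma omp for
        for (size_t i = 0; i < n; i++)
            if (data[i] > 0)
                counters[tid].count++;   /* each thread writes only its own cache line */
    }
}

The thread-local malloc'ed buffers in the C version above have essentially the same effect, since each thread appends to its own allocation rather than to data sitting next to the other threads' bookkeeping.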

I implemented a small test based on the other answer, which gives the following performance:

  • 0.99 s - single thread
  • 1.58 s - two threads on two neighbouring cores of the same socket
  • 2.13 s - two threads on two cores of different sockets
  • 0.99 s - two threads sharing a single core
  • 0.62 s - 24 threads on two sockets

Whereas the C version above scales much better:

  • 0.46 s - single thread (not really comparable C vs C++)
  • 0.24 s - two threads
  • 0.04 s - 24 threads
