Fill histograms (array reduction) in parallel with OpenMP without using a critical section

I would like to fill histograms in parallel using OpenMP. I have come up with two different methods of doing this in C/C++.

The first method, proccess_data_v1, gives each thread a private histogram hist_private, fills these in parallel, and then sums the private histograms into the shared histogram hist inside a critical section.

The second method, proccess_data_v2, makes a shared array of histograms, with one histogram per thread, fills this array in parallel, and then sums the per-thread histograms into the shared histogram hist in parallel.

The second method seems superior to me since it avoids the critical section and sums the histograms in parallel. However, it requires knowing the number of threads and calling omp_get_thread_num(), which I generally try to avoid. Is there a better way to achieve the second method without referencing thread numbers and without using a shared array sized to the number of threads?

void proccess_data_v1(float *data, int *hist, const int n, const int nbins, float max) {
    #pragma omp parallel 
    {
        int *hist_private = new int[nbins];
        for(int i=0; i<nbins; i++) hist_private[i] = 0;
        #pragma omp for nowait
        for(int i=0; i<n; i++) {
            float x = reconstruct_data(data[i]);
            fill_hist(hist_private, nbins, max, x);
        }
        #pragma omp critical 
        {
            for(int i=0; i<nbins; i++) {
                hist[i] += hist_private[i];
            }
        }
        delete[] hist_private;
    }
}

void proccess_data_v2(float *data, int *hist, const int n, const int nbins, float max) {
    const int nthreads = 8;
    omp_set_num_threads(nthreads);
    int *hista = new int[nbins*nthreads];

    #pragma omp parallel 
    {
        const int ithread = omp_get_thread_num();
        for(int i=0; i<nbins; i++) hista[nbins*ithread+i] = 0;
        #pragma omp for
        for(int i=0; i<n; i++) {
            float x = reconstruct_data(data[i]);
            fill_hist(&hista[nbins*ithread], nbins, max, x);
        }

        #pragma omp for
        for(int i=0; i<nbins; i++) {
            for(int t=0; t<nthreads; t++) {
                hist[i] += hista[nbins*t + i];
            }
        }

    }
    delete[] hista;
}
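
For reference, a compiler that implements OpenMP 4.5 array-section reductions (e.g. GCC 6 or newer) can express the same per-thread accumulation and final merge without referencing thread numbers at all. This is only a sketch under that assumption; the name proccess_data_reduction is a placeholder, and hist is accumulated into exactly as in the versions above:

void proccess_data_reduction(float *data, int *hist, const int n, const int nbins, float max) {
    // Each thread gets a zero-initialized private copy of hist[0..nbins-1];
    // the runtime sums the copies back into hist when the loop ends.
    #pragma omp parallel for reduction(+:hist[:nbins])
    for(int i=0; i<n; i++) {
        float x = reconstruct_data(data[i]);
        fill_hist(hist, nbins, max, x);
    }
}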

Edit: Based on a suggestion by @HristoIliev I have created an improved method, proccess_data_v3:

#define ROUND_DOWN(x, s) ((x) & ~((s)-1))
void proccess_data_v3(float *data, int *hist, const int n, const int nbins, float max) {
    int* hista;
    #pragma omp parallel 
    {
        const int nthreads = omp_get_num_threads();
        const int ithread = omp_get_thread_num();

        int lda = ROUND_DOWN(nbins+1023, 1024);  // 1024 ints = 4096 bytes: round nbins up to a whole number of pages
        #pragma omp single
        hista = (int*)_mm_malloc(lda*sizeof(int)*nthreads, 4096);  // page-aligned allocation; _mm_malloc/_mm_free are declared in <mm_malloc.h> (or via <xmmintrin.h>)

        for(int i=0; i<nbins; i++) hista[lda*ithread+i] = 0;
        #pragma omp for
        for(int i=0; i<n; i++) {
            float x = reconstruct_data(data[i]);
            fill_hist(&hista[lda*ithread], nbins, max, x);
        }

        #pragma omp for
        for(int i=0; i<nbins; i++) {
            for(int t=0; t<nthreads; t++) {
                hist[i] += hista[lda*t + i];
            }
        }

    }
    _mm_free(hista);
}
1 Answer


You could allocate the big array inside the parallel region, where you can query the actual number of threads being used:

int *hista;
#pragma omp parallel 
{
    const int nthreads = omp_get_num_threads();
    const int ithread = omp_get_thread_num();

    #pragma omp single
    hista = new int[nbins*nthreads];

    ...
}
delete[] hista;

For better performance I would advise that you round the size of each thread's chunk in hista up to a multiple of the system's memory page size, even though this may leave holes between the different partial histograms. This way you prevent both false sharing and remote memory access on NUMA systems (though not in the final reduction phase).
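
A minimal sketch of that padding, reusing nthreads and ithread from the snippet above; it swaps new for posix_memalign from <stdlib.h>, error checking is omitted, and the 4096-byte page size and the names chunk and my_hist are assumptions for illustration:

const size_t page  = 4096;                                        // assumed page size
const size_t chunk = ((nbins*sizeof(int) + page - 1)/page)*page;  // bytes per thread, rounded up to whole pages

#pragma omp single
posix_memalign((void**)&hista, page, chunk*nthreads);             // one page-aligned, padded slice per thread

int *my_hist = (int*)((char*)hista + ithread*chunk);              // this thread's slice starts on its own page
for(int i=0; i<nbins; i++) my_hist[i] = 0;

Since the block then comes from posix_memalign, it should be released with free(hista) after the parallel region instead of delete[].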
