#pragma omp parallel for
for (i = 0; i < N; i++)
{
    R1 = BilinearInterpolation(Table[i].x1, Table[i].Q11, Table[i].x2, Table[i].Q21, Table[i].x);
    R2 = BilinearInterpolation(Table[i].x1, Table[i].Q12, Table[i].x2, Table[i].Q22, Table[i].x);
    P  = BilinearInterpolation(Table[i].y1, R1, Table[i].y2, R2, Table[i].y);
    TableInter[i] = P;
}
The problem with your code is that R1, R2, and P are shared among threads and updated concurrently, so you have a race condition. For example, one thread might be overwriting P while another thread is still assigning its own P to TableInter[i]. You can easily solve this race condition by making those variables private to each thread, either by declaring them inside the parallel region or by using OpenMP's private clause (#pragma omp parallel for private(R1, R2, P)).
#pragma omp parallel for private(R1, R2)
for (i = 0; i < N; i++)
{
    R1 = BilinearInterpolation(Table[i].x1, Table[i].Q11, Table[i].x2, Table[i].Q21, Table[i].x);
    R2 = BilinearInterpolation(Table[i].x1, Table[i].Q12, Table[i].x2, Table[i].Q22, Table[i].x);
    TableInter[i] = BilinearInterpolation(Table[i].y1, R1, Table[i].y2, R2, Table[i].y);
}
As long as the BilinearInterpolation function does not modify state shared among threads, this code is free of race conditions. Note that P is no longer needed, since the result is assigned directly to TableInter[i], and the loop variable i is implicitly private in an OpenMP for loop.
Regarding "calculate the time of execution sometimes it gives me 0 in the code": this typically happens when the timer's resolution is too coarse for the region being measured. To measure the time, one can use the OpenMP function omp_get_wtime, which returns wall-clock time in seconds, as follows:
#include <omp.h>

double start = omp_get_wtime();
// the code that you want to measure.
double end = omp_get_wtime();
printf("%f\n", end - start);