
cuda - Nsight Compute: SOL SM versus Roofline

I ran Nsight Compute from CUDA 11.2 on my CUDA kernel.

It reports that SOL SM is at 79.44%, which I interpret as being pretty close to the maximum. SOL L1 is at 48.38%.

When I examine the Roofline chart, I see that my measured result is very far away from peak performance.

Achieved: 4.7 GFlop/s.

Peak at roofline: 93 GFlop/s or so.

I also see ALU pipe utilization at 80+%.

So, if the ALU pipe is fully utilized, why is the achieved performance so much lower, according to the roofline chart?

[Profile result screenshot]

Note that this is on an RTX 3070, which peaks at 17.6 TFlop/s for single precision.
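
For reference, my understanding is that the roofline chart plots the usual model, attainable FLOP/s = min(compute peak, arithmetic intensity × memory bandwidth). As a rough sanity check of my numbers (assuming the RTX 3070's nominal ~448 GB/s DRAM bandwidth, a spec value I'm plugging in rather than something taken from the report, and reading the 93 GFlop/s as the height of the roofline at my kernel's arithmetic intensity):

```latex
% Roofline model: attainable throughput at arithmetic intensity I (FLOP/byte),
% given compute peak P_peak and DRAM bandwidth B:
\[ P(I) = \min\bigl(P_{\mathrm{peak}},\; I \cdot B\bigr) \]
% Assumed numbers (not taken from the Nsight report): B ~ 448 GB/s for an RTX 3070,
% so a 93 GFlop/s ceiling would correspond to
\[ I \approx \frac{93\ \mathrm{GFLOP/s}}{448\ \mathrm{GB/s}} \approx 0.2\ \mathrm{FLOP/byte}, \]
% i.e. a point deep in the memory-bound region, far below the 17.6 TFlop/s compute peak.
```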

UPDATE

I think I figured out what is going on here... @robert-crovella put me on the right track by pointing out that the ALU pipe handles integer ops, which are therefore not included in the FLOP count. And those are not the only operations that are excluded!

Roofline charts only show fp32 and fp64 operations and not fp16 operations.

My code works with half-precision floats, so I suspect the roofline chart is simply not applicable to it.
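
To make that concrete, here is a minimal sketch (a hypothetical kernel, not my actual code) of the kind of half-precision arithmetic I mean: the __hfma2 below executes as fp16 instructions, so an fp32/fp64-only roofline sees none of this work even though the kernel is pure floating point.

```cuda
#include <cuda_fp16.h>

// Hypothetical half-precision AXPY: y = a*x + y, two fp16 values per __half2.
// The __hfma2 intrinsic issues fp16 fused multiply-adds, which (as discussed
// above) are not counted by the fp32/fp64 roofline chart.
__global__ void half_axpy(const __half2* x, __half2* y, __half2 a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = __hfma2(a, x[i], y[i]);  // fp16 work, invisible to the roofline
    }
}
```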



1 Answer


So, if the ALU pipe is fully utilized, why is the achieved performance so much lower, according to the roofline chart?

Because the ALU pipe has nothing to do with floating point and the roofline chart is essentially only about floating point.

As indicated in the answer I linked, the ALU pipe handles:

most integer instructions, bit manipulation instructions, and logic instructions
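
As a minimal sketch (a made-up kernel, not the asker's code) of the kind of work that lands on the ALU pipe: everything below is integer shifts, XORs, population counts, and integer adds, so a kernel like this can show high ALU utilization while contributing exactly zero FLOPs to the roofline chart.

```cuda
// Hypothetical integer/bit-manipulation kernel: all of this work goes through
// the ALU pipe and none of it counts as floating-point for the roofline.
__global__ void popcount_mix(const unsigned int* in, unsigned int* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        unsigned int v = in[i];
        v ^= v >> 16;                 // shift + XOR: bit manipulation
        out[i] = __popc(v) + (v & 7); // population count + integer and/add
    }
}
```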

It's quite possible that this pipe utilization is a factor limiting the performance of your kernel, and as a result you are running at a lower FLOP/s throughput than might otherwise be achievable in floating point (i.e. the roofline).

The pipes that are (FP32/FP64) floating-point related are fma, fmaheavy, fp32, and potentially Tensor. These are all at around 40% active or below, so you're not maxing out any of those pipes.
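
If you want to see where the floating-point work is really going, one option is to count the executed SASS instructions per precision directly. A sketch, assuming the usual roofline-style metric names (verify them with `ncu --query-metrics` on your install, since I'm quoting them from memory):

```
# Count executed fp32 FMAs vs fp16 FMAs for the kernels in the application.
# Metric names are assumed; check `ncu --query-metrics | grep sass_thread_inst`.
ncu --metrics smsp__sass_thread_inst_executed_op_ffma_pred_on.sum,smsp__sass_thread_inst_executed_op_hfma_pred_on.sum ./my_app
```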

