Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


c++ - Does the CPU clock time returned by clock() have to be exactly the same among runs?

I have a big project written in C++. It might have a stability problem (i.e. nondeterministic runtime behaviour), but I'm not sure about it. I understand that execution time measured by the wall clock can differ among runs because of OS multitasking. But I don't know whether it is normal for a stable program's execution time, measured by CPU clock time, to vary among runs with the same input. I tried clock() from time.h, and

boost::chrono::process_user_cpu_clock::now();

but in both cases I see spikes on the graph. Here is an example of such a graph: the Y axis is execution time, the X axis is consecutive runs of the same program on the same input data. The red curve is wall clock time; the other curve is CPU clock time as reported by clock() from time.h.

[Graph: execution time (Y) vs. consecutive runs (X); both the wall clock and CPU clock curves show spikes]

Of course we assume that our program is stable and doesn't have any random behaviour. So, is it possible? The platform is Windows 7.



1 Answer


Of course we assume that our program is stable, and doesn't have any random behaviour. So, is it possible?

If your program is running on a desktop, this variability is typical, and I would say unavoidable. Interrupts, I/O channel activity, and Ethernet itself consume CPU time, often in surprisingly large blocks of time (see TCP/IP SAR, cache misses, etc.), most of which is beyond your program's control and not in sync with your timing efforts.

I have seen only one example of software running in the 'stable' way you hint at. That computer was an SBC (single-board computer) with one CPU (not Intel or AMD), all static RAM (so no dynamic RAM and no refresh activity), no Ethernet, but two channels of I/O at a fixed rate, and it ran a single program on a scaled-down operating system (not Linux, not a desktop OS) ... the precision was as if the behaviour were simple hardware logic.

As team lead, I recognized this as unusual, so I asked her whether she had time to attach a logic analyzer and scope ... she demonstrated that neither tool showed any variance in time, edge to edge, message to message. Her software logic was, to me, impressively straightforward. In that system, if you did not need an interrupt, you simply did not enable it.

A desktop is a remarkably different beast ... so many things are going on at the same time, most of which cannot be stifled.


Yes. It is not only possible but unavoidable that a desktop has the kinds of variance (in timing) you are seeing.

And yet it is possible to achieve the stability you have hinted at, just not on a desktop. It takes special hardware, and careful coding.

