Of course, we assume that the program itself is stable and has no random behaviour of its own. So, is the timing variance you are seeing still possible?
If your program is running on a desktop, this variability is typical, and I would say unavoidable. Interrupts, I/O channel activity, and Ethernet itself consume CPU time, often in surprisingly large blocks of time (TCP/IP SAR, cache misses, etc.), most of which is beyond your program's control and not in sync with your timing efforts.
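If you want to see this for yourself, here is a minimal sketch in C that times the same fixed workload many times with clock_gettime(CLOCK_MONOTONIC, ...) and reports the spread. The workload, iteration counts, and constants are arbitrary choices for illustration, not anything from your program; the point is only that identical work rarely takes identical time on a desktop OS.

```c
/* Sketch: measure run-to-run timing variance of a fixed workload on a
 * desktop OS.  Workload size and run count are arbitrary illustrative
 * choices. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* A deterministic chunk of work: the same instructions every call. */
static volatile uint64_t sink;
static void fixed_work(void)
{
    uint64_t acc = 0;
    for (int i = 0; i < 100000; i++)
        acc += (uint64_t)i * 2654435761u;
    sink = acc;   /* volatile store keeps the loop from being optimized away */
}

int main(void)
{
    uint64_t min = UINT64_MAX, max = 0;
    for (int run = 0; run < 1000; run++) {
        uint64_t t0 = now_ns();
        fixed_work();
        uint64_t dt = now_ns() - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
    }
    printf("min %llu ns, max %llu ns, spread %llu ns\n",
           (unsigned long long)min, (unsigned long long)max,
           (unsigned long long)(max - min));
    return 0;
}
```

On a typical desktop the maximum is noticeably larger than the minimum, even though every run executes exactly the same instructions.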
I have seen only one example of software running in the 'stable' way you hint at. That computer was an SBC (single-board computer) with one CPU (not Intel or AMD), all static RAM (so no dynamic RAM and no refresh activity), no Ethernet, and two channels of I/O at a fixed rate, and it ran a single program on a scaled-down operating system (not Linux, not a desktop OS) ... the precision was as if the behaviour were simple hardware logic.
As team lead, I recognized how unusual this was, so I asked the engineer who wrote it if she had time to attach a logic analyzer and a scope ... she demonstrated that neither tool showed any variance in time, edge to edge, message to message. Her software logic was, to me, impressively straightforward. In that system, if you did not need an interrupt, you simply did not enable it.
A desktop is a remarkably different beast ... so many things going on at the same time, most of which cannot be stifled.
Yes. It is not only possible but unavoidable that a desktop has the kinds of variance (in timing) you are seeing.
And yet it is possible to achieve the stability you have hinted at, just not on a desktop. It takes special hardware and careful coding.
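For the "careful coding" side, on Linux the usual starting points are pinning the timing-sensitive thread to one core, requesting a real-time scheduling class, and locking its pages in memory. The sketch below uses standard Linux/POSIX calls (sched_setaffinity, sched_setscheduler with SCHED_FIFO, mlockall); the core number and priority are arbitrary choices for illustration. On a desktop these steps shrink the variance, they do not remove it ... the edge-to-edge precision described above still needed the special hardware.

```c
/* Sketch: reduce (not eliminate) timing jitter on Linux.
 * Needs root or CAP_SYS_NICE for the real-time priority request. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Pin this process to CPU 2 (arbitrary choice for illustration). */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* Request FIFO real-time scheduling at an arbitrary priority. */
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 50;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* Keep current and future pages resident so paging cannot intrude. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* ... run the timing-sensitive loop here ... */
    return 0;
}
```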