I had to write a very simple console program for university that measures the time a user needs to make an input.
For that I used clock()
before and after an fgets()
call. When running on my Windows computer it worked perfectly. However, when running on my friend's MacBook and on a Linux PC it gave extremely small results (only a few microseconds).
I tried the following code on all three operating systems:
#include <stdio.h>
#include <time.h>
#include <unistd.h>
int main(void)
{
    clock_t t;

    printf("Sleeping for a bit\n");

    t = clock();
    // Alternatively some fgets(...)
    usleep(999999);
    t = clock() - t;

    printf("Processor time spent: %f\n", (double)t / CLOCKS_PER_SEC);
    return 0;
}
On Windows the output shows about 1 second (or, when using fgets
, however long you took to type); on the other two operating systems it shows barely more than 0 seconds.
Now my question is why the implementation of clock()
differs so much between these operating systems. On Windows the clock seems to keep ticking while the thread is sleeping/waiting, but on Linux and macOS it doesn't?
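To illustrate the difference, here is a small sketch (my own variation of the snippet above; the iteration count is arbitrary) that contrasts sleeping with busy-waiting. On Linux and macOS, clock() advances during the busy loop but not during usleep():

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t t;

    /* Sleeping: the process consumes almost no CPU time. */
    t = clock();
    usleep(999999);
    t = clock() - t;
    printf("CPU time during usleep():  %f s\n", (double)t / CLOCKS_PER_SEC);

    /* Busy-waiting: the process burns CPU, so clock() advances on POSIX too. */
    volatile unsigned long spin = 0;
    t = clock();
    while (spin < 200000000UL)  /* arbitrary amount of work */
        spin++;
    t = clock() - t;
    printf("CPU time during busy loop: %f s\n", (double)t / CLOCKS_PER_SEC);

    return 0;
}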
Edit:
Thank you for the answers so far, guys; so it's really just Microsoft's faulty implementation.
Could anyone please answer my last question:
is there a way to measure what I wanted to measure on all three systems using only the C standard library, since clock()
only seems to work this way on Windows?
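For reference, one approach that should behave the same on all three systems is to measure wall-clock time instead of processor time. A minimal sketch, assuming a C11 standard library that provides timespec_get (older toolchains may need time()/difftime or POSIX clock_gettime instead):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    char buf[256];

    printf("Type something and press Enter:\n");

    timespec_get(&start, TIME_UTC);   /* wall-clock timestamp, C11 */
    if (fgets(buf, sizeof buf, stdin) == NULL)
        return 1;
    timespec_get(&end, TIME_UTC);

    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Wall-clock time spent: %f s\n", elapsed);

    return 0;
}

time()/difftime would be portable all the way back to C89, but it only offers one-second resolution.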