I have a piece of code that runs about 2x faster on Windows than on Linux. Here are the times I measured:
Compiler / configuration               Time (s)
g++ -Ofast -march=native -m64          29.1123
g++ -Ofast -march=native               29.0497
clang++ -Ofast -march=native           28.9192
Visual Studio 2013, Debug, 32-bit      13.8802
Visual Studio 2013, Release, 32-bit    12.5569
That seems far too large a difference. Here is the code:
#include <iostream>
#include <map>
#include <chrono>
#include <cstdlib> // for system()

static std::size_t Count = 1000;
static std::size_t MaxNum = 50000000;

// Deliberately naive trial division: tries every i in [2, num).
// (Note: it also reports 0 and 1 as prime, but those only occur in
// the cheap small-number pass and do not affect the timing.)
bool IsPrime(std::size_t num)
{
    for (std::size_t i = 2; i < num; i++)
    {
        if (num % i == 0)
            return false;
    }
    return true;
}

int main()
{
    auto start = std::chrono::steady_clock::now();

    std::map<std::size_t, bool> value;
    for (std::size_t i = 0; i < Count; i++)
    {
        value[i] = IsPrime(i);                   // small inputs: cheap
        value[MaxNum - i] = IsPrime(MaxNum - i); // large inputs: dominate the runtime
    }

    std::chrono::duration<double> serialTime = std::chrono::steady_clock::now() - start;
    std::cout << "Serial time = " << serialTime.count() << std::endl;

    system("pause"); // Windows-only; a no-op on Linux
    return 0;
}
All of this was measured on the same machine: Windows 8 versus Linux kernel 3.19.5 (gcc 4.9.2, clang 3.5.0). Both installations are 64-bit.
What could be the reason for this? Some scheduler issues?
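One hypothesis I have not ruled out yet (this is speculation, not something my numbers prove): the Visual Studio runs are 32-bit builds, so std::size_t is 32 bits wide there, while the g++/clang++ builds on 64-bit Linux get a 64-bit std::size_t, and on x86 hardware 64-bit integer division is markedly slower than 32-bit division. Since num % i is the hot operation, that alone could explain a 2x gap. Here is a small diagnostic sketch I could run on both systems to check (the probe value 49999991 is an arbitrary choice of mine, just a large number near MaxNum):

#include <cstdint>
#include <cstddef>
#include <chrono>
#include <iostream>

static volatile std::uint64_t g_sink; // keeps the loop from being optimized away

// Times a full trial-division pass over [2, num) using integer type T.
template <typename T>
double TimeDivisions(T num)
{
    T divisors = 0;
    auto start = std::chrono::steady_clock::now();
    for (T i = 2; i < num; i++)
    {
        if (num % i == 0)
            divisors++; // count instead of early-exit so both runs do identical work
    }
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    g_sink = divisors;
    return elapsed.count();
}

int main()
{
    std::cout << "sizeof(std::size_t) = " << sizeof(std::size_t) << " bytes\n";
    std::cout << "32-bit modulo loop: " << TimeDivisions<std::uint32_t>(49999991u) << " s\n";
    std::cout << "64-bit modulo loop: " << TimeDivisions<std::uint64_t>(49999991u) << " s\n";
    return 0;
}

If the 64-bit loop turns out to be roughly twice as slow as the 32-bit one on the same OS, the difference would come down to the width of std::size_t in each build rather than the operating system.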