>>50
I'm not >>49, but I agree with him, even though the results come out the same either way. printf() is extraordinarily slow, so it pushes any interesting results into the noise. The same goes for rand(), although it's not as bad as printf(). And timing with the shell's time command includes process startup, which also hurts accuracy.
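If it helps, here's the general pattern I mean (a minimal sketch, the loop body is just a stand-in and not the actual test): keep the I/O outside the timed region and measure with clock() inside the process, so shell startup never enters into it. Dividing by CLOCKS_PER_SEC also gets you seconds instead of raw ticks.

#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile int sink = 0;          /* volatile so the loop isn't optimized away */
    clock_t start = clock();
    for (long i = 0; i < 100000000L; i++)
        sink += (int)(i & 1);       /* work under test; no I/O in here */
    clock_t elapsed = clock() - start;
    /* report outside the timed region, converting ticks to seconds */
    printf("%ld ticks = %.3f s\n", (long)elapsed,
           (double)elapsed / CLOCKS_PER_SEC);
    return 0;
}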
Here are the results I get with what I wrote:
C:\Devel\src>gcc t.c -std=c99 -o t.exe
C:\Devel\src>t
& 1: 15515 ticks
% 2: 25168 ticks
C:\Devel\src>gcc t.c -std=c99 -o t.exe -O2
C:\Devel\src>t
& 1: 2658 ticks
% 2: 2649 ticks
And here's the code. Note that it's still far from ideal, but at least the remaining overhead vanishes into the margin of error instead of drowning out the interesting part:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <limits.h>

#define ITER 10

int main(void)
{
    int tmp;                 // assigned in the loops to discourage the optimizer from eliminating them
    int stub[2] = { 0, 0 };  // initialized so we're not reading indeterminate values
    clock_t start_time, avg_time;

    // timing for & 1
    start_time = clock();
    for (int i = 0; i < ITER; i++) {
        for (int j = 0; j < INT_MAX; j++) {
            tmp = stub[j & 1]; // cheaper than zOMG rand()
        }
    }
    avg_time = (clock() - start_time) / ITER;
    printf("& 1: %ld ticks\n", (long)avg_time); // clock_t isn't guaranteed to be int, so cast for printf

    // timing for % 2
    start_time = clock();
    for (int i = 0; i < ITER; i++) {
        for (int j = 0; j < INT_MAX; j++) {
            tmp = stub[j % 2];
        }
    }
    avg_time = (clock() - start_time) / ITER;
    printf("%% 2: %ld ticks\n", (long)avg_time);

    (void)tmp; // silence the set-but-unused warning
    return 0;
}
So it's clear the result is the same after optimization: modulus is slower on its own, but the optimizer rewrites % 2 as & 1.
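One caveat worth spelling out: % 2 and & 1 only agree for non-negative values. In C99 integer division truncates toward zero, so -7 % 2 is -1 while -7 & 1 is 1. The loop counter here starts at 0 and only counts up, so the compiler can prove it never goes negative and fold % 2 into & 1. Quick sketch if you want to see the difference (the values are just examples):

#include <stdio.h>

int main(void)
{
    /* % 2 and & 1 match for non-negative x but diverge for negative odd x */
    int vals[] = { 7, 8, 0, -7 };
    for (int i = 0; i < 4; i++) {
        int x = vals[i];
        printf("x = %2d:  x %% 2 = %2d,  x & 1 = %d\n", x, x % 2, x & 1);
    }
    return 0;
}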