Timer function in C++

Name: Ekce 2006-02-03 23:19

Alright, so I've got my program written out in C++, but I want to know how long it takes to run. So I'm wondering if anyone knows any sort of timer functions? I need the most time-efficient one; granted, it would only run once (or twice depending on the function), but there is still an emphasis on time. As for the operating system, I'm using Yoper Linux. I don't know if it matters, but I've heard it does.

Name: Anonymous 2006-02-03 23:26

gettimeofday (in sys/time.h) is accurate to microseconds

Name: Anonymous 2006-02-03 23:29

If all you want is to find out how long it takes for the entire program to execute, there's an even easier way:

$ time ./a.out

Run that four times, and use the average of the last three runs.

Name: Anonymous 2006-02-05 8:55

>>3
To explain the "last three of four trials" part, the first run puts as much of the program as possible into the computer's RAM, which removes variables like disk read times.  After removing some variables, the last three runs should be a more representative sample of your program's execution time.

Name: Anonymous 2006-02-08 11:24

I love FAGS

Name: Anonymous 2006-02-08 18:13

>>5
That makes you one

Name: Anonymous 2006-02-09 8:12 (sage)

>>6
Ahahaha! Best reply ever!


Name: Anonymous 2010-12-10 0:22

Name: Anonymous 2010-12-23 2:42

Name: Anonymous 2011-10-07 23:16

I just necro'd the fuck out of this thread. Gotta take a massive shit now.

kthnxbye

Name: Anonymous 2011-10-08 1:44

In C++11, there is std::chrono::high_resolution_clock which is typically accurate to nanoseconds on POSIX systems, or hundreds of nanoseconds on Windows.

Name: FrozenVoid 2011-10-08 5:39

>>13
In asm there is RDTSC, which is accurate to a single CPU cycle, and more portable: it does not require any "OS interface".

static __inline__ unsigned long long rdtsc(void) {
    unsigned long long int x;
    __asm__ volatile (".byte 0x0f, 0x31" : "=A" (x)); /* RDTSC opcode */
    return x;
}
#define RDTSC_DELAY 0 /* RDTSC opcode latency in cycles for this CPU */

a = rdtsc();
/* some code */
b = rdtsc();
printf("%llu", b - a - RDTSC_DELAY);

Name: Anonymous 2011-10-08 13:54

>>14
That's not portable at all; it's x86/x86-64 only, and your code only works with GCC. You're also not using it correctly: you're not handling the case where an interrupt occurs between the two reads, such as a thread context switch, which will result in incorrect timing values.
