
hay look at my recursive pow

Name: Anonymous 2009-03-04 20:14


#include <stdio.h>
#include <stdlib.h>

unsigned long long rpow(int n, int p) {
  return p ? p==1 ? n : n * rpow(n,p-1) : 1;
}

int main(int argc, char **argv) {
  if(argv[1])
    if(argv[2]) {
      int n = atoi(argv[1]);
      int p = atoi(argv[2]);
      printf("%llu\n",rpow(n,p));
    }
  return 0;
}

Name: Anonymous 2009-03-11 19:18

square number get

Name: Anonymous 2009-03-11 19:20

nontotient get

Name: Anonymous 2009-03-11 19:22

11th Mian-Chowla get

Name: Anonymous 2009-03-11 20:21

number of 1 bits in the 360th fibonacci number in base 2 get

Name: Anonymous 2009-03-11 22:19

COOL FREE NUMBERS

Name: Anonymous 2009-03-12 15:06

Computer science

Name: Anonymous 2009-03-12 16:02

All of these algorithms fail because they don't run in O(log n) time.

Name: Anonymous 2009-03-12 16:17

>>27
O(log n) is the best you can do for pow.

Name: Anonymous 2009-03-12 16:47


int main() {
  main();
}

Name: Anonymous 2009-03-12 17:34

>>128
Wrong.
A 32-bit integer pow library could run in constant (amortized) time if you had the necessary memory to store lookup tables of all the results, and the means to distribute the somewhat large source/compiled code. Then again, it would be very easy and small to store if done with self-modifying code, or code that generates the code it needs, compiles it, and dynamically loads it; but then you would have an obscenely large load time followed by constant-time pow calculations. Obviously this would be less practical to attempt on 64-bit or wider machines.

Name: Anonymous 2009-03-12 18:01

>>130
if you had the necessary memory to store lookup tables of all the results
~2^64 bits? WHBT

Name: Anonymous 2009-03-12 18:06

>>131
2 exabytes?
It's not as far-fetched as you'd think.

Name: Anonymous 2009-03-12 18:14

>>132
To clarify: extrapolating Moore's law (the 18-month version) and taking ~4 GB as the current standard for memory, we will have that much RAM in 43.5 years. And let's be honest, there are many possible advances in permanent digital storage that will cut deeply into that figure.

Name: Anonymous 2009-03-12 18:22

>>130
Wrong.
It will only run in constant amortized time if you assume that every program reuses every pre-computed value at least one time.

Name: Anonymous 2009-03-12 18:34

>>134
That isn't how time complexity works.
For example, time complexity taken to solve a maze does not include the time taken to generate the maze you are solving or the time taken to read the maze structure from an input file if that is the case. In a similar way, fetching a value from a lookup table (which is what needs to be done inside the pow() function in this case) is a constant time operation, where as generating and making said table usable may not be.

Name: Anonymous 2009-03-12 18:39

>>135
It does if the algorithm includes maze generation code. Generating and making said table usable has to be part of the pow algorithm.

Name: Anonymous 2009-03-12 18:42

>>135
Also if the algorithm doesn't have to include the table generation code, then it would be worst-case constant time, not even amortized. YHBT.

Name: Anonymous 2009-03-12 20:51

>>137
Table lookup is not constant unamortized time; you are wrong. The process of calculating this result is somewhat complex (due to all the seemingly dynamic factors) and is described in several papers on the subject, which I get the feeling you have not read. Go and troll in the FrozenVoid threads.

>>136
I said quite clearly that the algorithm SOLVES THE MAZE. You would have a completely separate algorithm for generating it; by your reading, the pow() function GENERATES the lookup tables it wants to use EVERY time it is called. Your argument is completely invalid. Is the time complexity of "Hello World!" in a Java application O(n^3), because we have to account for the time complexity of the JVM loading sequence and the complexities of the class loader, garbage collection, and other such systems? I don't think so. The time complexity of a pow function that looks up precomputed values in a table is constant.

Name: Anonymous 2009-03-12 23:25

a large enough lookup table is not going to fit in the processor's cache, and will probably be a lot slower than the algorithm >>7 uses.

Name: Anonymous 2009-03-12 23:29

>>138
Wow you are the biggest idiot ever. Let me model this a bit more clearly so you're retard brain can handle it.

Before you use the pow function, you have to load or generate the table. Never did I say that you would do this more than once, or every time you call pow. The point is, the work you do in loading or generating the table will not be paid off by a single call to pow; you will have to do an equivalent amount of work in pow calls as you did in loading the table for the amortization argument to work at all.

I suggest you take some time off work/school and re-learn amortized analysis. You definitely wouldn't cut it in the real world. Stick to writing and re-writing factorial and Fibonacci functions.

Name: Anonymous 2009-03-13 0:37

>>140
so you're
Stopped reading right there sorry.
Disregard that, I didn't, but I will point it out anyway so that I win the argument even if I am wrong. On to more important matters:

If you want to be an absolute pedant about it, you could have inferred that I was meani... oh wait, no you couldn't. I said the time complexity of a function pow() that performs a lookup on a table is constant, giving the user a constant-time pow() function. And even if I had meant the overall amortized time of pow calculations, your amortized time complexity is wrong anyway. The worst-case sequence of events would be that the table is loaded and then pow() is called a limitless number of times. This gives the cost of the load operation as 1/∞ on top of the constant-time cost of the lookups, so we get O(1) again.

Name: Anonymous 2009-03-13 0:41

>>141
an O(log n) algorithm that takes 5 seconds in the worst case is a lot better than an O(1) algorithm that takes 10 seconds every time.

Name: Anonymous 2009-03-13 1:07

>>141
This is an example of how not to do amortized analysis. The great Computer Scientists are laughing at you.

Name: Anonymous 2009-03-13 1:07

>>142
Fortunately, the planned implementation uses a LOOKUP TABLE: it will be able to directly access the area of memory it requires, taking in fact a single clock cycle to complete (well, probably a few more if done the more intuitive way, with double indexing, where a multiplication and an addition would be required on top of accessing the memory). And even if that weren't the case and it did take ten seconds, it would be infinitely more useful than a regular power function. PROTIP: Piecewise functions.

Name: Anonymous 2009-03-13 1:28

>>144
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

uint64_t rpow_aux(uint64_t, uint64_t, uint64_t) __attribute__ ((const, flatten));
uint64_t rpow(uint64_t, uint64_t) __attribute__ ((const, flatten));

uint64_t rpow_aux(uint64_t n, uint64_t p, uint64_t a)
{ return p == 0 ? a : rpow_aux(n, p - 1, a * n); }

uint64_t rpow(uint64_t n, uint64_t p)
{ return p < 1 ? 1 : rpow_aux(n, p, 1); }

int main(void)
{ printf("%" PRIu64 "\n", rpow(2, 63));
  return 0; }


let's see your lookup table and do some benchmarks, idiot.

Name: Anonymous 2009-03-13 7:42

all this O(n) is a lot of shit sometimes. On paper nothing matters. It's only when you've hacked it out into the machine that you can see what's going on.

Name: Anonymous 2009-03-13 7:56

>>136
>>138
Obviously the pow() lookup table would be implemented in hardware, not computed by the compiler.

Name: Anonymous 2009-03-13 17:45

>>1

GET THE FUCK Out OF /PROG/ YOU WORTHLESS FUCKING SCUM SUCKING BITCH PIECE OF SHIT. I HOPE YOU FUCKING ROT AND WORMS EAT YOUR INSIDES YOU DISGUST ME!!!! DO NOT FUCKING ASK /PROG/ TO HELP YOU EVER MOTHERFUCKER

Name: Anonymous 2009-03-13 17:48

>>148
loeb :: Functor a => a (a x -> x) -> a x
loeb x = fmap (\a -> a (loeb x)) x

Name: Anonymous 2009-03-13 17:52

>>148
GO BACK TO /b/!!!

Name: Anonymous 2009-03-13 17:59

>>150
Have some manners, [u]please[/u].

Name: Anonymous 2009-03-13 18:03

>>151
lrn2BBCODE, please.

Name: Anonymous 2009-03-13 18:03

Go back to /b/, please.

Name: Anonymous 2009-03-13 18:06

>>149
loeb f = fmap ($ loeb f) f!!!

Name: Anonymous 2009-03-13 18:25

loeb = fix (fmap . flip id =<<)

Name: Anonymous 2009-03-13 18:37

>>155
I think I just understood loeb. Thank you.

Name: Anonymous 2009-03-13 20:04

>>156
you're welcome, that's why i posted it. it's a lot easier to understand when it's not full of lispish lambda bullshit and explicit recursion.

Name: Anonymous 2009-03-14 1:55

>>157
Heaven forbid you should use the same identifier twice within an expression.

It is beautiful, though.

Name: Anonymous 2009-03-14 3:01

fibs = loeb (map const [0, 1] ++ map (uncurry (flip (.)) . ((take 2 .) . genericDrop *** genericIndex [\[a, b] -> b * (2 * a + b), sum . map(^2)]) . (\(q, r) -> (q + r - 1, r)) . flip divMod 2)[2..])

Name: Anonymous 2009-03-14 9:08

>>157
Now it's full of pointless curried partial application. Ah, the wonders of abstraction.
