>>1 Does few LoC == faster?
No. Faster = more specialized = more SLOC.
I.e. FPGA code is fastest, but requires more SLOC than assembly code, which requires more SLOC than C/C++, which requires more SLOC than languages with garbage collection.
Name:
Anonymous 2013-05-25 9:25
Well, in C, you can compare a dumb strlen implementation with one using Duff's device. The latter will need more LoC, but should be faster (depending on the compiler).
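For reference, this is the classic Duff's device copy loop (the post applies the same unrolling idea to strlen); `duff_copy` and the zero-length guard are my own naming and hardening, not from the post:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Classic Duff's device: an 8x-unrolled copy loop where the switch
 * jumps into the middle of the loop body to handle the remainder
 * bytes, so there is no separate cleanup loop. */
void duff_copy(char *to, const char *from, size_t count) {
    if (count == 0)
        return;                      /* the original assumes count > 0 */
    size_t n = (count + 7) / 8;      /* number of (partial) passes */
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

Whether this beats a plain byte loop depends entirely on the compiler and CPU; modern compilers often vectorise the dumb loop anyway.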
It doesn't really matter, because computers run so fast nowadays. Just use whatever algorithm works best for you. Say you were to write a FizzBuzz program.
Which is faster? An array lookup or an if-else clause-terfuck? You can't tell, right?
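To make the comparison concrete, here is a minimal sketch of both FizzBuzz variants the post mentions; the function names and the 15-entry table layout are my own illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* FizzBuzz via an if-else chain. `out` must hold at least 12 bytes. */
void fizzbuzz_if(int i, char *out) {
    if (i % 15 == 0)      strcpy(out, "FizzBuzz");
    else if (i % 3 == 0)  strcpy(out, "Fizz");
    else if (i % 5 == 0)  strcpy(out, "Buzz");
    else                  snprintf(out, 12, "%d", i);
}

/* FizzBuzz via a lookup table over the repeating 15-cycle:
 * pattern[i % 15] is the fixed word, or NULL for "print the number". */
void fizzbuzz_table(int i, char *out) {
    static const char *pattern[15] = {
        "FizzBuzz", NULL, NULL, "Fizz", NULL, "Buzz", "Fizz",
        NULL, NULL, "Fizz", "Buzz", NULL, "Fizz", NULL, NULL
    };
    const char *p = pattern[i % 15];
    if (p) strcpy(out, p);
    else   snprintf(out, 12, "%d", i);
}
```

Both are trivially fast at this scale; you'd need a profiler, not intuition, to tell them apart.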
Is there an instance where more LoC is faster than the shorter one?
Yes, compare an adder written in C, and an adder using interfaces, abstract methods and tight coupling cloud enabled business logic performance boosting stock market potential customer leverager in Java.
>>15
DON'T MOCK KEINE ASFASDFASDFASDFASDFASDF I FUCKING HATE YOU CRETIN ASDFASDFASDFASDFASDFAKHDSFQILWEUHFIAUHSDFKLJADSHFKLAHDSFASK MEET THE NASTY END OF MY SHOTGUN AND DIE IN A FIRE CRETIN
Extensive loop unrolling is counterproductive on any good architecture: the hardware will effectively unroll the loop inside the CPU anyway, instead of wasting cycles fetching each duplicated instruction over and over from the large amount of cache space the unrolled copy consumes.
It might look a little faster in a microbenchmark on that piece of code, because you've reduced the barely existent branch overhead, but in doing so you've also evicted a bunch of useful code from the cache (if it's not a microbenchmark), and branch overhead is TINY compared to a cache miss. Those idiots who write 4KB memcpy()s have this problem: yes, your code probably is the fastest way to copy a chunk of data around. No, it's not going to make all the programs that use it faster, because now the code it evicted from an eighth of the L1 cache is going to need to be read back in.
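The footprint trade-off being described can be sketched in a few lines; the sizes and function names here are my own illustration, and the real cost difference only shows up at the instruction-cache level, not in correctness:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A tiny byte-at-a-time copy: slow per byte, but it compiles to a
 * handful of instructions, so it barely touches the I-cache. */
void *copy_small(void *dst, const void *src, size_t n) {
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}

/* An 8x-unrolled copy: faster in a microbenchmark, but every extra
 * unrolled statement is more machine code competing for the same
 * I-cache lines as the caller's code. */
void *copy_unrolled(void *dst, const void *src, size_t n) {
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n >= 8) {
        d[0] = s[0]; d[1] = s[1]; d[2] = s[2]; d[3] = s[3];
        d[4] = s[4]; d[5] = s[5]; d[6] = s[6]; d[7] = s[7];
        d += 8; s += 8; n -= 8;
    }
    while (n--)
        *d++ = *s++;
    return dst;
}
```

Scale the unrolled version up far enough and you get the 4KB memcpy the post is complaining about.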
Data alignment requirements: another one of those profoundly stupid RISC design decisions. Maybe aligned access really was faster back when hardware was still relatively dumb, but it doesn't matter a gnat's ass in these days of super-wide datapaths and intelligent multilevel caching; all the required padding does now is uselessly consume memory bandwidth.
Scenario 1: the processor never supported unaligned accesses, but a new revision does. All existing code uses only aligned accesses, so it gains nothing from the change. Memory bandwidth becoming more important? Too bad, that old code is still wasting it!
Scenario 2: the processor always supported unaligned accesses, just a little slower, and now they get faster. Existing code can never get slower, only faster: those who used unaligned accesses see performance improvements, those who didn't are unaffected. No problem when memory bandwidth becomes the bottleneck.
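In C source, the standard way to straddle both scenarios is a `memcpy`-based load; the helper name `load_u32` is my own, but the technique itself is well established:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Portable unaligned 32-bit load. On a CPU with fast unaligned
 * access (scenario 2) this memcpy compiles down to a single load
 * instruction; on a strict-alignment RISC it becomes a few byte
 * loads. Either way there is never a misaligned dereference, so
 * the same source runs correctly and gets faster as hardware does. */
uint32_t load_u32(const unsigned char *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```

Casting `p` to `uint32_t *` and dereferencing would be the "assume scenario 2" version, and it traps or is undefined behaviour on strict-alignment targets.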
"It looks like even the data alignment requirements of SSE instructions will be lifted in the future AMD and Intel processors." See where things are headed? Not allowing unaligned accesses in some of the SSE moves was a huge mistake, and they're trying to fix that now, although I don't think it'll be easily fixable: existing code won't benefit, for the reason stated above.
tl;dr: RISC ISAs are held back by restrictive design decisions, while CISC leaves a lot more room to optimise. "Implement a rich ISA and focus on making it faster over time" is clearly the better strategy, not "make things simple and horrible now and hope the clock frequency goes up over time"!
>>44 implying intel would employ a non-halakha jew
le master ruseman face
Name:
Anonymous 2013-05-29 16:28
>>44
Big design up front, that is. Which is partly justified because hardware interfaces are much less flexible than software ones.
However, you say it as if people throw away their source code. Even so, compiling x86 to whatever may be difficult, but it isn't impossible. Also, backcompat is very desirable, but at some point you have to start from scratch to be able to evolve. Someone must climb the hills.
Another spine-chilling behaviour is how often people say how important it is to know how computers work in order to write better, more efficient programs. It's funny, because this excerpt suggests the opposite: that the second scenario is better, and programmers shouldn't care how computers work, because the implementation details can be worked out in the long run. So don't blame me for a decision I didn't make.