>>40
When I have use of one of the university's supercomputers, I think it makes more sense to write something that will take advantage of its capabilities than to pretend a teenager's gaming rig is going to outperform it if I sprinkle CUDA fairy dust on it.
Anyway, the code in >>29 isn't a serious attempt at anything. I just wrote some code to check if >>11 was legitimate, and then I remembered I had an old project I could just plug that into, so I did.
Findings: OpenSSL's SHA1 is about as fast as its DES_crypt.
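Since the code in >>29 isn't posted here, a minimal sketch of the kind of check being described: brute-forcing candidate strings against a SHA-1 digest. This uses Python's standard library rather than OpenSSL's C API, and all the names are mine, not from the thread.

```python
import hashlib
import string
from itertools import product

def crack_sha1(target_hex, alphabet=string.ascii_lowercase, max_len=4):
    """Try every candidate up to max_len chars; return the preimage or None."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha1(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

# SHA-1("abc") is the well-known FIPS 180 test vector.
print(crack_sha1("a9993e364706816aba3e25717850c26c9cd0d89d"))  # → abc
```

Timing a loop like this against OpenSSL's `DES_crypt()` on the same candidate stream is roughly how you'd reproduce the comparison above.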
>>42
Given the price of the hardware, that's actually less cost-effective than doing it the normal way. I'm surprised at how sucky CUDA SHA1 turns out to be.
>>53
Sure, but since >>45,50 is a teenager who came here from the imageboards, I doubt he's familiar with Dawkins' work (beyond maybe The God Delusion).
A stricter way to phrase it would perhaps be that the reason 2ch-style anonymity is valued isn't because it's a meme. That way either definition works.
>>57
Don't waste your time and effort posting about internet arguments!
Name: Anonymous 2010-06-07 14:03
>the reason 2ch-style anonymity is valued isn't because it's a meme
That's what you think. I see a pattern of behavior replicated subconsciously. Like a new fashion...
P.S. Xarn: It seems that you have some cynicism towards the whole idea of CUDA and GPU computing. I assure you that it is a glimpse of the future. As technology progresses, we are going to have many multi-thread, multi-core, et cetera features that will succeed current technology. If we want to reap the rewards of our innovations, we are going to have to write code that supports and fully utilizes these new technologies. What's the point of having multi-core and 64-bit processors if they aren't even going to be fully utilized? At some point we are going to have to make machine code core-scalable so that we can fully use however many cores we have.
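One way to read "core-scalable" in practice is code that sizes its worker pool to whatever hardware it lands on. A sketch (my own illustration, not the poster's proposal), reusing the hashing workload from earlier in the thread:

```python
import os
import hashlib
from multiprocessing import Pool

def sha1_hex(data):
    """CPU-bound unit of work: hash one byte string."""
    return hashlib.sha1(data).hexdigest()

def sha1_many(items, workers=None):
    """Hash a list of byte strings across however many cores exist.

    workers defaults to os.cpu_count(), so the same code uses 2 cores
    on a laptop and 32 on a server without modification.
    """
    workers = workers or os.cpu_count() or 1
    with Pool(workers) as pool:
        # pool.map preserves input order; chunksize amortizes IPC overhead.
        return pool.map(sha1_hex, items, chunksize=max(1, len(items) // workers))
```

This only scales for work that is actually CPU-bound per item; for tiny inputs the inter-process overhead dominates, which is part of why GPU-style throughput claims don't transfer directly to general-purpose cores.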
>>63
CUDA and GPU computing are not the same as general-purpose multicore computing. GPU computing is more specialized.
I also believe that we'll be seeing a lot more parallelization in the future. And not just CPU parallelization, but likely to the degree of parallelism that ASICs and FPGAs already reach.
>>64
Of course, GPU computing is not the same as "general-purpose multicore computing". But what I'm saying is that GPU computing currently reflects the future of what CPU computing should be: scalability to the hardware.