Is OpenMP actually usable? I had a trivially parallelizable problem written in C that already executed fast enough (under 2 minutes), but I thought it would make a nice benchmark. Basically, it had n sections (n < 200, roughly equally sized) that could be executed totally independently and only needed a join at the end.
I installed the latest gcc with the OpenMP `task' support to test it, but execution got only around five seconds faster than the non-OpenMP version built with the same compiler. Moreover, only one of the two CPUs was active during the omp'd part. This stuff is supposed to be simple. What gives?
Name:
Anonymous 2012-01-05 15:07
You probably did something wrong.
Name:
Anonymous 2012-01-05 15:12
That was my first thought, but after thinking it through, there wasn't much that could go wrong. There were < 200 independent, roughly equally sized OMP tasks that only needed to wait until each of them had completed. If this isn't a case of trivial parallelization, there isn't one.
Name:
Anonymous 2012-01-05 15:27
If you don't post your code, how you compiled it, and how you ran it, there is nothing we can tell you beyond what >>2-san said.
To answer your first question, yes, OpenMP is very usable and extremely useful.
Name:
Anonymous 2012-01-05 15:37
I can't post the code because I wrote it at work, and I don't have access to the source at home. I used gcc 4.6something for the compilation, because the ancient version that came with OSX didn't support tasks, which seemed like the thing I needed.
It's still very probable that I did something wrong, but the OpenMP tutorials I found weren't particularly useful.
Did you compile with -fopenmp?
Why didn't you just use omp sections?
What did the omp directives look like?
Why are you posting this if you are unable to provide vital information?
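For reference, the usual build and run looks something like this (prog.c and prog are placeholder names, not anything OP posted):

```shell
# build with OpenMP enabled; without -fopenmp the omp pragmas are silently ignored
gcc -O2 -fopenmp prog.c -o prog
# run with an explicit thread count, just to rule out a bad default
OMP_NUM_THREADS=2 ./prog
```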
Name:
Anonymous 2012-01-05 15:58
>>6
> Did you compile with -fopenmp?
Of course I did.
> Why didn't you just use omp sections?
I couldn't please gcc well enough for it to accept my omp sections. I have no idea what it expected, but it seems that error reporting hasn't been their first priority.
> What did the omp directives look like?
#pragma omp task before each task (a single function call), and #pragma omp taskwait at the end before the output.
> Why are you posting this if you are unable to provide vital information?
Because I'm drunk and mildly intrigued by the prospect of trivial parallelization of trivially parallelizable programs in C? This is /prog/, did you expect something deeper?
>>7
Try this next time.
#pragma omp parallel sections
{
    #pragma omp section
    {
        /* do something */
    }
    #pragma omp section
    {
        /* do something else */
    }
}
Try to set CPU affinity if the kernel decides to run it on a single core.
And yes, I did expect more; of course my assumption is that you're not retarded. What you did was ask "what's wrong with what I did" without actually posting what you did. We're not magicians, so how are we supposed to tell?
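With gcc's libgomp on Linux, pinning would look something like this (./prog standing in for whatever your binary is called):

```shell
# libgomp-specific: pin the OpenMP threads to cores 0 and 1
GOMP_CPU_AFFINITY="0 1" OMP_NUM_THREADS=2 ./prog
# or pin the whole process with taskset
taskset -c 0,1 ./prog
```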
Name:
Anonymous 2012-01-05 16:20
>>8
Well, that's exactly what I did, and gcc spouted out some nonsense that pretty much made it clear that it implements OpenMP as a dirty hack.
> of course my assumption is that you're not retarded.
At least it seems that you've eased on your assumptions by now.
>>9
You didn't do that because that would work perfectly fine. You must have done something else.
Name:
Anonymous 2012-01-05 16:49
>>10
Considering I stated there were < 200 tasks (nitpicking aside, you know what that means), it should be obvious I used a loop. So the code was something like:
while (whatever) {
    #pragma omp task
    whatever_as_long_as_it_doesn't_depend_on_anything_or_fuck_up_anything();
    some_crap_no_one_cares_about++;
}
#pragma omp taskwait
Name:
Anonymous 2012-01-05 18:58
HUDRUR WHAT'S WRONG WITH MY CODE
I WON'T POST IT THOUGH
I JUST DID EVERYTHING RIGHT, BUT IT'S WRONG
inb4 some /g/ bullshit like op thinks hyperthreading gives him another core