
Concurrent Programming

Name: Anonymous 2011-12-18 15:51

So /prog/, what do you think about the current state of concurrent programming?

Isn't it shit and a pain in the ass?

Name: Anonymous 2011-12-21 1:36

>>40

There is function composition, but just because you define a function to be the composition of many other functions doesn't mean that evaluating the composition will be fast. You can't immediately apply parallelization. In evaluating (g ∘ f)(x) = g(f(x)), g cannot be applied until the return value from f is known.
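A minimal sketch of that data dependency (the names `compose`, `inc`, and `double` are made up for illustration): the composed function is easy to build, but its evaluation is inherently serial, since the call to g sits inside the call to f.

```python
def compose(g, f):
    # (g . f)(x) = g(f(x)): g cannot start until f has returned,
    # so composing functions alone buys you no parallelism.
    return lambda x: g(f(x))

inc = lambda x: x + 1
double = lambda x: x * 2

h = compose(double, inc)
# h(3) evaluates inc(3) first, then double(4)
print(h(3))
```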

But depending on the function being composed, the result might be equivalent to a different parallelizable solution, or you might be able to get an approximation algorithm that is parallelizable.

If the function is defined on a small finite domain, then you could use a brute-force approach to calculate f^N(x) in log(N) time, given that you have enough processors. You could enumerate the members of f's domain as 1...k, and then express f as a mapping on these numbers. This mapping can be expressed as an array of ints, A, where A[i] = j when f(num->object(i)) = num->object(j).

You can then apply A to itself to get an array for f^2. You can apply that array to itself to get an array for f^4. And so on. Doing this n times gives you a definition for f^(2^n). You can then take the base-2 decomposition of N and compose the needed f^(2^i) arrays to give you f^N.

This requires having a processor for every single element in the domain, and the domain can be very large, especially if f is operating on some kind of data structure. So you can't do this very often.
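The squaring trick above can be sketched as follows. This version is sequential (the function name `iterate_mapping` is mine, not from the post); in the parallel version, each list comprehension is one constant-time step with one processor per domain element, giving the log(N) bound.

```python
def iterate_mapping(A, N):
    """Compute the array for f^N from the array A for f.

    A[i] = j means f maps element i to element j.  Uses the base-2
    decomposition of N: square the mapping to get f^2, f^4, ...,
    and fold in f^(2^i) whenever bit i of N is set.
    """
    k = len(A)
    result = list(range(k))   # identity mapping = f^0
    power = list(A)           # f^(2^0) = f
    while N:
        if N & 1:
            # compose: apply f^(2^i) after the mapping built so far
            result = [power[i] for i in result]
        # square: composing f^(2^i) with itself gives f^(2^(i+1))
        power = [power[i] for i in power]
        N >>= 1
    return result

# f = cyclic shift on a 4-element domain; f^4 is the identity,
# so f^6 should equal f^2.
f = [1, 2, 3, 0]
print(iterate_mapping(f, 6))
```

The log(N) count of squaring steps is the whole point; the cost is that every element of the domain gets its image computed at every step, whether you need it or not.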
