
Concurrent Programming

Name: Anonymous 2011-12-18 15:51

So /prog/, what do you think about the current situation of concurrent programming?

Isn't it shit and a pain in the ass?

Name: Anonymous 2011-12-20 18:59

>>39
It has about the same speed as Racket, is still in development (better that than to be permanently stuck with shit features) and is actually quite well designed (of course, you're a Java faggot).
It's the higher-level C-like language we need. When it's finished and the compiler is improved, its speed will easily match other higher-level languages, such as the overly-optimized Java (which, despite what trolls may say, is not all that slow these days).

Name: Anonymous 2011-12-20 20:14

>>41
and is actually quite well designed
N. O. P. E.

Name: Anonymous 2011-12-20 20:15

Is the fact that no one seemed to take any interest in the concept for a concurrent language that I posted earlier a sign for the future? You're all saying this problem needs to be solved at the language level, so is anyone interested in my language?

Name: Anonymous 2011-12-20 20:17

>>43
One word: Erlang. But I'm still interested in what you can produce.

Name: Anonymous 2011-12-20 20:50

>>41
It's the higher-level C-like language we need.
It hardly has anything in common with C. They share braces and static typing and that's it.

We don't need it. It isn't applicable in many of the areas where C has been a good choice, and it has one of the worst concurrency models of any language designed to address the issue of concurrency.

People keep trying to kill C with a language that depends on a runtime system and uncontrollable GC. That's just not going to happen. C is being displaced by these languages, but it won't be replaced by them. Go is probably going to displace Python more than C in that regard.

Name: Anonymous 2011-12-20 20:57

>>44
After studying Erlang, I discovered it has almost the same approach as what I am trying to create, but I'm taking a more imperial approach. Right now I'm not very comfortable with Erlang, as C++ is my home and I'm moving to D, but I guess I'll see what happens. ISO sill thin there is room for my contributions.

Name: Anonymous 2011-12-20 21:03

>>46
s/ISO sill thin/I still think/
Goddamned gingerbread keyboard.

Name: Anonymous 2011-12-20 21:06

>>46
U MENA ``imperative''.
But seriously, there's nothing particularly scary about Erlang's approach... I guess if you're looking for an imperative language with nice concurrency support, try Ada. I've barely used `tasks' in it yet, but the rest of the language is damn nice.

Name: Anonymous 2011-12-20 21:33

>>48
It's like learning French. It's not hostile, but it's different, and takes a bit to get used to, despite not being very different. And Jesus, even Ada has components for concurrency? Everyone's trying to solve this problem but no one is doing it right.

Name: Anonymous 2011-12-21 1:36

>>40

There is function composition, but just because you define a function to be the composition of many other functions doesn't mean that evaluating the composition will be fast. You can't immediately apply parallelization. In evaluating (g∘f)(x) = g(f(x)), g cannot be applied until the return value from f is known.

But depending on the function being composed, the result might be equivalent to a different parallelizable solution, or you might be able to get an approximation algorithm that is parallelizable.

If the function is defined on a small finite domain, then you could use a brute-force approach to calculate f^N(x) in log(N) time, given that you have enough processors. You could enumerate the members of f's domain into 1...k, and then express f as a mapping on these numbers. This mapping can be expressed as an array of ints, A, where A[i] = j when f(num->object(i)) = num->object(j). You can then apply A to itself, to get an array for f^2. You can apply that array to itself to get an array for f^4. And so on. Doing this n times will give you a definition for f^(2^n). You can then take the base-2 decomposition of N, and compose the needed f^(2^i) functions to give you f^N. This requires having a processor for every single element in the domain, and the domain can be very large, especially if f is operating on some kind of data structure. So you can't do this very often.

Name: Anonymous 2011-12-21 1:55


imperative = AIDS

Name: Anonymous 2011-12-21 2:51

>>52
Explain.

Name: Anonymous 2011-12-21 3:24

>>51
What I'm saying is that f'(x) = f(f(x)) can often be computed to an equivalent that runs as fast as f(x) can, and if you believe
Each application of f cannot be performed until the input parameter is known.
holds in the vast majority of cases then you're in for a lot of surprises. Any commutative operation can be parallelized to log n time given enough CPUs via straightforward divide and conquer. Many that aren't can still be parallelized without much trouble (consider what's needed for division).

Name: Anonymous 2011-12-21 4:09

>>54

yeah, that's cool, but that takes special knowledge about what f is, and you are using a different algorithm to calculate the same result, and this different algorithm is either very efficient or can be easily parallelized. One example of an f I was thinking of would be expanding the frontier in breadth first search. One can't expand the next frontier until the current one is known. Although the frontier can be expanded in parallel, so there are opportunities for parallelization there. But there is a sequential nature to growing a path, where the growth must always go into directions that you have not yet been to. You don't know where not to go, until you've gotten there.

Name: Anonymous 2011-12-21 4:58

>>55
But there is a sequential nature to growing a path
There's a parallel nature to finding a path. There's a common trait to the vast majority of computational problems: the actual path is relatively short, but there's a lot of possible paths. Because you are not usually interested in problems where the answer is very, very long but relatively straightforward to produce, and the only problem is the sheer length of it. You are interested in problems with short answers which are hard to find. Such problems are usually highly parallelizable, since you are perfectly content with the limit of "you can't find the path faster than you can walk the path". Exceptions are rare and contrived.

Name: Anonymous 2011-12-21 5:17

>>56

yeah, I bet massively parallel breadth first search has a lot of applications. It seems to fit the same type of characteristic of parallel algorithms, where you do a ton of work and most of it ends up being irrelevant, but you end up with your solution, which is something similar to a shortest path to one specific vertex. So in this scenario, parallelization would be very useful if the branching factor is relatively high, and the length of the path you are looking for is manageable.

Name: Anonymous 2011-12-21 6:31

>>43
There is no problem that needs to be solved; the tools and theory already exist. I am especially not interested in a broken language designed by someone who apparently isn't very experienced even at using programming languages, let alone designing them.

Name: Anonymous 2011-12-21 7:29

>>58
Deadlocks, thread races, etc.
Ring a bell?
Threads, as they are now, are highly unstable.

Name: Anonymous 2011-12-21 7:51

>>59
If you mistreat them, yes; if you know what you're doing, no.
You can misuse anything and get a bad result. If you think removing control from the user is a good idea then you're most likely a sub-par Java programmer. It can only lead to inefficiency.

Name: Anonymous 2011-12-21 8:34

>>55
It takes special knowledge of f to write f in the first place. Parallelizable means parallelizable in principle, not 'by the compiler'. However if the compiler knows f is commutative there are a lot of cases where f' is automatically computable.

Name: Anonymous 2011-12-21 9:46

If Java were good enough with threads then Erlang would never have had a chance with the big companies to begin with, with its unfriendly Prologian syntax (vs. piggybacking the most familiar syntax after BASIC), near complete lack of IDEs, two books (vs. the 90,000 publishers that push 90,000 Java books clocking in at 700 pages every year), lack of existence even in academia (Haskell, ML, Prolog are more likely to be taught at some point while in school), lack of a workforce knowledgeable in it, and completely cryptic, hidden documentation on even setting up a proper application with it (I still haven't learned how that .app shit works and I've written a few thousand lines of Erlang at this point and have the pragmatic Erlang book).

Erlang is not among the hipster languages, it doesn't even pretend to be elegant. Java has to be doing some things terribly wrong to have something like Erlang pick up market share in the same space as Java's alleged best feature.

Name: Anonymous 2011-12-21 10:59

Yearly reminder how easy it is to implement sleepsort in C.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <omp.h>

int main(int argc, char **argv) {
  int i;

  omp_set_num_threads(argc-1);

#pragma omp parallel for
  for (i = 0; i < argc - 1; i++) {
    long int this = atol(argv[i+1]);

    sleep(this);

    printf("%ld\n", this);
    fflush(stdout);
  }

  return 0;
}

Name: Anonymous 2011-12-21 11:56

>>63
this
2011
using C++ reserved word in C code
can not compile with C++ compiler

O_o

Name: Anonymous 2011-12-21 11:58

>>64
C++
IHBT :(

Name: 64 2011-12-21 12:02

>>65
I'm not trolling actually. There are many people who use C++.

Name: Anonymous 2011-12-21 12:03

>>66
There are many people that live under dictators, too. Some filth like Russians will even vote for them, given the chance. But I don't see why I should support them.

Name: Anonymous 2011-12-21 12:08

>>67
Huh? Great argument, though mine will be better.

If Nazis were alive, they would use C++ reserved words in C code.

Name: Anonymous 2011-12-21 12:09

>>68
I expect such great engineers would indeed use C and reject C++. Good point!

Name: Anonymous 2011-12-21 12:10

>>69
lol'd

you win

Name: Anonymous 2011-12-21 12:12

Just use Clojure you shitters.

Thread over.

Name: Anonymous 2011-12-21 12:12

>>71

Fuck you!

GC is shit.

Name: Anonymous 2011-12-21 12:13

>>71
JVM is shit.

Name: Anonymous 2011-12-21 14:22

>>64
I think the author of the code did it specifically to annoy you C++ retards.
