
Concurrent Programming

Name: Anonymous 2011-12-18 15:51

So /prog/, what do you think about the current situation of concurrent programming?

Isn't it shit and a pain in the ass?

Name: Anonymous 2011-12-18 16:29

new thread()

HOLY NIGGER! THAT WAS HARD!

Name: Anonymous 2011-12-18 18:02

>>2
enjoy your deadlocks and synchronization bloat,
5 niggers and a bucket o' chicken problem

Name: Anonymous 2011-12-18 22:14

>>1
yeah, it's a pain in the ass. On the other hand you could use a language with an FFI and good support for concurrent threads (i.e. Erlang) and get away with it

Name: Anonymous 2011-12-19 0:37

I've been working on this problem for a while, and the general consensus is that it sucks. I'm implementing a language with autonomous mutex locking that goes through a governing lock controller, and implements simple keywords and operators for concurrency, e.g.:


spawn (routine) //Routines are automatically concurrent loop-like functions
call (function) //Has the same effect as 'call' in some assembly implementations
statement { statement {statement }} //Where all would be distributed across threads in the interpreter


I'll release the first version of the interpreter in a few months, but I can't promise anything; even thread-safety will only come with maturation of the software, and I'm on the fence right now about implementing it in C++ or Erlang. Most likely I will implement it in Erlang due to C++ being utter malware where leaks are commonplace and concurrency only comes in the form of shitty APIs (like pthreads), but I am new to Erlang, and that could hold me back, whereas I know C++ like the back of my hand.

Name: Anonymous 2011-12-19 0:37

Erlang and Haskell make it easier.

Name: Anonymous 2011-12-19 0:48

>>5
Of course, the line terminator is not a newline as it may seem from the post above; it's more a flow-control-intensive mix of commas and full stops, where a statement is ended with a full stop or a comma, and a sentence is one or more statements terminated by a full stop.


statement,
another statement,
function call.

if x > 9000,
    read(SICP).

kill(osama),
kill(obama).

Name: Anonymous 2011-12-19 1:14

MPI and OpenMP are nice, I don't get what all the whinging is about.

Name: Anonymous 2011-12-19 1:28

It's extremely easy and efficient in Go. It's your fault that you're using inferior languages.

Name: Anonymous 2011-12-19 1:32

>>9
Go isn't efficient.

Name: Anonymous 2011-12-19 1:41

Go is a reimplementation of the entire C family with annoying operators and the go keyword. If a language doesn't make you rethink programming as a whole, you shouldn't be learning that language. Erlang does everything Go does, but with style. Google is great, but Go was (is) a screw-up.

Name: Anonymous 2011-12-19 2:09

Go is ugly, that is one thing I do not forgive.

/* The Computer Language Benchmarks Game
 * http://shootout.alioth.debian.org/
 *
 * contributed by K P anonymous
 */

package main

import (
   "bufio"
   "os"
)

const lineSize = 60

var complement = [256]uint8{
   'A': 'T', 'a': 'T',
   'C': 'G', 'c': 'G',
   'G': 'C', 'g': 'C',
   'T': 'A', 't': 'A',
   'U': 'A', 'u': 'A',
   'M': 'K', 'm': 'K',
   'R': 'Y', 'r': 'Y',
   'W': 'W', 'w': 'W',
   'S': 'S', 's': 'S',
   'Y': 'R', 'y': 'R',
   'K': 'M', 'k': 'M',
   'V': 'B', 'v': 'B',
   'H': 'D', 'h': 'D',
   'D': 'H', 'd': 'H',
   'B': 'V', 'b': 'V',
   'N': 'N', 'n': 'N',
}

func main() {
   in := bufio.NewReaderSize(os.Stdin, 1<<18)
   buf := make([]byte, 1<<20)
   line, err := in.ReadSlice('\n')
   for err == nil {
      os.Stdout.Write(line)

      // Accumulate reversed complement in buf[w:]
      nchar := 0
      w := len(buf)
      for {
         line, err = in.ReadSlice('\n')
         if err != nil || line[0] == '>' {
            break
         }
         line = line[0 : len(line)-1]
         nchar += len(line)
         if len(line)+nchar/lineSize+128 >= w {
            nbuf := make([]byte, len(buf)*5)
            copy(nbuf[len(nbuf)-len(buf):], buf)
            w += len(nbuf) - len(buf)
            buf = nbuf
         }

         for i, c := range line {
            buf[w-i-1] = complement[c]
         }
         w -= len(line)
      }

      // Copy down to beginning of buffer, inserting newlines.
      // The loop left room for the newlines and 128 bytes of padding.
      i := 0
      for j := w; j < len(buf); j += lineSize {
         i += copy(buf[i:i+lineSize], buf[j:])
         buf[i] = '\n'
         i++
      }
      os.Stdout.Write(buf[0:i])
   }
}

Name: Anonymous 2011-12-19 2:25

>>12
According to those stats, Python 3 is, on average, 58x slower than C. LOL

Name: Anonymous 2011-12-19 3:23

>>13
LOL THAT IS SO FUNNY

Name: Anonymous 2011-12-19 3:25

>>14
That's hilarious

Name: Anonymous 2011-12-19 3:35

>>13
No one uses Python 3 anyway, so it doesn't really matter.

Name: Anonymous 2011-12-19 3:59

Why are Perl, Python and Ruby so fucking slow?

Name: Anonymous 2011-12-19 4:21

>>17

Lazy programmers making the implementations? Or maybe the languages support commonly used features that are costly, and the ease of programming that way makes the programmer overlook the inefficiency and accidentally write code that is more expensive than it needs to be? Silly implementation of GC? Failure to perform optimizations that are useful for dynamic languages? I dunno, there could be a lot of reasons.

Name: Anonymous 2011-12-19 5:41

>>18
>the ease of programming that way makes the programmer overlook the inefficiency
I find this to be very commonly the case. Making expensive and memory-consuming operations so easy to use usually ends up with people being incredibly wasteful in their programming.

Name: Anonymous 2011-12-20 0:35

>>19
which is fine, because all optimization that's worth doing can be done later.

Name: Anonymous 2011-12-20 0:40

Concurrent Programming is ENTERPRISE QUALITY with java

Name: Anonymous 2011-12-20 1:55

>>18
Failure to perform optimizations that are useful for misdesigned dynamic languages?

Name: Anonymous 2011-12-20 2:04

>>21
I'm currently trying to understand where a deadlock comes from in a concurrent Java application.
Thread A is waiting for a, which is owned by B, who is waiting for b, which is owned by NOBODY. Fucking useless thread dumps.
I want to die.

Name: Anonymous 2011-12-20 3:52

>>21
Ha ha ha, oh wow.

Name: Anonymous 2011-12-20 5:25

Concurrent programming is the future. Since we can no longer significantly increase the performance of a single core, the chip designers are starting to increase the number of cores instead. This obviously means that non-concurrent applications will have a hard cap on their performance, while concurrent application performance will continue to follow Moore's law.

Unfortunately we do not currently have proper tools for concurrent programming, and as a result it is damn near impossible to do non-trivial concurrent applications correctly (trivial concurrency is also much, much harder than you'd think). Fortunately the industry is aware of this and there are many initiatives to solve the problem both at the language level (e.g., Clojure) and library level (e.g., Apple's GCD). While these efforts are in the right direction, we do not have a full solution yet.

Name: Anonymous 2011-12-20 5:36

>>25

The issue is that not every problem can be easily parallelized. For some problems, it is as simple as splitting up the work arbitrarily among N workers and then having them all report back when they are done. Then there are other things that are innately sequential, like computing f(f(f(f(f(x))))). Each application of f cannot be performed until its input is known. So, if you were to draw a dependency graph of the computation, you'd get a single path where each node is an application of f, with x at the end.

Name: Anonymous 2011-12-20 6:02

>>26

Not every problem can, but basically every modern application can. We're long past the time when an application would solve one sequential computation and do nothing else. Even if at its core your program is computing some problem that cannot be parallelized, there will still be a large number of auxiliary tasks that can be executed in parallel (e.g., user interactions, GUI stuff, I/O, networking, housekeeping, etc. etc.)

Name: Anonymous 2011-12-20 7:22

>>27

that's true, but that's only like 5 or 7 things, so once you have a 7 or 10 core computer, you have pretty much all the hardware you could easily throw at any general application. So the limitations of a single-core CPU are still pretty important. I wonder what we'll do once CPU speeds stop increasing and demand for computing still increases. I guess we'll just have a computing shortage. Or maybe people will stop trying to make machines vroom vroom fasta and instead make software more aware of its limited resources.

Name: Anonymous 2011-12-20 7:30

>>1
I know you guys like to diss Java (despite not knowing anything about it), but it has some pretty simple and nice abstractions for concurrency. Check it out; things really can be that simple, though Java fails at everything else.

Name: Anonymous 2011-12-20 11:36

seriously though java is best

Name: Anonymous 2011-12-20 12:06

Concurrency has already been solved in multiple scenarios. Take shit like trains, motherfucking trains: trains are multiple processes working concurrently on mutable entities, and note how they do not crash (well, at least most of them don't).

Name: Anonymous 2011-12-20 12:16

>>30
C is better, for concurrency at least.

Name: Anonymous 2011-12-20 12:19

Study Erlang. Develop a new language based around concurrency and not shit.

Name: Anonymous 2011-12-20 12:32

>>33
What's wrong with Go?

Name: Anonymous 2011-12-20 14:15

>>34
Besides that it's shit?

Name: Anonymous 2011-12-20 14:19

>>35
But it's not shit.

Name: Anonymous 2011-12-20 15:38

>>29

Yes, yes it does. However, you need to sacrifice a great amount of performance (apart from the usual Java slowness).

E.g., you can use an ArrayBlockingQueue. It is so easy to use that even a high schooler can solve the producer/consumer scenario. Hope you have something to do while waiting.

Name: Anonymous 2011-12-20 16:28

>>35
If that's the worst you have, there must be nothing wrong with Go.

Name: Anonymous 2011-12-20 17:34

>>36,38
It's slow, immature and very poorly designed.

Name: Anonymous 2011-12-20 18:17

>>26
>f(f(f(f(f(x)))))
Have you never heard of function composition? Most of the things that "can't" be parallelized in fact can be.

>>34
There are a lot of arguments against Go, but since this is a concurrency discussion, a single goroutine can easily block all of the goroutines on that CPU, with no resource contention.

Name: Anonymous 2011-12-20 18:59

>>39
It has about the same speed as racket, is still in development (better that than to be permanently stuck with shit features) and is actually quite well designed (of course, you're a Java faggot).
It's the higher-level C-like language we need. When it's finished and the compiler is improved, its speed will easily match other higher level languages, such as the overly-optimized Java (which, despite what trolls may say, is not all that slow these days).

Name: Anonymous 2011-12-20 20:14

>>41
>and is actually quite well designed
N. O. P. E.

Name: Anonymous 2011-12-20 20:15

Is the fact that no one seemed to take any interest in my concept for a concurrent language that I posted earlier a sign for the future? You're all saying this problem needs to be solved at the language level, so is anyone interested in my language?

Name: Anonymous 2011-12-20 20:17

>>43
One word: Erlang. But I'm still interested in what you can produce.

Name: Anonymous 2011-12-20 20:50

>>41
>It's the higher-level C-like language we need.
It hardly has anything in common with C. They share braces and static typing, and that's it.

We don't need it. It isn't applicable in many of the areas where C has been a good choice, and it has one of the worst concurrency models of any language designed to address the issue of concurrency.

People keep trying to kill C with a language that depends on a runtime system and an uncontrollable GC. That's just not going to happen. C is being displaced by these languages, but it won't be replaced by them. Go is probably going to displace Python more than C in that regard.

Name: Anonymous 2011-12-20 20:57

>>44
After studying Erlang, I discovered it has almost the same approach as what I am trying to create, but I'm taking a more imperial approach. Right now I'm not very comfortable with Erlang, as C++ is my home and I'm moving to D, but I guess I'll see what happens. ISO sill thin there is room for my contributions.

Name: Anonymous 2011-12-20 21:03

>>46
s/ISO still thin/I still think/
Goddamned gingerbread keyboard.

Name: Anonymous 2011-12-20 21:06

>>46
U MENA ``imperative''.
But seriously, there's nothing particularly scary about Erlang's approach... I guess if you're looking for an imperative language with nice concurrency support, try Ada. I've barely used `tasks' in it yet, but the rest of the language is damn nice.

Name: Anonymous 2011-12-20 21:33

>>48
It's like learning French. It's not hostile, but it's different, and takes a bit to get used to, despite not being very different. And Jesus, even Ada has components for concurrency? Everyone's trying to solve the problem but no one is doing it right.

Name: Anonymous 2011-12-20 21:43

Name: Anonymous 2011-12-21 1:36

>>40

There is function composition, but just because you define a function to be the composition of many other functions doesn't mean that evaluating the composition will be fast. You can't immediately apply parallelization. In evaluating (g∘f)(x) = g(f(x)), g cannot be applied until the return value from f is known.

But depending on the function being composed, the result might be equivalent to a different parallelizable solution, or you might be able to get an approximation algorithm that is parallelizable.

If the function is defined on a small finite domain, then you could use a bruteforce approach to calculate f^N(x) in log(N) time, given that you have enough processors. You could enumerate the members of f's domain into 1...k, and then express f as a mapping on these numbers. This mapping can be expressed as an array of ints, A, where A[i] = j when f(num->object(i)) = num->object(j). You can then apply A to itself, to get an array for f^2. You can apply that array to itself to get an array for f^4. And so on. Doing this n times will give you a definition for f^(2^n). You can then take the base 2 decomposition of N, and compose the needed f^(2^i) functions to give you f^N. This requires having a processor for every single element in the domain, and the domain can be very large, especially if f is operating on some kind of data structure. So you can't do this very often.

Name: Anonymous 2011-12-21 1:55


imperative = AIDS

Name: Anonymous 2011-12-21 2:51

>>52
Explain.

Name: Anonymous 2011-12-21 3:24

>>51
What I'm saying is that f'(x) = f(f(x)) can often be computed by an equivalent that runs as fast as f(x) can, and if you believe
>Each application of f cannot be performed until the input parameter is known.
holds in the vast majority of cases, then you're in for a lot of surprises. Any commutative operation can be parallelized to log n time given enough CPUs via straightforward divide and conquer. Many that aren't can still be parallelized without much trouble (consider what's needed for division).

Name: Anonymous 2011-12-21 4:09

>>54

yeah, that's cool, but that takes special knowledge about what f is, and you are using a different algorithm to calculate the same result, and this different algorithm is either very efficient or can be easily parallelized. One example of an f I was thinking of would be expanding the frontier in breadth first search. One can't expand the next frontier until the current one is known. Although the frontier can be expanded in parallel, so there are opportunities for parallelization there. But there is a sequential nature to growing a path, where the growth must always go into directions that you have not yet been to. You don't know where not to go, until you've gotten there.

Name: Anonymous 2011-12-21 4:58

>>55
>But there is a sequential nature to growing a path
There's a parallel nature to finding a path. There's a common trait to the vast majority of computational problems: the actual path is relatively short, but there's a lot of possible paths. Because you are not usually interested in problems where the answer is very, very long but relatively straightforward to produce, and the only problem is the sheer length of it. You are interested in problems with short answers which are hard to find. Such problems are usually highly parallelizable, since you are perfectly content with the limit of "you can't find the path faster than you can walk the path". Exceptions are rare and contrived.

Name: Anonymous 2011-12-21 5:17

>>56

yeah, I bet massively parallel breadth first search has a lot of applications. It seems to fit the same type of characteristic of parallel algorithms, where you do a ton of work and most of it ends up being irrelevant, but you end up with your solution, which is something similar to a shortest path to one specific vertex. So in this scenario, parallelization would be very useful if the branching factor is relatively high, and the length of the path you are looking for is manageable.

Name: Anonymous 2011-12-21 6:31

>>43
There is no problem that needs to be solved; the tools and theory already exist. I am especially not interested in a broken language designed by someone who apparently isn't very experienced when it comes to even using programming languages, let alone designing them.

Name: Anonymous 2011-12-21 7:29

>>58
Deadlocks, thread races, etc.
Ring a bell?
Threads, as they are now, are highly unstable.

Name: Anonymous 2011-12-21 7:51

>>59
If you mistreat them, yes; if you know what you're doing, no.
You can misuse anything and get a bad result. If you think removing control from the user is a good idea then you're most likely a sub-par Java programmer. It can only lead to inefficiency.

Name: Anonymous 2011-12-21 8:34

>>55
It takes special knowledge of f to write f in the first place. Parallelizable means parallelizable in principle, not 'by the compiler'. However if the compiler knows f is commutative there are a lot of cases where f' is automatically computable.

Name: Anonymous 2011-12-21 9:46

If Java was good enough with threads then Erlang would never have had a chance with the big companies to begin with, given its unfriendly Prologian syntax (vs. piggybacking the most familiar syntax after BASIC), near-complete lack of IDEs, two books (vs. the 90,000 publishers that push 90,000 Java books clocking in at 700 pages every year), lack of existence even in academia (Haskell, ML, or Prolog are more likely to be taught at some point while in school), lack of a workforce knowledgeable in it, and completely cryptic, hidden documentation on even setting up a proper application with it (I still haven't learned how that .app shit works, and I've written a few thousand lines of Erlang at this point and have the pragmatic Erlang book).

Erlang is not among the hipster languages, it doesn't even pretend to be elegant. Java has to be doing some things terribly wrong to have something like Erlang pick up market share in the same space as Java's alleged best feature.

Name: Anonymous 2011-12-21 10:59

Yearly reminder how easy it is to implement sleepsort in C.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <omp.h>

int main(int argc, char **argv) {
  int i;

  omp_set_num_threads(argc-1);

#pragma omp parallel for
  for (i = 0; i < argc - 1; i++) {
    long int this = atol(argv[i+1]);

    sleep(this);

    printf("%ld\n", this);
    fflush(stdout);
  }

  return 0;
}

Name: Anonymous 2011-12-21 11:56

>>63
this
2011
using C++ reserved word in C code
can not compile with C++ compiler

O_o

Name: Anonymous 2011-12-21 11:58

>>64
C++
IHBT :(

Name: 64 2011-12-21 12:02

>>65
I'm not trolling actually. There are many people who use C++.

Name: Anonymous 2011-12-21 12:03

>>66
There are many people that live under dictators, too. Some filth like Russians will even vote for them, given the chance. But I don't see why I should support them.

Name: Anonymous 2011-12-21 12:08

>>67
Huh? Great argument, though mine will be better.

If Nazis were alive, they would use C++ reserved words in C code.

Name: Anonymous 2011-12-21 12:09

>>68
I expect such great engineers would indeed use C and reject C++. Good point!

Name: Anonymous 2011-12-21 12:10

>>69
lol'd

you win

Name: Anonymous 2011-12-21 12:12

Just use Clojure you shitters.

Thread over.

Name: Anonymous 2011-12-21 12:12

>>71

Fuck you!

GC is shit.

Name: Anonymous 2011-12-21 12:13

>>71
JVM is shit.

Name: Anonymous 2011-12-21 14:22

>>64
I think the author of the code did it specifically to annoy you C++ retards.
