>>1
yeah, it is a pain in the ass. On the other hand you could use a language with an FFI and good support for concurrent threads (i.e. Erlang) and get away with it.
Name:
Anonymous2011-12-19 0:37
I've been working on this problem for a while, and the general consensus is that it sucks. I'm implementing a language with autonomous mutex locking that goes through a governing lock controller, and that provides simple keywords and operators for concurrency, e.g.:
spawn (routine) //Routines are automatically concurrent loop- like functions
call (function) //Has the same effect as 'call' in some assembly implementations
statement { statement { statement }} //Where all would be distributed across threads in the interpreter
I'll release the first version of the interpreter in a few months, but I can't promise anything; even thread safety will only come with maturation of the software, and I'm on the fence right now about implementing it in C++ or Erlang. Most likely I will implement it in Erlang, due to C++ being utter malware where leaks are commonplace and concurrency only comes in the form of shitty APIs (like pthreads), but I am new to Erlang, and that could hold me back, whereas I know C++ like the back of my hand.
Name:
Anonymous2011-12-19 0:37
Erlang and Haskell make it easier.
Name:
Anonymous2011-12-19 0:48
>>5
Of course, the line terminator is not a newline as it may seem in the post above; it's more a flow-control-intensive mix of commas and full stops, where a statement is ended with a full stop or a comma, and a sentence is one or more statements terminated by a full stop:
statement,
another statement,
function call.
if x > 9000,
read(SICP).
kill(osama),
kill(obama).
Name:
Anonymous2011-12-19 1:14
MPI and OpenMP are nice, I don't get what all the whinging is about.
Name:
Anonymous2011-12-19 1:28
It's extremely easy and efficient in Go. It's your fault that you're using inferior languages.
Go is a reimplementation of the entire C family with annoying operators and the go keyword. If a language doesn't make you rethink programming as a whole, you shouldn't be learning that language. Erlang does everything Go does, but with style. Google is great, but Go was (is) a screw-up.
	// Accumulate reversed complement in buf[w:]
	nchar := 0
	w := len(buf)
	for {
		line, err = in.ReadSlice('\n')
		if err != nil || line[0] == '>' {
			break
		}
		line = line[0 : len(line)-1]
		nchar += len(line)
		if len(line)+nchar/lineSize+128 >= w {
			nbuf := make([]byte, len(buf)*5)
			copy(nbuf[len(nbuf)-len(buf):], buf)
			w += len(nbuf) - len(buf)
			buf = nbuf
		}
		for i, c := range line {
			buf[w-i-1] = complement[c]
		}
		w -= len(line)
	}
	// Copy down to beginning of buffer, inserting newlines.
	// The loop left room for the newlines and 128 bytes of padding.
	i := 0
	for j := w; j < len(buf); j += lineSize {
		i += copy(buf[i:i+lineSize], buf[j:])
		buf[i] = '\n'
		i++
	}
	os.Stdout.Write(buf[0:i])
	}
}
Name:
Anonymous2011-12-19 2:25
>>12
According to those stats, Python 3 is, on average, 58x slower than C. LOL
Lazy programmers writing the implementations? Or maybe the languages support commonly used features that are costly, and the ease of programming that way makes the programmer overlook the inefficiency and accidentally write code that is more expensive than it needs to be? A silly implementation of GC? Failure to perform optimizations that are useful for dynamic languages? I dunno, there could be a lot of reasons.
>>18 the ease of programming that way makes the programmer overlook the inefficiency
I find this very commonly to be the case. Making expensive and memory-consuming operations so easy to use usually ends up with people being incredibly wasteful in their programming.
Name:
Anonymous2011-12-20 0:35
>>19
which is fine, because all optimization that's worth doing can be done later.
Name:
Anonymous2011-12-20 0:40
Concurrent Programming is ENTERPRISE QUALITY with java
>>18 Failure to perform optimizations that are useful for misdesigned dynamic languages?
Name:
Anonymous2011-12-20 2:04
>>21
I'm currently trying to understand where a deadlock comes from, in a concurrent Java application.
Thread A is waiting for a, which is owned by B, which is waiting for b, which is owned by NOBODY. Fucking useless thread dumps.
I want to die.
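For what it's worth, a dump like that (A holds a and wants b, B holds b) is the classic symptom of inconsistent lock ordering. A minimal sketch of the usual fix, with hypothetical locks a and b standing in for whatever your application actually locks: make every thread acquire them in one global order, so a cycle can never form.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrder {
    // Hypothetical locks standing in for the "a" and "b" in the thread dump.
    static final ReentrantLock a = new ReentrantLock();
    static final ReentrantLock b = new ReentrantLock();

    // Deadlock-prone pattern: thread 1 takes a then b while thread 2 takes
    // b then a. The fix shown here: everyone takes a before b, always.
    static void transfer() {
        a.lock();
        try {
            b.lock();
            try {
                // critical section touching both resources
            } finally {
                b.unlock();
            }
        } finally {
            a.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrder::transfer);
        Thread t2 = new Thread(LockOrder::transfer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("no deadlock");
    }
}
```

With the opposite orderings reintroduced, the same program can hang forever, which is exactly the dump above.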
Concurrent programming is the future. Since we can no longer significantly increase the performance of a single core, chip designers are starting to increase the number of cores instead. This obviously means that non-concurrent applications will have a hard cap on their performance, while concurrent application performance will continue to follow Moore's law.
Unfortunately we do not currently have proper tools for concurrent programming, and as a result it is damn near impossible to do non-trivial concurrent applications correctly (trivial concurrency is also much, much harder than you'd think). Fortunately the industry is aware of this and there are many initiatives to solve the problem both at the language level (e.g., Clojure) and library level (e.g., Apple's GCD). While these efforts are in the right direction, we do not have a full solution yet.
The issue is that not every problem can be easily parallelized. For some problems, it is as simple as splitting up the work arbitrarily among N workers and then having them all report back when they are done. Then there are other things that are innately sequential, like computing f(f(f(f(f(x))))). Each application of f cannot be performed until its input is known. So, if you were to draw a dependency graph of the computation, you'd get a single path where each node is an application of f, with x at the end.
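The two shapes of dependency graph look like this (the placeholder f and the sizes are made up for the sketch): the wide graph splits trivially across cores, while the chain forces one application of f at a time.

```java
import java.util.function.IntUnaryOperator;
import java.util.stream.IntStream;

public class Shapes {
    public static void main(String[] args) {
        IntUnaryOperator f = x -> x + 1; // placeholder function for the example

        // Embarrassingly parallel: 1000 independent applications of f,
        // order irrelevant, so the runtime may split them across cores.
        int sum = IntStream.range(0, 1000).parallel().map(f).sum();

        // Innately sequential: f(f(f(f(f(0))))) -- each application
        // needs the previous result, so the chain cannot be split.
        int y = 0;
        for (int i = 0; i < 5; i++) y = f.applyAsInt(y);

        System.out.println(sum + " " + y);
    }
}
```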
Not every problem can, but basically every modern application can. We're long past the time when an application would solve one sequential computation and do nothing else. Even if at its core your program is computing some problem that cannot be parallelized, there will still be a large number of auxiliary tasks that can be executed in parallel (e.g., user interactions, GUI stuff, I/O, networking, housekeeping, etc. etc.)
that's true, but that's only like 5 or 7 things, so once you have a 7 or 10 core computer, you have pretty much all the hardware you could easily throw at any general application. So the limitations of a single core cpu are still pretty important. I wonder what we'll do once cpu speeds stop increasing, and demand for computing still increases. I guess we'll just have a computing shortage. Or maybe people will stop trying to make machines vroom vroom fasta and instead make software careful careful more aware of limited resources.
>>1
I know you guys like to dis Java (despite not knowing anything about it) but it has some pretty simple and nice abstractions for concurrency. Check it out, things can really be that simple though java fails at everything else.
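To be concrete, here is a hedged sketch of what "simple" looks like with java.util.concurrent: a fixed thread pool and invokeAll, no hand-rolled threads or locks (the two tasks are toy examples):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Pool {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Toy tasks; invokeAll blocks until every one has finished
        // and returns their Futures in order.
        List<Callable<Integer>> tasks = List.of(() -> 1 + 1, () -> 2 * 21);

        int total = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) total += f.get();
        pool.shutdown();
        System.out.println(total);
    }
}
```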
Name:
Anonymous2011-12-20 11:36
seriously though java is best
Name:
Anonymous2011-12-20 12:06
Concurrency has already been solved in multiple scenarios. Take shit like trains, motherfucking trains: trains are multiple processes working concurrently on mutable entities, and note how they do not crash (well, at least most of them don't).
Yes, yes it does. However, you need to sacrifice a great amount of performance (apart from the usual Java slowness).
E.g., you can use an ArrayBlockingQueue. It is so easy to use that even a high schooler can solve the producer/consumer scenario. Hope you have something to do while waiting.
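Roughly what that ArrayBlockingQueue producer/consumer looks like (the capacity and the poison-pill value are arbitrary choices for the sketch) — including the blocking "waiting" being complained about:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProdCons {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when it is full and take() blocks
        // when it is empty -- that blocking is the "waiting".
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) q.put(i);
                q.put(-1); // poison pill: tells the consumer to stop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        int sum = 0;
        for (int v; (v = q.take()) != -1; ) sum += v; // consumer loop
        producer.join();
        System.out.println(sum);
    }
}
```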
>>35
If that's the worst you have, there must be nothing wrong with Go.
Name:
Anonymous2011-12-20 17:34
>>36,38
It's slow, immature and very poorly designed.
Name:
Anonymous2011-12-20 18:17
>>26 f(f(f(f(f(x)))))
Have you never heard of function composition? Most of the things that "can't" be parallelized in fact can be.
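That only works when f has exploitable structure, though. A sketch of the trick for one special case I'm assuming here, f affine (f(x) = ax + b): composition of affine maps is associative, so the chain of five f's folds as a parallel reduction and is applied once at the end. A general opaque f cannot be split this way.

```java
import java.util.stream.Stream;

public class Compose {
    // An affine map x -> a*x + b, represented by its coefficients.
    record Affine(long a, long b) {
        long apply(long x) { return a * x + b; }
        // (this ∘ g)(x) = a*(g.a*x + g.b) + b. Composition is associative,
        // so a chain of compositions can run as a parallel reduction.
        Affine compose(Affine g) { return new Affine(a * g.a, a * g.b + b); }
    }

    public static void main(String[] args) {
        Affine f = new Affine(2, 1); // f(x) = 2x + 1

        // Sequential: f(f(f(f(f(3)))))
        long y = 3;
        for (int i = 0; i < 5; i++) y = f.apply(y);

        // "Parallel": fold the five copies of f first, apply once.
        // new Affine(1, 0) is the identity map, as reduce requires.
        Affine f5 = Stream.generate(() -> f).limit(5)
                .parallel()
                .reduce(new Affine(1, 0), Affine::compose);

        System.out.println(y + " " + f5.apply(3));
    }
}
```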
>>34
There are a lot of arguments against Go, but since this is a concurrency discussion: a single goroutine that never yields can easily block all of the other goroutines on that OS thread, with no resource contention involved.