
Any decent modern general-purpose languages?

Name: Anonymous 2012-07-25 10:55

Assembly: Unportable. No standardised syntax.
Classical Visual Basic: Some good parts. Shit overall.
C: Shitty standard library. Deficient type system. Can't into Unicode. ``Unportable assembly.''
D and C++: Obfuscated boilerplate languages.
Java and C#: Forced OOP.
Common Lisp: Archaic cons-based library. Writing complex macros is a PitA due to the unlispy quotation syntaxes.
Scheme: CL without namespaces.
Clojure and Erlang: Concurrency is unneeded outside of a few very specific applications. Parallelism is where it's at.
OCaml: Great language; only one implementation, and a deficient one at that.
Haskell: Academic sex toy.
Forth: Reinventing the wheel over and over.
Ruby: Implicit declarations. Slow as fuck.
Python: Implicit declarations. FioC.
Perl: Brain damage.
PHP: Pretty much shit.
JavaScript: "" == false

It's impossible to list them all but, please, what decent modern general-purpose languages exist?

Name: Anonymous 2012-07-27 5:56

>>79
So, it just means they're full of security flaws hidden in C undefined behaviour.

Name: Anonymous 2012-07-27 6:11

>>71
So you would rather use a terribly designed language than write the few lines of code required to use the FFI? Seriously?

You shouldn't need inline assembly anywhere except in kernels and some cases of extreme optimization. Requiring a modern language to support that is outright ludicrous.

By the way, check this:
package main

// #include <stdio.h>
// static int f(int a, int b) { return a * b; }
// // cgo can't call variadic C functions like printf directly, so wrap it:
// static void print_int(int x) { printf("%d\n", x); }
import "C"

func main() {
    C.print_int(C.f(2, 3))
}

Yes, I just defined and called a C function in inline Go. See that? It's not that hard to use C stuff.
However, I see no reason at all to require my language to support anything more than that. Inline ASM? C as a subset of the language? What purpose would any of that serve in the long run except to complicate my job?

By the way, there's no such thing as an asm block in C. What you're referring to is a C extension included in whatever C-based language you're using.

Name: Anonymous 2012-07-27 13:37

>>82
Weave (part of SciPy) and Instant (part of FEniCS) are just two Python modules that can do the same thing; they also allow easy access to NumPy arrays.

Name: Anonymous 2012-07-27 14:04

>>72
Compilers don't do magic. Like any software, they follow a set of rules to turn common constructs into tighter code. But compilers are not aware of the context of the problem you're trying to solve, or of the algorithm you're trying to express. Suppose you're in a very hot block of code and you find a more clever solution than the ASM output. That's where you use inline assembly.

Name: Anonymous 2012-07-27 14:08

>>82
> By the way, there's no such thing as an asm block in C.
It's not standard, but every C compiler in existence has implemented it.

Name: Anonymous 2012-07-27 14:14

Try Groovy or Scala or Erlang

Name: Anonymous 2012-07-27 14:23

>>85
In mildly incompatible ways.

Name: Anonymous 2012-07-27 14:40

>>84
However, it's extremely rare that such an optimization would be worth more than general portability. It certainly shouldn't be a requirement for a general-purpose language (note that ``general'' does not mean ``specific set of hardware''). So, when you REALLY have to use that, you can use whatever convoluted way you can think of, since it's quite unlikely that you'll be doing it regularly.

>>85
And that's exactly what I said. That block isn't a part of the C language, but is a very common compiler extension.

Name: Anonymous 2012-07-27 15:11

>>71,82
Haha, prawg, open your mouth and take my huge Lisp with inline C. http://lush.sourceforge.net/index.html

Name: Anonymous 2012-07-27 15:48

> Scheme: CL without namespaces.
lol no, Scheme: unbloated CL

Name: Anonymous 2012-07-27 15:52

Perl6 come to me

Name: Anonymous 2012-07-27 15:57

>>90
Scheme is the C of Lisp. Undefined as shit.

Name: Anonymous 2012-07-27 16:36

>>92
Say, what's undefined in Scheme?

Name: Anonymous 2012-07-27 16:59

>>93
Order of evaluation of procedure arguments is unspecified, let may evaluate its clauses in any order, and there are certainly others.
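For anyone who doesn't see why that matters, here's a sketch in Python (Python itself fixes left-to-right evaluation, so the two orders are forced by hand here; a Scheme implementation is free to pick either one -- helper names are made up):

```python
# Simulate unspecified argument evaluation order: force the argument
# "thunks" in two different orders, as two conforming Scheme
# implementations legally could, and watch the result change.

def call_with_order(f, thunks, order):
    """Force the argument thunks in the given order, then apply f."""
    results = {}
    for i in order:
        results[i] = thunks[i]()          # side effects happen here
    return f(*(results[i] for i in range(len(thunks))))

def make_counter():
    state = {"n": 0}
    def next_value():
        state["n"] += 1
        return state["n"]
    return next_value

# (f (count) (count)) -- which argument gets 1 and which gets 2?
c = make_counter()
left_first = call_with_order(lambda a, b: (a, b), [c, c], order=[0, 1])

c = make_counter()
right_first = call_with_order(lambda a, b: (a, b), [c, c], order=[1, 0])

print(left_first)    # (1, 2)
print(right_first)   # (2, 1)
```

Same expression, two legal answers -- which is exactly why you don't put side effects in argument position.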

Name: Anonymous 2012-07-27 17:03

>>90
More like
Scheme: CL without any standard library

Name: Anonymous 2012-07-27 18:06

>>1
> Erlang: Concurrency is unneeded outside of a few very specific applications. Parallelism is where it's at.

You do realize that Erlang can do both, right?
At the same time, too.

Name: Anonymous 2012-07-27 18:43

>>96
yea but can it do both while it's doing both?

Name: Anonymous 2012-07-27 18:49

>>97
Just because you're the robot doesn't mean you're the robot.

Did you even read the book?

Name: Anonymous 2012-07-27 19:11

>>98
No, but I read SICP.

Name: Anonymous 2012-07-27 19:42

100 GET

Name: Anonymous 2012-07-27 19:51

>>94
> Order of evaluation of arguments, let may evaluate its clauses in any order

I don't know why some people think that 'undefined behaviour' is a bad thing. Actually it's good, because it forces you to think in a correct way.

Name: Anonymous 2012-07-27 19:55

Why would such a minor freckle like this matter? Are you mutating states, /prog/?

Name: Anonymous 2012-07-27 20:04

>>101
No, it's not.  Everything should be well-defined and nothing left up to the implementation.

>>102
In fact I am mutating states whenever I must, which happens about 1% of the time.  And in that 1% of the time, only 1% of the time does the side-effect escape the function.

Name: Anonymous 2012-07-27 20:14

>>103
I think you don't understand why these things are undefined behaviour

Name: Anonymous 2012-07-27 20:18

JACKSON 5 + 100 GET

20 TIMES BETTER THAN JACKSON 5 GET

Name: Anonymous 2012-07-27 20:27

>>103
> Everything should be well-defined and nothing left up to the implementation.
No language has ``nothing left up to the implementation'' unless it is ``defined'' by its implementation. See: Perl.

Name: Anonymous 2012-07-27 20:29

>>103
I guess you want Java, “run once, write everywhere”.

Name: !L33tUKZj5I 2012-07-27 23:53

The best programming language there ever was, or ever will be, is Spectrum BASIC.
Every machine out there should have an interpreter.

Name: Anonymous 2012-07-28 3:11

>>103
Who made up that rule? If I made a Shitgol, left the entire language as an exercise to the implementations, and Shitgol compiler writers and Shitgol users were okay with that, then it should be fine. Standard specs only exist to prevent an excess of incompatible extensions between vendors.

Name: Anonymous 2012-07-28 4:43

>>104
I do understand: it's because compiler writers want extra room for optimization.  For instance, a compiler could schedule the evaluation of a procedure's arguments in parallel.

>>106
Most languages aren't Lisp, thus they're crap.  Pointing out that most languages have undefined behaviour means nothing.

>>107,109
I never thought I'd say this, but Java does have something good about it.  I find reproducibility to be fairly important -- if a program is executed on two different implementations, it should have the same result (leaving aside things like reference->integer conversions which aren't even guaranteed to be the same across two runs of the same implementation).  Leaving undefined behaviour in your language is a great way to make your users shoot themselves in the foot.
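To make that reproducibility point concrete, here's a sketch using Python 3's hash randomisation as a stand-in for those reference->integer conversions (the helper name is made up):

```python
# Python 3 randomizes string hashes per process (PYTHONHASHSEED), so the
# "same" program can give different results on two runs unless you pin
# the seed -- the same kind of non-reproducibility described above.
import os
import subprocess
import sys

def hash_in_fresh_process(seed=None):
    """Run print(hash('spam')) in a fresh interpreter and return the value."""
    env = dict(os.environ)
    env.pop("PYTHONHASHSEED", None)
    if seed is not None:
        env["PYTHONHASHSEED"] = str(seed)
    out = subprocess.run([sys.executable, "-c", "print(hash('spam'))"],
                         env=env, capture_output=True, text=True)
    return int(out.stdout)

a, b = hash_in_fresh_process(), hash_in_fresh_process()
print(a == b)   # almost certainly False: two runs, two hash seeds

a, b = hash_in_fresh_process(seed=0), hash_in_fresh_process(seed=0)
print(a == b)   # True: pinning the seed restores reproducibility
```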

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-07-28 5:53

>>72,73,82
Clearly none of you are experienced with Asm or have looked at compiler output (or you have, but can't understand it or see a way to do it better, because of the first point). Compilers are a good way to generate code quickly; even at the highest optimisation level none of them can reason about things like register allocation in the same way that a knowledgeable programmer can. People have been saying compilers are better at generating code than programmers, but that's only if you compare to average or below-average Asm programmers.

Go learn Asm and compile any program you want, then look at the Asm output. Find a function and see if you can do it better. Unless you have some ultra-powerful compiler I haven't seen before, chances are you can beat the compiler in size, speed, or both. Perhaps because of the source language semantics the compiler can't do something, but YOU (should) know how your program operates and can easily take advantage of things like cross-function register allocation, convenient invariants (you can prove a register will always have e.g. 0, and use it appropriately; the compiler can't), and advanced stack manipulation.

>>82
Saying a language is "terribly designed" when it has the capabilities I need (and then some) is simply opinion. And it's not "write a few lines of code required to use the FFI", it's having to do that plus all the other "administrative" cruft surrounding it. In other words the integration is not as seamless as it could be. You said it yourself -- "FFI". Not "embed code directly in the compiler/JIT's output".

>>83
This is interesting, but note that it's not a core language feature. Someone clearly had the need to do that.

>>88
True portability is a myth. You can never get that in a compiled language, no matter how hard the standards bodies try. The closest thing to it is Java, and everyone should be familiar with its characteristics. If portability is ALL I care about for an application, then that's what I'd choose. My definition of "general purpose language" is one that allows the programmer to use whatever level of abstraction he/she finds appropriate, and switch between them effortlessly. How many architectures out there, in active use, have 37-bit integers, 11-bit bytes, or other oddities that the portability advocates keep using as examples? If you were working on such an uncommon system, portability is going to be the least of your worries. Right now, if you're targeting regular desktops, there are basically two ISAs and two OSes: x86-32/x86-64, Windows/*nix.

>>89
This is the sort of stuff I was talking about.

> For instance, a compiler could schedule the evaluation of a procedure's arguments in parallel.
Unless those arguments require extensive computation, chances are that the overhead of preparing to get that done would outweigh the cost of just running them all in one thread serially. Modern processors are now quite good at parallel instruction execution; IIRC these 3 instructions

add b[edx], 5
sub eax, ecx
sub ebx, ecx


are executed in parallel since ~Core 2, with the second two running while the first is waiting on two memory accesses.

Name: Anonymous 2012-07-28 6:26

>>111
While you could hyper-optimize that code, do you really need to? When was the last time you really needed something like that, instead of code that would work on at least x86 and ARM?

Name: Anonymous 2012-07-28 6:41

>>11
Except Python does not do tail call elimination. Or have its own stack for function calls. Enjoy your 998 calls or
>RuntimeError: maximum recursion depth exceeded
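A minimal demonstration, assuming CPython 3 (where the exception is now RecursionError, a subclass of the old RuntimeError) and the default limit:

```python
# CPython blows the stack around the recursion limit (default 1000),
# even when the recursive call is in tail position.
import sys

def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)      # tail call, but CPython still grows the stack

sys.setrecursionlimit(1000)
try:
    countdown(5000)
except RecursionError as e:
    print("blew the stack:", e)

# The usual workaround is to hand-convert the tail call into a loop:
def countdown_iter(n):
    while n != 0:
        n -= 1
    return "done"

print(countdown_iter(5000))      # prints: done
```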

Name: Anonymous 2012-07-28 6:51

>>113
CLISP doesn't either, yet it is still perfectly usable.

Name: Anonymous 2012-07-28 6:52

>>111
Compilers can track the state of numerous registers far more effectively than a human can. They can take a higher-level approach to register assignment than a human could reasonably manage with a pen.

Name: Anonymous 2012-07-28 6:59

>>114
No, just no.

Name: Anonymous 2012-07-28 8:08

>>111
> This is interesting, but note that it's not a core language feature. Someone clearly had the need to do that.
If you drop NumPy arrays you can also drop NumPy as a dependency, and then all that is left is distutils, which is part of the standard library.

Name: Anonymous 2012-07-28 12:33

>>84
> Compilers don't do magic. They follow a set of rules like any software to turn common constructs into tighter code.

Depending on the set of optimization algorithms they employ, they can appear to be magical.

> But compilers are not aware of the context of the problem you're trying to solve, or the algorithm you're trying to express.

Then you are using the wrong language. You should only express the program as a minimally described idea. The compiler can then find an implementation of the idea that obtains an optimal efficiency on that target architecture and platform.

> Suppose you're in a very hot block of code. You find a more clever solution than the ASM output. That's where you use inline assembly.

I'd rather let my genetic peephole simulated-annealing optimizer run on it with test input for two days.

Name: Anonymous 2012-07-28 12:50

>>118
> let my genetic peephole simulated annealing optimizer run on it with test input for two days.
please be trolling

Name: Anonymous 2012-07-28 12:55

>>111

TRIPS

> Clearly none of you are experienced with Asm, or looked at compiler output (or you have, but can't understand/see a way to do it better, because of the first point). Compilers are a good way to generate code quickly; even at the highest optimisation level none of them can reason about things like register allocation in the same way that a knowledgeable programmer can.

http://en.wikipedia.org/wiki/Register_allocation
http://en.wikipedia.org/wiki/Liveness_analysis
http://en.wikipedia.org/wiki/Category:Data-flow_analysis

> People have been saying compilers are better at generating code than programmers, but that's only if you compare to average or below-average Asm programmers.

Use a better compiler.

> Go learn Asm and compile any program you want, then look at the Asm output. Find a function and see if you can do it better. Unless you have some ultra-powerful compiler I haven't seen before, chances are you can beat the compiler in size, speed, or both. Perhaps because of the source language semantics the compiler can't do something,

Use a better language. The only behaviour a language defines should be what is critical to the correctness of the program.

> but YOU (should) know how your program operates and can easily take advantage of things like cross-function register allocation, convenient invariants (you can prove a register will always have e.g. 0, and use it appropriately; the compiler can't), and advanced stack manipulation.

http://en.wikipedia.org/wiki/Interprocedural_optimization

> Saying a language is "terribly designed" when it has the capabilities I need (and then some) is simply opinion. And it's not "write a few lines of code required to use the FFI", it's having to do that plus all the other "administrative" cruft surrounding it. In other words the integration is not as seamless as it could be. You said it yourself -- "FFI". Not "embed code directly in the compiler/JIT's output".

This is an implementation issue, not a language issue.

> Unless those arguments require extensive computation, chances are that the overhead of preparing to get that done would outweigh the cost of just running them all in 1 thread serially.

This is a decision that can be made by the compiler. It would be annoying, though, if in order to take advantage of this optimization you had to stuff large expressions to evaluate in parallel inside function call parameter lists. In a purely functional setting, you can simply define your variables relative to prior evaluations, and the compiler can create a dynamic evaluator that evaluates expressions as soon as their dependencies are finished, using a number of threads optimal for the target architecture.
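A toy sketch of that dynamic evaluator in Python (all names are made up; and note that under CPython's GIL the threads only genuinely overlap for I/O or C calls, so this shows the scheduling idea rather than a real speedup):

```python
# Dependency-driven evaluation: each "binding" is a pure function of
# earlier bindings, and anything whose inputs are ready runs on the pool.
from concurrent.futures import ThreadPoolExecutor

def evaluate(bindings, workers=4):
    """bindings maps name -> (pure function, [names it depends on])."""
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def force(name):
            if name not in futures:
                fn, deps = bindings[name]
                dep_futures = [force(d) for d in deps]    # submit inputs first...
                args = [f.result() for f in dep_futures]  # ...then wait on them
                futures[name] = pool.submit(fn, *args)
            return futures[name]
        for name in bindings:                             # kick everything off
            force(name)
        return {name: f.result() for name, f in futures.items()}

# x and y don't depend on each other, so they can run at the same time;
# z is submitted as soon as both of its inputs are ready.
results = evaluate({
    "x": (lambda: sum(range(1000)), []),
    "y": (lambda: 6 * 7, []),
    "z": (lambda a, b: a + b, ["x", "y"]),
})
print(results["z"])   # 499542
```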

> Modern processors are now quite good at parallel instruction execution, IIRC these 3 instructions
>
> add b[edx], 5
> sub eax, ecx
> sub ebx, ecx
>
> are executed in parallel since ~Core 2, with the second two running while the first is waiting on two memory accesses.

cool stuff.
