
Any decent modern general-purpose languages?

Name: Anonymous 2012-07-25 10:55

Assembly: Unportable. No standardised syntax.
Classical Visual Basic: Some good parts. Shit overall.
C: Shitty standard library. Deficient type system. Can't into Unicode. ``Unportable assembly.''
D and C++: Obfuscated boilerplate languages.
Java and C#: Forced OOP.
Common Lisp: Archaic cons-based library. Writing complex macros is a PitA due to the unlispy quotation syntaxes.
Scheme: CL without namespaces.
Clojure and Erlang: Concurrency is unneeded outside of a few very specific applications. Parallelism is where it's at.
OCaml: Great language, only one, deficient, implementation.
Haskell: Academic sex toy.
Forth: Reinventing the wheel over and over.
Ruby: Implicit declarations. Slow as fuck.
Python: Implicit declarations. FioC.
Perl: Brain damage.
PHP: Pretty much shit.
JavaScript: "" == false

It's impossible to list them all but, please, what decent modern general-purpose languages exist?

Name: Anonymous 2012-07-29 16:50

>>158
i don't see why javascript SHOULDN'T be up there, even if PHP weren't listed

Name: Anonymous 2012-07-29 16:53

>>157
>>158
I realize that it mostly works and is useful, and I sometimes use it myself when I need a tiny server backend for something. What jostles my flaps is the horrible choices in its design that result in ephemeral and obscure bugs creeping into large projects. Of the useful, widespread programming languages it has the worst intrinsic flaws and insanities, ones that aren't the result of deliberate design decisions so much as incompetence. And so I am buttfurious on the internet about it.

Name: Anonymous 2012-07-29 17:08

>>162
>>157
>>158

buttfurious
Gabe plots cake, ``please''!

Name: Anonymous 2012-07-29 17:31

>>162
You know what else mostly works? A rusty knife.

Name: Anonymous 2012-07-29 17:32

>>164
Exactly.

Name: Anonymous 2012-07-29 17:36

>>165
I hope you'll enjoy tetanus, then?

Name: Anonymous 2012-07-29 19:03

>>166
HAX MY TETANUS

Name: Anonymous 2012-07-29 19:51

>>162
back to /g/, ``please''!

Name: Anonymous 2012-07-29 22:27

>>72
you're experience has maid you clothes minded.
[b][i]what[/i][/b]

Name: Anonymous 2012-07-29 22:38

So /g/ finally raided /prog/. I'm out of here, again.

fuck off, ``faggots''

Name: Anonymous 2012-07-29 23:26

>>170

no, we are just pretending to be from /g/.

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-07-30 5:45

I don't understand why more people aren't concerned with improving the tools they use every day
Because they think compilers are somehow magical and mysterious entities. One of my (many) somewhat-sluggish projects has been to write a better C compiler, one that generates code closer to how an assembly language programmer would write it---by necessity, not by pattern.

* Automate profiling of generated machine code. Select constructs that give the best performance based upon empirical data collected from a large set of test runs.
MSVC has PGO, which is not bad, but is a bit of a pain to use and thus not used much except in special cases. Optimisation should be more "ubiquitous", not a special-case thing to be applied after you discover that things aren't quite right. I know they still teach that "premature optimisation" bullshit; but going back to change something that's been written already is a waste of resources. Optimisation should be integrated into design, thought of as "what's the best way to do this" throughout so that the final product doesn't need much more done to it.

>>151
That quote may be true for those who don't understand fundamentals like binary and machine instructions, but being clever---and thorough with the design---when writing code can in many cases completely avoid ever having to debug it, as you will have essentially proved it correct before ever running it.

>>156
Whatever you call it, it's become ubiquitous for websites for a reason. The same with all the other languages you complain about. "Purity" or whatever you want to call it has no real practical advantage. Pragmatism does.

Name: Anonymous 2012-07-30 6:30

>>172
Whatever you call it, it's become ubiquitous for websites for a reason.
The Jews at Zend bribed Web hosts.

Name: Anonymous 2012-07-30 7:35

Its only real selling point was that it had a foothold on at least 1% of all the internet's domains by 1998. In 1998 you had a choice of ASP classique, ColdFusion and PHP3 for 1. sticking your spaghetti code all over your HTML, which was the fad back then, and 2. running in-process in the web server instead of as CGI. The next closest competitors came later: WSGI came about by 2003, and mod_python came on a CD with a book in 2000, while PHP was already at version 3 by 1998. There were others a little later (JSP, rushed out in 1999) but by then the Code-In-HTML language market was cornered by PHP on Linux and ASP on Windows.

Its other selling point: hosting companies liked PHP because of its shitty safe_mode when it came to cramming thousands of sites onto a server.

Name: Anonymous 2012-07-30 8:02

>>172
While you ``optimize'' your bitfiddling code, I'll implement a better algorithm, Cudder.

Name: Anonymous 2012-07-30 12:04

>>175
LOL
Did you understand what he talked about?

Name: Anonymous 2012-07-30 13:22

>>172
"premature optimisation" bullshit
Optimisation should be integrated into design
Truth: asm is less readable and thus less maintainable.
Ideally, programs should be executable algorithms, expressed in adequate abstractions. Optimization should be done just when necessary, because it's architecture-dependent and almost write-only.
Or are you telling me to follow master FrozenVoid?
IHBT

Name: Anonymous 2012-07-30 16:30

>>137
I wanna have a threesome with you and Cudderspace

Name: Anonymous 2012-07-30 18:27

>>172
Optimisation should be integrated into design, thought of as "what's the best way to do this" throughout so that the final product doesn't need much more done to it.

"The best way to do this" is the most readable way to do it. If the optimized way is the most readable then do that, if the nieve way is the most readable then do that. If it profiles badly, THEN rewrite it in the optimized way and leave a small novel in the comments on why you made every choice you made and what its actually accomplishing.

I'm not saying that your way of coding is wrong or bad or anything, I'm just saying that it makes people not want to work with you.

Name: Anonymous 2012-07-30 18:30

>>177,179
Typical “premature optimisation” bullshitters. It's good to plan ahead.

Name: Anonymous 2012-07-30 18:36

>>177
your post is full of shit

Name: Anonymous 2012-07-30 18:41

>>180
then go to /optimisation/, this is /prog/

Name: Anonymous 2012-07-30 19:24

>>180
Sometimes it's needed. Sometimes.
But you're likely to spend much more time optimising ahead and then spend much more fixing bugs. See this: http://tibleiz.net/asm-xml/
3 in 4 releases are bugfixes.

Name: Anonymous 2012-07-30 19:46

>>179
OR I could just write it correctly the first time. Code readability is over-rated. As long as it was not intentionally obfuscated, a programmer worth his salt will be able to figure out what it does (because it's right there in front of his face).

Name: Anonymous 2012-07-30 20:18

>>177
3/10.

>>183
You're now blurring "optimizing ahead" and "writing shitty code" into one thing. You can not optimize your design ahead-of-time and still have bug riddled code.

Of course, this partially does depend on your language. If you're gonna be writing a decently sizeable system in something like C, you will indeed need to do a fair degree of ``premature optimization'', simply because changing large chunks later takes so much time and will end up introducing more bugs in the process.

Name: Anonymous 2012-07-30 21:13

>>185
What's with these X/10 little shits coming to /prog/?

Did /g/ finally invade us?

Name: Anonymous 2012-07-30 21:15

>>186
Nice attempt to avoid my argument.
Have fun with your slow programs!

Name: Anonymous 2012-07-30 22:18

>>185
The whole point of planning ahead is when you haven't coded yet, such as the asmxml project at its beginning. From the ground up it was designed to be optimised, but it still had bugs and new feature requests. Maybe a new feature can disrupt your clever design, and you'll be walking on eggshells to implement it. But now that the design is clever and everything is coded in asm, it's a PITA.

Name: 188 2012-07-30 22:29

http://www.adacore.com/adaanswers/gems/gem-93-high-performance-multi-core-programming-part-1/

When it comes to performance, no amount of tweaking can compensate for an inherently slow design.

Name: 188 2012-07-30 22:40

But you check the "design" and see that it doesn't involve rewriting your app in asm just because your compiler doesn't output optimal code.

Name: Anonymous 2012-07-30 22:48

>>184
OR I could just write it correctly the first time. Code readability is over-rated. As long as it was not intentionally obfuscated, a programmer worth his salt will be able to figure out what it does (because it's right there in front of his face).
What if you fucked up everything and are too stupid to realize it, and the superior programmer is reading your fucked up code, trying to infer what the fuck you were actually trying to do from your endless chain of tacit false assumptions?

>>186
yes

>>188
It's too late for them. Don't bother.

Name: Anonymous 2012-07-30 22:49

This thread is fail.

>>172
That quote may be true for those who don't understand fundamentals like binary and machine instructions
That quote may be Brian W. Kernighan

Name: Anonymous 2012-07-31 2:12

>>172
I don't understand why more people aren't concerned with improving the tools they use every day
Because they think compilers are somehow magical and mysterious entities.
People are probably just either satisfied with suboptimal machine code generation or they don't know any better.
One of (many) somewhat-sluggish projects has been to write a better C compiler, one that generates code closer to how an assembly language programmer would write it---by necessity, not by pattern.
This would be very nifty. The only implementation strategy for machine code generation I've seen so far is to simply convert syntactic constructs to a stream of equivalent instructions and apply a series of optimizations that maintain equivalence throughout the process. Granted, there can be other things going on, like using a minimal number of registers. In the whole process, the compiler doesn't really know what you are trying to do. It's just spitting out machine code it knows is equivalent to the higher-level source code you gave it, and then simplifying it a bit. An optimizing compiler tries to find the highest-performing program within the equivalence class of the program the programmer submitted. So it applies all these operations that it knows preserve equivalence and will likely improve performance.

It seems like in order for the compiler to truly generate assembly with necessity, it would need to have a complete high-level understanding of what you are trying to do. But as soon as I start to think about whether this is possible, or already implemented, it feels like it comes down to a philosophical question. If I turn the steering wheel of my car to the left, does my car realize that I want to go left, and turn its wheels by necessity? Or is it just doing what the mechanical engineer designed it to do, which was to allow rotation of the steering wheel to directly affect the wheels?

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-07-31 6:27

>>177
Truth: asm is less readable and thus less maintainable.
Readability is subjective (size or speed is NOT), but to someone experienced, it's more readable. Even the format is consistent. The only thing is, it also tends to be more voluminous.

>>183
Look at the bugtrackers of other XML libraries and you'll find much worse.

When I was talking about design, I meant decisions that are not easy to reverse, and that can have huge consequences for efficiency. For example, instead of the obvious (and inefficient) "make objects out of everything and copy data into them" parsing algorithm, do it in-place so there's no unnecessary copying of data. Simplify the design to make it more "direct". I'll use the web browser as an example here since we're all familiar with them: how long is the path (in terms of # of instructions or call depth) between Firefox getting a WM_PAINT and its first API call into the OS to draw the page content into the window? In the browser design I'm planning, it's < 5. I'm not planning to do any memory allocation/deallocation there either.

Maybe "optimisation" is the wrong word for this, but what else do you call making something more efficient? If you plan your design like this and have it carefully thought out before writing a single line of code, the chances are high that you'll be ahead of those who "optimise later", without even needing to go to inline Asm. You could say that, despite the compiler's stupidity, everyone is using the same compiler but your design has already been optimised at a higher level, so its inefficient output is still going to be more efficient than the output from an inefficient design.

>>193
It seems like in order for the compiler to truly generate assembly with necessity, it would need to have a complete high level understanding of what you are trying to do.
That's not necessary; all it needs to understand is the semantics of the source language. What we agree on is that it's not statement-by-statement translation with optimisation applied afterwards (there's that anti-premature-optimisation again!), but my idea is to generate code "backwards", working from the desired result. E.g. in


int foo() {
 int i = f();
 int j = k();
 int l = 2 + j;
 ... /* code that does a lot with l, but never has effect on i, j, nor other global variables */
 return i + 2*j;
}


a "dumb" or "traditional" compiler would emit code for all that stuff in the middle, and optimisation might have a chance at removing (some of) it and moving variables into registers. An "intelligent" compiler could work backwards and "think" "This function's result depends on i and j. What do i and j depend on? f() and k(). What do f() and k() depend on? ... " Eventually it might determine that f() and k() are actually constant, and substitute that in, propagate the changes down, and reduce foo() itself to a constant. And of course, all of this might not even be done if foo() can never get called from any entry point. At every reference to a nonlocal variable, this process would need to be performed as their results are "visible" to other functions.

For things like register allocation (done after the above), the compiler could track how many variables are actually needed, and then choose how many extra "slots" are required in memory. It can alter their allocation to registers depending on their usage and loop nesting level. In size/balanced optimisation mode, if it sees certain variables having LIFO-like usage patterns, it can emit push/pop (single byte instructions) instead of explicitly doing a stack allocate and moves. If there's a long-running inner loop with frequently accessed variables, push the less frequently used variables on the stack. This is how an Asm programmer does register allocation by necessity.

Due to the halting problem it's not possible to prove that f() or k() terminate even if they do not depend on nonlocal variables, but the compiler can offer an option to simulate their execution for a limited number of cycles. No deep understanding (other than language semantics) required by the compiler, just a different way of approaching the problem.

Follow this process to its logical conclusion, even into library functions and such, and your printf("%d\n", fib(32)); becomes a fputs("2178309\n", stdout);. In the output binary too, there will be nothing more than what the compiler found was needed. That is the ultimate goal.

Name: Anonymous 2012-07-31 10:49

>>194
side-effects troubles optimizing compilers
don't want functional programming
LOL

Name: Anonymous 2012-07-31 11:25

Use Eiffel.

Name: Anonymous 2012-07-31 12:24

>>194
If you plan your design like this and have it carefully thought out before writing a single line of code, the chances are high that you'll be ahead of those who "optimise later", without even needing to go to inline Asm.
well the issue isn't that you're making things more efficient, efficiency is generally good, the issue is that it's your only priority.

take a contrived Fibonacci example. you don't want to use the recursive O(2^n) algorithm...

but you also probably don't want to use the O(1) algorithm unless you have a REALLY good reason to:


const double inverseSqrt5 = 0.44721359549995793928183473374626;
const double phi = 1.6180339887498948482045868343656;

static int Fibonacci(int n) {
    return (int)Math.Floor(Math.Pow(phi, n) * inverseSqrt5 + 0.5);
}

Name: >>197 2012-07-31 12:25

>>197
forgot to quote first line.

Name: Anonymous 2012-07-31 12:39

>>197
but you also probably also don't want to use the O(1) algorithm unless you have a REALLY good reason to
Why? The meaning of the function is captured by int Fibonacci(int n). How it is implemented is none of anyone's business.

Name: Anonymous 2012-07-31 12:42

>>197
what is the value of (int)std::numeric_limits<double>::infinity(), anyway?
