
Any decent modern general-purpose languages?

Name: Anonymous 2012-07-25 10:55

Assembly: Unportable. No standardised syntax.
Classical Visual Basic: Some good parts. Shit overall.
C: Shitty standard library. Deficient type system. Can't into Unicode. ``Unportable assembly.''
D and C++: Obfuscated boilerplate languages.
Java and C#: Forced OOP.
Common Lisp: Archaic cons-based library. Writing complex macros is a PITA due to the unlispy quotation syntaxes.
Scheme: CL without namespaces.
Clojure and Erlang: Concurrency is unneeded outside of a few very specific applications. Parallelism is where it's at.
OCaml: Great language; only one implementation, and it's deficient.
Haskell: Academic sex toy.
Forth: Reinventing the wheel over and over.
Ruby: Implicit declarations. Slow as fuck.
Python: Implicit declarations. FIoC (forced indentation of code).
Perl: Brain damage.
PHP: Pretty much shit.
JavaScript: "" == false

It's impossible to list them all but, please, what decent modern general-purpose languages exist?

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-06 4:53

>>229,230
The accumulator is still preferred: while it may not be directly faster, many instructions are at least 1 byte shorter when operating on it, which translates into better code density and cache utilisation. I've worked with more than half a dozen different architectures, and x86 is one of my favourites (along with the Z80 and 8051); it's quite "balanced" and has many opportunities for optimisation. I have less nice things to say about the 64-bit "extensions".

> The theoretical developments indirectly pay off to a lot of things. Just think, where did SSA come from?
SSA is not new; it was already around at the time of the Dragon Book. And Asm programmers were implicitly using that sort of thinking to allocate registers even before that.

> No one really cares about compilers. Just look at databases, networking, and graphics. Then look at compilers.
That is unfortunate, because unless it is written in Asm, all software depends on them, and a highly optimising compiler has the potential to make almost all software more efficient. Then again, the "optimise later" or "forget optimisation" bullshit that CS students are brainwashed with doesn't help either.

>>236,237
You can't gain something without losing something else: what you gain in flexibility, you lose in instruction size and in the delays introduced by additional datapath circuitry. Going back to the diagram above, how many of the operations in a program, on average, need to preserve both of their inputs? Or, to put it another way, what is the average number of "outputs" needed by these operations? That number corresponds directly to the best architecture choice. If all 3 of them need to be preserved most of the time, then a 3-address architecture makes sense. If only 1 of them needs to be preserved, then a 1-address architecture is most suitable. From the general proliferation of 2-address architectures, it's probably the case that this number is around 2, maybe slightly less; either way, 2-address is a good middle-ground choice.

I will agree that x86 code generation is not as simple as it is for other architectures. But at the same time, it also presents more opportunities for optimisation, so the hardware can run at its full capability. Consider the other extreme: a 3-address "pure RISC" with so many registers (let's say... 256) that "register allocation" becomes trivial and having to "spill" means there is something very unusual about your program. Instructions are going to have to be at least 24 bits just to hold the three 8-bit register specifications alone; round that up to 32, so an instruction is at least 32 bits. Forgetting for the moment how you can have that many registers at such clock speeds, consider running a typical application program on such a processor, where there are never more than ~10 live variables and the majority of the operations only need 2-address type instructions. Now each instruction fetch wastes 8+ bits on a redundant register field, and 95%+ of the registers go unused. Clearly this is a huge waste of bus bandwidth and die area.

The advantage that CISCs like x86 have is exemplified by the systems of today, where memory is much slower than the core and there are multiple cores vying for its use. Here, you want the CPU to run quickly inside the core while sipping instructions from memory (including the caches) at a slower rate. This is where high code density is crucial: the dense instructions are expanded into RISC-like micro-ops inside the core, taking less bandwidth than pulling the equivalent RISC instructions over the memory bus would.
