
Any decent modern general-purpose languages?

Name: Anonymous 2012-07-25 10:55

Assembly: Unportable. No standardised syntax.
Classical Visual Basic: Some good parts. Shit overall.
C: Shitty standard library. Deficient type system. Can't into Unicode. ``Unportable assembly.''
D and C++: Obfuscated boilerplate languages.
Java and C#: Forced OOP.
Common Lisp: Archaic cons-based library. Writing complex macros is a PitA due to the unlispy quotation syntaxes.
Scheme: CL without namespaces.
Clojure and Erlang: Concurrency is unneeded outside of a few very specific applications. Parallelism is where it's at.
OCaml: Great language, only one, deficient, implementation.
Haskell: Academic sex toy.
Forth: Reinventing the wheel over and over.
Ruby: Implicit declarations. Slow as fuck.
Python: Implicit declarations. FioC.
Perl: Brain damage.
PHP: Pretty much shit.
JavaScript: "" == false

It's impossible to list them all but, please, what decent modern general-purpose languages exist?

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-06 4:53

>>229,230
The accumulator is still preferred; while it may not be directly faster, many instructions are at least 1 byte shorter when operating on it, which translates into better code density and cache utilisation. I've worked with more than half a dozen different architectures, and x86 is one of my favourites (along with others such as the Z80 and 8051); it's quite "balanced" and has many opportunities for optimisation. I have less nice things to say about the 64-bit "extensions".

>The theoretical developments indirectly pay off to a lot of things. Just think, where did SSA come from?
SSA is not new, it was already around at the time of the Dragon Book. And Asm programmers have been implicitly using that sort of thinking in allocating registers before that.

>No one really cares about compilers. Just look at databases, networking, and graphics. Then look at compilers.
That is unfortunate, because unless written in Asm, all software depends on them and a highly optimising compiler has the potential to make almost all software more efficient. Then again, the "optimise later" or "forget optimisation" bullshit that CS students are brainwashed with doesn't help either.

>>236,237
You can't gain something without losing something else. What you gain in flexibility, you lose in instruction size and the delays introduced by additional datapath circuitry. Going back to the diagram above, how many of the operations in a program, on average, need to preserve both of the inputs? Or, to put it another way, what is the average number of "outputs" needed for all these operations? This corresponds directly to the best architecture choice. If all 3 of them need to be preserved most of the time, then a 3-address architecture makes sense. If only 1 of them needs to be preserved, then 1-address architectures will be most suitable. From the general proliferation of 2-address architectures, it's probably the case that this number is around 2, and maybe slightly less. Either way it's a good middle-ground choice.
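The "how many inputs must survive the op" question can be eyeballed with a quick backward liveness scan over a toy three-address trace. Everything here (the trace, the variable names) is made up for illustration, not data from any real program:

```python
# Backward liveness scan over a hypothetical 3-address trace.
# For each op (dest, src1, src2), count how many distinct source
# operands are still needed AFTER the op executes; if that count is
# usually <= 1, a 2-address ISA (dest doubles as a source) loses little.
trace = [
    ("t1", "a", "b"),    # t1 = a + b
    ("t2", "t1", "c"),   # t2 = t1 * c
    ("t3", "a", "t2"),   # t3 = a - t2  (a is reused, so it must survive op 1)
    ("d",  "t3", "t3"),  # d  = t3 + t3
]
live = {"d"}             # variables still live at the end of the trace
survivors = []
for dest, s1, s2 in reversed(trace):
    live_after = set(live)          # live just after this op
    live.discard(dest)              # dest is (re)defined here
    live.update((s1, s2))           # sources are live before it
    survivors.append(len({s1, s2} & live_after))
survivors.reverse()
print(survivors)                    # per-op count of inputs to preserve
print(sum(survivors) / len(trace))  # the "average number of outputs"
```

On this toy trace the average comes out well under 2, consistent with the guess that 2-address encodings are a decent middle ground; a real number would of course need a real instruction trace.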

I will agree that x86 code generation is not as simple as it is with other architectures. But at the same time, it also presents more opportunities for optimisation so that the hardware can also run at its full capabilities. Consider the other extreme, a 3-address "pure RISC" with so many registers (let's say... 256) that "register allocation" becomes trivial and having to "spill" means there is something very unusual about your program. Instructions are going to have to be at least 24 bits just to hold the register specifications alone; round that up to 32, so an instruction is at least 32 bits. Forgetting for the moment how you can have that many registers at such clock speeds, consider running a typical application program on such a processor, where there are never more than ~10 live variables and the majority of the operations only need 2-address type instructions. Now each time it fetches an instruction it's wasting 8+ bits on a redundant register field, and 95%+ of the registers are going to be unused. Clearly this is a huge waste of bus bandwidth and die area.
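The instruction-width arithmetic above fits in a few lines; the register counts and the 8-bit opcode field are assumptions for illustration:

```python
import math

def instr_bits(num_regs, num_operands, opcode_bits=8, align=8):
    """Minimum instruction width, rounded up to a byte boundary."""
    reg_field = math.ceil(math.log2(num_regs))  # bits per register spec
    return math.ceil((opcode_bits + num_operands * reg_field) / align) * align

# 3-address machine with 256 registers: 8 + 3*8 = 32 bits per instruction,
# of which 24 bits are register specifications alone.
print(instr_bits(256, 3))
# 2-address machine with 16 registers: 8 + 2*4 = 16 bits.
print(instr_bits(16, 2))
```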

The advantage that CISCs like x86 have is exemplified by the systems of today, where memory is much slower than the core and there are multiple cores vying for its use. Here, you want the CPU to be able to run quickly inside the core while sipping instructions from memory (including caches) at a slower rate. This is where high code density is crucial. Expand instructions into RISC-like micro-ops inside, taking less bandwidth than pulling them over the memory bus.

Name: Anonymous 2012-08-06 5:04

>>241
So the ideal architecture is MIPS64 with destination = first operand?

Name: Anonymous 2012-08-06 6:19

If you want something done, you better do it yourself. Just start making and using your own language.

Name: Anonymous 2012-08-06 6:36

This thread made me discover Go and I'm quite pleased. I realize I'm not quite a fan of promoting Google or its products, but Go seems quite nice.

Name: Anonymous 2012-08-06 6:44

>>241
mrinstr ::= mxxxxxxx | m1111111 instr
instr ::= xxxxxxxx

add r1, r2, r3 ; r1 = r2+r3 => 10000000 r1 r2 r3
add r1, r2     ; r1 = r1+r2 => 00000000 r1 r2
add r1, r1, r2 ; can be both


There, saved you're space.
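If I'm reading the grammar right, the high opcode bit selects the 3-address form. A sketch of that encoder (the opcode value and register numbering are made up):

```python
ADD = 0x00  # hypothetical 7-bit opcode for "add"

def encode_add(dst, src1, src2=None):
    """Encode add per the scheme above: 2-address ops take 3 bytes,
    full 3-address ops take 4 (high opcode bit set)."""
    if src2 is None:            # dst = dst + src1
        return bytes([ADD, dst, src1])
    if src1 == dst:             # add r1, r1, r2 "can be both": use short form
        return bytes([ADD, dst, src2])
    return bytes([0x80 | ADD, dst, src1, src2])

print(encode_add(1, 2).hex())      # 2-address form
print(encode_add(1, 2, 3).hex())   # 3-address form
print(encode_add(1, 1, 2).hex())   # collapses to the short form
```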

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-07 5:14

>>242
No, more like IA32 with a 64-bit extension closer to how x86 was extended to 32 bits. MIPS is optimised for silicon area and not much else. Things like branch delay slots(!) and fixed-width instructions make the ISA very rigid and there isn't a whole lot the software can do. In other words the ISA is so tightly coupled to the original hardware implementation that there is little opportunity for optimisation. In contrast, with x86 where many instructions perform multiple basic operations, parallelism almost suggests itself. All these improvements can happen inside the core and immediately benefit existing applications, without changing anything else. If anyone here remembers the days of Socket 370, that's exactly what happened.

The RISCs that have huge instructions that only do 1 thing will require more memory bandwidth to do superscalar/OOE, because by necessity they will need to fetch instructions faster. A 32-bit wide core-memory bus will either need to be widened to 64-bit or have its speed doubled. Not needed with x86: 32 bits is already enough to hold up to 4 instructions and they just need to change the core itself to execute them in parallel.

>>245
Cutting off half the opcode space to do that is hardly desirable, not to mention 256 registers is still way too much for the majority of use cases.

x86 has several 3-address instructions (some of the multiplies and divides), and they were likely added because an analysis of existing code showed that those specific instructions were much more likely to benefit with them present.

Name: Anonymous 2012-08-07 5:20

>>246
What exactly is wrong with AMD64? The retarded space-eating REX prefixes?

Name: Anonymous 2012-08-07 9:05

>>246
We knew you're shit at programming, and now we know you're shit at computer architecture. Are there more things you're shit at that we should know?

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-08 6:51

>>247
That's part of it. They got rid of a whole row of instructions when one of the existing unused ones could be repurposed for a size override/high-bank register select, like they did when going from 16 to 32 bits. It was overall too disruptive of a change.

>>248
I see you've run out of arguments.

Name: Anonymous 2012-08-08 13:02

>>246
>The RISCs that have huge instructions that only do 1 thing will require more memory bandwidth to do superscalar/OOE, because by necessity they will need to fetch instructions faster. A 32-bit wide core-memory bus will either need to be widened to 64-bit or have its speed doubled. Not needed with x86: 32 bits is already enough to hold up to 4 instructions and they just need to change the core itself to execute them in parallel.
Are you seriously brain-dead enough to rehash arguments that have been dead for 20 years out of your ignorance? PROTIP, you moronic piece of shit, modern x86 can only function by breaking down the complex instructions to a reduced set for the hardware first.

It's really sad that in the time you spent showing the world how pathetically uneducated you were, you could have maybe read and learned something. Well, maybe a word or two of it with your displayed intellectual capacity so far.

Name: Anonymous 2012-08-08 13:15

>>250
Learn about the Von Neumann bottleneck and Shiichan quotes.

Name: Anonymous 2012-08-08 13:23

>>246
>Cutting off half the opcode space to do that is hardly desirable,
I don't hear you bitch about the ?ax-specialized opcodes in x86, though. Or the multiple encodings for the same instruction. Or the 66h/67h prefixes.
>not to mention 256 registers is still way too much for the majority of use cases.
Make them 16, 4 bits for each register; add r1 r2 r3 is now 20 bits, add r1 r2 is now 16 bits. They're, saved you're space again.

Name: Anonymous 2012-08-08 13:27

C:\TEXT\KOPIPE.TXT

God says...

For years, people have been predicting--hoping for--the demise of assembly
language, claiming that the world is ready to move on to less primitive approaches to
programming...and for years, the best programs around have been written in assembly language.
Why is this? Simply because assembly language is hard to work with, but--properly used--
produces programs of unparalleled performance. Mediocre programmers have a terrible time
working with assembly language; on the other hand, assembly language is, without fail, the
language that PC gurus use when they need the best possible code.

Name: VIPPER 2012-08-08 15:19

>>253
11/10

Seriously, ASM fucking sucks (x86 at least) and using ASM in this age for anything really serious is fucking stupid.
It's not the point that ASM is or was powerful; ASM was and is shit. It's just that the programmers of old were fucking demi-gods that many of us could imagine to be real.

Some DOS games back in the day were way more innovative and creative than most games of today. Plus they had much lower budgets, less computing power, and less help, both personnel-wise and computer-assistance-wise.
And lots of them were written in ASM and/or some obscure shitty language.

Name: VIPPER 2012-08-08 15:20

>>254
*Could not imagine to be real.

Sorry

Name: Anonymous 2012-08-08 15:47

>>254
One of my favorite games (Tyrian) was written in Pascal

Name: Anonymous 2012-08-08 18:52

>>246
http://bitsavers.org/pdf/datapoint/2200/2200_Programmers_Man_Aug71.pdf
Why is the original description of the embedded terminal architecture that Intel licensed from Datapoint to make the 8008/8080 so much more readable than Intel's description? And why are we still using extensions to extensions to extensions to an embedded CPU architecture designed to be what could cheaply fit on one chip in the 70's?

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-09 2:42

>>250
>modern x86 can only function by breaking down the complex instructions to a reduced set for the hardware first.
And that happens in the core, where clock speeds can be pushed much higher. Here's an analogy that makes the situation clearer: suppose you can choose between two types of slaves (this is 4chan after all). One type can do everything really quickly but only understands really simple commands; the other isn't as quick, but you can tell them in one word to do something that would take a few dozen words with the first, and they'll take their time doing it. You can only talk to one slave at a time. Which type will you choose a few dozen of if you're the master and want to get them to do something like build a house?

With the first type, all your time is going to be spent telling them one-by-one what to do in excruciating detail, while all the others are just waiting for their next little command since they finished their last one almost immediately after you told them. Not a good use of resources and overall throughput is going to suck. With the second, you'll spend much less time telling them what to do, and they'll spend much more time actually doing something rather than waiting. You might even have enough time to go spend some time with your friends. You are the memory. The slaves are cores of a multicore processor. Your friends are peripheral devices. The first type is RISC. The second is CISC. Get it now?

I'm not keen on turning this into personal insults but it appears that your definition of "educated" is more akin to "brainwashed with the beliefs of the academics", and it really shows the lack of actual thinking going on in your head when you can't even seem to understand my argument enough to formulate a defense. I hope the analogy helps.

>>252
The accumulator opcodes are designed for use in calculations involving a long chain of accumulated results, which is relatively frequent. Function return values go there too. It's quite clear that a long-lived variable or set of variables should almost always stay in (e)ax. Multiple encodings are always an issue; they're hard to get rid of, but relatively speaking x86 has a lot fewer of them (and less empty space) than some other architectures.

No idea what you're getting at about the Osize/Asize prefixes, but that's much better than x86-64's replacing an entire row of register increment/decrement instructions (one of the most common operations done in loops) with 64-bit-only prefixes. I'm almost willing to bet Intel was NOT pleased with AMD doing that.

>>257
It's probably just you. I have the 8008 manual and it's not bad. Maybe you like the octal notation for opcodes.

We're still using x86 because it's what works the best for general purpose CPUs at the moment. The i432 failed to unseat it, Itanium was pretty dismal, ARM and MIPS are filling the low end of the spectrum where die size/transistor count and power consumption matter more than anything else, POWER, SPARC and the "big RISCs" are in the massively parallel TOP500 supercomputers where they belong (and in applications that can actually make use of all those registers, with specialised hardware/software that ensures all the cores don't fight each other for memory), which leaves x86 to fill in the huge general-purpose/value-performance market.

Name: Anonymous 2012-08-09 4:57

try Ada

Name: Anonymous 2012-08-09 5:58

>>258
>"educated" is more akin to "brainwashed with the beliefs of the academics"
Hahahahah oh wow. Sure, reality is brainwashing, and all the processors operate otherwise because all the engineers Intel and AMD employ are unaware of the scientific breakthroughs a special brain dead piece of shit made in his basement. Clearly no one was aware that one CISC operation could do more in a slower run on a system that couldn't scale, and no one analysed that it ended up being overall much much slower than running multiple commands on a fast architecture, they must all have put CISC->RISC subsystems on modern x86 processors out of sheer ignorance! If only they had employed a clinically retarded turd nugget who thought understanding one CISC operation doing more was an expression of genius and relevant, one that couldn't imagine these concepts are really simple to non-retarded folk and his intellectual humiliation came from not understanding the facts! All these people brainwashed by "logic" and "mathematics" will never have your vision, will they, retard? Oh, and these aren't insults, I am making factual statements as I always do.

Enjoy your cold hard ownage.

Name: Anonymous 2012-08-09 6:11

>>260
Firstly, learn about the processor-main memory bottleneck. Your notions of algorithmic complexity and such are useless at such a small level. Secondly, go back to the imageboards, ``please''!

Name: Anonymous 2012-08-09 6:39

>>260
Well, your attack is not credible since you don't provide any arguments yourself; you only echo other people's beliefs like a sheep.

Name: Anonymous 2012-08-09 7:14

>>261
Irrelevant to the topic, and the fact that algorithmic complexity doesn't come into play is exactly the point of my argument, although I didn't even bother touching that subject as it has a conclusive answer. Are you sure you read the right post, or do you just have some pre-made answers in case of butthurt that you chug forward regardless of the content? Sorry for educating you so far.

>>262
So saying 2+2=4, and that it is recognized as such by all experts and all hardware at the modern date is implemented as such puts the burden of proof on me, and "BAWWWW ACADEMICS ARE BRAINWASHED!!11 ONE INSTRUCTION CAN DO TWO THINGS MEANS IT WILL BE FASTER EVEN IF THE OTHER PROCESSOR CAN DO A HUNDRED INSTRUCTIONS IN THE SAME TIME" is a credible counterargument worth discussing? Are you sure you aren't a certain brain-dead excrement leftover tripfag samefagging who would be uneducated enough to take this seriously?

Name: Anonymous 2012-08-09 7:40

>>263
>>261
>Irrelevant to the topic
Are you really sure that the issue of memory bottlenecking is irrelevant to machine code compression?

Name: Anonymous 2012-08-09 7:50

Cudder is killing /prog/ as always. I don't want to park my car in her cudder anymore.

Name: Anonymous 2012-08-09 11:19

>>265
// ==UserScript==
// @name           /prog/ cleaner
// @description    Decudder
// @namespace      http://dis.4chan.org/prog/
// @include        http://dis.4chan.org/*
// @version        1.0
// ==/UserScript==

(function() {
    var posts = document.getElementsByClassName('post');
    for (var i = posts.length - 1; i >= 0; --i) {
        // the trip span may be absent on anonymous posts
        var trip = posts[i].getElementsByTagName('span')[4];
        if (trip && (trip.textContent == '!MhMRSATORI!FBeUS42x4uM+kgp' ||
                     trip.textContent == '!MhMRSATORI!fR8duoqGZdD/iE5' ||
                     trip.textContent == '!MhMRSATORI!vzR1SbE7g/KHqrb'))
            posts[i].parentNode.removeChild(posts[i]);
    }
})();


He changes secure trips occasionally, so expect to adjust accordingly.

________________
sqlite> select count(*), trip from posts where author = 'Cudder ' group by trip order by count(*) desc;
204|!MhMRSATORI!FBeUS42x4uM+kgp
61|!MhMRSATORI!fR8duoqGZdD/iE5
31|!MhMRSATORI!vzR1SbE7g/KHqrb
9|!MhMRSATORI
1|!MhMRSATORI!cbCOMSATORIM/pr

Name: Anonymous 2012-08-09 11:39

>>265,266
He could still post as Anonymous, dumbshits.

Name: Anonymous 2012-08-09 14:07

>>267
So? If he were forced into anonymity, he'd certainly post less, because he craves attention, but it won't even come to that, because most people wouldn't be using >>266-kun's script.
Why do Americans always think that if a measure doesn't solve 100% of the problem it addresses, it's not worth doing at all?

Name: Anonymous 2012-08-09 16:58

>>267,268
Cudder is female.

Name: Anonymous 2012-08-09 17:00

>>269
He wishes.

Name: Anonymous 2012-08-09 18:47

>>268
U+1F502 CLOCKWISE RIGHTWARDS AND LEFTWARDS OPEN CIRCLE ARROWS WITH CIRCLED ONE OVERLAY
Your argument is invalid.

Name: Anonymous 2012-08-09 19:00

>>269
yeah right

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-10 7:21

>>260
>that it ended up being overall much much slower than running multiple commands on a fast architecture
Look at when those studies were published. Back then, they thought they could keep raising clock speeds indefinitely, and memory bandwidth was plentiful. The technology of the time was such that the disparity between CPU core speeds and memory speeds was much smaller. A lot has changed since then.

>CISC->RISC subsystems on modern x86 processors
I already answered that in the first part of >>258. I wouldn't call it "CISC->RISC" though, it's just a parallel microsequencer.

>All these people brainwashed by "logic" and "mathematics" will never have your vision
The problem is the academics parroting information from out-of-date or flawed studies and failing to actually look at the reality and think. We saw this from the discussion of register allocation earlier.

From the RISC wiki page:
"The goal was to make instructions so simple that they could easily be pipelined, in order to achieve a single clock throughput at high frequencies."

Anyone reading that should automatically think "what about the rest of the system? Can the memory keep up? Can the clock frequency be raised indefinitely?" The huge assumption they made there was that memory would always be fast enough --- and that assumption, while it may have been true in the 80s and early 90s, is not anymore. The laws of physics also weigh in on this, limiting clock frequencies. Someone who accepts that statement without questioning its assumptions is like one who believes in a religion; as that quote from one of your "idols" himself says, it's "unscientific and ultimately destructive".

When your opponent starts becoming half-unintelligible it's a good sign that he's unable to think anymore. P.S. calling someone a "retard" is far from "factual" :)

>>263
>EVEN IF THE OTHER PROCESSOR CAN DO A HUNDRED INSTRUCTIONS IN THE SAME TIME
But it can't, given the same memory bandwidth. That's the whole point. A simple calculation shows it even more clearly: Suppose you have 10GB/s of memory bandwidth. On a classic RISC with 4-byte instructions and 1 instruction per cycle (e.g. MIPS), that corresponds to 2.5GIPS. That is the maximum speed you can execute instructions at, and would be equivalent to a clock speed of 2.5GHz. You cannot raise the clock speed more without needing to insert wait states, even if the silicon allowed going up to 4GHz. 100 instructions are going to take 40ns, no matter what. Now consider a typical modern x86: up to 4 instructions per cycle, as low as 1 byte per instruction, and a clock speed of 2.5GHz. Assuming that 4 1-byte instructions can do as much work as the 100 RISC instructions, and run in 100 clock cycles, they can be fetched in 0.4ns, leaving the memory bus idle for the remaining 39.6ns while the instructions execute in the core for 99 more clock cycles. Even ignoring the 4x superscalar speedup, here is where x86 has the advantage: the silicon can do more than 2.5GHz, so let's increase it to e.g. 4GHz. You can't do this for the RISC because the memory can't go any faster. The x86 is now fetching in 0.4ns and takes 24.75ns to execute, a total of 25.15ns for the four instructions of 100 clock cycles each, running in parallel. The core can be run at the highest frequency it can, not limited by how fast instructions can be fed into it. In practice, memory is a bit slower (10GB/s quoted above is a peak burst rate), so the difference is even bigger.
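For what it's worth, the arithmetic does check out under its own stated assumptions (10GB/s bandwidth, 4-byte RISC instructions at 1 per cycle, and an x86 work unit of four 1-byte instructions taking 100 core cycles); a quick re-run:

```python
BW = 10e9                    # assumed memory bandwidth, bytes/s

# RISC side: fetch-limited, regardless of how fast the core could go.
risc_ips = BW / 4            # 4-byte instructions -> 2.5 G instr/s
risc_time_ns = 100 / risc_ips * 1e9
print(risc_time_ns)          # 100 instructions take ~40 ns

# x86 side: 4 one-byte instructions fetched, then 99 further cycles
# in-core at 4 GHz (the core is free to outrun the bus).
fetch_ns = 4 / BW * 1e9
exec_ns = 99 / 4e9 * 1e9
print(fetch_ns + exec_ns)    # ~25.15 ns total
```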

>>265
If by "killing" you mean repealing old misconceptions and beliefs and encouraging more critical thinking, I'm in complete agreement. /prog/ isn't a church. Fuck this pseudo-religion.

Name: Anonymous 2012-08-10 9:31

>>273
Will you marry me?

Name: Anonymous 2012-08-10 10:10

>>273
>We saw this from the discussion of register allocation earlier.
Welp, it surely was someone with no background whatsoever in graph theory that noted that all the graphs generated by SSA are chordal and can be colored in polytime, right?
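The practical upshot is easy to demo: live ranges that form intervals give an interval graph, interval graphs are chordal, and greedy coloring along a perfect elimination order (for intervals, just sorting by start point) is optimal. The live ranges below are made up for illustration:

```python
# Hypothetical live ranges (start, end); two ranges interfere iff they overlap.
ranges = {"a": (0, 4), "b": (1, 3), "c": (2, 6), "d": (5, 8)}

colors = {}
for v, (s, e) in sorted(ranges.items(), key=lambda kv: kv[1][0]):
    taken = {colors[u] for u, (us, ue) in ranges.items()
             if u in colors and us < e and s < ue}   # already-colored neighbours
    reg = 0
    while reg in taken:                              # pick lowest free register
        reg += 1
    colors[v] = reg

print(colors)   # 'a','b','c' all overlap at point 2, so 3 registers; 'd' reuses one
```

For general chordal graphs the elimination order comes from maximum cardinality search, but the idea is the same: color along the order, never backtrack, and the count matches the largest clique.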

Name: Anonymous 2012-08-10 12:36

>>273
>"HURRR SUPPOSE 2=3, THEN 2+2=6!!!11 LOOK I AM NOT RETARDED NO MORE, WHERE IS YOUR 4 NOW!1! RELIGION OF SCIENCE!!111 I HAVE ANSWERED YOUR ARGUMENT ALREADY EVEN THOUGH I NEVER DID, AND NEVER COULD UNSURPRISINGLY BECAUSE WHATEVER I CRY DOESN'T CHANGE THE FACT THAT CISC INSTRUCTIONS ARE STILL CONVERTED TO RISC IN MODERN PROCESSORS"
Well, you being clinically retarded is not up for discussion; you prove it with every sentence you manage to shit out. Notice that you are so incredibly brain-dead that you are still talking about imaginary old "published studies" while the cold hard facts I've brought to the table were the implementations by profit-seeking corporations that are bound to the x86 CISC architecture. Sure, all the top-level engineers at Intel, everyone working on alternatives in CISC- and RISC-based devices, along with everyone in academia and anyone knowing mathematics and logic, is unaware of your scientific breakthrough that makes CISC magically relevant after the 80's; perhaps you should name your theory after yourself, "Retard Processing" (well, you actually haven't managed to provide content for it other than spouting baseless numbers that don't relate to reality, but I'm sure even a retard such as yourself has backed up your notions despite not telling us, right? right?)...

Ah, also, it's funny that you are talking about "an opponent" of yours in "an argument" - I am merely putting some brainless piece of shit to his place by attempting to grind facts into his vacuous mind, there is nothing to argue here for anyone who knows basic mathematics or logic. You might even be crying "2+2=5" while bleeding out of your anus like this. You should read more instead of crying, you won't learn anything with your level of retardation, but you'll get points for effort.

Name: Anonymous 2012-08-10 12:42

Cudder, please stop arguing with the imageboard retard.

Name: Anonymous 2012-08-10 14:39

>one of your "idols" himself

λBλLSλN λ SλSSMλN

Name: Anonymous 2012-08-10 15:21

If you seriously want to make the claim that RISC is superior, you'd better have the fucking willpower to optimize the living fuck out of your code or be able to shell out for a proprietary compiler that can.
x86 was shitty as fuck when it came out, but our requirements have changed: things that impacted performance back in the day don't impact it now, and being able to code in a way that has hardware-level optimizations for certain operations is a useful innovation.

Name: Anonymous 2012-08-10 16:06

RISC is shit.
