
Any decent modern general-purpose languages?

Name: Anonymous 2012-07-25 10:55

Assembly: Unportable. No standardised syntax.
Classical Visual Basic: Some good parts. Shit overall.
C: Shitty standard library. Deficient type system. Can't into Unicode. "Unportable assembly."
D and C++: Obfuscated boilerplate languages.
Java and C#: Forced OOP.
Common Lisp: Archaic cons-based library. Writing complex macros is a PITA due to the unlispy quotation syntaxes.
Scheme: CL without namespaces.
Clojure and Erlang: Concurrency is unneeded outside of a few very specific applications. Parallelism is where it's at.
OCaml: Great language; only one, deficient, implementation.
Haskell: Academic sex toy.
Forth: Reinventing the wheel over and over.
Ruby: Implicit declarations. Slow as fuck.
Python: Implicit declarations. FioC.
Perl: Brain damage.
PHP: Pretty much shit.
JavaScript: "" == false

It's impossible to list them all but, please, what decent modern general-purpose languages exist?

Name: Cudder !MhMRSATORI!fR8duoqGZdD/iE5 2012-08-04 8:54

> At each time (at each line in the assembly) the live variables must be allocated in some known assignment to registers and memory. But this assignment is subject to change at any point in the program.
From what I've read, that's part of the principle behind SSA: to split up the idea of a "variable" into multiple ones, each assigned only once, so they correspond exactly to labelling the flow of data between operations. However, that still clings to the notion of a "variable", which should be considered source-code-only. When generating code, it's better to think of how data flows between different operations in order to finally arrive at one or more final results.
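To sketch that splitting (Python, with a statement representation I made up for illustration): every assignment to a source-level name gets a fresh version number, so each value is written exactly once and the output is really just labelled data flow.

```python
def to_ssa(stmts):
    """stmts: list of (dest, op, src1, src2) straight-line statements."""
    version = {}                       # latest version number per name
    def use(name):
        # operands without a version (constants, arguments) pass through
        return f"{name}_{version[name]}" if name in version else name
    out = []
    for dest, op, a, b in stmts:
        a, b = use(a), use(b)          # read operands before redefining dest
        version[dest] = version.get(dest, -1) + 1
        out.append((f"{dest}_{version[dest]}", op, a, b))
    return out

prog = [("x", "+", "a", "b"),
        ("x", "*", "x", "c"),          # x redefined: becomes x_1 = x_0 * c
        ("y", "+", "x", "a")]
```

Here the second statement's x becomes x_1 = x_0 * c, so "x" stops being a storage location and becomes two separate flows.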

> Although then you'd have to account for the extra instructions required for moving the values around. It's not quite the same as the register allocation problem I'm used to, which was to just relabel the registers to use as few as possible.
A move is not necessary in many cases when the operation does not increase the total storage needed, especially with more flexible architectures. This is better illustrated with an example:


reduce total storage; one less slot needed:
       (C = A + B, A and B not used afterwards)
A       B          AX      BX              AX      BX
|       |          |       |               |       |
+---*---+     ->   *-------+         or    +-------*
    |              |    <free>          <free>     |
    C              AX                              BX
             (generates ADD AX, BX   (generates ADD BX, AX
              and frees BX)           and frees AX)

unchanged total storage:
           (C = A + B, B is unused afterwards)
A       B          AX      BX
|       |          |       |
+---*---+     ->   +-------*
|   |              |       |
A   C              AX      BX
             (generates ADD BX, AX
         and AX continues, BX is new flow)

increase total storage:
           (C = A + B, both A and B are used afterwards)
A       B          AX      BX
|       |          |       |
+---*---+     ->   +   *---+   (MOV CX, BX)
|   |   |          |   |   |
A   C   B          AX  CX  BX
                   |   |   |
                   +---*   |   (ADD CX, AX)
                   |   |   |
                   AX  CX  BX

Because of the flexibility of x86, CX above could be replaced with a memory location (e.g. [BP+4]) if the compiler determined that CX, and every other register, was still in use and should stay put; alternatively it could spill CX's contents to memory first, if circumstances favour using CX for the result of the addition.
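The three cases above condense into a small decision procedure. A Python sketch (the function and its interface are my own invention; a real code generator would work on its flow representation rather than strings):

```python
def emit_add(a, b, a_live, b_live, fresh_reg):
    """Two-address ADD for C = A + B.  a/b: registers holding A and B;
    a_live/b_live: whether A/B are used afterwards; fresh_reg(): callback
    yielding a spare register or memory slot.
    Returns (instructions, location of the result C)."""
    if not a_live:
        # A dies: clobber its slot (ADD AX, BX); if B dies too, BX is freed.
        return [f"ADD {a}, {b}"], a
    if not b_live:
        # A survives but B dies: clobber B's slot instead (ADD BX, AX).
        return [f"ADD {b}, {a}"], b
    # Both survive: total storage grows, so copy B and add into the copy.
    c = fresh_reg()
    return [f"MOV {c}, {b}", f"ADD {c}, {a}"], c
```

Only the last branch costs a move; the first two reuse a dying operand's slot for free, which is exactly why liveness, not variable names, is the thing worth tracking.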

Using the fewest registers is also not necessarily the best policy; processors can parallelise better when multiple registers are used, and the instructions are not dependent. For example, although the minimum number of registers for memory-to-memory copies is 1, when there are several of these to be done, you'll see e.g.
mov eax, [esp+8]
mov ecx, [esp+16]
mov edx, [esp+64]
mov ebx, [esp+96]
mov [esi+4], eax
mov [esi+8], ecx
mov [esi+12], edx
mov [esi+16], ebx

instead of repeatedly using eax. Thus, when the minimum number of registers needed is below the number available, the algorithm should shuffle the order of operations (while maintaining their dependency ordering) to use more of them. The optimal target isn't the minimum number of registers, but exactly the number available.
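For the independent-copy case, that shuffle is trivial to sketch (Python; the helper and its shape are made up, and it assumes no more copies than registers and no overlap between sources and destinations):

```python
def schedule_copies(copies, regs):
    """copies: list of (src, dst) memory operands; regs: distinct registers.
    Round-robin the registers so the loads and stores can overlap in the
    pipeline, instead of recycling one register and serialising everything."""
    loads, stores = [], []
    for (src, dst), reg in zip(copies, regs):
        loads.append(f"mov {reg}, {src}")
        stores.append(f"mov {dst}, {reg}")
    return loads + stores              # all loads first, then all stores

code = schedule_copies(
    [("[esp+8]", "[esi+4]"), ("[esp+16]", "[esi+8]"),
     ("[esp+64]", "[esi+12]"), ("[esp+96]", "[esi+16]")],
    ["eax", "ecx", "edx", "ebx"])
```

This reproduces the eight-instruction sequence above: four independent loads, then four independent stores, with no serial dependency chain through a single register.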

> Graph coloring is sort of similar to sudoku, crossword puzzles, etc. It's an abstract representation of some of the basic constraint satisfaction problems people solve themselves every day. But people usually use creative methods to solve the problems. Computers are more brute force, although heuristics can be used as well.
Thus we should examine those "creative methods" and quantify the exact steps and decisions they take, in order to implement an algorithm that can behave like a programmer thinks, with the added advantage of computer speed.

> I agree. And that's a nice proof right there. I was very confused when I read about that and I was wondering why someone hadn't written something that got some more consistency on this.
Because of the "accepted fact" that graph colouring is the way to do register allocation, and because colouring is proven NP-complete, few academics have challenged those norms and entertained the possibility that RA itself might not be. It's necessary to "think outside the box" here.

> There might be details in the original problem that are lost in the abstraction.
By using abstraction to transform RA into graph colouring, you have transformed a very specific problem into a very general one, and increased its complexity. Because I am quite convinced that RA is not NP-complete, I do not believe in using this abstraction.

> In this case, solutions to the abstraction will need to be extended to account for the new details before they can be applied.
Why not forget about it completely and think of the solution to the specific problem from the ground up, using first principles? It seems people are more interested in building abstractions on top of abstractions than in solving the actual problem, and I do not think that is a good use of resources. By indoctrinating others in this way, it becomes harder for them to think freely and innovate. The fact that linear scan and the interest in SSA are so recent clearly illustrates this.
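And linear scan really is almost embarrassingly simple once the graph is dropped; a Python sketch after Poletto and Sarkar (the live-interval input format is assumed, and this version spills the newcomer rather than the longest-lived range):

```python
def linear_scan(intervals, num_regs):
    """intervals: (name, start, end) live ranges in any order.
    Returns {name: register index, or 'spill'}."""
    free = list(range(num_regs))
    active = []                        # (end, name, reg), kept sorted by end
    alloc = {}
    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        while active and active[0][0] <= start:   # expire finished intervals
            free.append(active.pop(0)[2])         # their register is free again
        if free:
            reg = free.pop(0)
            alloc[name] = reg
            active.append((end, name, reg))
            active.sort()
        else:
            alloc[name] = "spill"      # a real allocator evicts whichever
                                       # active interval ends last instead
    return alloc
```

One sorted pass over the intervals, no interference graph built at all, and it was published decades after colouring became the norm.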

In summary, it's an issue of perspective. Asm programmers do not think of variables much, but more about results of computation being held in registers/memory, and probably wouldn't care about assigning names to them. It's not "eax is currently holding j", it's "eax is holding the result of ...", and they're free to move that data around without having to think about what it's called. The flow model I talk about is a lot closer to this, and SSA is a step in the right direction. It's a little sad to see all this research being spent on these traditional directions, and not gaining much, when we could've had much better compilers back then, if they had chosen to think about these things "bottom-up".
