
Too many languages

Name: Anonymous 2014-03-09 9:44

There are thousands of programming languages.

The purpose of a programming language is to express programs. The
purpose of learning programming languages is to build up a toolbox for
reasoning about and synthesizing programs in any one given language.

There are diminishing returns on learning programming languages, and
time is scarce.

Therefore one must select between programming languages to study.

A good selection of languages has both
+ breadth
  + satisfies a number of real world economic needs.
+ focus
  + exploits similarity between languages and incremental learning.
  + some unifying basis

A good member of a particular selection meets a number of the
following criteria:
+ Satisfies one particular school of thought on programming languages.
+ Significant difference from predecessors
+ Significant influence on successors
+ Economically significant
+ Advanced, i.e. has no direct, established and proven heir.
+ A good language.
  + Easy to express programs with
  + Easy to read programs expressed with
  + Easy to reason about programs expressed with

None of these criteria is a sufficient or even a necessary condition.

A bad member satisfies the opposite criteria.

Name: Anonymous 2014-03-09 9:46

I have made a good (as defined above) selection of programming languages:

The most meaningful categorization for this selection is by syntactic
(and therefore (mostly) semantic) tradition or school of
thought. Nevertheless the categorization is not clean, and is very rough.
Specifically:
+ the division between concatenative and point free is more a semantic
  one, and
+ rule syntax in this case strictly means "Prolog like rule syntax".

+ Applicative languages:
  : Common Lisp, Scheme, Clojure, [Symta, AP5, InterLisp, T]
+ Structured, procedural:
  : Java, C++, Go, [Modula-3, Oberon, ALGOL68, Eiffel]
+ Message:
  : Smalltalk, Self, [Newspeak]
+ Concatenative:
  : Forth, Factor, [PostScript, Joy]
+ Point free:
  : J, [APL]
+ Rule:
  : Prolog, Datalog, [Erlang, Logtalk, Bloom]

This categorization highlights six branches of focus and exploited
similarity.

Name: Anonymous 2014-03-09 9:47

Additionally
: Mathematica [Maxima, Axiom]
and
: SQL [CLIPS]
and
: [Refal]
are in the selection. But including them in the above categorization
would be troubling:
+ Refal, Mathematica, Maxima and Axiom would fall into the applicative
  category, but strongly differ in evaluation semantics from the Lisp
  descendants listed. Moreover, Refal differs from the CAS languages.
+ SQL and CLIPS are rule languages semantically, but the surface
  syntax is entirely different to the Prolog descendants listed.

Name: Anonymous 2014-03-09 9:47

Languages in brackets are carefully chosen extensions of the selection
which (aim to) maximize the diminishing returns (and minimize the
costs) of pursuing study further in their particular categories.

The additional uncategorized languages and extensions hopefully reveal
(elucidate or support) that the "unifying basis" of the selection is
symbolic computation, knowledge representation and stratified
programming for large systems on von Neumann machines.

Finally, the languages (with associated tools) cover almost the
entirety of today's economic spectrum (one notable gap is .NET).

Name: Anonymous 2014-03-09 9:49

Some languages were excluded, but just barely. These
languages follow:

One group of languages with ample opportunity for profitable study is
the ML family, which would fit nicely in the adopted taxonomy as:
+ Structured, applicative, pattern matching languages:
  : Clean, Fortress [Haskell, OCaml]
Additionally the primary selection above (i.e. not extension) has some
overlaps with the "unifying basis".

Outside of the set of categories in the established taxonomy for the
selection (as strictly defined above) there exist a number of
interesting languages:
: Maude, [CafeOBJ, OBJ3]
: Coq
: Lustre, [Lucid Synchrone, Esterel, Signal]
: Unicon [Icon, SNOBOL4]

Name: Anonymous 2014-03-09 9:50

Similarly there are a number of languages in the intersection of
categories:
: AmbientTalk, Curl, Dylan, Ioke, Julia, Lasso, Logo, Metalua,
: Nemerle, Oz, PLOT, Rebol, Slate

Finally the Algol (procedural, structured) and Lisp (applicative)
selection could be extended much further with (historically and
otherwise) interesting members.
: BETA, BCPL, Delphi, Goo, HyperTalk, 3-Lisp, Rexx, ZetaLisp

Name: Anonymous 2014-03-09 9:54

Now, other languages were excluded more strongly,
because they are average and completely unremarkable.

So, many "blub" languages were entirely excluded:
: ASP.NET, Bash, C, C#, Objective C, JavaScript, PHP, Perl, Python,
: Ruby, Scala, VB.NET
etc.

Many unremarkable technology and domain specific languages were
excluded.

Similarly there are a whole host of unremarkable languages very
similar to a (perhaps remarkable) predecessor, augmented with a
novelty feature (or even less justifiably a *library*) targeting an
emerging market, technology or beginners, e.g. Monkey, Processing etc.
These were strongly excluded too.

Name: Anonymous 2014-03-09 10:04

In conclusion, even though there are many languages, one can make a
selection which outperforms the Pareto principle with regards to costs
and benefits. This selection is not unique, and perhaps someone
with a different "unifying basis" would take a different approach.

Name: Anonymous 2014-03-09 10:09

Learn C. Learn Scheme.

Name: Anonymous 2014-03-09 10:11

>>9
Scheme was included, C was deliberately (and strongly) excluded.

Name: Anonymous 2014-03-09 10:27

clamp my anus

Name: Anonymous 2014-03-09 12:41

>>2
Symta
Nikita "Delicate Flower" Sadkoff detected.

Name: Anonymous 2014-03-09 12:58

Sadkike

Name: Anonymous 2014-03-10 9:21

I am not Nikita.

I am however, in many ways, his apprentice.

Name: Anonymous 2014-03-10 10:27

>>14
did u suck his DIK, faggot?

Name: Anonymous 2014-03-10 10:37

>>15
No. I became aware of him over time and saw remarkable similarities between our tastes in languages.

For example today I wanted to see if anyone on /prog/ knew about Refal. Surprise surprise only Nikita.

Name: Anonymous 2014-03-10 11:17

>>16
Because he's Russian.

Name: Anonymous 2014-03-10 11:29

>>17
It also turns out all those good posts on Common Lisp, or the ones pointing out the flaws in Scheme, Python and Haskell: Nikita.

The ones about J, Forth, Smalltalk: Nikita.

It seems pretty much everything I liked about /prog/ was just the one guy.

The rest of you are UNIX skiddies who think programming in C is ``hardcore'', Haskell is for smart people, minimalism is cool and ``OOP'' sucks.

That being said, there are differences. For example I'm not so averse to the ALGOLs (well, it's love-hate really). I also have a thing for CAS style term-rewriting, which I don't know if Mr. Sadkov has discovered.

I also don't seem to have much in common with Nikita outside of programming interests and languages. I certainly don't share any of his political or personal views.

Name: Anonymous 2014-03-10 11:39

>>18
But Haskell is for smart people.

Name: Chris Done 2014-03-10 11:45

In C# I was always dealing with null errors in maintenance and confused by the limits of the type system and the arcane lambdas (C#: "Lam.. lam.. lamdoh!") it had at the time. I also wasted time trying to decide when to use an object class versus functions that work on a value. Until recently in C# you had to declare all types up front in a really duplicating way. Now only some are required. That gets old quickly and you can feel your hands wearing down every time you have to type that boilerplate. This is undoubtedly the bad experience that leads people to hate anything with the word "static" in it. C# fools you into thinking it's safe with its type system, and then shanghais you at runtime. So you end up doing "defensive" programming at composition time, trying to make sure it won't blow up, which slows you down.

In JavaScript I'm always nervous about every line of code I write in case it'll explode, so I end up re-reading what I wrote. It doesn't matter much because I get runtime exceptions later anyway. Standard JS has no functional libs, so most code I write can't involve using them, and most code I read online doesn't either. So it's always verbose. You also pay for abstractions in performance in JS. No partial application is also a bother. JavaScript (vanilla) lacks lexical let which kills me.

In Elisp I always forget the argument order of functions. There's zero convention. There are some functions that I use all the time and look up the argument order every single time. Haskell's argument order always favours partial application. Elisp has no pattern matching and a single namespace is icky. It has LET and LET* which is an unfortunate distinction. Lisp has macros that would enable pattern matching, but who cares if only I know and use it? That's like making a boat I can only ride in my bathtub.

I like that Haskell has a "readFile" and "appendFile" function, simple "obvious" things like that. In Common Lisp it's like a three line expression. Also, for a "list processing" language, Lisp sucks at lists compared to Haskell. While I'm bashing Lisp, the LOOP macro sucks because it's heterogeneous (shocker: this is why all macros suck). The syntax is unintuitive and special-cased. Whereas Haskell's looping facilities are all just normal functions that you compose. You only have to learn foldr/foldl/zip/map etc. once. They all have their use standalone or in concert. Elisp also lacks lexical scope which kills me.
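For comparison, the Common Lisp expression alluded to is roughly the following sketch. The name READ-WHOLE-FILE is invented for this example; it is not a standard function, and portable CL offers no one-liner equivalent of readFile.

```lisp
;; Hedged sketch of the multi-line expression alluded to above:
;; reading a whole file into a string in portable Common Lisp.
;; READ-WHOLE-FILE is a made-up name, not part of the standard.
(defun read-whole-file (path)
  (with-open-file (in path :direction :input)
    (let ((buf (make-string (file-length in))))
      ;; READ-SEQUENCE returns the count of characters actually read,
      ;; which can be less than FILE-LENGTH under multibyte encodings.
      (subseq buf 0 (read-sequence buf in)))))
```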

The poor equality that Lisp has (and JS, and most everything else) trips me up a lot. (Did you mean EQ, EQL, EQUAL, STRING= or =?) Oh, you want to compare two arrays for equality? Ho, ho, ho. Lack of pattern matching also kills me. It's laborious and boring to deconstruct objects manually. Haskell's type inspection is fantastic. Most of the time I don't have to read docs. I have to do that a lot in the above languages (though, C# a little less so). You have to sit there and read prose written by someone who resents having to write it in the first place while the other programmers are playing frisbee outside. Again, composition and partial application are wondrous. I groan every time I have to write function(){ return ... } in JS or (lambda () ...) in Lisp or whatever weird stuff you have to do in C# these days.

I didn't use Java much. I spent a few days with it to do some PDF manipulation, which would've been a couple of days if I wasn't swearing so much. If you think Haskell's type-laden APIs are confusing, look at the iText Java API. H. P. Lovecraft would have trouble coming up with an apt description. Although I think Iron Maiden could have a good go at it.

People say that they spend more time thinking in Haskell than writing it. I don't have that experience. I just start writing and then worry about the right design later. Thanks to the static checker, the cost of refactoring is very low. Probably the lowest of any language I've used to make pennies. In other languages you really have to think "is this the right design?" ahead of time otherwise you will hate life afterwards when you realise it wasn't. I've had competent Lisper colleagues just give up on changing a complex piece of code because "it works" even though it's a horrific mess. So in that way Haskell lets me just start going. Sure, it won't let me compare a number with a string but that's not something I want to do anyway.

Also, the type system is faster because you don't need to run a program to find out it's fundamentally broken. You find out at the tap-tap-tap-oops phase rather than the run-and-wait-for-it-to-run-oh-bugger phase.

In Bash I forget the syntax all the time because it's just stupid. Whenever I want to do something more than piping I start to wish I was using Haskell. This is why I'm working on a Haskell shell. I'm near to it being my full-time shell, once I've implemented one last thing.

Also doing threaded stuff in Haskell is trivial. And I'm now at the stage where I pretty much know every library I'm going to reach for to accomplish any task I tend to work on. But that comes with time and isn't peculiar to Haskell.

Name: Anonymous 2014-03-10 12:00

>>16
For example today I wanted to see if anyone on /prog/ knew about Refal. Surprise surprise only Nikita.

He is not Nikita.

                  -- The real "Nikita"

Name: Anonymous 2014-03-10 12:01

>>19
LOL

Yeah O.K.

I put it in the extension part of my near misses category. I suppose I did this because Clean was there, and Clean is interesting because it evaluates by graph rewriting. Haskell descended from ML via Miranda and Clean, but Haskell is more like Miranda.

When I first learned Haskell, I learned it to use it, not to study it as a language; so there's also a desire to go back for this reason as well. Finally, I hear good things about GHC.

Name: Anonymous 2014-03-10 12:11

>>22
They actually wanted to use Miranda but the intellectual property kikes refused so Haskell had to be invented as a replacement to Miranda.

Name: Anonymous 2014-03-10 12:28

>>24
>Thanks to the static checker, the cost of refactoring is very low.

I am baffled.

It doesn't take much thought to conclude that static typing (without subtyping and a default implicit universal type for all variables not declared otherwise) would increase the cost of refactoring, not decrease it.

Name: Anonymous 2014-03-10 12:33

>>24
It would decrease that cost because it enables the compiler to point out the spots that are still in need of reworking, with a fast response time because the program doesn't even have to be run.
You should do more practice, less thought.

Name: Anonymous 2014-03-10 12:44

>>20
I am also baffled by the similarities you drew between Lisp's equality functions and JavaScript's equality operators.

For one, Lisp doesn't have any type coercion issues to worry about. The existence of each equality function is well justified as well:
+ if you want to compare identities, use eq,
+ if you want to compare conceptual memory cells assuming unboxed integers and characters, use eql,
+ if you want to compare conses by the elements they contain (likewise bit vectors and strings character by character), use equal,
+ if you additionally want to compare arrays, structures and hash tables by their elements (and strings case insensitively), use equalp.

It's very useful to have these things separate.
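The four predicates above can be sketched concretely in portable Common Lisp; the expected values in the comments follow the standard's comparison rules.

```lisp
;; The four standard equality predicates, from strictest to loosest.
(eq 'foo 'foo)                  ; => T,   symbols are interned: same identity
(eq (list 1 2) (list 1 2))      ; => NIL, two freshly consed lists

(eql 3 3)                       ; => T,   same number of the same type
(eql 3 3.0)                     ; => NIL, integer vs float

(equal (list 1 2) (list 1 2))   ; => T,   conses compared element by element
(equal "Foo" "foo")             ; => NIL, strings compared case-sensitively
(equal #(1 2) #(1 2))           ; => NIL, general arrays fall back to EQ here

(equalp "Foo" "foo")            ; => T,   strings compared case-insensitively
(equalp #(1 2) #(1 2))          ; => T,   arrays compared by their elements
```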

Name: Anonymous 2014-03-10 12:52

>>25
Fast response time? Your whole program has to be recompiled.

When I'm programming in Lisp, I often make a change that would require
+ Adjusting my mandatory type decorations even though procedure bodies need no other adjustment,
+ An entire recompile,
in Haskell. But ``poof'' neither is an issue with Common Lisp.
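To make the contrast concrete, here is a minimal sketch of the incremental redefinition being described (PRICE and TOTAL are names invented for this example):

```lisp
;; Sketch of incremental redefinition in a running Common Lisp image:
;; a single function is redefined and its callers pick up the new
;; definition through the function cell, with no whole-program rebuild.
;; PRICE and TOTAL are invented names for this example.
(defun price (x) (* x 100))
(defun total (xs) (reduce #'+ (mapcar #'price xs)))
(total '(1 2 3))                ; => 600
;; Redefine PRICE alone; TOTAL is untouched yet sees the change at once.
(defun price (x) (* x 50))
(total '(1 2 3))                ; => 300
```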

Name: Anonymous 2014-03-10 13:52

>+ Significant difference from predecessors
>+ Significant influence on successors
these are not meaningful at all

>+ Economically significant
The things worth making are priceless

>Now, other languages were excluded more strongly,
>because they are average and completely unremarkable.
I didn't know a language was supposed to impress by itself, instead of being obvious and useful.

>So, many "blub" languages were entirely excluded:
>: ASP.NET, Bash, C, C#, Objective C, JavaScript PHP, Perl, Python,
>: Ruby, Scala, VB.NET
>etc.
No reasoning, yet you haven't shown an actual alternative to C in computer systems programming.

Name: Anonymous 2014-03-10 15:28

>>28
No reasoning, yet you haven't shown an actual alternative to C in computer systems programming.
Kneel and repent!

SYMTA> (defun sepplefy (x)
  ;; Translate a tiny s-expression language into C source text.
  (if (atom x)
      x
      (case (first x)
        (defun (format nil
                       "int ~(~a~)(~{int ~(~a~)~^, ~}) {~%~{~a~%~^;~}}"
                       (second x) (third x) (mapcar #'sepplefy (cdddr x))))
        (if (format nil "if (~a) {~a;} else {~a;}"
                    (sepplefy (second x)) (sepplefy (third x)) (sepplefy (fourth x))))
        (while (format nil "while(~a){~%~{~a~%~^;~}}"
                       (sepplefy (second x)) (mapcar #'sepplefy (cddr x))))
        (return (format nil "return ~a" (sepplefy (second x))))
        ((+ - * / < > <= >= = ==)
         (format nil "(~a ~a ~a)"
                 (sepplefy (second x)) (first x) (sepplefy (third x))))
        ;; Recurse into call arguments so nested operators get infixed too.
        (otherwise (format nil "~a(~{~a~^, ~})"
                           (first x) (mapcar #'sepplefy (rest x))))
        )))



SYMTA> (sepplefy '(defun f(n) (if (> n 1) (return (* n (f (- n 1)))) (return 1))))
"int f(int n) {
if ((N > 1)) {return (N * F((N - 1)));} else {return 1;}
}"

Name: Anonymous 2014-03-10 16:55

No reasoning, yet you haven't shown an actual alternative to C in computer systems programming.
*blubbing intensifies*

Name: Anonymous 2014-03-10 17:13

>>30
blub
Shalom!

Name: Anonymous 2014-03-10 19:54

>>28
What?

In my selection were Oberon, Forth, Common Lisp and Smalltalk. All of these have run (quite successfully) on bare metal and have been used for commercial operating systems.

C++ is also there.

The only reasons anyone would use primitive C over something like Oberon are economic (the amount of existing programs written in C, the size of the community, the potential to find others willing to pay you to write programs in C) etc.
Name: Anonymous 2014-03-10 20:29

>>32
Don't forget Pascal, PL/I, and BLISS.

Name: Anonymous 2014-03-10 22:31

>>29
Now write some hardware device drivers using Symta.

Name: Anonymous 2014-03-11 0:09

>>28
Imagine today's most ubiquitous consumer computers were not von Neumann machines but stack or dataflow machines or something like that.

Now imagine that there are no living operating systems written in C, that all C compilers are buggy and poorly maintained, there are no C compilers at all for consumer hardware, and that the C community is very small and that it is almost impossible to find documentation or help with anything.

No one will employ you for knowing C, and certainly no day to day applications you use or anyone else uses are written in C.

In this reality, C might still be worth studying, if it had other redeeming features, but it doesn't.

So it's very interesting that you'd say something like ``Oh economics don't matter'' and then say ``C is worthwhile''. I wonder what made you say this?

Name: Anonymous 2014-03-11 0:28

>>35
In your hypothetical computer hell, von Neumann machines would be a research topic with both academic and commercial interest, and C would be a great language to do the research in.

Name: Anonymous 2014-03-11 0:49

>>36
I believe it would not be ``computer hell'' but ``Lisp machine bliss''.

Name: Anonymous 2014-03-11 2:30

>>36
von Neumann machines would be a research topic with both academic and commercial interest
*hypothetical world where FPGAs and dataflow architectures are the norm and von Neumann means some embedded 8-bit chip*
Mainstream computers are doing too many computations at once. I mean, who wants to be able to calculate the length of an arbitrary linked list in one cycle per element while doing a bunch of other stuff at the same time? Who wants to be able to add multi-dimensional arrays in one cycle? We should force programmers to load everything through a ``cache hierarchy'' into these things called ``registers'' and come up with gimmicks called ``out-of-order execution'' and ``register renaming'' so we can have our CPU do 3 or 4 things at once but pretend it still does only one thing at a time. What took one cycle now takes hundreds! Everything has to go through a tiny path called a ``bus'' about 128 bits or so. I can see commercial prospects too. Those companies would pay us billions to have software that runs 1000x slower, maybe 10000x. So, who's with me?

Name: Anonymous 2014-03-11 4:11

OK OP, let's say I'm not a researcher or freeloading autist and I actually need to make money as a programmer. How would that affect your recommended classes of languages?

Name: Anonymous 2014-03-11 4:38

>>39
It wouldn't.

You should be able to make money with Java, SQL, C++, Clojure and Go, which are all in my selection.

Name: Anonymous 2014-03-11 4:41

You forgot XML OP. XML > SQL

Name: Anonymous 2014-03-11 4:42

You forgot XML. XML < SQL

Name: Anonymous 2014-03-11 4:42

You forgot about XML. XML > SQL

Name: Anonymous 2014-03-11 4:50

>>41
How can one be greater or lesser than the other in this case?

XML is not really a programming language any more than s-expressions are a programming language.

XML (and s-expressions) can be used as (the basis of) a (textual) syntax for a programming language, though.

Name: Anonymous 2014-03-11 7:50

>>41
XML is not a programming language for it is not Turing complete.

Name: Anonymous 2014-03-11 9:41

>>45
Agda is not a programming language for it is not Turing complete.

Name: Anonymous 2014-03-11 10:49

>>40

Hi OP, please include D in your considerations. D feels similar to a Pascal/Ada/Oberon successor in many ways, while maintaining parts of its C heritage. Although historically the language tended to add features for their own sake, recent efforts have converged towards the goal of very high code reuse, decoupling algorithms and data structures in an STL way. A lot of sensible decisions were made in the process, which is not immediately apparent when probing the language market.

Name: Anonymous 2014-03-11 11:44

Thread is too old to post in! Make a new one.

Name: Anonymous 2014-03-11 19:00

>>38
Your computers would be slow shit because they'd be made by people like you. Then people would come along and make good, fast von Neumann computers and everyone would switch. You know, like what already happened.

Lol, fpgas and dataflow architectures.

Name: Anonymous 2014-03-11 20:42

>>49
That never happened.

Name: Anonymous 2014-03-11 20:53

>>50
People were researching all kinds of wacky-ass architectures — data flow architecture goes back to the '70s — but it's out of vogue now because they all turned out to be shit. You can barely make up toy problems dumb enough to perform well on them.

Name: Anonymous 2014-03-12 8:34

>>51
I think they failed commercially because of the business practices behind the other architectures, not because they're inherently slow. The Intel x86 instruction set is terrible and yet it's the most successful CPU architecture on the desktop. Other architectures are used in other applications, like the ARM design that's popular in modern smartphones. I don't know what Casio uses in their range of products but I guess they succeeded because they found a niche where their set of computers worked well.

Name: Anonymous 2014-03-12 10:26

A lot of good things fail commercially e.g. Lisp Machines, Smalltalk as an operating system etc.

The market optimizes locally, not globally.

Name: Anonymous 2014-03-12 12:39

Adequate programmers develop PHP code using "PHPDevelStudio", while "C" and all its spinoffs are messy shit. My acquaintance with "C++" ended with the first pages of the textbook, when I found that to call "_getch()" you have to manually include a WHOLE library ("conio.h").

Name: Anonymous 2014-03-12 14:43

>>53
The market optimizes locally, not globally.
So do CPUs. That's why stack variables are faster than malloc or GC.

Name: Anonymous 2014-03-12 16:56

>>52
Business practices nothing. x86 is terrible, but it's a fast kind of terrible, the kind that still makes it a good and useful choice. x86 handily outcompeted Itanium without even trying.

There's still room for alternative architectures — look at the rise of GPUs — but they have to solve real-world problems.

Name: Anonymous 2014-03-12 17:43

mining bitcoins is a very real problem

Name: Anonymous 2014-03-12 22:43

NOEXCEPT

Name: Anonymous 2014-03-12 22:48

>>56
GPUs are special purpose processors. They are intended for a special application and they work well within that niche. I'm thinking about the alternative general-purpose high-performance CPU designs. I'm confident that if there had been businesses that found a niche for a Lisp machine in the past, we'd have Lisp machines today processing lists of data in the same market as IBM mainframes.

Name: Anonymous 2014-03-13 2:37

>>59
>le pedophile sage

Name: Anonymous 2014-03-13 5:24

>>59
GPUs are special purpose processors.
Exactly. You only need to solve one problem well to be viable, but Lisp machines and dataflow architectures don't even do that.
You're better off compiling your Lisp down to a real GP architecture, just like hardware support for Java turned out not to be worth it.

Name: Anonymous 2014-03-13 12:55

>>59
lol. but that's wrong you fucking retard

Name: Anonymous 2014-03-13 21:34

>>62
lol. but that's wrong you fucking retard

Name: Anonymous 2014-03-13 23:19

>>59
>le pedophile sage

Name: Anonymous 2014-03-14 2:28

>>59,61

I think what's inspiring about Lisp machines today, and what was lost, is the idea of high level hardware and a high level operating system, where GC, dynamic type checking, memory bounds checking etc. are part of the fundamental services of the hardware.

You're missing the point if you're looking at Lisp machines as an optimization strategy. They are a ``let's not start with shitty abstractions'' strategy.

Name: Anonymous 2014-03-14 7:32

>>65
It wasn't lost as much as deliberately abandoned.

It turns out it's better to give access to a selection of primitive computations than to design towards a specific language model.  Just like it turns out that APIs are more powerful and flexible when they are dumb REST APIs that map to the underlying model, rather than catering to the specific application you are writing.

Look at Java.  Unlike Lisp, Java is in widespread use.  It had its fair shot at hardware execution with support from large actors like Sun and ARM.  Turns out a good JIT beats it handily in all areas that matter.  Flexibility, speed, you name it.  And of course the programmers don't give a whit, they're just writing Java either way.

The thinking that an ISA should have `high level' operations is what got us to x86 in the first place.  A lot of the instructions are just convenience methods for when you're programming assembly code.

Name: Anonymous 2014-03-14 8:14

>>66
``Lost'', in the way I used it meant exactly ``deliberately abandoned'', so I don't understand why you contrasted the two.

Deliberately abandoned in no way implies deliberately abandoned due to the idea being bad, or it being inferior to what exists.

Technologies die due to economic pressures. ``Good enough'' is a thing. People use Java, don't they? Do you think Java is the world's best language? It's not, but it's ``good enough'', proven, and there are network effects in using it.

It's insanely foolish to think that markets optimize on technical merits (or that any similar evolutionary sort of process does).

As Alan Kay said (paraphrasing, since it was in some video I watched ages ago): ``Just imagine the most perfect being, and then *prfft* an elephant stomps on it, and that's it. It's gone.''

Nothing just ``turns out''. WTF are you talking about? ``Flexibility''? I'll grant you ``speed'' but I can't really name anything else, certainly not ``flexibility''.

And why on earth are you talking about ``high level operations''?

What in heck do garbage collection, type tagging and bounds checking have to do with ``high level operations''? Those aren't implemented as operations; that's the whole point.

Finally, ``A stupid idea that works is still a stupid idea'': Yiddish proverb.

Name: Anonymous 2014-03-14 8:23

>>65
I think what's inspiring about Lisp machines today, and what was lost, is the idea of high level hardware and a high level operating system. Where GC, dynamic type checking, memory bounds checking etc. are part of the fundamental services of the hardware.
What's inspiring about that? I find that disgusting actually, because it disregards the most fundamental and beautiful property of computation, the existence of universal functions.

Name: Anonymous 2014-03-14 8:44

>>68
I'll give you a chance to explain yourself.

Name: Anonymous 2014-03-14 10:24

>>69
SUCK MAH DIIIIIIIIIIICK

Name: Anonymous 2014-03-14 10:41

>>67
Yiddish
Shalom!

Name: Anonymous 2014-03-14 14:57

>>66
>x86
>high level

Nigga, please. x86 was designed when chips had tens of thousands of transistors. There's nothing ``high level'' about it. Block copy and 8-bit BCD aren't high level. Even a Z80 had those. 68k, PDP-11, and VAX were much nicer for assembly programmers to use. x86 was the worst instruction set ever made. Any praise it gets is from people who don't know anything else.
Intel's iAPX 432 was a high level machine but they made instructions bit-aligned. Totally ruined performance. GC microcode and checking the bounds and authorization of every memory access barely mattered compared to having to shift every bit of every instruction in a slow as fuck non-barrel shifter.

Name: Anonymous 2014-03-14 19:03

>>71
Shalom! A good Shabat to you!

Name: Anonymous 2014-03-15 1:19

>>72
Leave it to Jews to make machine language convoluted and complex.

Name: 68 2014-03-15 8:21

>>69
Which word do you not understand? http://en.wikipedia.org/wiki/Universal_function

In layman's terms, a program written for some abstract machine (which is to say, an index in some particular Gödel numbering) can't possibly tell if it's executing on said machine "directly", like implemented in hardware or something, or inside an arbitrary number of evaluation (interpretation or compilation) layers in arbitrary abstract machines. Maybe even an infinite number of such layers.

This is one of the most fundamental results in Computer Science. Why would people reject the eternal mathematical purity of an abstract machine and fetishize irrelevant transient hardware implementation details? Pig disgusting!

(maybe they never did low-level programming themselves and mistakenly believe it to be some sort of lost Eden? I did, it's dirty and exhausting, there's nothing magical about it)

Name: 68,75 2014-03-15 8:24

I mean, go make some chairs or scrub some toilets if you're so infatuated with the material plane.

Name: Anonymous 2014-03-15 8:43

>>75
Please leave this forum and do not come back until you learn the von Neumann architecture.

Name: Anonymous 2014-03-15 8:59

>>75
Thank you for explaining

I have no such misconceptions of Eden.

Currently many software vendors spend a lot of time implementing virtual machines with GC, bounds checking, type tagging etc.

If these were expected services of the hardware, a lot of duplication of effort would be removed (at least from software).

More importantly this would raise the floor of software quality. Gone will be basic mistakes such as memory leaks or corruption.

There's nothing disgusting about wanting to build your house on a concrete foundation.

Name: Anonymous 2014-03-15 9:51

If these were expected services of the hardware, a lot of duplication of effort would be removed (at least from software).
Bollocks. If it were easy to make a one-size-fits-all framework for GC, bounds checking, etc., then the various virtual machines could just use it.

Now consider the fact that designing, improving, finding and fixing bugs in the same thing implemented in hardware is a hundred times harder.

If you want to build your house on a concrete foundation, why don't you just use JVM? How would the same thing implemented in hardware magically be any better?

And it's not that anyone would be crazy enough to actually implement any significant part of it in silicon (because then improving and debugging becomes not merely much harder, but actually impossible), so what you want is basically hardware with a JVM in the firmware forced on everyone (except it would be a magical JVM without any flaws, lol). Such an alluring prospect!

Name: Anonymous 2014-03-15 10:43

>>79
I do use the JVM in practice.

And it wouldn't be the JVM in hardware.

This jerk has the right idea:

http://www.loper-os.org/?p=55

Name: Anonymous 2014-03-15 10:55

>>80
That jerk sounds like a total idiot. Hardware is pretty bug-free precisely because it's so braindead. Software layers above it are buggy precisely because they are necessarily complicated -- to allow programmers to comfortably inhabit those layers.

That idiot appears to honestly believe that he can have both, "dare to imagine a proper computer – one having an instruction set isomorphic to a modern high-level programming language" -- and that that computer would magically come without the same infestation of bugs that any modern compiler has. And of course it's "political forces" that "strangle at birth" people's attempts to deliver that. I wonder if the Jews microwave his own apartment, introducing bugs into his OS (if the poor goy can't even make a stable high-level abstraction in software, I can't imagine what an abomination his hardware would be).

Name: Anonymous 2014-03-15 12:57

Oops, reposting in the right thread. Sorry for the noise.

They did make processors with hardware support for JVM bytecodes, but no one wanted to use it.

RISC is best ISC.

Name: Anonymous 2014-03-15 16:42

>>82
>le pedophile sage

Name: Anonymous 2014-03-15 19:18

>>81
I don't understand you.

Are you saying that if I gave you a computer that ran fast, had hardware GC, had run-time bounds and type checking that you'd say ``No thank you! That is quite superfluous, I'll stick with my x86!''?

Name: Anonymous 2014-03-15 20:12

>>84
I would jack off and cum hard on your shins.

Name: Anonymous 2014-03-15 20:51

>>84
I don't know about >>81-san, but I would certainly stick to JVM on AMD64, which would run faster and have better GC (this is guaranteed), and still have run-time bounds and type checking.

Name: Anonymous 2014-03-15 21:16

Focus on mastering algorithms and data structures instead.

Languages are emphasized too much. You should learn the languages most in demand in each popular paradigm (See TIOBE index.)
Once you've touched each paradigm, you can learn other languages in that paradigm easily.

Name: Anonymous 2014-03-15 21:39

>>86
That's pretty retarded.

Name: Anonymous 2014-03-15 22:15

>>84
>Are you saying that if I gave you a computer that ran fast, had hardware GC, had run-time bounds and type checking that you'd say ``No thank you! That is quite superfluous, I'll stick with my x86!''?
I'm saying that you could as well ask me if I'd refuse a blow job from Santa Claus. I mean, I probably wouldn't, but how is that relevant to anything?

Name: Anonymous 2014-03-15 22:23

>>89
Because the thing about goals and dreams and things like that is precisely that they are not yet achieved.

I think if you want to say ``Well all that would be nice and a pony too, but it's not worth the effort'' that's far more reasonable, but it seemed like you were saying ``Prfft! GC and bounds checking as a hardware service who wants that?''

Name: Anonymous 2014-03-15 23:23

>>89
>ask me if I'd refuse a blow job from Santa Claus. I mean, I probably wouldn't
faggots should be shot

Name: Anonymous 2014-03-16 2:50

>>90
You are wishing for the perfection of a defective model. Like a nice syntax for running SQL queries from your view code, it can only help you do things wrong.

Name: Anonymous 2014-03-16 5:39

>>92
How do GC and bounds checking ``help you do things wrong''?

It obviously doesn't, hence the success of platforms such as the CLR or the JVM.

Name: Anonymous 2014-03-16 8:04

>>92
>le pedophile sage

Name: Anonymous 2014-03-16 8:10

>>93
Notice how there exist both CLR and JVM, each having undergone a number of revisions, with CLR getting user-defined value types and reified generics, while the JVM got a JIT compiler and iterated through several versions of GC (not to mention third-party stuff like Azul's C4). And people don't make other virtual machines because they have nothing better to do, you know, but because they have ideas for improving them.

How do you think things would look like with a hardware GC?

Name: Anonymous 2014-03-16 8:23

>>95
I think things would look great.

If you're trying to say the costs of experimenting with software are lower than the costs of experimenting with hardware, then I agree with you.

Again, my point is: economics aside, hardware with GC, bounds checking and type checking, which dollar for dollar ran programs as fast as hardware today, would be superior to that hardware.

The next best thing of course would be a sane operating system. Shame the momentum of UNIX and VMS killed all the Smalltalk, Lisp, Prolog, Oberon and even Java OSes.

I hope Microsoft makes that CLR OS.

Name: Anonymous 2014-03-16 8:27

>>96
Have you heard of project Oberon? It was pretty cool.

Here's a recent talk about it:
http://www.multimedia.ethz.ch/conferences/2014/wirth
(The last video)

I think, judging by your attitude, you'd find project Oberon much more inspiring than pushing the facilities it provides to hardware (which you understandably think is not worth the effort).

You can download and play around with Bluebottle (a successor currently being worked on) today. Aside from being technically quite cool, one neat thing about it is that its default GUI is a ZUI.

Name: Anonymous 2014-03-16 8:58

>>98
>Here's a recent talk about it:
Is there a written version? Don't want to waste time on a video.

Name: Anonymous 2014-03-16 9:07

Name: Anonymous 2014-03-16 10:22

>>100
Thanks.

Name: Anonymous 2014-03-16 12:53

>>97
>Hardware with GC, bounds checking and type checking, which dollar for dollar ran programs as fast as hardware today, would be superior to that hardware.

It would not, because five minutes after it was designed, and six months before it shipped, someone would improve the software, and everyone would just keep using that instead. It is not just that it is hard and expensive to make. It would be a thing lesser than the sum of its parts. Namely, the parts would include a GC that no one asked for or cared to use.

I assume you have never written a compiler, a garbage collector, or a CPU.

Name: Anonymous 2014-03-16 16:23

>>102
Virtual memory mapping is ``free'' in modern CPUs but imagine what it would be like if the OS had to swap out to disk all memory used by the old process and swap in all memory used by the new process on every task switch. Old OSes had things called "overlays" and "time sharing" that did just that.
Suppose this new CPU makes GC as fast as stack allocation and bounds checking as fast as virtual memory protection checking. You know how you get an exception when you access a null pointer? Hardware could just as easily be able to check whenever you access outside an array. The programmer doesn't need any extra code and not even a virus or low-level code would be able to break this protection, just like you can't write to kernel memory. That's the difference between doing it in software and doing it in hardware.

Name: Anonymous 2014-03-16 16:30

>>95-96
Notice how there exist both x86 and AMD64, each having undergone a number of revisions, with x86 getting floating-point units and SIMD, while AMD64 got virtual machine assists and several versions of cryptographic accelerators (not to mention third-party stuff like VIA's PadLock). And people don't make other CPUs because they have nothing better to do, you know, but because they have ideas for improving them.

How do you think things would look like with a hardware GC?

Name: Anonymous 2014-03-16 18:29

>>103
>Suppose this new CPU makes GC as fast as stack allocation
Lol. Do you have any idea how GCs work? How different GCs work? Parallel, refcounting, generational, incremental?

>Hardware could just as easily be able to check whenever you access outside an array.
Ok, yeah, you can do that part.

>not even a virus or low-level code would be able to break this protection
Lol. Don't confuse bounds checking with memory protection. The first is strictly hardening against buggy code.

Name: Anonymous 2014-03-16 18:37

bestest programing languege is html!! but its hard 2 leern

Name: Anonymous 2014-03-16 21:03

>>106
Just use M$ Word! It generates html websites for you!

Name: Anonymous 2014-03-16 21:18

>>107
M$
Are you an Arch(tm) user?

Name: Anonymous 2014-03-16 21:44

>>108
I'm going to abuse your anus.

Name: Anonymous 2014-03-17 2:47

>>105
>Do you have any idea how GCs work? How different GCs work?
GCs have a root set of reachable objects and follow pointers, saving only what's reachable. A hardware GC collection could happen on a cache miss or page miss.
>Parallel, refcounting, generational, incremental?
Parallel (one per core), generational (related to cache hierarchy and swap system), and incremental (works while you do other stuff).
Today's GCs don't get along with swapping or the cache hierarchy. Software GCs actually swap garbage out to disk, but with hardware GC some garbage may never leave L1 cache. It never even has to be written to RAM. Usually, not even stack allocation has that guarantee. That's the kind of speed-up hardware GC can give you.
>The first is strictly hardening against buggy code.
Both are strictly hardening against buggy code, but the truth is code is buggy. MS-DOS and "real mode" Windows 1.x-3.x didn't have any protection or paging at all.

Name: Anonymous 2014-03-17 10:32

>>103
>Virtual memory mapping is ``free'' in modern CPUs
Ahh, you're comparing apples with oranges. Having CPU support for certain stuff required for making a GC is awesome, and the best part is that we have increasing amounts of it! Modern CPUs can mark pages as "dirty" on writes so that a partial GC knows which pages in the older generation it has to examine; this is already used in .NET, for example.

Trying to embed an entire GC into firmware on the other hand is both idiotic and pointless.

>>110
You're so clueless I don't even know where to start. Go read some papers about the way actual world-class garbage collectors work, not the Wikipedia's bird's eye overview. You underestimate the amount of effort required to make a modern GC so hard it's not even funny. That's besides being plain old ignorant, "incremental (works while you do other stuff)", "Software GCs actually swap garbage out to disk", lol.

Name: Anonymous 2014-03-17 11:28

>>109
HAX MY ANUS

Name: Anonymous 2014-03-17 15:00

>>111
Modern GCs know nothing about cache or swap and this is a hardware problem. They can make assumptions about cache size, but they can't tell the cache "this is garbage, don't write it to RAM" and immediately reuse that part of the cache for new data. They can allocate and deallocate virtual memory, but they can't tell the swapper "this page is garbage, don't swap it to disk" and immediately free the page.

Name: Lambda A. Calculus !!wKyoNUUHDOmjW7I 2014-03-17 17:46

WAT KINDA RETOID R U?

>A good selection of languages has both
>+ breadth
>+ satisfies a number of real world economic needs.

PROGRAMMING LANGUAGES DONT DO DAT SHIT, PERIOD. PROGRAMS SATISFY A NUMBER OF REAL WORLD ECONOMIC NEEDS, NOT PROGRAMMING LANGUAGES.

>+ focus
>+ exploits similarity between languages and incremental learning.
>+ some unifying basis

MORE BULLSHIT. WHAT UNIFYING BASIS DO U SEE IN C AND BASH? NONE? AND YET DER'S FUCK-LOADS OF SYSTEMS DAT COMBINE C AND BASH.

AND WAT DA FUCK DOES INCREMENTAL LEARNING HAVE TO DO WITH A PROGRAMMING LANGUAGE? SEEMS TO ME DAT IT HAS EVERYTHING TO DO WITH DA FUCKING TEXT/COURSE UR FOLLOWING.

AND UNIFYING BASIS? C AND BASH HAVE AS MUCH IN COMMON AS SCHEME AND BASH, YA FUCKIN RETOID.

>A good member of a particular selection meets a number of the
>following criteria:
>+ Satisfies one particular school of thought on programming languages.

NOW DIS I AGREE WITH. LOOK AT HOW SHITTY C++ IS. U DONT KNOW WHETHER UR WRITING FUNCTIONAL CODE, PROCEDURAL CODE, OR 'OBJECT ORIENTED' CODE. ITS A FUCKING MESS. WITH SMALLER LANGUAGES DAT STRICTLY ADHERE TO ONE PARADIGM, IT'S FUCKIN OBVIOUS.

>+ Significant difference from predecessors
>+ Significant influence on successors

LITTLE TO SAY IN REGARD TO DIS, BUT I CONSIDER IT INSIGNIFICANT.

>+ Economically significant

SEE MY FIRST POINT YA FUCKIN RETOID. UR LIVING IN A DREAM LAND. PROGRAMS ARE ECONOMICALLY SIGNIFICANT. HOUSES ARE ECONOMICALLY SIGNIFICANT. DA CHOICE OF PROGRAMMING LANGUAGE U USE ISN'T, SO LONG AS U CAN WRITE PROGRAMS WITH IT. DA CHOICE OF SAW DAT U USE ISN'T, SO LONG AS U CAN CUT WOOD WITH IT. ARE U A PROGRAMMER OR ARE U JUST A SISSY WHO WORRIES MORE ABOUT DA COLOUR OF HER DRESS, RATHER THAN MORE IMPORTANT SHIT LIKE GETTING A COCK SHOVED IN2 HER?

>+ Advanced i.e. no direct, established and proven heir.

IS IT UR GOAL TO BE VAGUE?

>+ A good language.
>+ Easy to express programs with
>+ Easy to read programs expressed with
>+ Easy to reason about programs expressed with

NOW DESE ARE SOME GOOD FUCKING POINTS. I THOUGHT DA WHOLE POST WAS SHIT AT FIRST, BUT MIRACULOUSLY U'VE COME TO THE THINGS DAT ACTUALLY MATTER IN A PROGRAMMING LANGUAGE.

STILL, TAKE UR CHOICE OF PROGRAMMING LANGUAGE AS A GRAIN OF SALT. SO LONG AS U CAN WRITE PROGRAMS AND MAINTAIN 'EM, IT DOES DA FUCKING JOB.

Name: L. A. Calculus !!wKyoNUUHDOmjW7I 2014-03-17 18:10

OH, N IN ADDITION TO UR LAST (GOOD) POINTS, IT SHUD BE NOTED DAT DA PROGRAMMING LANGUAGE ONLY CONTRIBUTES LITTLE TO DIS. WHAT'S MORE IMPORTANT IS HOW DA PROGRAMMER WRITES CODE USING DA LANGUAGE. SURE, DA LANGUAGE HELPS A BIT, BUT GET ONE OF DESE FUCKING STACK BOYS TO WRITE A REASONABLY SIZED PROGRAM, IN A PROGRAMMING LANGUAGE DAT U CONSIDER "EASY TO EXPRESS PROGRAMS WITH", DEN COME BACK N TELL ME DAT EXPRESSIVENESS IS MOSTLY IN THE PROGRAMMING LANGUAGE, N NOT MOSTLY IN DA PROGRAMMER.

UR PRETTY MUCH JUST SPEWING SHIT ABOUT A TOPIC DAT DON'T NEED TO HAVE SHIT SPEWED ABOUT IT. ONCE U ACCEPT DAT DA WORLD'S IMPERFECT AND FULL OF SHIT, U MIGHT REALISE WHY PEOPLE OPT FOR SHIT IN ORDER TO MAINTAIN COMPATIBILITY WITH OTHER SHIT. IF U REALLY WANT TO CHANGE DA WURLD N PUSH GOOD DESIGN CHOICES, START OFF SMALL BY GETTING JAVA, A LANGUAGE U ACTUALLY ENCOURAGE THE USE OF, TO SUPPORT UTF-8 NATIVELY AND DROP ITS STUPID UTF-16 BULLSHIT.

Name: Anonymous 2014-03-17 19:33

>>113
I think we can add swapping to the list of things computers do that you don't know how works.

Name: Anonymous 2014-03-17 19:45

>>115
All my points were good. Read everything again and consider it as a whole.

Name: Anonymous 2014-03-17 20:22

>>116
>All my points were good. Read everything again and consider it as a whole.

Name: Anonymous 2014-03-17 20:29

>>116
You have a USB drive and you pull it out and swap it for another one. Don't even have to restart.

Name: Anonymous 2014-03-18 15:25

>>113
Except that isn't true. A user space garbage collector can use system calls like mincore to check whether an operation is going to cause a page fault or not. I'm not aware of any well known GCs that actually do this, though - changing page mappings is itself a potentially expensive operation so it may be better to avoid such tricks. The same goes double for cache line management - you'd really need to benchmark heavily to tell whether there is a benefit to doing what you suggest. There are well known cases where trying to second guess the CPU's cache management actually hurts performance.

To argue from ignorance a bit - if added hardware support were really so useful, you would see the implementers of popular language VMs like Hotspot, CLR, or V8 recommending new intrinsics for the CPU designers. The market for hardware that runs these VMs is certainly large enough for folks like Intel to consider it worthwhile to add such things.

The last time I checked this was not happening - the language implementers seem to spend a lot more of their time talking to the kernel people to improve code paths that their VMs exercise heavily. Unless you take the position that the language folks are ignorant of what hardware can do for them I'd judge that hardware support isn't the panacea you say it is.

Name: Anonymous 2014-03-19 3:33

>>120
>le pedophile sage

Name: Anonymous 2014-03-19 5:30

>>120
The idea isn't to have hardware-accelerated garbage collection. It's to have hardware garbage collection, period. No software would have to implement anything with regard to memory allocation, only the hardware manufacturers.

All programs would be able to allocate memory as they wish knowing that it will be safely deallocated when all hardware type tagged references to that memory disappear.

This also points out that hardware GC has to go hand in hand with hardware type tagging. You can't have one without the other.

Again, the idea isn't optimization. I don't give two shits about optimization (in this context). I care about sitting on top of sane abstractions and not having every second software vendor reimplementing something that is a basic service to be expected from the hardware or at least the operating system.

Name: Anonymous 2014-03-19 8:28

>>122
>22
>dubs

nice :^)

Name: Anonymous 2014-03-19 12:49

>>122
>I care about sitting on top of sane abstractions and not having every second software vendor reimplementing something that is a basic service to be expected from the hardware or at least the operating system.
Then only use programs written for JVM. Here, I solved your problem.

Or do you perchance believe that if you got a new incompatible hardware platform, suddenly you'd get more programs for it than the JVM has? LOL.

I mean, what you're saying is that you don't want faster GC, you want enforced GC. Well, good luck with that.

Name: Anonymous 2014-03-19 14:56

>>122
Once you have hardware GC, the next steps are hardware continuations and closures! Even C has escape continuations (jmp_buf). You would need some kind of dynamic-wind for exception handling.

Name: Anonymous 2014-03-19 16:29

>>125
Assembly has continuations!
jmp penis
now, jump on my penis

Name: Anonymous 2014-03-19 16:58

>>124
He said sane abstractions, not JVM.

Name: Anonymous 2014-03-19 17:13

>>126
Not a sane abstraction, ya dumb goy.

Name: Anonymous 2014-03-19 18:34

>>1
>Advanced i.e. no direct, established and proven heir.
shove it up your ass, fancy boy.

Name: Anonymous 2014-03-19 19:39

>>124
You didn't solve shit.

JVM != All software running on my machine.

What do you mean incompatible? You can write a C compiler for any machine and emulate the lack of GC. There are C compilers targeting the JVM, and Common Lisp, and other advanced platforms.

Name: Anonymous 2014-03-19 22:18

>>122
And how do you think the hardware manufacturers would go about implementing a feature as advanced as garbage collection? They would microcode it, performance would be scarcely better than what pure software and a well thought out ISA could achieve, and you'd be stuck with said mediocre implementation forever.

What you're proposing has been tried before; the result was called the iAPX 432 and it was a miserable failure. Nobody has seriously proposed making hardware do so much work since the 1980s.

Name: >>131 2014-03-19 22:33

>>131
>Nobody has seriously proposed making hardware do so much work since the 1980s.
Scratch that; I forgot about Jazelle. ARM also did what you're proposing in the 90s; they also ultimately realized that it was better not to prematurely specialize their hardware.

Name: Anonymous 2014-03-19 23:56

>>131
The iAPX 432 was designed when anything larger than a 68000 needed multiple chips. With today's technology the whole iAPX 432 takes up less space than the non-cache part of a modern x86 CPU with all its decoder baggage. What are all these transistors being used for? Dynamic translation and out-of-order execution to make an obsolete, hard to decode, inefficient instruction set run fast.

Name: Anonymous 2014-03-20 2:14

>>133
Nobody holds up x86 as an example but its frontend is still simpler and more conducive to a performant implementation than the 432's. Most transistor gains of the last 10 years have gone into the caches; just make sure a software GC fits in those and you've got all the hardware support you will need.

Name: Anonymous 2014-03-20 5:22

>>131
>le pedophile sage

>>132
>le pedophile sage

>>134
>le pedophile sage

Name: Anonymous 2014-03-20 10:34

>>134
Cudder does.

Name: Anonymous 2014-03-20 11:43

>>134
Cudder has crazy Intel Stockholm syndrome and is obsessed with code density measurements. x86 is serviceable but pretending the RISC people didn't win a long time ago is silly.

Name: Anonymous 2014-03-20 14:05

>>136-137
You can't Polish a turd.

Name: Anonymous 2014-03-20 14:06

>>137
>le pedophile sage

Name: Anonymous 2014-03-20 16:37

disgusting stinky NIGGERS

Name: Anonymous 2014-03-22 12:29

>>137
>tfw
>you will never have Cudder kidnap you,
>tie you up and torture you
>strap on a dildo and fuck your tight little goy ass
>shove her used tampons down your throat while calling you a filthy goy scum

Name: Anonymous 2014-03-22 14:32

>>141
you seem like a stupid faggot sub goyim that deserves to die

her
what!?

BOYS KIDNAP GIRLS. NOT TRANNIES BOYS!

Name: Anonymous 2014-03-27 20:03

Just got reminded of the Mill architecture by this:

http://jakob.engbloms.se/archives/2004

Now we can argue the commercial viability of this, and how realistic their claimed performance is, but this is what real architecture research looks like.

Note the focus on providing a small set of useful primitives, and a distinct lack of pothead "what if we, like, made a garbage collector, but in hardware?" bullshit.

They're actually pushing stuff over to software - from what I can tell the "Mill CPU" target is basically a VM, and you do an additional compilation pass over the bytecode to adapt it to your specific hardware.

Name: Anonymous 2014-03-27 20:31

check 'em dubz
