The rest of this thread is dedicated to debating whether >>4 used `ironic' correctly.
Name:
Anonymous2011-10-21 17:08
Ok, this will surely "out" me as an imperative programmer, but the speaker's focus seems to be that "state" is bad. How the fuck do you write anything useful in any language without having "state?" Any program boils down to code and data, and "state" is just another word for "data." So now we're supposed to write programs that are all code and no data? So I should declare all my C variables as const?
>>6
State isn't bad. Purely functional languages have to simulate state using kludgy hacks such as monads. However, abusing state can make a program much harder to debug and maintain than necessary. That's why it should be used carefully.
Of course, a good programmer knows when to use functional style and when to use imperative style.
That said, I haven't listened to that presentation.
>>6
Uncontrolled side effects (such as mutable state+uncontrolled mutations) are bad, not state itself.
But yes, you can go pretty far without mutation. Also, read SICP, mutation is only introduced in chapter 3.
Name:
Anonymous2011-10-21 17:19
>>7 State isn't bad. Purely functional languages have to simulate state... That said, I haven't listened to that presentation.
He clearly thinks that state is bad because it "complects" value and time. You'll have to watch it to understand what he means by "complect," but I really don't understand what the alternative to state is. He seems to think it's something called "managed refs," and that sounds like some kind of dog shit you'd find in .NET to me.
>>10
I haven't listened to the presentation either, but I understand what he's saying. He's probably suggesting immutable state along with controlled state (via STM), which is, in my opinion, the right thing to do.
Hickey's got some good ideas, but his problem is that he's the Jobs of programming languages: he really likes to play the revolutionary fucktard, and digress in pointless and sometimes contradictory pseudophilosophical bullshit. There's nothing really new in Clojure, as far as I know, he just combined some good concepts (immutability, STM, code=data) into an inconsistent-from-the-beginning, already broken, frankenstein monster language.
Name:
Anonymous2011-10-21 18:06
>>9
how are monads hacks? they seem pretty well grounded in math and CS theory. it's actually quite simple once you become familiar with it.
Name:
Anonymous2011-10-21 18:10
>>6
in C you do everything in state, even basic things like looping ("++i") require side effects.
In a functional language, you tend not to loop over anything. You transform data structures into new data structures. It's memory inefficient, but it reduces headaches.
Name:
Anonymous2011-10-21 18:12
>>10
I think by "managed refs" he means transforming data structures to new data structs via function application, which requires automatic memory management. (or headaches.)
Actually mr. hickey is all about practicality. clojure really doesn't try to be new. it takes the parts that have been proven to work from other functional langs. he's pretty humble and i dont see how he's like jobs at all. have you even used clojure?
Name:
Anonymous2011-10-21 18:13
>>13
How about an example of this? Not actual code, but just a problem that would be solved by looping in an imperative language and without looping/state in a functional language.
Name:
Anonymous2011-10-21 18:17
Also this talk is fucking awesome. This is like the first cool link I've seen on /prog/ in some time.
>>18 TERRIBLE example. Looking at that page for one second reaffirms everything I've ever thought about functional languages being absolute shit. If code that looks like that is the alternative to imperative programming, I'll take imperative programming from now until the end of time. Sorry.
the second returns a new list with doubled contents. The first modifies the list in place. The second has no side effects on anything. (of course, under the hood, the CPU's state is changing a lot, but you never have to think about it.)
basically it's about using arguments and return values. things that help you code this way are: lambda, GC.
Name:
Anonymous2011-10-21 18:35
>>1 gives a great explanation for the difference between simplicity and ease.
claims that CL and Scheme are complex because of QUOTE, I guess.
>>14
No, by ,,managed refs'' he means ,,managed refs''[1].
>>15
That's (almost) exactly what I said: he took some good ideas and mixed them together. Albeit good ideas, they don't necessarily make the language good.
It's got already broken features, inconsistencies and design warts, but when I look at Clojure it's just to see the evolution of those ideas together, so I don't mind.
Still, I don't accept mindless fanboyism over the language, it's not The Best Thing Since Sliced Bread.
The Jobs part was more an exaggeration, but he does play the revolutionary.
>>19
Holy shit, I clicked the link a second time and somehow it was worse than the first.
Ok, let me try this again... I have an array of pairs of strings. Say they represent (first_name, last_name). I want to be able to sort the list by either first_name or last_name. In C, I'd probably use a simple data structure that's an array of structs, where each struct contains two char * pointers. To sort them, I walk through the array in whatever fashion my particular sort algorithm requires and I swap structs here and there until my algorithm says that they're all sorted.
In doing this, I've definitely changed a lot of "state" because I've moved those structs around and overwritten memory contents. What's so bad about that, and how does functional programming do better?
>>24
I'm the C guy in this thread and even I can see why his example would lead to a dependence on GC. You're basically passing/returning everything by value. You'd better either have infinite memory or GC or find some way to manually manage all those copies of your data.
Name:
Anonymous2011-10-21 18:45
>>23
in functional programming, your sort function would return a new sorted list (and would probably take, as an argument, a function to compare the members of the list)
This has the advantage that you don't have to worry about what else might have been using the old list. (for instance, your comparison function) It's still there.
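A minimal JavaScript sketch of the idea above (the function and data names are made up for illustration): copy the list, sort the copy with a caller-supplied comparator, and leave the original alone.

```javascript
// A non-mutating sort: copy the list, sort the copy, return it.
// The comparison function is passed in, as described above.
function sortedBy(list, compare) {
  // slice() makes a shallow copy; sort() then mutates only that copy.
  return list.slice().sort(compare);
}

const people = [
  { first: "Alan", last: "Turing" },
  { first: "Ada", last: "Lovelace" },
];
const byLast = sortedBy(people, (a, b) => a.last.localeCompare(b.last));
// `people` keeps its old order; anything else still referencing it is unaffected.
```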
Name:
Anonymous2011-10-21 18:48
>>25
Keep in mind that the C stack is basically an incomplete garbage collector. It's a form of automatic memory management. It's a fast form, but also limited. Scheme's automatic memory management is unlimited and there's no "now you're working with the stack, now you're working with the heap" thing.
yup. FP pioneered GC because memory allocation is a side effect and orthogonal to how computation should work. your brain has limited space and should be filled with important stuff, like the actual problem.
>>21
are you retarded? he said that CL and scheme are complex because the syntax can be ambiguous with the case of parens holding both data and computation. clojure has quote too, but it encourages you to use braces and brackets to denote sets and vectors.
>>23
You'd use a list of pairs. To sort them, you make a new (sorted) list from the original list without mutating it in whatever fashion your particular sort algorithm requires and return a new list.
The old list still exists, see >>26.
Functional programming's advantage here over imperative is not really about immutability but about the fact that it's possible to write a generic ,,sort'' function, by being able to pass a comparison function to sort.
>>25
I know, but there's no point in spamming it in every thread, and there was no point in saying it now. Keep it elsewhere, where it is relevant to the discussion.
>>31 but it encourages you to use braces and brackets to denote sets and vectors.
Last time I checked, Scheme and CL had standard syntax for vectors/arrays, and most Scheme implementations provided syntax for hashes, and CL had reader macros. Something changed while I was away?
it's possible to write a generic ,,sort'' function, by being able to pass a comparison function to sort.
That's possible in all variations of C, and in any imperative language that I know of. The std::sort() in C++'s standard library takes a comparison function, and that has been true for decades -- never mind the introduction of lambda in C++0x.
Name:
Anonymous2011-10-21 19:35
This thread has actually reduced my interest in functional languages and I wouldn't have thought that was possible. /prog/ actually accomplished something today.
Maybe a better way: functional programming encourages you to reduce your program down to its essential IO (of which the state of memory and the CPU are not considered a part) and for everything else, describe it in terms of declarative transformations which are easy to manage and reason about.
functional programming languages make this much much easier to do, to varying degrees.
>>35 Purely functional languages suck because they force you to express things uncomfortably at times.
Functional programming languages, especially dynamically-typed ones, allow you a much greater freedom of thought and liberty of code than you will find anywhere else.
Which do you find more readable?
r = [];
for (var i = 0; i < a.length; i++)
r.append( a[i] * 5 );
or
r = map(
function(x){ return x*5 + 1; },
a
);
Functional programming often puts emphasis on recursion or function application (a fancy term you'll see sometimes that just means "calling a function") rather than iteration.
Good examples of functional programming languages are (in alphabetical order) Javascript, Lua, Scheme. Be careful that Javascript and Lua code is often written in an imperative manner (by programmers who come from non-functional languages), while Scheme programmers try to avoid imperativeness.
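To make the recursion-over-iteration point concrete, here is a toy sketch (function names invented): summing a list with a mutable loop counter versus with recursion and function application.

```javascript
// Imperative: loop with a mutable accumulator and index.
function sumLoop(xs) {
  let total = 0;
  for (let i = 0; i < xs.length; i++) total += xs[i];
  return total;
}

// Functional: recursion; no variable is ever reassigned.
// Each call just returns a value built from the head and the rest.
function sumRec(xs) {
  return xs.length === 0 ? 0 : xs[0] + sumRec(xs.slice(1));
}
```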
Something to keep in mind with functional languages with immutable elements: while a "new list" of cons cells may have been created during sorting, the data the list references is not recreated. Presumably, in most cases the bulk of the memory use is in the data and not the list itself, so performance should be OK.
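A hypothetical JavaScript illustration of that sharing: the sorted "new list" is a fresh spine of references, but the element records themselves are the very same objects, not copies.

```javascript
// A non-mutating sort creates a new array (the "spine"), but the
// elements it holds are shared with the original, not recreated.
const original = [{ name: "Turing" }, { name: "Church" }];
const sorted = original.slice().sort((a, b) => a.name.localeCompare(b.name));

// Both arrays point at the exact same "Church" record.
const sameObject = sorted[0] === original[1];
```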
the second returns a new list with doubled contents. The first modifies the list in place.
in functional programming, your sort function would return a new sorted list
The old list still exists
I have never understood why this is a good thing. Why would you want to keep the unsorted list around unless you needed to use it later? It's a waste of memory and time.
>>35
To me, this thread only reaffirms my belief that functional programming languages and their users have absolutely no understanding of the practical issues of programming in the real world; they believe memory and memory bandwidth is infinite, and so is computation speed.
I have read SICP and studied FP. It's interesting that computations can be modeled without state, via the lambda calculus etc., but it's all just mental masturbation. Real computers, and indeed the Universe as we know it, are stateful (some interpretations of QM assert that new copies are made every time something changes, but regardless, it's state from our point of view). It's all as theoretical and impractical as Turing machines.
I have seen many beginning programmers write code that makes lots more copies of data than necessary (especially objects, which have more overhead due to constructors etc.), so immutability may be "easier" to think about in some ways, but once you learn how a computer really works (and that should have been the first thing you learned), it's trivial to see how much more efficient it is to not make new copies unless they're actually needed.
>>39 I have never understood why this is a good thing. Why would you want to keep the unsorted list around unless you needed to use it later? It's a waste of memory and time.
Escape analysis gets rid of it.
To me, this thread only reaffirms my belief that functional programming languages and their users have absolutely no understanding of the practical issues of programming in the real world; they believe memory and memory bandwidth is infinite, and so is computation speed.
Don't generalize. Retards are part of any programming language's userbase.
a[i] *= 5;
That's not what my code does; you are doing an in-place modification of the original list, and you forgot to add the one (it's x*5+1). My code assumed that you'd still have to do something with the original list afterwards. Otherwise, were the original list to be discarded, I would have written:
Escape analysis gets rid of it.
Does that mean it'll be converted into an in-place modification? That's not what my code does; you are doing an in-place modification of the original list, and you forgot to add the one
Yes, my point was to demonstrate the in-place modification. The +1 is not in your first code fragment either.
Name:
Anonymous2011-10-22 0:28
Well, putting aside the debate over the value of FP, there were a few sparks of real genius in Hickey's lecture. The "simple vs complex" point is brilliant. The "knitted castle vs Lego castle" is also brilliant.
If he has one valuable theme, it's the goal of reducing complexity and even seeing complexity as a poison that spreads all through your program as soon as you stop guarding against it. But he's silly to assume that "state" is the source of unwanted complexity.
As >>39 points out, state is just a fact of life. There's nothing inherently complex about an integer or a function that increments an integer in place.
Sometimes performance just doesn't matter, compared to the idea behind a piece of code. The in-place C for loop example makes the functional programming versions look bad when judged strictly on performance, but the C for loop has traded meaning for concrete but naive performance.
In fact, the C for loop has so little "meaning" that compilers won't optimize it, without some explicit pragma to do so, out of fear of fucking it up on aliasing alone. It generates basically these steps: is i < l? load from a + i * sizeof(a[0]), multiply by 5, store back to a + i * sizeof(a[0]), increment i, repeat. It just runs at the brute-force execution speed expected of a program written in C.
On the other hand, a form with non-ambiguous meaning can be slow, but it does have meaning and a potential to have better instructions generated.
It's a waste of memory and time.
For sorting, I'd figure if the waste of memory was significant, it would not be a significant waste of time, and vice versa.
It's interesting that computations can be modeled without state and the lambda calculus etc., but it's all just mental masturbation.
Functional does not model without state, though; functional simply doesn't hide state. Functional languages force state to be brought explicitly to the attention of the programmer, and force him to take care of the stateful behavior all the way through the function. The real complexity of any location of code can be eyeballed right away by the number of arguments within all function scopes of that location. What is actually different about functional is non-destructive updates and immutability; this forces a different approach to computation, with the benefit of added guarantees in larger concurrent systems. The lack of destructive updates requires one to become creative, which discourages manipulation/curating of data and encourages relationship modeling of data. It is the forced writing of non-retarded code.
Imperative languages hide the real deal about state, and therefore hide real complexity, by not requiring any of this. On the other hand, sometimes the real complexity between objects is understood to be just too big (I'd say things like games would fall under this), and one ends up using something else and putting up with the post facto extra debugging that may ensue.
Given that, it is just one paradigm, it's not particularly great for some domains compared to other domains. I think it works best when the target design has a "tree shape".
Real computers
... use GOTO extensively.
Name:
Anonymous2011-10-22 0:56
>>43 Sometimes performance just doesn't matter, compared to the idea behind a piece of code.
I just don't think that's true... ever. If you didn't care about "performing" your algorithm quickly, you'd just do it on paper and you wouldn't bother using a computer. The only reason computers exist is to do what we want them to do faster than we could do it by hand. I could make a FPS game where I draw each frame on paper and show it to you, then ask "now which button do you press?"
I have seen many beginning programmers write code that makes lots more copies of data than necessary (especially objects, which have more overhead due to constructors etc.), so immutability may be "easier" to think about in some ways, but once you learn how a computer really works (and that should have been the first thing you learned), it's trivial to see how much more efficient it is to not make new copies unless they're actually needed.
Immutability can sometimes save on copying. It is true that every time you want to directly modify an object, you must create a transformed version of the object, and then use the new transformed version in place of that object. If objects are defined recursively in terms of other objects however, immutability will allow you to have objects share data, and you will never need to worry about one of the objects modifying the data for its own purposes, causing strange effects on the other objects sharing the data. If you are going to make a copy of an object (get a reference to a version of an object that will not suddenly change state on you), then that operation is much more efficient in an immutable setting. All you have to do is acquire an extra reference or pointer to the object. If mutability is allowed, then you must make a deep copy of the object, and return a reference to this deep copy. The only way to know that no one else will modify an object is to create a new one, and not tell anyone else about it.
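A hedged sketch of that trade-off in JavaScript (object names invented; `structuredClone` assumes a reasonably recent Node or browser runtime): under immutability, "copying" is just taking another reference, while a mutable object can only be safely shared by deep-copying it.

```javascript
// Immutable setting: freeze the object, then a "copy" is just an alias.
// O(1): nobody can mutate `frozen` out from under us, so aliasing is safe.
const frozen = Object.freeze({ host: "example.com", port: 80 });
const cheapCopy = frozen;

// Mutable setting: the only copy nobody else can modify is a deep one.
// O(size of object), and every such "safe handoff" pays that cost again.
const mutable = { host: "example.com", port: 80 };
const deepCopy = structuredClone(mutable);
mutable.port = 8080; // deepCopy is unaffected, but we paid for the clone
```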
performance mattering is subjective, and it depends on the application of the program. If rendering code for a video game takes a while and yields 1 frame per second game play, then that will ruin the experience for the player, making the game useless and inaccessible. If a script that is run twice a day to make back ups of some logs takes a couple minutes longer than it could have if it was written in C, then most likely, nothing bad will happen, and the choice of using the scripting language over C would pay off in terms of development time.
But back to the functional and immutability stuff: some algorithms need mutability in order to keep the asymptotic bounds on their running times. There are efficient data structures that are completely immutable, but things like hash tables with arbitrary sequences of insertions and deletions need mutability.
Name:
Anonymous2011-10-22 1:21
>>46 If a script that is run twice a day to make back ups of some logs takes a couple minutes longer than it could have if it was written in C, then most likely, nothing bad will happen
A) In your example, performance still matters, because you do need the script to run faster than 12 hours in order for it to run twice per day.
B) It would be beneficial for that script to run faster because you'd reduce CPU load on that system.
C) It's pretty pathetic to imply that some programming paradigm is valid because, hey, sure it's horrible, but there are some rare cases where it's OK to be horrible and you probably won't notice the difference in these rare cases.
Performance always matters in the sense that there is a notion of what is acceptable. However, not optimal but adequate is often acceptable.
CPUs are pretty fast. It might be cheaper to just buy a faster computer.
It pretty much comes down to the cost of development time. If a programmer could write a quick script for something in 15 minutes, where writing a program in a high performance language would take a day or two, then the script is much cheaper. If there are no practical gains of using the more expensive program, then any rational person would choose the cheaper program. Some applications just don't require the speed to justify putting that amount of time and money into development. You could use the same argument to explain why people use C and not assembly for everything.
Just wanted to say, it's slow on computers designed to run C. http://www.cs.york.ac.uk/fp/reduceron/
Computers are just tools, they should accommodate our needs.
>>47 C) It's pretty pathetic to imply that some programming paradigm is valid because, hey, sure it's horrible, but there are some rare cases where it's OK to be horrible and you probably won't notice the difference in these rare cases.
Implying that the cases are rare, writing multi-threaded code in most mainstream languages is shit, and in this century a significant amount of code is server-side.
It is up to the programmer to produce the metric of what constitutes comparably horrible. A metric should consider trade-offs between different variables to satisfy the customer. Since you may feel file backup scripts could preferably be written in macro assembler with optimal placement of mnemonics to get the job done, it is probably not worth persuading you to the virtues of HASKAL.
Name:
Anonymous2011-10-22 2:28
>>49
Conspiracy by Intel to make CPUs which run C fast, and intentionally cripple languages to make them compile to C or lose the performance edge held by C.
Architectures just run their instruction sets as fast as they can. Any language that is translated to assembly instructions has an equal opportunity to run fast, although different languages make different assumptions about what's going on in the code, which makes it easier or harder to get good optimization. C is pretty simple and close to assembly, so it is good for that.
There seems to be a lot of misconception in this thread about state, why it's complex, and how it can be fixed. Some quotes:
He seems to think it's something called "managed refs" and that sounds like some kind of dog shit you'd find in .NET to me.
As >>39 points out, state is just a fact of life. There's nothing inherently complex about an integer or a function that increments an integer in place.
There is actually another presentation from Rich that explains all this points: http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey
I recommend you to watch it before discussing any further here, because at this point the thread has degraded completely into empty talks about assembly, CPUs and other irrelevant bullshit.
Also, I think this talk is even more informative and insightful than the one from this thread and I'll create a separate thread for it so more people would have a chance to see it.
Name:
Anonymous2011-10-22 4:44
Incrementing an Int? Heresy.
You create a new Object.IntFactory and pass OldInt value to Incrementator method which determines the proper return value and checks for overflows, thread-safely creates a new Int with value equal to OldInt + IncrementatorAddValue(default is 1, but you can override the method by calling IncrementatorByValue or SetDefaultIncrement(thread-local)).
purity is relative, obviously you are going have side effects but the idea is to isolate them and keep them separate from the actual program logic.
and the reason that old copies exist is so that other processes can access them. if you look at guy steele's talk, he mentions that parallel programs have to use more memory to decouple time from value. it's a necessary tradeoff for building large systems.
i dont think you ever read sicp or studied any functional language
Name:
Anonymous2011-10-22 5:02
SICP is obsolete. Clojure is dinosaur language paleontology. Java was a sick dog that Oracle put to sleep. The future is FIOC: Forever Indenting Our Code.
>>57
Then maybe you should go to /g/ or /pr/ or some other place that isnt about obsolete stuff.
Name:
Anonymous2011-10-22 5:10
Every intertwining adds burden to our brain
Every bug was written by someone, compiled cleanly, and passed all the tests.
Focusing on ease, you will get complexity that kills you
If you go for simple, you start slow, and eventually go faster
OO is easy, but yields complexity
Incidental complexity—incidental is Latin for "your fault"
We can learn more things, but can't get smarter
Simple doesn't mean less parts. Sometimes we need to have more parts to make every part simple
State/Object is complex; Values are simple
Methods are complex; functions/namespaces are simple
Variables are complex; managed refs are simpler
Inheritance/switches are complex; polymorphism à la carte is simple
Syntax is complex; data is simple
Loops are complex; set functions are simple
Actors are complex; queues are simple
ORM is complex; declarative data manipulation is simple
I did years and years of stateful programming, and it sucked
Nothing makes state simple
If you take two things from this talk: the first is simple vs. easy; the second is that you do not need the complex tools
Use only
values
functions
namespaces
data
polymorphism à la carte
set functions
queues
learn SQL
rules
We create complex data structures around data rather than dealing with data directly. Use a map rather than defining a type
You have to analyse the problem and make a decision on the result
Abstraction should separate What, Who, How, When/Where, and Why
Information is simple. The only thing you can do is ruin it
Leave data alone
It's your fault if you don't have a simple system
Guardrails don't yield simplicity
Develop your sensibility around disentanglement
All reliability tools (tests, etc.) don't matter
Name:
Anonymous2011-10-22 5:24
Syntax is complex; data is simple
fuck off lithpfag
Name:
Anonymous2011-10-22 7:02
Inheritance/switches are complex; polymorphism à la carte is simple
switch(x) is more complex than a whole class of zygohistomorphic prepromorphisms? Cool. I can certainly see how it's more capable.
Name:
Anonymous2011-10-22 7:03
If a script that is run twice a day to make back ups of some logs takes a couple minutes longer than it could have if it was written in C, then most likely, nothing bad will happen, and the choice of using the scripting language over C would pay off in terms of development time.
Except if that script will be run twice a day, every day, for an undetermined number of years. An extra minute a day is 6 hours every year. If that script took several more minutes to develop in C, you've still gained that time back after a while.
"Programmer time is expensive" only makes sense in the context of writing code that will be used a very limited number of times by a very limited number of users. Otherwise the initial time investment will be recovered very quickly.
I can write a script in an HLL in 2 minutes to perform a task in 10 minutes, or I could take 10 minutes to write it in C, so it will perform the same task in 5 minutes. If I'm only going to run it once, the HLL makes sense - 12 minutes total vs 15 minutes for C. If I'm going to run it twice, it changes -- 10*2 + 2 = 22 minutes, vs 5*2 + 10 = 20 minutes. And the more times you need to do that task, the more time you'll save with your initial investment.
CPUs are pretty fast. It might be cheaper to just buy a faster computer.
Speed is NOT infinite. Do not assume hardware will keep getting faster, because its improvement is already slowing down.
It pretty much comes down to the cost of development time. If a programmer could write a quick script for something in 15 minutes, where writing a program in a high performance language would take a day or two, then the script is much cheaper.
No. It depends on how many times the script will be used, and by how many users.
Name:
Anonymous2011-10-22 7:56
>>62
Watch the video to understand in what meaning the word "complex" is used here.
Name:
Anonymous2011-10-22 7:58
>>63
But is the machine loaded to the point that those extra minutes matter?
>>59
I'm interjecting again for a moment, but:
OO is easy, but yields complexity
OO really means nothing; even Haskell can be OO by some definitions of OO. I'm assuming Java-style class-based OO.
State/Object is complex; Values are simple
State is not complex, unmanaged state is. Clojure has state.
Variables are complex; managed refs are simpler
No, conflating the concepts of mutability and variables is complex. Variables should be immutable, and a dedicated datatype/thing should be used for mutability. Clojure does get it right, as do ML (refs) and Racket (boxes, although it lacks immutable variables).
Inheritance/switches are complex; polymorphism à la carte is simple
No, conflating subclassing, subtyping and polymorphism is complex.
Syntax is complex; data is simple
Complex syntax is complex.
Loops are complex; set functions are simple
Loops are not complex; not having a way to abstract them away is.
Actors are complex; queues are simple
You just said functions are simple, so actors can't be complex. The Sussman noted they are equivalent.
Nothing makes state simple
Again, state is ok if controlled.
You do realize that run-time doesn't matter as much, because it (in the backup script example) involves only machine-time, not human-time?
putting electricity and server upkeep aside, development time is much more relevant than the time it takes the machine to run your code
Name:
Anonymous2011-10-22 14:05
>>39
the advantage is not having to think about whether you need it later. It's incredibly freeing for your brain. The whole point of non-assembly programming is to let you ignore details.
Maybe a metaphor gets it across: imperative programming says I have this shiny new 2. It is good and serves my purposes. I WUV U 2!! ... FUCK YOU 1 I NEVER LIKED YOU GO FUCKING DIE AND- wait you don't have any friends or family right? Probably not. *shoots 1 in the face* *plops 2 down on his corpse.*
FP says I have this shiny new 2. It is good and serves my purposes. I WUV U 2!! who the fuck is 1?
Name:
Anonymous2011-10-22 14:42
How would you do this efficiently in a purely functional way? It's for simplifying a multiplication.
; Not tested, but I hope you get the idea.
(let ((vars nil)
      (consts nil))
  (dolist (x terms `(* ,@vars ,(apply #'* consts)))
    ; AIF is the anaphoric IF macro.
    ; CONST tries to coerce its parameter into a constant number and returns NIL if it can't.
    (aif (const x)
         (if (zerop it)
             (return 0)
             (push it consts))
         (push x vars))))
>>70
Oh, I see. Thanks for your insight.
I feel bad for having forgotten the equivalence between accumulation parameters and mutable places.
Name:
Anonymous2011-10-22 17:03
In all this discussion, I still haven't really seen a reasonable argument against state. Hickey just says it's complex because "it complects time and value." Pretty vague.
If I have an egg, and then I fry it, I have a fried egg. Physics doesn't allow me to pass a copy of the egg to the frying pan and keep the original egg in the carton for later. Passing everything by value/copy is a more complex concept because it's not intuitive in our universe.
Name:
Anonymous2011-10-22 17:08
>>72 Physics doesn't allow me to pass a copy of the egg to the frying pan and keep the original egg in the carton for later.
You don't have to ask 'em.
>>72
State in itself isn't bad. However, many bugs are introduced because programmers' expectations are not consistent when state changes, and all sorts of state-related assumptions are made. State-related bugs don't happen in functional algorithms because the logic is consistent.
it's not intuitive in our universe
Please try not to use intuitive in the context of programming as this word is meaningless in this context.
>>80
I'd say intuition shouldn't be conflated with familiarity, programming in general is unintuitive to the population at large. People with the knack often get exposure to programming with an imperative language. Functional and constraint languages seem unintuitive to programmers seasoned to sequential reductionism but not to functional reductionism or logical description; however, people unfamiliar with any of this may find these ``non-intuitive'' modes of thinking easier to grasp because they fit well with related lay knowledge in math and logic.
>>79
That's why total programming can be useful and non-Turing-complete programming languages exist.
Name:
Anonymous2011-10-24 17:09
>>54 degraded completely into empty talks about assembly, CPUs and other irrelevant bullshit.
There it is. This is the heart and soul of the problem with idealists like Hickey. He even says several times that GC is great because the programmer shouldn't have to worry about managing memory.
A computer is a bunch of bits and some logic that can toggle those bits, and nothing more. If you refuse to see the computer for what it is, then you shouldn't be a programmer -- you should be a mathematician.
So you want to write the fibs program. The idealist wants to pretend that integers are arbitrarily large and the program will just run forever, spitting out the sequence onto the monitor. The programmer knows that there are many limitations to this program:
- if your integers are fixed-size, they will either saturate or wrap
- if your integers are variable sized, they will consume all of memory
- if you're relying on GC, you'll hit the memory limit even sooner than if you managed your own memory
- if you're outputting to a file, the file will fill up the hard drive
- if you're outputting to the screen, the output will reach a point where even a single number won't fit on that screen
If you don't consider these kinds of limitations to be important, then you're not a programmer. Every program that has ever actually been run has been limited by the available hardware. Ignoring reality just makes you lazy.
non-sage-ing because this is the only recent thread worth a shit
>>85
Maybe the primary mistake is thinking that memory management is anything but just another problem to be solved by a programmer. If there's some GC algorithm out there that suits the needs of the program you're currently writing, then use that GC algorithm just like you'd use your favorite sort algorithm. But don't try to pretend that it isn't an algorithm or that it isn't part of programming. Yes, it's great when you can use an existing library to solve a problem, but it's dumb to think that someone's going to write a library (GC or otherwise) that magically works in every situation. Again, you're just being lazy.
Name:
Anonymous2011-10-24 20:41
>>85
Hey fagstorm, if we had less fucktards like you, we wouldn't have as many security flaws. I'll be writing fast, useful, efficient and most importantly, featureful programs as you debug your crashes that only occur when turning on -O3 and -DUSE_SMP. Now go scrub another midget you fucking faggot.
>>85
Go back to Real Programming in Fortran to perform ENTERPRISE INTEGER OVERFLOW on some rockets and leave us Quiche Eaters to conjure our spells.
>>87
This.
Name:
Anonymous2011-10-24 22:35
>>85 >>86
...sure... but the only reason to write your own memory manager is performance, and the thing is, in the near future, performance is going to come from multi-core processing. And manually doing that is just too hard for humans if your code is side-effect-ridden and low-level. Being good at manual memory management in 2011 is like being good at making buggy whips in 1911.
Name:
Anonymous2011-10-24 22:44
>>89
It really depends on the application. Low-power, high-function computers like phones and handhelds still need specialist knowledge to get the most out of the machine; this also includes low-power, low-cost specialist machines like media service devices or specialised service machines. Many people never work with such constraints, so they don't require such knowledge.
This is really just an application of using the right tool for the job. I like using other people's tools whenever possible so that I can invest my effort into ensuring the computing experience is logically correct and acceptably quick.
- if your integers are fixed-size, they will either saturate or wrap
x \in Z/nZ where n = 2^k, and where k is either 8, 16, 32, or 64.
- if your integers are variable sized, they will consume all of memory
Let M be the amount of memory in bits dedicated to storing integers, let I be the set of all active integers, and let V(i) be the value of integer i. Then the following must hold:
\sum_{i \in I} ceiling(log2(V(i) + 1)) <= M
- if you're relying on GC, you'll hit the memory limit even sooner than if you managed your own memory
Let T_GC be the time at which you run out of memory while using GC, and T_MAN the time at which you run out of memory using malloc/free-style memory management. Then:
T_GC < T_MAN
This isn't necessarily true, though. Malloc/free style can lead to pretty bad fragmentation, while with a copying garbage collector you can compact the memory that is in use. It is true that you'll always have to keep part of the heap free so you'll have some space to copy stuff to, but if you use a copying collector with many generations, you can make the unoccupied part small.
- if you're outputting to a file, the file will fill up the hard drive
Let F(t) be the number of bits you have output to a file after time t, and H the number of bits on the hard drive. Then we must have F(t) <= H for all t.
- if you're outputting to the screen, the output will reach a point where even a single number won't fit on that screen
Let S be the number of digits that can be shown on a screen, and N be a multiset of numbers which you would like to display on the screen, in base 10. Then, if you don't use delimiters, the following must hold:
\sum_{n \in N} ceiling(log10(n + 1)) <= S
and if you are using a single separator character, then:
\sum_{n \in N} (ceiling(log10(n + 1)) + 1) - 1 <= S
If by "security flaws" you mean things like buffer overflows, that's another artifact of teaching idealist bullshit first. Memory is NOT infinite, things can NOT grow arbitrarily long.
When people learn to drive, they learn to realize their car has a certain size and don't try to drive it into places where it won't fit.
What's so fucking hard about making sure the memory you're using is the right size? You don't even have to estimate, just count! It's so simple it should be common sense, and yet it isn't...
...because those who were taught the stupid "infinite resources" bullshit were conditioned to think otherwise.
Ignoring the reality and trying to cover it up by inventing more shit on top of it doesn't make it go away.
>>95 What's so fucking hard about making sure the memory you're using is the right size? You don't even have to estimate, just count! It's so simple it should be common sense, and yet it isn't...
Don't ask me, my code is fucking perfect. Ask the fucktards whose commits I have to watch over because around 20% introduce a major security flaw. You've never worked in a team project? Then fuck your shit. I want something that the retards I have to keep in line can use without doing too much damage -- and I know damn right that it ain't C (or C++, I simply can't imagine what creative abominations they'd come up with if allowed to use C++).
Ignoring the reality and trying to cover it up by inventing more shit on top of it doesn't make it go away.
If someone writes shit in <proper low level language>, the only way I can fix it is by reverse engineering it and doing a full rewrite (the next person who submits complex code without comments or accompanying documentation, I swear I will key their fucking car). If someone writes shit in <proper high level language>, at least I can somewhat optimize it (assuming the code isn't completely broken), for example by using more specific data containers (typed instead of generic, in dynamically typed programming languages), at least in the cases where the JIT's type inference can't figure shit out. In any case, the code will be shorter and easier to read, and hopefully I won't have as much work to do.
All in all, you are half of what is wrong with this world. The other half are the fucking java monkeys who just fell out of the TreeFactory and hit every enterprise branch on the way down.
a.sdkfjas;lkfjw I hate the world ;_;
Name:
Anonymous2011-10-25 1:58
>>95
you know what else is really simple? NAND gates. But I don't program in them. Just sayan.
Name:
Anonymous2011-10-25 2:00
>>95
Well, just to be clear, >>95 is not me: >>85,86. I'm actually not sure whether >>95 is trolling me or >>87, but it doesn't really matter.
Being good at manually managing memory is extremely valuable today. I don't care whether anyone here believes it or not, but I've made a lot of money by understanding computers and not treating them like some theoretical toy.
My point is that they'll always have finite speed and storage. It's nothing more than laziness and short-sightedness to hold out for some imaginary gleaming future where the amount of memory or processing power crosses over some arbitrary threshold, and then, finally, they'll be good enough that we don't have to worry about managing memory. The computer I learned to program on had 64KB of memory, and 32KB of that was used by the OS and BASIC interpreter. A cheap new computer today has 4GB, of which over 3GB is available to applications. That's a factor of 100000 increase -- and guess what -- every "cutting edge" game for sale on the shelf next to that new computer still has to worry about how to efficiently use that 3GB. We're still managing memory manually.
That shiny future that you're waiting for isn't coming because the systems' resources dictate the applications' design, not the other way around. That's not going to change, because I can write an application to make good use of any amount of memory that you can give me. There will never be a computer fast enough or a memory large enough.
>>96
First off, why would you respond to a troll like that?
Second, this: my code is fucking perfect
is either a worse troll than >>95 or proof that you're a freshman in college. The rest of your post makes it sound like you're on some group project with a bunch of retards and you think it's all C's fault.
>>97
But knowing how to would make you better at VB.Net or whatever it is that you do program in.
Name:
Anonymous2011-10-25 2:34
>>98
if you're programming for a modern personal computer or smart phone, memory is effectively unlimited. This is by far what most programs are written for today. In my professional life, the only time I have/had to worry about amount of memory is on video cards when writing shaders. Other than that, it's never a problem. Even (or even especially) when I'm writing in a functional style, because there's so much less "defensive copying" going on when you can safely share immutable data structures. Copying of that kind is epidemic in side-effect heavy languages like C++ and Java.
the claim is not that optimization is not important, it's that we are optimizing *wrong*, just like people in the early 80's who were still writing assembly were optimizing wrong. Efficiency comes from simplicity, not from clever hacks and too much attention to system resources. The same thing that happened to assembly -- computers becoming better at writing it efficiently than humans -- is happening and is certainly going to happen more to: memory management, multithreading. Moreso multithreading than memory management, but the thing is, proper, lockless multithreading requires an abstraction of memory and the only reason we are so very concerned with optimizing memory and caches in the first place is because we are so very terrified of writing functional code that nobody on the hardware side is putting much effort into making it worth it.
I'm not claiming that low level programming will be a forgotten art. All abstractions leak. But it really is attitudes like yours that are holding us back. we've hit this single-thread performance bottleneck and nobody wants to put a lot of effort into the right solutions because nobody wants to learn them or take a step back and think about their programs differently.
>>98
Nobody should be allowed to learn a high-level language before a low-level language along with assembly. That way, you'll learn how to recognize and cherish those things that are done automatically for you, and keep in mind that the VM is not magic and will not magically optimize your code for you. As for the manual memory management, fuck this I'm not repeating all of my pro-GC arguments in every troll shit thread; I'll maybe write it into a kopipe (like the anti-Python kopipe), but I'm too drunk to do it tonight and tomorrow I won't remember.
>>99
Yeah, my code isn't really perfect, but my mistakes are often off-by-ones and things that show up immediately on valgrind. It's very rare that my mistakes make it into an actual commit.
The rest of your post makes it sound like you're on some group project with a bunch of retards and you think it's all C's fault.
Fuck you, look at GNOME. Oversimplified crashing piece of shit that takes tons of memory. Computers used to run just fine with much less memory. If GC and a safe language will rid me of 90% of the crashes and maybe even lead to simpler and faster code, then so be it. Why does driving require a license but writing code that can bring down the fucking machine doesn't?
fuck why do I respond to shit like this
Name:
Anonymous2011-10-25 3:10
>>102
While I agree with your overall sentiment, I am compelled to add this to the discussion.
Computers used to run just fine with much less memory.
Computers couldn't do as many things before as compared to today. Have you really forgotten what internet video was like on standard machines before 2006? The reason we can do so much now is because computers can now crunch through the bits a lot quicker.
>Why does driving require a license but writing code that can bring down the fucking machine doesn't?
All sorts of idiots need to be accountable towards every other car driver on the road. Programmers that write important code for a machine ought to have an implicitly high standard; not only that, the code ought to pass through a QC team before being put into production.
>>101 But it really is attitudes like yours that are holding us back.
Wait, explain that (I'm not being defensive about it, I really want to know what you mean, and I'm open to criticism)
I'll even sum up my "attitude" for you:
- I got a masters and worked in Silicon Valley at a really big company that you probably know really well
- I did a lot of low-level coding there, all the way down to the hardware
- From there, I started a small game company that was moderately successful
- From there, I started another company that makes both hardware and software and this company is currently right in the midst of substantial success
- Over that span, I coded at every level from logic gates to assembly to C to C++ to Perl to Java to C# to a handful of languages that I designed myself to suit a very specific need.
I am comfortable with bare metal languages, device drivers, BIOS code, OS code, game engines, scripting languages, interpreted languages, writing interpreters, writing compilers, code that writes code, genetic programming, you name it...
This current project requires me to do an odd mix of all of it. I have to design hardware, I have to write firmware to go into that hardware, I have to write drivers to communicate with that firmware, and I have to write really high-level GUI code to allow someone to actually use any of it.
In that massive chain of computing, there are times when I have to get something done in 4KB. More often, I have 4GB of memory plus 4GB of pagefile swap space. And in spite of that, there are parts of that high-level GUI code with inline __asm{} and it isn't just to show off my ASM skills. And believe it or not, that high-level GUI code manages its own memory because it has to. I've fucked it up several times and the result is a PC that becomes completely non responsive because it has simply run out of memory. Even in this modern age, computers do not fail gracefully when you consume all of RAM.
Anyway, the point is that I am neither a naive, college graduate, iPhone app hacker nor a Luddite clinging to the glory days of 8086 asm. And my "attitude" is that I can assign a real dollar amount to the value of not pretending that a computer is an imaginary mathematical construct with infinite resources, and not waiting for the day when GC finally becomes "good enough."
Coming from an entry-level programmer, I would have loved to get deeper into assembly (learned the basics) but effective resources are so difficult to come by.
And I totally agree with your method. I'm a mechanical designer, and back when I was learning drafting several years ago, we did everything by hand on vellum then did it over in AutoCAD and the likes. Very effective.
Name:
Anonymous2011-10-25 3:56
>>104
If you're working on your own hardware then you aren't part of the problem. The problem is cultural. The majority of manual memory management proponents work on programs that assume 4GB of memory or more and work on projects that are orders of magnitude more complex than they have to be and can't be parallelized because of the premature optimization and aforementioned complexity etc etc.
The problem is the mutual worship of the people writing software and the people creating hardware leading us in a completely arbitrary direction. If you're making your own hardware then it's all what you need, presumably, and you aren't in this hellish loop of progress prevention.
also what the hell kind of device has 4MB of memory. Does it fit in someone's contact lens or what?
Name:
Anonymous2011-10-25 4:02
>>108 also what the hell kind of device has 4MB of memory. Does it fit in someone's contact lens or what?
Specialist devices such as consumer network routers and modems, home automation systems, car audio systems. Computer contact lens would require a lot more memory than 4MB.
>>107
I didn't say "4MB" anywhere. I said "4KB" in one instance and "4GB" in another. The 4KB limit is there because sometimes that's all you get in firmware. In a lot of cases, you even have to tell your compiler or RTOS in advance how much of that 4KB will be stack space/heap/etc... It sucks. The whole system has more than 4KB, but it's very common that some particular task only gets a tiny little piece of that to work with.
But like I said, I also work on the PC side of things. And in fact, the PC software is also necessarily multi-threaded. And it's also very complex. You seem to be implying that there's just no way to do concurrency without throwing in the towel on memory management, but I can assure you that's not the case. Even modern FPS games generally put their AI and possibly other tasks like physics or audio into separate threads.
people writing software and the people creating hardware leading us in a completely arbitrary direction
Sorry, I just can't buy this... All the hardware guys ever do is make faster CPUs and bigger memory, and you can't really call that an arbitrary direction. And who cares what the people writing software are doing? You can use any platform/language/paradigm you want. Are you concerned that your particular choice won't have enough library support or something?
Name:
Anonymous2011-10-25 4:27
>>109 And it's also very complex. You seem to be implying that there's just no way to do concurrency without throwing in the towel on memory management, but I can assure you that's not the case.
I'm saying it's impossible to do multithreading that ISN'T very complex without throwing in the towel on memory management. Or, as Rich was saying, I don't know of a way to reify time without allocating memory.
Even modern FPS games generally put their AI and possibly other tasks like physics or audio into separate threads.
It's my position that at this point in history, the hardcore games should be running on 30 threads, not 3. And they basically do, but only by means of the GPU (which is now being expanded to work for physics.)
Sorry, I just can't buy this...
the story is:
guy who has to write really fast code: this is how the hardware works, so I'll code very specifically for it to get the best performance.
guy who has to make the next really fast hardware: this is how the guys who have to optimize hardest are writing (nobody is using more than a few threads) so this is how we should design processors.
customers: I really need THIS processor to run my latest game...
programmers who aren't even working in a CPU-bound domain: well this is what the cool guys are doing so...
>>104
I think in the domains you've been involved in, low level programming practices like manual memory management are either the norm or necessary in the contemporary situation. Nobody would attempt using GC on a mid-range PIC microcontroller when it has 150 bytes of data RAM and 4K words of code flash total. However, there are a lot of domains where Turing-like abstraction is either acceptable or the only reasonable idea.
More importantly, economic factors do apply to evaluating the effectiveness of these approaches. When it comes to the wastefulness of GC, the overhead of a hefty run-time for these non-imperative systems, and other abstractions, the factor would be roughly the ratio of the number of bytes to the number of minutes the programmer has in a lifetime. A few decades ago, every byte had significant value, so assembler was useful; a few decades before that, bytes were even more valuable: they would fashion iron doughnuts crossed with wires in an array, by hand. Doing this exercise now would be foolish.
Speaking of which, in hardware, there is a ton of abstractions, I would say there is even more progress of ``turingnization'' in that field than there has been in software as of yet. At around the time between LSI and VLSI, around the first Mac I believe, they prototyped with wirewrap, likely ``drew'' the circuit board design by hand, and soldered circuit boards by hand.
Now, people consider CPLDs primitive; they buy and use ``IP cores'', they may connect up virtual wires in a CAD tool (some of them can auto-connect to GND and Vdd based on part descriptions), the circuit can be type checked and to some degree debugged; then they pass the schematic to some EDA tool like EAGLE to automatically route paths and TADA, a circuit board, and then generate Gerber files, drill files, pick and place files, etc. Email files to China. These tools exist because they are needed to remain competitive and improve metrics of profitability; the same trend must be happening in software, since all the big players are picking up things LISP did 40 years ago.
I suspect that ``GC is shit'' thinking will be transitory.
There was skepticism that FORTRAN could overtake assembly.
There was skepticism that Pascal's ``structured programming'' would be as flexible as FORTRAN GOTO.
The cycle goes on.
Functional programming research is about doing the things that the school of thought behind software engineering has been attempting to band-aid over for decades. Computer science is about concrete mathematical structures; likewise, programming language research focuses on better software construction based on concrete mathematical structures. Since programming proper is divorced from the natural world (other than when hardware limits are reached), traditional engineering concepts can only apply so much: there is no such thing as a weakened byte, or a shear modulus of a function call. Likewise, the idea of calculating metrics of N levels of ``bug-free-ness'' in X KLOC of code that will arrive in Y days and cost Z dollars total is a crock.
Name:
Anonymous2011-10-25 5:45
Don't ask me, my code is fucking perfect. Ask the fucktards whose commits I have to watch over because around 20% introduce a major security flaw. You've never worked in a team project? Then fuck your shit. I want something that the retards I have to keep in line can use without doing too much damage -- and I know damn right that it ain't C (or C++, I simply can't imagine what creative abominations they'd come up with if allowed to use C++).
What the hell are those fucktards doing writing code in the first place? Fire 'em!
Have you really forgotten how internet video was like on standard machines before 2006?
Actually, bandwidth was the main bottleneck. Even a Pentium 233 can play 320x240 MPEG-1.
The reason we can do so much now is because computers can now crunch through the bits a lot quicker.
Give this a read: http://hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Believe_Who_Wins
Also http://www.menuetos.net/ - the problem is not that we need faster hardware, but that software needs to be more efficient.
Name:
Anonymous2011-10-25 7:19
>>114
There's a fine line between writing efficient software and shipping software ASAP. Reusable API's exist for the sake of the programmers so they don't have to invest effort into constantly rewriting general software.
The philosophy I follow is:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil - Donald Knuth
In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering - Donald Knuth
>>115
I agree - 10% of the code takes up 90% of the time, so it's often pointless to optimize the remaining 90%.
Name:
Anonymous2011-10-25 13:56
>>115,116 The philosophy I follow is We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil - Donald Knuth
This is a great philosophy, but if you really believe it, then the first thing you should do is dump functional programming. You're doing exactly what Knuth says you shouldn't. You're bringing in the huge, expensive "optimization" of immutability and garbage collection across the board, whether it's needed or not. If you really wanted to follow that philosophy to the letter of the law, I don't see how you could be anything but a C programmer. C gives you absolutely nothing out of the box. If you want GC or FP or OOP idioms, you have to add them via libraries or roll your own. And you're not going to find a language with more broad library/API/OS support than C, so get libraries for the 97% and spend your time doing the remaining 3% right.
This is a great philosophy, but if you really believe it, then the first thing you should do is dump functional programming.
You're talking about purely functional programming, right? Because in a language that merely emphasizes functional programming, there's nothing stopping you from heavily mutating things all while nicely compartmentalizing computation as you would in functional programming.
immutability and garbage collection across the board, whether it's needed or not
If you can't recognize whether the JIT can optimize away unneeded copies of things (e.g. via escape analysis) and that you should replace them with direct mutation, or you can't tell when the JIT will be unable to choose a type-specific container and you should enforce (or hint) it manually, then you shouldn't be allowed to use a high-level language. As simple as that.
If you really wanted to follow that philosophy to the letter of the law, I don't see how you could be anything but a C programmer.
C, when compiled directly to machine code, will always yield lower performance at the same levels of complexity as a higher level language (kindly resist the temptation of pointing out shitty languages with even shittier implementations as a counterexample to this). A well-written JIT may yield faster code than a static compiler via profiling.
All in all, kindly stop hating on a certain way of doing things simply because some (if not most) of its proponents are cretins.
Name:
FrozenVoid2011-10-25 15:15
>>117
GC languages have a niche for their use as user-friendly scripting glue or interpreters, not performance. GC implementations today are very resource intensive, and these languages have a layer of safety cruft which is a bottleneck compared to C, as is often seen in benchmarks.
>>119
Fuck off and die back to reddit, fhuckhface.
Name:
FrozenVoid2011-10-25 15:29
The real problem is twofold:
1. People add safety cruft into boxed objects and OOP where it lowers performance (getters and setters, access through a layer of method calls).
2. GC algorithms (especially stop-the-world) interrupt the flow of the program. Most of the objects are dynamic and need to be checked for references inside a huge dependency tree: this can't be fast by design; either there are leaks, or with strict checks the GC will steal a lot of execution time (stop-the-world will halt the program if executed on the same thread).
>>118 C, when compiled directly to machine code, will always yield lower performance at the same levels of complexity as a higher level language
This is a highly debatable assertion, but regardless, you're just proving the point of >>117. Start with C because it's simple. That's Hickey's mantra, after all. You're preaching about not choosing a language based on performance (premature optimization), but then you say not to use C because it's lower performance.
And I hope you do not believe the assertion that speed is everything - speed AND memory consumption, i.e. total efficiency, is more important. I wish that site would publish efficiency metrics, but I did my own analysis on their data and C/C++ comes out 1st place in 9/14 of their benchmarks.
>>124
It's closest to how a computer actually works without being assembly language.
Name:
Anonymous2011-10-25 20:57
>>117
I really hope this is a troll post. It makes 0 sense to me. There is nothing clever, complex, or optimized about functional programming. You work directly at the level of the problem you are solving, using the simplest of semantics that yield no incidental complexity.
This is not how C works.
Name:
Anonymous2011-10-25 21:11
>>127
well that's the thing, FP is simple in terms of the problem you are trying to solve, and C is simple in terms of the hardware you're trying to manipulate.
Name:
Anonymous2011-10-25 21:39
nobody is saying you should write device drivers in Clojure, just that you should write the things you normally write in Java in Clojure
Name:
Anonymous2011-10-25 21:41
GUYS.... I just noticed this thread was still going, and took a massive SHIT. It was totally simple, and easy at the same time.
>>125 Compare its feature set and implementation to other commonly used languages. The other languages will be comparatively complex.
500+ pages of unspecified behavior? Underspecified != simple, unsafe != near the machine, bad design != low-level.
>>126 It's closest to how a computer actually works without being assembly language.
Except it isn't. Forth is much more ``low-level'', simpler, and more powerful than C.
C can't even manipulate its own return stack.
>>123,126
People gave up on using "benchmarks" as the measure of a language's efficiency long ago, for several reasons. Probably the most important one is that they're too easy to manipulate. There was a great site, years ago, where you could choose your favorite language and it would generate benchmark results that showed that your language was the fastest. And the benchmark results weren't fake, they were just tailored to the strengths of whatever language you chose.
The only benchmark that really matters is what people do with languages in the real world. Whether /prog/ likes it or not, C and C++ are still the standard for things where performance matters. When Modern Warfare 15 is written in Clojure, then we'll all acknowledge the fact that functional programming is really swell.
Boo hoo, I know that pointing out the fact that the "industry" still uses C and C++ is considered a troll here, but hey, suck a dick.
Name:
Anonymous2011-10-30 6:20
>>132
But benchmarks are not easy to manipulate when they're about solving real-world problems, like the ones on that site.
And real-world problems are where everything matters.
1. Things don't actually need to be fast, they only need to appear fast (caching, progressive updates, etc.); there are a hundred other factors that determine software success.
2. Any language worth a shit has an FFI to handle the extreme cases of Pareto's principle that actually need to run within a tight time budget.
3. Since speed is absolutely everything, Lotus 1-2-3 continues to bask in the dollars as everyone waited for them to port an app COMPLETELY WRITTEN IN ASSEMBLY to Windows, while Excel was left to rot for being just barely Real World performance grade in the early '90s. Oh wait, no.
Name:
FrozenVoid2011-10-30 9:00
>>135
There are things that need to be fast or they're unusable.
Would you play any video at 2fps?
Would you wait 20 seconds to load every webpage?
Would you wait a minute for a VM to load each time a script is loaded?
>>136 Would you play any video at 2fps?
If a tight loop needs to convolve and shuffle bits, then don't write it in Prolog. Pareto principle.
Would you wait 20 seconds to load every webpage?
How much of Firefox as a full package is actually compiled code? I've heard a significant amount of the UI code base is written in XUL and JavaScript. Again, Pareto principle.
Would you wait a minute for a VM to load each time a script is loaded?
People still use CGI? Pareto principle.