
__future__

Name: Anonymous 2011-07-15 3:28

What is /prog/'s opinion on the future of programming?
-Will functional languages really take over?
-Are there any other paradigms rising out of obscurity?
-Will the hardware we run on change drastically? Would this affect how we program? Think quantum computers and shit (/tech/esque, but entirely related to what we do)

I think that in the future, multi-paradigm languages will become almost standard for writing anything. Concurrent programming is the way of the future; it is currently the only way Moore's Law will continue to be followed. However, there are elements of functional languages that make them difficult for practical use, so switching between and combining paradigms where necessary may be the way to go. The risk is that most languages end up looking like C++, or worse: a mess of different ways of achieving the same goal (I still love me some C++).

As such I don't see a massive shift to functional languages taking place just yet, though I do think we will see the incorporation of functional features into OO and procedural languages. Still, I feel that the need for functional languages is inevitable; we are moving closer and closer to a world where we have to use data from many different sources concurrently, and functional languages simplify this situation greatly.

I actually haven't learned a functional language yet, but am probably going to try to pick up Haskell and F# later in the year. Most of the information I've got on the functional style comes from http://www.defmacro.org/ramblings/fp.html

Given that all computers are Turing-complete, I don't think we'll make any changes to our actual programming style. High level languages will remain almost exactly the same, save maybe an idea or two. But I do think that the future of hardware could have a gigantic effect on cryptography. I haven't read much about this, but I am very interested in the effect it could have.

Name: Anonymous 2011-07-15 3:49

Moore's law is not a legal law that has to be followed

Name: Anonymous 2011-07-15 3:58

There will be a shiny new language that will save programming once and for all.

oh wait, that's bbcode.

Name: Anonymous 2011-07-15 8:12

I only see more and more bloat as the "industry" tries to keep programmers employed writing menial shit.

Name: Anonymous 2011-07-15 8:57

>>1
>Given that all computers are Turing-complete,
No, they aren't. But programming languages are.

Nothing will change if we keep von Neumann architectures.

Name: Anonymous 2011-07-15 9:10

*functional programming will work on a small niche of simulations in finance and physics.
*roles/mixins will slowly become of widespread use.
*virtual machines everywhere. all will support autothreading.
*mental interfaces like the one that will be released with the PS4 will allow us to convert from thought to source code directly.
*the next SICP will use perl6
*everyone will grow a new anus in 2012, just as the Mayans predicted

Name: Anonymous 2011-07-15 10:13

>*virtual machines everywhere. all will support autothreading.
Autothreading? I know your last three points are trolls, but the third one I'm not so sure about. Are you delusional?

There is no such thing as autothreading. There's not enough information in a programmer's written code to automatically figure out how to parallelize it efficiently, not in current languages and not in experimental language extensions.

Concurrency isn't hard. The current trend has nothing to do with the compiler automating it. Rather, it is based around task-oriented parallelism, which does hide the details of threads and scheduling of tasks, but you still need to ensure that access to shared memory is synchronized, or better yet that no two tasks can touch the same region of memory.

There is no silver bullet for concurrency. There is no sweeping algorithm for concurrency that automates everything, like how garbage collection automated memory management.

Certain languages try to push one or two idioms for concurrency, but it's forced as fuck; it's like trying to build a house with only a hammer and no saw.

The only way to approach concurrency effectively is to embrace a wide range of idioms and patterns and know which ones are best suited for your particular problems.

Name: Anonymous 2011-07-15 10:16

>>7
Also, it's my belief that concurrency is best solved not through complex language additions, but through library-supplied abstractions and minimal language additions if needed. Since there's no one-size-fits-all algorithm for concurrency, this makes sense: it's not hard to maintain or use a large library of differing concurrency primitives and abstractions, but trying to build all of that into a language is insane.
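As a rough sketch of the kind of library-supplied abstraction I mean, here's a task-based sum using nothing but std::async from the C++11 standard library (the function name and the fixed two-way split are made up for illustration; a real framework would chunk and schedule adaptively):

```cpp
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Two tasks sum disjoint halves of the vector, so no synchronization
// of shared memory is needed; the only coordination is the final get().
double parallel_sum(const std::vector<double>& v) {
    auto mid = v.begin() + static_cast<std::ptrdiff_t>(v.size() / 2);
    // Lower half runs on another thread...
    auto lower = std::async(std::launch::async, [&v, mid] {
        return std::accumulate(v.begin(), mid, 0.0);
    });
    // ...while the calling thread handles the upper half.
    double upper = std::accumulate(mid, v.end(), 0.0);
    return lower.get() + upper;
}
```

Nothing here required a language change; the abstraction lives entirely in library code.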

Name: Anonymous 2011-07-15 10:23

Name: Anonymous 2011-07-15 10:33

>>9
Ahh, so autothreading refers to a specific type of concurrency mechanism applied to junctions. It looks interesting, but I don't see it solving all, or even a modicum, of concurrency problems. It appears to be just a different flavor of a parallel map and reduce. Plus, junctions only work on scalar types; how are you supposed to compose higher-level operations on coarser-grained data types?
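For illustration, a junction-style any could be sketched as exactly that parallel map + OR-reduce (hypothetical code; std::async stands in for whatever runtime Perl 6 would actually use):

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Each task maps the predicate over one half of the data; the partial
// results are then OR-reduced. An autothreading runtime would choose
// the chunking dynamically instead of this fixed two-way split.
bool any_greater(const std::vector<int>& v, int threshold) {
    auto mid = v.begin() + static_cast<std::ptrdiff_t>(v.size() / 2);
    auto left = std::async(std::launch::async, [&v, mid, threshold] {
        return std::any_of(v.begin(), mid,
                           [threshold](int x) { return x > threshold; });
    });
    bool right = std::any_of(mid, v.end(),
                             [threshold](int x) { return x > threshold; });
    return left.get() || right;
}
```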

Name: Anonymous 2011-07-15 10:38

>>7
An example of a parallelizable language construct is Lisp's let, since its forms are evaluated in unspecified order.
The concurrency ``super hero'' Clojure could've done this, but failed as usual by providing only let*.
But yes, there's still no silver bullet for concurrency.
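The same idea in C++ terms: two independent ``bindings'' whose forms don't depend on each other, so an implementation is free to evaluate them in parallel before the body runs (a sketch using std::async; this is an analogy, not how any Lisp actually compiles let):

```cpp
#include <future>

// The two "forms" are independent, so evaluation order -- and
// parallelism -- is up to the implementation; the "body" only needs
// both values to be ready.
int let_parallel(int x) {
    auto a = std::async(std::launch::async, [x] { return x * x; });  // first binding
    int b = x + 1;                                                   // second binding
    return a.get() + b;                                              // body
}
```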

Name: Anonymous 2011-07-15 11:02

>>11
>An example of a parallelizable language construct is Lisp's let, since its forms are evaluated in unspecified order.
You need a special CPU architecture for this.

Name: Anonymous 2011-07-15 13:48

>>4
this.

There won't be a multi-core revolution. Not on a large scale, anyway. Industry and individual programmers alike will fight it tooth and nail.

Right now there's this retarded waltz of hardware manufacturers making things that programmers think they want and programmers coding things based on hardware design that they think is absolute. I don't see the dance ending any time soon.

Name: Anonymous 2011-07-15 14:00

>>13
Give us back the transputers.

Name: Anonymous 2011-07-15 14:22

>>13
Not quite.

The multi-core and heterogeneous computing revolution is happening in C/C++ land as we speak: generally in gaming, multimedia, office suites, medical, robotics/automation, and embedded systems (which are starting to get quad-core SoCs). All of the heterogeneous computing languages are C or C++ based (OpenCL, DirectCompute, CUDA, C++ AMP).

But for web development, enterprise/database systems, and so-called productivity languages like FIOC, Ruby, Lua, Perl, etc., it will be status quo shit as per usual.

The difference is that the programmers from the first group (the native/unmanaged language guys) are already maxing out the hardware they have and care about systems performance. They're passionate about it. That, and the software libraries and tools are finally getting to a point that makes parallelizing things easier.

The programmers from the second group, the web/enterprise people, just don't care, all they care about is punching the clock and collecting a paycheck for writing menial boring shit.

Name: Anonymous 2011-07-15 14:42

Name: Anonymous 2011-07-15 16:03

>The difference is that the programmers from the first group (the native/unmanaged language guys) are already maxing out the hardware they have and care about systems performance. They're passionate about it. That, and the software libraries and tools are finally getting to a point that makes parallelizing things easier.

You've got it a bit backward. The guys maxing out what the hardware can do? Those are the "programmers coding things based on hardware design that they think is absolute." The relationship is exactly as >>13-san said. A true multicore revolution will necessitate something like Haskell or Qi or something based on the Pi-calculus. The point of the revolution is that we will write things for multiple cores WITH 0 knowledge of the hardware.

Name: Anonymous 2011-07-15 16:05

>>17
>The point of the revolution is that we will write things for multiple cores WITH 0 knowledge of the hardware.
That will never happen.

Name: Anonymous 2011-07-15 16:06

>>17
>A true multicore revolution will necessitate something like Haskell or Qi or something based on the Pi-calculus.
Why? Because you say so? Citation required.

Name: Anonymous 2011-07-15 16:07

>>18
It will happen, just not on an industrial or cultural scale. In fact, it already has, with Erlang and Haskell.

Name: Anonymous 2011-07-15 16:18

>>19
I'm just going on history and Moore's law. Low level optimization always loses favor in the end. This is a pretty basic and well-accepted fact.

Name: Anonymous 2011-07-15 16:22

>>15

I don't consider web and enterprise the same thing.

Enterprise thinking is more about building commodity features on tight deadlines, dumbing the code down for programmers without any skill.

Name: Anonymous 2011-07-15 16:26

>>20
Message passing architectures and monads/agents only solve certain problems with parallelism. As someone has already stated elsewhere in the thread, there is no silver bullet.

Haskell does have a suitable number of abstractions in the PVM extensions framework, but these are things that are not exclusive to Haskell.

In fact, you can easily build just as safe and easy-to-use library abstractions in C, C++, Java, or whatever, and it's already been done.

I'm tired of people like you saying that only XYZ language can solve the problem of parallelism. Fact is, you can do it effectively in most languages. Stop peddling your lies.

http://en.wikipedia.org/wiki/Algorithmic_skeleton#Frameworks_comparison

And that's by no means an exhaustive list; it doesn't include TBB and MPL, which are widely used among C++ programmers in real industry projects and contain a modern and more complete set of such abstractions.

Name: Anonymous 2011-07-15 16:28

>>23
just like you can write anything in assembly.

Name: Anonymous 2011-07-15 16:34

>>23
The way C does memory doesn't solve everything about how memory actually works, but it's the lowest level people are willing to go. Know why? Because it's good enough, and because programmer time is more expensive than machine time. This is a trend that increases over time. It's really simple.

Name: Anonymous 2011-07-15 16:40

>>21
Eh? Hardware-wise, our processors wouldn't be where they are today if it weren't for low-level optimizations. Do you know how much low-level hackery goes into designing modern speculative, out-of-order, cache-coherent CPUs, whether they be CISC or RISC?

Software-wise, you can't just settle for simple higher-level abstractions for everything because you're relying on Moore's Law eventually giving us enough cores so you can say "Well, I'll just upgrade to 32 cores and that'll solve my problem."

Getting faster CPUs (i.e. more transistors) to make unoptimized software faster worked when everything ran serialized in a single thread, but it will not work for making unoptimized multi-threaded software faster.

The bottleneck isn't more CPUs/transistors. The bottleneck with concurrency is memory latency and shared memory, and if you haven't noticed, memory latency is actually increasing as throughput increases.

You will never be able to make multi-threaded software that isn't designed to scale go faster by throwing more hardware at it; it will hit its limit and never surpass it.

You have to solve the problem in software; it's the only way. And there's no single simple method to reduce memory sharing in such a way that you can hide it behind the veneer of a general-purpose programming language. This is stuff that must be dealt with in the design of the software itself, by the programmers using the language.

Name: Anonymous 2011-07-15 16:42

>>26
Those optimizations are there to try (and fail) to overcome the von Neumann bottleneck.

Name: Anonymous 2011-07-15 16:45

>>25
C1x actually does solve everything about how memory works, as does C++11/C++0x. They both specify fine-grained memory models that no current commodity hardware fully implements; they've been future-proofed.

Name: Anonymous 2011-07-15 16:51

>>26
Let's pretend we live in a world where 32 cores is the norm. I'd use Haskell and I wouldn't care about how it managed them. I'd let the compiler figure it out. The semantics of the language are such that the compiler CAN figure it out, to a pretty good extent.

Now, maybe you'd be interested in writing the compiler. I wouldn't. There's a big difference between solving hardware problems and solving people problems, and an abstraction can ALWAYS be built between the two. Of course the abstraction will leak and it won't be the most efficient thing possible. At that point you have to gauge how much patience you have for things like setting compiler flags. I have patience for that AFTER I've profiled, and not before, and I refuse to let it affect my design until after I've profiled.

A language with semantics designed for THIS attitude is the one that should win.

Name: Anonymous 2011-07-15 16:58

>>29
You're assuming that in C or C++ programmers have to manage threads and hardware cores. You are mistaken. Anyone who isn't an amateur has moved beyond working with raw threads. What do you think our concurrency frameworks are for? They provide adaptive work-stealing task schedulers, in fact very similar to the underlying task scheduler inside a typical Haskell runtime implementation.

If I lived in a world where 32 cores were the norm (wait, what's this? 50 cores? http://en.wikipedia.org/wiki/Knight%27s_Corner ), I would just use TBB or MPL or OpenMP or Grand Central Dispatch and let the library figure it out. The semantics of the library abstractions are such that the runtime CAN figure it out, to a pretty good extent, even going as far as to group tasks that access similar sets of data on threads assigned via affinity to run on the same NUMA nodes, to improve cache locality.

Again: this isn't a problem that can be solved only with a special language. Haskell doesn't even solve it all in the language; much of it is done via libraries supplied to the programmer.

Name: Anonymous 2011-07-15 17:14

>>30
And here's an example of what I mean. This is C++11 using Intel's TBB framework. Note the parallel_for template function (essentially a parallel map algorithmic skeleton), which parallelizes the outer loop of the matrix multiply.


#include <tbb/parallel_for.h>

void parallel_matrix_multiply(double** m1, double** m2, double** result, size_t size) {
   // Both bounds must be the same type for tbb::parallel_for to deduce Index.
   tbb::parallel_for(size_t(0), size, [&](size_t i) {
      for (size_t j = 0; j < size; j++) {
         double sum = 0;
         for (size_t k = 0; k < size; k++) {
            sum += m1[i][k] * m2[k][j];
         }
         result[i][j] = sum;
      }
   });
}

Name: Anonymous 2011-07-15 17:19

>>31
Wait, what's that? Higher order functions? Lambdas? Aren't they harmful, evil abstractions that prevent any kind of low-level optimization? Or, so they said years ago.

Name: Anonymous 2011-07-15 17:26

>>32
Personally, I never felt they were evil or harmful. It's about time our old imperative languages started getting them. As for parallel_for, it's just a template function which breaks the workload into units based on tunable heuristics and submits them to the process's task scheduler, which then adaptively load-balances using work-stealing. It's nothing built into the language itself.
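A stripped-down sketch of that splitting behavior (purely illustrative: a fixed grain size and std::async in place of TBB's work-stealing scheduler, but the divide-and-conquer shape is the same):

```cpp
#include <cstddef>
#include <functional>
#include <future>

// Miniature parallel_for: recursively split [first, last) until a
// chunk is no larger than the grain size, forking one half with
// std::async and recursing into the other on the current thread.
void mini_parallel_for(std::size_t first, std::size_t last,
                       const std::function<void(std::size_t)>& body,
                       std::size_t grain = 1024) {
    if (last - first <= grain) {
        for (std::size_t i = first; i < last; ++i) body(i);
        return;
    }
    std::size_t mid = first + (last - first) / 2;
    auto left = std::async(std::launch::async, [&] {
        mini_parallel_for(first, mid, body, grain);
    });
    mini_parallel_for(mid, last, body, grain);
    left.get();  // join before returning
}
```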

Name: Anonymous 2011-07-15 17:34

>>33
Fair enough.

Name: Anonymous 2011-07-15 18:03

>>31
{()}{()
 }{
     (
)(}{}}{})

Name: Anonymous 2011-07-15 18:09

>>35
[&,=](*(&&*)()){}

Name: Anonymous 2011-07-15 18:17

>>35,36
()(())()((()()((((((((((((((((())))))))))))())
()())))()))
))))
))

Name: Anonymous 2011-07-15 18:21

>>35 readable yet a bit sepplish
>>36 what
>>37 fucking gay

Name: Anonymous 2011-07-15 18:27

Notice how sepples is evolving into a Lisp with syntax and how ugly it is.

Name: Anonymous 2011-07-15 18:43

>>39
0/10 you're just butthurt

Name: Anonymous 2011-07-15 18:53

>>38
>>36 what
VALID SEPPLESOCKS LAMBDA EXPRESSION

Name: Anonymous 2011-07-15 21:38

the future of programming is python in every area some assembly language isn't

Name: Anonymous 2011-07-16 2:32

>>42
Then why am I learning java?

Name: Anonymous 2011-07-16 2:33

>>43
Because you're as gay as >>42

Name: Anonymous 2011-07-16 3:48

>>42
python is shitty. shit shit shitty.

Name: Anonymous 2011-07-16 3:50

>I don't see a massive shift to functional languages taking place just yet,
It won't ever happen. I think C++ will be hybridized with features of these langs and regain the hipsters who left for java/python/ruby/etc

Name: Anonymous 2011-07-16 3:57

>>46
THAT won't ever happen. C++ is in its final years. Besides being horribly designed, it is suffering from Common Lisp's issue: the language just keeps growing and growing, and simply can't be learned in six months, much less a weekend.

Name: Anonymous 2011-07-16 3:59

>>46
C++ needs a huge syntax overhaul to be useful. Its too complex to parse.

Name: Anonymous 2011-07-16 4:03

>>48
YOU MENA IT'S

Name: Anonymous 2011-07-16 4:32

Haskell and Scheme are the future of programming languages. Maybe BitC will be as well.

Name: Anonymous 2011-07-16 4:38

>>47
What, then, is the future of Common Lisp?

Name: Anonymous 2011-07-16 4:54

>>50
>Maybe BitC will be as well.
Except when it won't. They're adopting a more ML-like syntax, losing all the benefits of a homoiconic representation. It's basically ML with some built-in ways to control structure layout and such. It even has THE FORCED COLLECTION OF GARBAGE

Name: Anonymous 2011-07-16 6:00

>>48
There's nothing wrong with context-sensitive grammars.

Name: Anonymous 2011-07-16 6:03

>>53
Except everything.

Name: Anonymous 2011-07-16 6:04

>>53
go suck seven dicks at once

Name: Anonymous 2011-07-16 6:06

>>55
You have time to do it, while your code parses.

Name: Anonymous 2011-07-16 6:26

>>54,55
Only an autist with severe OCD would find context-sensitive grammars not to their liking.

Name: Anonymous 2011-07-16 6:27

>>57
Sorry, but I'd like my code to parse fast.

Name: Anonymous 2011-07-16 6:43

>>57
Were alll autistes here.

Name: Anonymous 2011-07-16 10:57

Only having read >>1, my answer is:
>Will functional languages really take over?
No, but the functional paradigm is useful and it helps to use it (when it makes sense) regardless of the language you're actually using (even for imperative ones like C or assembly).
>Are there any other paradigms rising out of obscurity?
I enjoy CL's multiparadigm approach very much. Declarative, metaprogramming, functional, object-oriented (and meta-object-oriented) and even imperative. Being able to just use what you want and having access to a variety of tools is quite nice. I'm not a huge fan of restrictions as far as programmer freedom is concerned, except for cases where they can greatly improve the stability and clarity of the code, but that is a fine line to walk.
>Will the hardware we run on change drastically? Would this affect how we program? Think quantum computers and shit (/tech/esque, but entirely related to what we do)
Normal CPUs will remain rather common, simply because most current software is designed for them. More CPUs (more cores) will be commonplace. GPU-oriented software is also rising, although a niche; it's especially useful for parallelizable algorithms, and is currently in use for everything from gaming to crypto to AI/machine learning to multimedia encoding/decoding acceleration and various other niche applications which require speed.
Where true parallelization is required, FPGAs will meet the demand as usual, but you should expect much denser (although slower) alternatives in the next 5-10 years if certain research projects succeed. They'll mostly be used for treating your hardware as if it were software, but at a much lower cost than FPGAs today, and possibly for AI applications (maybe even some neural-network-based AGIs, if those ever succeed).

As for Moore's Law: today's CPUs are designed to be too sequential, and there is great potential for parallelization (for certain applications) even if we hit the physical miniaturization plateau. It won't be many years until the lithography-based techniques used in today's manufacturing can't scale down any further (they're already facing major challenges), and people will have to get very creative there (be it finally taking the challenge of molecular nanotechnology seriously, or some sort of advanced 3D layering before attempting that). Once they take that final step, speeds will increase enormously (likely one last time) and then hit a plateau, and from there on only the actual design will matter as far as speed is concerned.

As for quantum computing, its success depends a lot on what the true laws of this particular universe actually are (it's not enough for there to be "quantum mechanics"; the actual underlying implementation matters, and depending on which "interpretation" is true, we may see differing results as far as quantum computing is concerned). It's too early to tell whether it'll truly be a success. If it does succeed, certain search problems will become much faster to solve than on classical computers, but no, it's not going to make NP-hard stuff suddenly fast.

>But I do think that the future of hardware could have a gigantic effect on cryptography.
PKI and various asymmetric crypto may become unusable. Symmetric crypto will probably be affected much less. There are certain ways to work around these problems, but that would be a discussion for another time.

The future is what you make it.
Certain trends may sweep the world you live in, but it is always the programmer (unless working for an employer with specific language/hardware requirements) who decides what solves a problem best.

Name: Anonymous 2011-07-16 11:16

>>51
It will slowly die and be replaced by something more streamlined like Arc (except not Arc.)

Name: Anonymous 2011-07-16 12:33

right tool for the right job

Name: Anonymous 2011-07-16 15:06

>>60
HEY FUCKFACE
CPU'S ARE SEQUENTIAL BECAUSE I/O IS SEQUENTIAL
ALSO, ZISC

Name: Anonymous 2011-07-16 15:34

I'm a professional codder

Name: Anonymous 2011-07-16 17:13

>>60
>PKI and various asymmetric crypto may become unusable. Symmetric crypto will probably be affected much less. There are certain ways to work around these problems, but that would be a discussion for another time.
I've always wondered. Are there any asymmetric cryptosystems that aren't based on the DH assumption nor on factoring large semiprimes?

Name: Anonymous 2011-07-16 17:17

http://en.wikipedia.org/wiki/Amdahl's_law

If you consider its implications, we are heading towards giant FPGAs or something similar.
Maximizing the parallelization of code will be the only performance consideration. Another stupidity is the fact that 64-bit numbers are overkill for almost any application, yet they are stuffed down our throats.
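To make the implication concrete, here's Amdahl's law as a function (p is the fraction of the program that parallelizes, n the processor count; purely illustrative):

```cpp
// Amdahl's law: the overall speedup on n processors when a fraction p
// of the program parallelizes perfectly and the rest stays serial.
// Even with p = 0.95, the speedup can never exceed 1/0.05 = 20x,
// no matter how many processors you throw at it.
double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```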

Name: Anonymous 2011-07-16 17:48

>>63
That makes no sense when it comes to programs being either network-bound or CPU-bound.

Name: Anonymous 2011-07-16 19:04

>>66
64 bits is nice for bit flags and moving memory around. I also wrote a 64-bit FNV-1a hash which is 15% faster than the builtin CRC32 instructions on SSE 4.2 CPUs like the Core i7.
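For reference, 64-bit FNV-1a itself is only a few lines; the constants below are the published offset basis and prime (this sketch says nothing about the 15% figure, which depends on that poster's setup):

```cpp
#include <cstddef>
#include <cstdint>

// FNV-1a, 64-bit: XOR each byte into the state, then multiply by the
// FNV prime. Simple, fast, and decent distribution for hash tables.
std::uint64_t fnv1a64(const void* data, std::size_t len) {
    const unsigned char* p = static_cast<const unsigned char*>(data);
    std::uint64_t h = 14695981039346656037ULL;  // FNV offset basis
    for (std::size_t i = 0; i < len; ++i) {
        h ^= p[i];
        h *= 1099511628211ULL;  // FNV prime
    }
    return h;
}
```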

Name: Anonymous 2011-07-16 19:34

>>68
Yes, it is.
But for things like that you'd gain even more of an advantage if you had the ability to couple two or more processors to handle the upper and lower bits individually, or to access memory in a similar fashion. This cannot be done with current architectures, of course.
But in the future we might have one processor/logic entity available _just_ to handle one pin.

The only problem I see is that nobody will be around to handle that much low-level stuff.

Name: Anonymous 2011-07-16 19:50

>>69
That sounds retarded. ALU pipelines already process machine words in parallel. Why would having a core per pin with its own instruction decoder and pipeline be better? That just sounds completely fucking backwards. I'll dub your architecture SISB, or Single-Instruction Single-Bit. That's just wrong.

Modern CPUs usually have SIMD pipelines. We're heading towards SIMT and MIMD architectures for general purpose CPUs. SIMT and MIMD are already used on GPUs.

With SIMT/MIMD you have a single instruction decoder driving multiple pipelines, where each pipeline is often 128-bits, 256-bits, 512-bits, or even 1024-bits wide (for those 4x4 32-bit floating point matrix multiplies, awww yeah).

Keeping each pipeline as wide as possible maximizes overall throughput and makes the best use of transistors.

Name: Anonymous 2011-07-16 20:35

FIBONACCI BUTT SORT

Name: Anonymous 2011-07-16 22:17

>>70
Well, one pin might not be practical; it was just an example of where the trend might lead us. I have nfi what something like that would look like or whether it would work, but there would be some number of bits practical for a particular instruction set / number of processors / layout.

If processors are equipped with a hardware stack and very few single-cycle instructions, and placed within a rectangular grid, you don't need no fuckin pipelining, at least not in the traditional sense.

Memory access will be an issue only during the transition; later it will be on the chip.

How exactly is MIMD used in GPUs? I found no documentation of that.

Name: Anonymous 2011-07-16 23:24

I WROTE AN ENCRYPTION ALGORITHM IN 2 SECONDS
IT'S CALLED "SHORT TERM MEMORY LOSS"
100% PERFECT

Name: Anonymous 2011-07-17 0:29

>>73
How do you know?

Name: Anonymous 2011-07-17 1:53

>>29

>There's a big difference between solving hardware problems and solving people problems, and an abstraction can ALWAYS be built between the two. Of course the abstraction will leak and it won't be the most efficient thing possible. At that point you have to gauge how much patience you have for things like setting compiler flags. I have patience for that AFTER I've profiled, and not before, and I refuse to let it affect my design until after I've profiled.

>A language with semantics designed for THIS attitude is the one that should win.


That's why ParrotVM is such a good idea.

Functional concepts

Parrot has rich support for several features of functional programming including closures and continuations, both of which can be particularly difficult to implement correctly and portably, especially in conjunction with exception handling and threading. Implementing solutions to these problems at the virtual machine level prevents repeated efforts to solve these problems in the individual client languages.


http://en.wikipedia.org/wiki/Parrot_virtual_machine#Functional_concepts

Name: Anonymous 2011-07-17 2:16

>>75
What the fuck is this.
Also, does Parrot really support continuations?

Name: Anonymous 2011-07-17 2:38

>>75
Parrot is pretty cool.

Name: Anonymous 2011-07-17 5:14

>>60
>Will functional languages really take over?
>No, but the functional paradigm is useful and it helps to use it (when it makes sense) regardless of the language you're actually using (even for imperative ones like C or assembly)
I don't know why people think you need a functional language to write functional code. Some things are simply better expressed functionally, even coding in C, and the result can often be faster, smaller, simpler and easier to understand. I hate how much scaffolding people write to wrap simple algorithms into mutable objects in C++/Java. Drives me crazy.
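A tiny made-up example of the contrast: the scaffolding version wraps a running sum in a mutable object, while the functional version is a single fold. Both are standard C++; the class name is invented for illustration.

```cpp
#include <numeric>
#include <vector>

// The scaffolding version: a mutable object to name, construct, and
// thread through the code.
class RunningTotal {
    double total_ = 0;
public:
    void add(double x) { total_ += x; }
    double value() const { return total_; }
};

// The functional version: the same computation expressed as one fold.
double sum(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0);
}
```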

Name: Anonymous 2011-07-17 6:34

>>78
Because they make it insanely hard.

Name: Anonymous 2011-07-17 9:30

>>76
The first half reads like Joel on Software.

As for the parrot VM, it has surprisingly good feature coverage. It's a much better choice than you might think for your dynlanguage.

Name: Anonymous 2011-07-17 10:15

http://www.reddit.com/r/mylittlepony will take over Reddit...Soon

Name: Anonymous 2011-07-17 10:45

>>81
Go back to reddit, /b/ and /co/.

Name: Anonymous 2011-07-17 11:04

Name: Anonymous 2011-07-17 11:38

>>83
not \proggles\ related!!!!!!!!!!!!!!!!!!!!

Name: Anonymous 2011-07-17 12:31

>>84
Go back to \win32\, gay-for-backslashes.

Name: Anonymous 2011-07-17 12:56

>>78
>I hate how much scaffolding people write to wrap simple algorithms into mutable objects in C++/Java. Drives me crazy.

Will you show me how C can do it much better?

Seriously, a lambda with mutable state is NOT something that is easy to just emulate in a low-level language.

Name: kodak_gallery_programmer !!kCq+A64Losi56ze 2011-07-17 15:40

>>78
>I hate how much scaffolding people write to wrap simple algorithms into mutable objects in C++/Java. Drives me crazy.


I'm really not convinced that you know what you're talking about.

A simple selection sort algorithm in Java that sorts an array of integers can be written in roughly 20 lines of code. Now if I were to modify it to take Double objects, Integer objects, String objects, and Character objects, this algorithm would go from roughly 20 lines to around... wait, what's that? It would still only be 20 lines of code.

And look ma, the fact that the objects are mutable has no impact on the additional imaginary lines of wrapper code!

Name: Anonymous 2011-07-17 16:27

>>87
You are the one who has no idea.

http://www.paulgraham.com/accgen.html
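For anyone who hasn't clicked the link: the exercise there is an accumulator generator, a function that takes n and returns a function that adds its argument to a running total seeded with n. In C++11 and later it's a one-line mutable lambda capture (a sketch, not code from the link):

```cpp
#include <functional>

// PG's accumulator generator: the closure owns its running total.
std::function<double(double)> make_accumulator(double n) {
    return [n](double i) mutable { return n += i; };
}
```

So auto acc = make_accumulator(100); then acc(10) yields 110, and acc(10) again yields 120.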

Name: Anonymous 2011-07-17 16:36

>>88
The comment was about simple algorithms, so my response concerned a simple algorithm. Now shut up and go play with your sister's Barbie dolls, you dumb hourly worker.

Name: Anonymous 2011-07-17 16:40

>>89
DO NOT INSULT BARBIE

YOU HAVE NO IDEA WHAT SHE'S BEEN THROUGH

Name: Anonymous 2011-07-17 16:48

>>89
If you don't think in C++-style OOP, accgen IS a simple algorithm.

Name: kodak_gallery_programmer !!kCq+A64Losi56ze 2011-07-17 16:59

>>90
Well, if she were forced to listen to your nonsense, I could imagine that, at the very least, she has suffered severe mental trauma.

Name: Anonymous 2011-07-17 17:02

>>89
All workers are hourly, you nigger-lover!

Name: Anonymous 2011-07-17 20:39

>>89
What's simpler than a fucking accumulator generator?

Name: Anonymous 2011-07-17 22:17

>>94
Probably nothing. However, the statement only mentioned simple algorithms. It never mentioned anything about an accumulator generator alone.

Name: Anonymous 2011-07-17 22:51

>>88
At what point is the variable that determines the increment size initialized in those implementations?

Name: Anonymous 2011-07-17 23:22

>>96
do you seriously not know any of those languages?

Name: Anonymous 2011-07-17 23:49

>>96
When you call the function that is returned by that code.

Name: Anonymous 2011-07-18 6:54

Forfeit furniture tutorial envelope druid gold milk distribution Poseidon.

Name: Anonymous 2011-07-18 8:36

>>88
In my own non-Lisp-DSL language: foo : ->i[->n[n+=i]];

Name: Anonymous 2011-07-18 11:32

Hungary clan timid defensive.

Name: Anonymous 2011-07-18 11:34

Inclusion drizzle no Calvert florist. Clamber flautist transplantation Rensselaer Nietzsche drop inscrutable lens stab!

Name: Anonymous 2011-07-18 11:45

>>54
Telemeter mineral Vermont feel ingenuity Billings chateau farther... Deed coachmen sachem efficacious pawnshop sib synapses.

Name: Anonymous 2011-07-18 11:46

Perceptual McKay guise Moloch musicology gyrfalcon intensify tater kinglet couch. Bessie Lehigh.

Name: Anonymous 2011-07-18 11:59

Hardscrabble wretch fasten alike livery tonic Brillouin sortie cowpoke.

Name: Anonymous 2011-07-18 12:32

>>100
in mine:
foo = \n i: n += i

it auto-curries your mother

Name: Anonymous 2011-07-18 12:41

>>106
I prefer using a special curry function, but have it your way, faggot.

Name: Anonymous 2011-07-18 14:09

>>107
I prefer pouring molten iron in my anus but I don't flaunt it, faggot.

Name: Anonymous 2011-07-18 14:28

>>100
>>106

(is syntax (for pussies))
returns true

Name: Anonymous 2011-07-18 15:16

>>109
(for-pussies?'syntax)
(faggot? (post-ref ">>109" (current-thread)))
#t

Name: Anonymous 2013-05-14 11:01

The future of programming is finding a two-year-old thread and bumping it with a trips get

Name: Anonymous 2013-05-14 11:52

javascript

Name: Anonymous 2013-05-14 12:14

the future of programming is using remote for method invocation, xml for UI design, thread pools for actual work, MVC for web design and authentication standards for everything

Name: Anonymous 2013-05-14 12:18

The future of programming is saying "ohayou computer, show me all the new lolifuta pics from yesterday"

Name: Anonymous 2013-05-14 12:54

>>114
Your computer doesn't do that already? Learn some Lisp, man.

Name: Anonymous 2013-05-14 13:03

>>115
That's the problem, the voice recognizer doesn't handle my lisp very well.

Name: Anonymous 2013-05-14 15:03

The future of programming is rust

Name: Anonymous 2013-05-14 17:45

rust my anus
