
__future__

Name: Anonymous 2011-07-15 3:28

What is /prog/'s opinion on the future of programming?
-Will functional languages really take over?
-Are there any other paradigms rising out of obscurity?
-Will the hardware we run on change drastically? Would this affect how we program? Think quantum computers and shit (/tech/-esque, but entirely related to what we do)

I think that in the future the use of multi-paradigm languages will become almost standard for writing anything. Concurrent programming is the way of the future; it is currently the only way Moore's Law will continue to be followed. However, there are elements of functional languages that make them difficult for practical use, so switching between and combining paradigms where necessary may be the way to go. We may, however, end up with most languages looking like C++, or worse: a mess of different ways of achieving the same goal (I still love me some C++).

As such I don't see a massive shift to functional languages taking place just yet, though I do think we will see functional features incorporated into OO and procedural languages. Still, I feel the need for functional languages is inevitable: we are moving closer and closer to a world where we have to use data from many different sources concurrently, and functional languages simplify that situation greatly.

I actually haven't learned a functional language yet, but am probably going to try to pick up Haskell and F# later in the year. Most of the information I've got on the functional style comes from http://www.defmacro.org/ramblings/fp.html

Given that all computers are Turing-complete, I don't think we'll make any changes to our actual programming style. High level languages will remain almost exactly the same, save maybe an idea or two. But I do think that the future of hardware could have a gigantic effect on cryptography. I haven't read much about this, but I am very interested in the effect it could have.

Name: Anonymous 2011-07-15 3:49

Moore's law is not a legal law that has to be followed

Name: Anonymous 2011-07-15 3:58

There will be a shiny new language that will save programming once and for all.

oh wait, that's bbcode.

Name: Anonymous 2011-07-15 8:12

I only see more and more bloat as the "industry" tries to keep programmers employed writing menial shit.

Name: Anonymous 2011-07-15 8:57

>>1
>Given that all computers are Turing-complete,
No, they aren't. But programming languages are.

Nothing will change if we keep von Neumann architectures.

Name: Anonymous 2011-07-15 9:10

*functional programming will work in a small niche of simulations in finance and physics.
*roles/mixins will slowly come into widespread use.
*virtual machines everywhere. all will support autothreading.
*mental interfaces like the one that will be released with the PS4 will allow us to convert from thought to source code directly.
*the next SICP will use perl6
*everyone will grow a new anus in 2012, just as the mayans predicted

Name: Anonymous 2011-07-15 10:13

>*virtual machines everywhere. all will support autothreading.
Autothreading? I know the last three points of yours are trolls, but the third one, I'm not so sure. Are you delusional?

There is no such thing as autothreading. There's not enough information in a programmer's written code to automatically figure out how to efficiently parallelize it, neither in current languages nor in experimental language extensions.

Concurrency isn't hard. The current trend has nothing to do with the compiler automating it. Rather, it is based around task-oriented parallelism, which hides the details of threads and task scheduling, but you still need to ensure that access to shared memory is synchronized, or better yet, that no two tasks can touch the same region of memory.

There is no silver bullet for concurrency. There is no sweeping algorithm for concurrency that automates everything, like how garbage collection automated memory management.

Certain languages try to push one or two idioms for concurrency, but it's forced as fuck; it's like trying to build a house with only a hammer but no saw.

The only way to approach concurrency effectively is to embrace a wide range of idioms and patterns and know which ones are best suited for your particular problems.

Name: Anonymous 2011-07-15 10:16

>>7
Also, it's my belief that concurrency is best solved not through complex language additions, but through library-supplied abstractions and minimal language additions where needed. Since there's no one-size-fits-all algorithm for concurrency, this makes sense... it's not hard to maintain or use a large library of differing concurrency primitives and abstractions, but trying to build all of that into a language is insane.

Name: Anonymous 2011-07-15 10:23

Name: Anonymous 2011-07-15 10:33

>>9
Ahh, so autothreading refers to a specific type of concurrency mechanism applied to junctions. It looks interesting, but I don't see it solving all, or even a modicum of, concurrency problems. It appears to be just a different flavor of parallel map and reduce. Plus, junctions only work on scalar types; how are you supposed to compose higher-level operations on coarser-grained data types?

Name: Anonymous 2011-07-15 10:38

>>7
An example of a parallelizable language construct is Lisp's let, since the forms are evaluated in unspecified order.
The concurrency ``super hero'' Clojure could've done this, but failed as usual by providing only let*.
But yes, there's still no silver bullet for concurrency.

Name: Anonymous 2011-07-15 11:02

>>11
>An example of a parallelizable language construct is Lisp's let, since the forms are evaluated in unspecified order.
You need a special CPU architecture for this.

Name: Anonymous 2011-07-15 13:48

>>4
this.

There won't be a multi-core revolution. Not on a large scale, anyway. Industry and individual programmers alike will fight it tooth and nail.

Right now there's this retarded waltz of hardware manufacturers making things that programmers think they want and programmers coding things based on hardware design that they think is absolute. I don't see the dance ending any time soon.

Name: Anonymous 2011-07-15 14:00

>>13
Give us back the transputers.

Name: Anonymous 2011-07-15 14:22

>>13
Not quite.

The multi-core and heterogeneous computing revolution is happening in C/C++ land as we speak. Generally for gaming, multimedia, office suites, medical, robotics/automation, and embedded systems (which are starting to get quad-core SoCs). All of the heterogeneous computing languages are C- or C++-based (OpenCL, DirectCompute, CUDA, C++ AMP).

But for web development, enterprise/database systems, and so-called productivity languages like FIOC, Ruby, Lua, Perl, etc., it will be status quo shit as per usual.

The difference is that the programmers from the first group (the native/unmanaged language guys) are already maxing out the hardware they have and care about systems performance. They're passionate about it. That and the software libraries and tools are finally getting to a point that make parallelizing things easier.

The programmers from the second group, the web/enterprise people, just don't care, all they care about is punching the clock and collecting a paycheck for writing menial boring shit.

Name: Anonymous 2011-07-15 14:42

Name: Anonymous 2011-07-15 16:03

>>15
>The difference is that the programmers from the first group (the native/unmanaged language guys) are already maxing out the hardware they have and care about systems performance. They're passionate about it. That and the software libraries and tools are finally getting to a point that make parallelizing things easier.

You've got it a bit backward. The guys maxing out what the hardware can do? Those are the "programmers coding things based on hardware design that they think is absolute." The relationship is exactly as >>13-san said. A true multicore revolution will necessitate something like Haskell or Qi or something based on the Pi-calculus. The point of the revolution is that we will write things for multiple cores WITH 0 knowledge of the hardware.

Name: Anonymous 2011-07-15 16:05

>>17
>The point of the revolution is that we will write things for multiple cores WITH 0 knowledge of the hardware.
That will never happen.

Name: Anonymous 2011-07-15 16:06

>>17
>A true multicore revolution will necessitate something like Haskell or Qi or something based on the Pi-calculus.
Why? Because you say so? Citation required.

Name: Anonymous 2011-07-15 16:07

>>18
it will happen, just not on an industrial or cultural scale. In fact, it already has with Erlang and Haskell.

Name: Anonymous 2011-07-15 16:18

>>19
I'm just going on history and Moore's law. Low level optimization always loses favor in the end. This is a pretty basic and well-accepted fact.

Name: Anonymous 2011-07-15 16:22

>>15

I don't consider web and enterprise the same thing.

Enterprise thinking is more like building commodity features on tight deadlines, dumbing the code down so that programmers of any skill level can maintain it.

Name: Anonymous 2011-07-15 16:26

>>20
Message passing architectures and monads/agents only solve certain problems with parallelism. As someone has already stated elsewhere in the thread, there is no silver bullet.

Haskell does have a suitable number of abstractions in the PVM extensions framework, but these are not things exclusive to Haskell.

In fact, you can easily build just as safe and easy to use library abstractions in C, C++, Java, or whatever and it's already been done.

I'm tired of people like you saying that only XYZ language can solve the problem of parallelism. Fact is, you can do it effectively in most languages. Stop peddling your lies.

http://en.wikipedia.org/wiki/Algorithmic_skeleton#Frameworks_comparison

And that's by no means an exhaustive list; it doesn't include TBB and MPL, which are widely used among C++ programmers in real industry projects and contain a more modern and complete set of such abstractions.

Name: Anonymous 2011-07-15 16:28

>>23
just like you can write anything in assembly.

Name: Anonymous 2011-07-15 16:34

>>23
The way C does memory doesn't solve everything about how memory actually works, but it's the lowest level people are willing to go. Know why? Because it's good enough, and because programmer time is more expensive than machine time. This is a trend that increases over time. It's really simple.

Name: Anonymous 2011-07-15 16:40

>>21
Eh? Hardware wise, our processors wouldn't be where they were today if it weren't for low-level optimizations. Do you know how much low-level hackery goes into designing modern speculative, out-of-order, cache-coherent CPUs whether they be CISC or RISC?

Software-wise, you can't just settle for simple high-level abstractions for everything on the assumption that Moore's Law will eventually give us enough cores, so you can just say "Well, I'll just upgrade to 32 cores and that'll solve my problem."

Getting faster CPUs (i.e., more transistors) to make unoptimized software faster worked when everything was serialized, running in a single thread, but it will not work for making unoptimized multi-threaded software faster.

The bottleneck isn't more CPUs/transistors. The bottlenecks with concurrency are memory latency and shared memory, and if you haven't noticed, memory latency is actually increasing as throughput increases.

You will never be able to make multi-threaded software that isn't designed to scale run faster by throwing more hardware at it; it will hit its limit and never surpass it.

You have to solve the problem in software; it's the only way. And there's no single simple method of reducing memory sharing that can be hidden behind the veneer of a general-purpose programming language--this is stuff that must be dealt with in the design of the software itself, by the programmers using the language.

Name: Anonymous 2011-07-15 16:42

>>26
Those optimizations are there to try (and fail) to overcome the von Neumann bottleneck.

Name: Anonymous 2011-07-15 16:45

>>25
C1x actually does solve everything about how memory works, as does C++11/C++0x. They both specify fine-grained memory models that no current commodity hardware fully implements; they've been future-proofed.

Name: Anonymous 2011-07-15 16:51

>>26
Let's pretend we live in a world where 32 cores is the norm. I'd use Haskell and I wouldn't care about how it managed them. I'd let the compiler figure it out. The semantics of the language are such that the compiler CAN figure it out, to a pretty good extent.

Now, maybe you'd be interested in writing the compiler. I wouldn't. There's a big difference between solving hardware problems and solving people problems, and an abstraction can ALWAYS be built between the two. Of course the abstraction will leak and it won't be the most efficient thing possible. At that point you have to gauge how much patience you have for things like setting compiler flags. I have patience for that AFTER I've profiled, and not before, and I refuse to let it affect my design until after I've profiled.

A language with semantics designed for THIS attitude is the one that should win.

Name: Anonymous 2011-07-15 16:58

>>29
You're assuming that in C or C++ programmers have to manage threads and hardware cores. You are mistaken. Anyone who isn't an amateur has moved beyond working with raw threads. What do you think our concurrency frameworks are for? They provide adaptive work-stealing task schedulers, in fact very similar to the underlying task schedulers inside a typical Haskell runtime implementation.

If I lived in a world where 32 cores was the norm ( wait, what's this? 50 cores? http://en.wikipedia.org/wiki/Knight%27s_Corner ), I would just use TBB or MPL or OpenMP or Grand Central Dispatch and I'd let the library figure it out. The semantics of the library abstractions are such that the runtime CAN figure it out, to a pretty good extent, even going as far as to group tasks that access similar sets of data on threads assigned via affinity to run on the same NUMA nodes on hardware, to improve cache locality.

Again, I'll reiterate, this isn't a problem that can be solved only with a special language. Haskell doesn't even solve it all in the language, much of it is done via libraries supplied to the programmer.

Name: Anonymous 2011-07-15 17:14

>>30
And here's an example of what I mean. This is C++11 using Intel's TBB framework. Note the parallel_for template function (essentially a parallel map algorithmic skeleton), which parallelizes the outer loop of the matrix multiply.


void parallel_matrix_multiply(double** m1, double** m2, double** result, size_t size) {
   // parallelize the outer loop: each task computes one row of the result
   parallel_for(size_t(0), size, [&](size_t i) {
      for (size_t j = 0; j < size; j++) {
         double sum = 0;
         for (size_t k = 0; k < size; k++) {
            sum += m1[i][k] * m2[k][j];
         }
         result[i][j] = sum;
      }
   });
}

Name: Anonymous 2011-07-15 17:19

>>31
Wait, what's that? Higher order functions? Lambdas? Aren't they harmful, evil abstractions that prevent any kind of low-level optimization? Or, so they said years ago.

Name: Anonymous 2011-07-15 17:26

>>32
Personally, I never felt they were evil or harmful. It's about time our old imperative languages started getting them. As for the parallel_for, it's just a template function which breaks the work load into units based on tunable heuristics and submits them to the process's task scheduler, which will then adaptively load-balance using work-stealing. It's nothing built into the language itself.

Name: Anonymous 2011-07-15 17:34

>>33
Fair enough.

Name: Anonymous 2011-07-15 18:03

>>31
{()}{()
 }{
     (
)(}{}}{})

Name: Anonymous 2011-07-15 18:09

>>35
[&,=](*(&&*)()){}

Name: Anonymous 2011-07-15 18:17

>>35,36
()(())()((()()((((((((((((((((())))))))))))())
()())))()))
))))
))

Name: Anonymous 2011-07-15 18:21

>>35 readable yet a bit sepplish
>>36 what
>>37 fucking gay

Name: Anonymous 2011-07-15 18:27

Notice how sepples is evolving into a Lisp with syntax and how ugly it is.

Name: Anonymous 2011-07-15 18:43

>>39
0/10 you're just butthurt
