
Why use C++ in this day and age?

Name: Anonymous 2011-09-13 0:55

Recently, C++11 information was released. I do not really understand what is going on this time. Is Microsoft back to supporting C++ programmers? Why use C++ when writing in C# or Java or other languages can be so much faster, safer, and require fewer lines of code? Yes, C++ can be faster in some cases, but computers are getting a lot faster. Plus, the standard libraries of C++ are somewhat lacking.

C++ is going the way of C.

Name: Anonymous 2011-09-13 1:01

MASTER OF THE OBVIOUS HAS ARRIVED!

Name: Anonymous 2011-09-13 1:01

C > all

Name: Anonymous 2011-09-13 1:14

>>1
>Yes, C++ can be faster in cases, but computers are getting a lot faster.
Single-core CPU performance, in terms of performance per watt, has peaked for the time being. As the saying goes, the free lunch is long over. Performance is now increased by adding more CPU cores. Memory access speeds, however, are improving roughly 10x more slowly than the rate at which new CPU cores are being added to systems. Since memory coherency between CPU cores is required for program correctness, the new bottleneck is memory performance. Object orientation as a paradigm is absolutely terrible for cache utilization and memory throughput, including when OOP is heavily used in C++. This is where Java and C# and other OO-focused languages come up way short. Things like dynamic typing, reflection metadata, and garbage collection just kill performance in highly parallel systems. We're talking 32-core shared-memory software systems and up.

C++, however, is not a strictly OOP language; it is a multiparadigm language which supports generic, procedural, and some functional programming idioms in addition to OOP. So when used as an improved C combined with some generic and functional stuff, using the OO abstractions where they make sense (RAII and lightweight class objects to enforce class invariants), and given its strong industry entrenchment and mature compiler and tool support, it wins out against all current competition.

I imagine, in the near future, we'll see new flow-oriented and data-oriented languages make their debuts. Maybe then C++ will lose favor when it comes to HPC and large systems integration.

Name: Anonymous 2011-09-13 1:16

C++'s only leg has been performance. With the multicore revolution, C++ will die. Nobody willingly writes C++ for any reason other than performance.

Name: Anonymous 2011-09-13 1:24

>>1
>Plus, the standard libraries of C++ are somewhat lacking.
The standard C++ library might be fairly narrowly focused, but there are hundreds of mature third-party libraries that do everything that other languages' standard libraries do.

Boost, Qt, Poco, TBB, OpenCV, etc.

Want to integrate a JavaScript JIT engine? V8 is C++.

Want to embed a fully functional and high-performance web browser into your program? No problem, link against the Firefox or Chromium libraries (this is what the Steam client does).

Name: Anonymous 2011-09-13 1:25

>>5
How will the multicore revolution kill C++? Last time I checked, C++ is actually thriving far more than most languages due to increased needs for concurrency and parallelism.

Name: Anonymous 2011-09-13 1:54

>>7
Languages like Erlang and Haskell will take over. Why manually write stuff that can be automated?

I suppose you could have asked the same thing of C++ 10 years ago.

Name: Anonymous 2011-09-13 2:02

>>8
You're assuming that message passing and monads/agents are still seen as the only true way forward for manycore computing. You're wrong. In fact, message passing and monads/agents are detrimental to scalability for many parallel algorithms, introducing huge bottlenecks and synchronization points.

Do you honestly think that parallel computing in C++ is just manually

Name: Anonymous 2011-09-13 2:06

>>9
I was going to say, before my thumb accidentally touched my touchpad while the cursor was over the Reply button: do you honestly think that parallel computing in C++ is just manually coding a bunch of message passes and coordinating agents? It's not.

Furthermore, neither Erlang nor Haskell is usable on GPUs/vector processors, which are integral components of modern supercomputers. C++ is, with CUDA or C++ AMP. There is no support for heterogeneous computing at all in most purported replacements for C/C++. This really needs to change.

Name: Anonymous 2011-09-13 2:10

C++ has excellent top-class multicore and cloud algos: TBB, Cilk, OpenMP, CUDA, etc.
If any company wanted such libs for LISP, Haskell, Erlang or any academic language, it would have been done long ago, but they prefer the unstable, buggy C++ just for the extra hit of speed. That's what you think, of course; C++ can be optimized and redesigned from scratch in any area to provide any amount of safety, speed, or terseness. It's even possible to write ASM libraries without any C and provide them as C function calls, or inline platform-specific hand-optimized ASM/GPU code blocks just where you need them.
How much ASM can you put into your average LISP/Haskell/Erlang program?

Name: Anonymous 2011-09-13 2:22

>>11
This. Intel's extensions for ICC now have world-class high-performance software transactional memory support for C++, properly tested and QA'd for correctness. Not only that, but they've had it since 2009. Meanwhile, every other language's compiler or toolchain, Haskell included, is still in the experimental toy phase with STM.

Intel is proposing it for addition to C++1x (the next C++ after C++11).

http://software.intel.com/en-us/blogs/2009/08/06/new-draft-specification-of-transactional-language-constructs-for-c/

Name: Anonymous 2011-09-13 2:26

Any language that aims to trump C/C++ at parallelism has to do the following:

http://www.flowlang.net/p/flow-manifesto.html

Because that's currently what modern programmers are doing manually in C++ when it comes to parallelism: not passing rinky-dink messages between actors or locking monitors/mutexes to synchronize state. Actor modeling and decomposing concurrency along subsystem boundaries don't scale into the manycore realm.

Name: Anonymous 2011-09-13 2:34

>>13
Multicore chips have shared cache.  That's way more useful than multiprocessing.


You know how it's fun to laugh at people who say one day we can talk to compilers in English and get programs?

Name: tdavis 2011-09-13 2:43

LoseThos has master-slave multicore.  You probably don't want a scheduler trying to symmetrically do your jobs, especially if real-time.  Breaking tasks up into pieces is often not what you want.  You want an API that tells you how many cores there are, how busy they are, and lets you assign tasks to them.

Name: Anonymous 2011-09-13 2:49

>>14
>Multicore chips have shared cache.  That's way more useful than multiprocessing.
I can't tell if you're being serious or trying to troll. Assuming you're not trolling: shared cache between CPUs isn't the same thing as being able to access the same physical memory concurrently, which is in fact possible with multiprocessing by mapping the same physical memory pages into each process's address space.

But perhaps you were getting more at the idea that multiprocessing, as in running multiple processes, isn't as good at fine-grained parallelism as, say, task-oriented parallelism operating exclusively within a shared-memory environment.

>You know how it's fun to laugh at people who say one day we can talk to compilers in English and get programs?
I agree; the idea of being able to implement Flow seems distant, and perhaps it won't be entirely achievable for a human programming language.

However, back to the point you made. It'll happen one day, perhaps within our lifetimes even, where we can just ask a compiler to build us a program using English. Only the compilers won't be compilers anymore, they'll be artificial general intelligences capable of feats far surpassing your typical human programmer.

Name: tdavis 2011-09-13 2:51

LoseThos is multitasking, but for a home computer instead of a mainframe.  On a mainframe, you want to run a lot of users at the same time, so symmetric multiprocessing results in good utilization.

On a home system, for the most part, you have one app and lots of cores.  You simply don't benefit from running multiple apps.  You really want to run one app on multiple cores.  I hope you see how symmetry is a wrong solution to resource allocation.

Name: tdavis 2011-09-13 2:58

>>16
You talk as one who has no actual experience with multicore but likes to run off at the mouth.

Have you ever tried to make a video game work on multiple CPUs when you don't have shared memory?  It's practically worthless without shared memory.  When you have shared memory, it is just barely useful.  I've done dozens of multicore games in LoseThos.  LoseThos can use multicore for graphics rendering because it has no GPU, so LoseThos is an unrealistically favorable case for multicore.  Multicore is crap when you have a GPU.

Name: Anonymous 2011-09-13 3:07

>on multiple CPUs when you don't have shared memory!
Why? Can't you use mutexes? Reassign memory to another program? Just switch control from app A.1 to app A.2?

Name: Anonymous 2011-09-13 3:12

>>18
Also, your OS is using an ugly color scheme. I prefer green/white text on a black background (like Unix or a terminal emulator) and fewer of those "pseudo-graphical DOS" windows. Check out what Linux can do with only a command line.
The interface should be decoupled from the OS. A file browser would be launched as a command, not integrated as part of the OS or launched every time.

Name: Anonymous 2011-09-13 3:35

>>19
In a game, you mostly want to render the screen.  GPUs are massively parallel, but actually aren't so good for multicore CPU interaction.  LoseThos can render graphics with multicores and that's mostly what my games have done.  Physics can take significant CPU.  If you have N^2 operations, kinda a pain.  I guess you do physics in zones and work very hard to get multicore benefit.  I have a classic example where I used multicore.  In my tank game -- strategy hex-board game, I implemented a line-of-sight display for a large map 500x500 where it dynamically shows LOS under the cursor as you move the cursor around the map.  I did that multicored.  Basically, it sucks if you don't have shared memory, trust me!

>>20
My command-line prints graphics.

Name: Anonymous 2011-09-13 3:45

>>21
>My command-line prints graphics.
What I see is just a huge, complicated and clunky interface reminiscent of DOS "text window" managers (like the first MS Windows).  Not a command line.

Name: Anonymous 2011-09-13 3:52

>>22
I'm French.  LoseThos is nothing if not elegant.  The command-line feeds into a C compiler, so you don't have those silly stupid separate command-line languages like Unix.  Get a real language... and use it for everything.

Instead of a history, it's like the C-64 where you can move up from the bottom row of text.  There is a marker for the beginning and end of a user input so it can be multi-lined.  Further, it feeds into a compiler that has no limits on entry of code.

There is one document for command-line, source code, help system documentation, form/dialog boxes... everything and it has graphics and links.

Instead of command history, your menu key takes you to a macro-sheet where you place macros.  Macros can be activated with icons.

In short, it's the ultimate in elegance whereas Unix is an ugly command line for administrators.

Name: Anonymous 2011-09-13 7:16

It's funny because LoseThos games look like shit and run at abysmal speed while maxing 8 hardware threads. Meanwhile Unreal Tournament (released in 1999 and not using any fancy new SIMD extensions) renders a pretty 1024x768 picture at 32-bit color with the software renderer using a single thread at well over 60fps. Game over.

By the way, game physics (and most logic, really) are trivially parallelizable in the vast majority of cases. The approach most games seem to use is to double-buffer state and process all entities in parallel. This adds one "tick" of latency for entity-entity interactions; fortunately nobody gives a fuck in practice.

Also, it's funny to hear "it's unsafe" as an argument against C++. The C++ software I use never crashes, and given the debugging and memory checking tools we have today, I don't see why would it. Now what?

Name: Anonymous 2011-09-13 8:04

>>24
>The command-line feeds into a C compiler, so you don't have those silly stupid separate command-line languages like Unix. Get a real language... and use it for everything.
That's really innovative! It's not like various Lisp, Smalltalk and Forth systems did exactly the same thing 20~40 years ago!

Name: Anonymous 2011-09-13 8:27

>>25
He probably wrote that game in a weekend; UT was written and optimized by a team of top game devs over several years.

Name: Anonymous 2011-09-13 9:36

/prog/ is now /strawmen/

Name: Anonymous 2011-09-13 12:12

Has anyone created a Racket or Chicken interface?

I would love to see a SICP-oriented system, especially since we're on the verge of fucking 8-core desktops with 16 GB of RAM, yet there _still_ isn't a decent Lisp OS.

Name: Anonymous 2011-09-13 14:17

>>29
You're not allowed to use "decent" and "Lisp" in the same phrase.

Name: Anonymous 2011-09-13 15:06

>>25
>By the way, game physics (and most logic, really) are trivially parallelizable in the vast majority of cases.

but collision detection ruins everything in that respect.

>The C++ software I use never crashes

you have no standards then. You don't even know how bad it is.
