Programming Language to Replace C++

Name: Anonymous 2010-08-11 21:49

I think we can all agree that C++ is a terrible language. So why is it still around?

When talking to most C++ users (game developers, systems programmers), I've found that most seem to recognize C++'s faults, but they don't really care. They aren't even the slightest bit interested in a new language that might solve its problems, even one that gives them all the power of C++ with none of the downsides. You can't even get them to look at something new.

Why is that? Why does everyone just 'live with it' without wanting to improve the situation?

Name: Anonymous 2010-08-11 21:51

game developers, systems programmers
There's the problem right there. Neither of these two groups has any interest in finding and developing better tools.

Name: Anonymous 2010-08-11 22:08

Well that's depressing. I am both, and I can't stand C or C++, but there is almost no other choice if you want to write a 3D engine or embedded real-time system.

It just strikes me as odd because there are so many game developers and embedded developers, but all language advancements focus around web and application development (where they don't give a shit about GC pauses, and don't give you any raw control for "safety"), or totally academic pursuits (like Haskell, full of great ideas but nearly unusable for typical applications.)

The few new languages that are trying to replace C++ are trying too hard to cater to the C++ mentality, so they are basically band-aid versions of C++. D is a good example: massively complicated, semantically identical and syntactically nearly identical to C++. What's the point?

Name: Anonymous 2010-08-11 22:16

>>1,3
I find your position incomplete.  First off, refer to C and C++ individually, one at a time.  Secondly, perhaps you could spell out what you would rather see in the language you'd use to write your next 3D engine or embedded system -- what is too obfuscated, or simply not possible, in C or in C++?

Name: Anonymous 2010-08-11 22:18

There isn't much point when:
1) We're already familiar with it - faults and all - our systems are mostly complete and working in C++
2) It's well entrenched with libraries
3) newbies everywhere have a superficial understanding of C++

Why bother with exotic new languages when we have to take the effort to learn a new paradigm? If the new language is already similar to C++, why learn it when we could just stick with C++?

P.S. My favourite language is Haskell and esolang is Grass.

Name: Anonymous 2010-08-11 22:19

>>3
This may surprise you, but the Apache web server version 3 is being written in Haskell

Name: Anonymous 2010-08-11 22:43

>>2
That's not so true. No group, on the whole, is looking to replace its working solutions (programming languages included) by inventing better ones; game developers, however, have used everything from assembler to Lisp to Lua. It's simply a C++-heavy environment because performance and abstraction are equally important, given the need for industrial flash (usually in the form of 3D performance) and short development cycles.

Systems programmers have fewer excursions, but are more inventive. Take a look at BitC, Rust and Go. None of these are flavours of C, so I'll brace for the argument by foregone conclusion. Meanwhile, even if you need to build an OS kernel (with loader) from the ground up BitC intends to accommodate you. If you don't require that in your definition of systems language, the others will blip on the radar.

>>3
where they don't give a shit about GC pauses
Some of them do. Go at least pays lip service to this concern, but I don't know what the current state of their GC is. malloc also gives you GC pauses, by the way.

The advantage of doing it manually is that you will never call malloc while handling a real time event. You probably avoid it because malloc isn't reentrant or just because you were told not to, but the real advantage of manual GC is that you can control the timing, and a happy coincidence often prevents you from writing correct programs with bad timing. So if a language will allow you to suspend auto GC, there's no need to complain.

Name: Anonymous 2010-08-11 22:53

>>6
0/10

>>3
Exactly, GC isn't viable for people trying to get the most out of the hardware, not even incremental GC with real-time constraints. And yes, you can point out that old-school developers used to say the same thing about C or other higher-level languages... why use that when you can program straight in assembly language to get the most out of the hardware?

The thing is, C++ fills the void by providing the right mix of low-level features that let you program right down to the metal, along with higher-level constructs and abstractions that facilitate managing change in large-scale, million-plus-LoC projects.

You don't pay for what you don't use in C++ either. Don't want the overhead of exception handling and stack unwinding? You can turn that off at the compiler.

Want full control over memory management? You can overload the new and delete operators, or call object constructors and destructors explicitly to operate on a given region of memory--and you have the choice of using either the program heap (or a custom heap of your construction) or the program stack to allocate your objects from.
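The explicit construct/destroy point above can be sketched in a few lines: placement new builds an object in caller-supplied storage (stack-backed here), and the destructor is then invoked by hand, with no heap allocation at all. Type and function names are made up for illustration, and it's written with C++11's alignas for brevity.

```cpp
#include <new>  // placement new

// Illustrative sketch: construct and destroy an object in a region
// you control yourself -- no malloc, no program heap involved.
struct Particle {
    float x, y;
    Particle(float px, float py) : x(px), y(py) {}
};

inline float demo_placement() {
    // Stack-backed raw storage, suitably aligned for a Particle.
    alignas(Particle) unsigned char region[sizeof(Particle)];
    Particle* p = new (region) Particle(1.0f, 2.0f);  // placement new
    float sum = p->x + p->y;
    p->~Particle();  // explicit destructor call, as the post describes
    return sum;
}
```

The same placement-new call works against any region -- a custom heap, a memory-mapped buffer -- which is exactly the "choice of where your objects come from" being claimed.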

Want to go all out with multi-threading? Not a problem, you can drop down to assembly language to access your hardware's atomic CAS instructions, or build a fully NUMA aware scheduling engine to drive your thread pools coupled with a task dependency graph.
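The atomic CAS the post alludes to looks like this in portable form (std::atomic is C++11's wrapper; the poster means dropping all the way to the raw instruction, e.g. CMPXCHG on x86, but the loop shape is the same):

```cpp
#include <atomic>

// Classic CAS retry loop: attempt to install expected+1, and retry
// whenever another thread changed the value between our load and
// our compare-and-swap.
inline int cas_increment(std::atomic<int>& value) {
    int expected = value.load();
    // On failure, compare_exchange_weak reloads 'expected' with the
    // value actually seen, so the loop converges.
    while (!value.compare_exchange_weak(expected, expected + 1)) {
    }
    return expected + 1;  // the value we installed
}
```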

Yeah, the language has its warts, but nothing else comes close.

Name: Anonymous 2010-08-11 23:02

>>7
malloc is generally reentrant in modern operating systems and C standard library implementations. And the pause you get with malloc isn't a "GC pause"; it's the cost of acquiring a global lock and searching for a block of memory large enough to accommodate the allocation. And generally this cost is fixed for small allocations under a given size, as things like pool allocators are employed.

Perhaps you should read about how malloc is implemented before comparing it to GC.

http://en.wikipedia.org/wiki/Malloc#Implementations

Name: Anonymous 2010-08-11 23:25

>>9
malloc is GC; just because it's not automatic doesn't mean it's not GC. And yes, modern implementations minimize the pause, same with auto GC. Note that even if malloc itself is made reentrant, that does compromise portability. Regardless, it has no bearing on the main point: keeping it outside of realtime events keeps things GC-pause free.

Name: Anonymous 2010-08-11 23:37

>>10
>malloc is GC
I absolutely dare you to post that statement on say the LinuX Kernel developer's mailing list. You'll be the laughing stock of everyone for the neXt 10 minutes until everyone forgets you ever eXisted.

It is one of the dumbest things I've ever heard a so called programmer claim.

You obviously grew up thinking GC is the end-all-be-all of memory management, and you attempt to shoe-horn the idea of memory management under the paradigm of GC, because that's all you know.

Also, my 'p' and 'X' keys just stopped working, I've had to copy and paste them. Going to restart now, but I fear my keyboard may be on its way out.

Name: Anonymous 2010-08-11 23:45

>>11
Restarting did nothing, Im fucked, my aostrohe key doesnt work either. Time for a new keyboard that doesnt suck anus.

Name: Anonymous 2010-08-12 0:02

>>12
Get a Filco or a HHKB, they make typing fun.  Sort of.  Well, you get a big red escape key with them, so at least vim is fun.

>>1
The same reason people didn't migrate to Plan 9, the same reason the whole internet runs on Flash, the same reason we're still on IPv4 and the same reason I won't hire a professional to clean up the 7+ gallons of vomit staining my carpet: because the existing solution is just barely good enough, and it's a gigantic pain to create something new, migrate to it, convince others to migrate to it and then convert all the old shit to new code.

Some of those causes can be attributed to laziness, but in some cases there is something of a point: have you seen the source code for some of the bigger C++ programs out there?  It's like people have built a gigantic upside-down pyramid out of badly aligned, dry-rotted wooden boards and rusty nails, and they're holding it together with a combination of duct tape and a few puny supports.  Every time they hit compile they know there's a 50% chance the whole thing will come crashing down, but until it's totally gone, it's still less work and lower cost compared to rewriting from scratch.

Large companies pushing their interests in the language don't help, either.  Microsoft has a lot invested in DirectX and VC++, which means they push neophytes towards the language.

Name: Anonymous 2010-08-12 0:31

>>11
I absolutely dare you to post that statement on say the LinuX Kernel developer's mailing list.
That is a strange appeal to authority. If they were to respond on the topic it's a symptom of something bad. Do you disagree?

My points stand even if I concede on terminology, which I'm not inclined to do, but you just want to shit everything up with terminology. You don't really have a point to make outside of terminology or you would have made it. >>9 states malloc doesn't have a GC pause, arguing that it's doing certain other things instead. It happens that those things turn out to be typical GC things (ironically, these things are more typical in the expressly automatic case, because malloc isn't required to conform to the particular strategy characterized.)

You obviously grew up thinking GC is the end-all-be-all of memory management,
Nothing of the sort. I like using C for the reasons I've already brought up earlier with malloc. Why is it all or nothing, us vs. them with you? You don't have to suffer from cognitive dissonance on every single decision you make, you know.

Name: Anonymous 2010-08-12 0:40

>>8
That is a complete fucking farce, and if you knew C++ you'd recognize it immediately. If your goal were to obscure as much as possible what's actually going on, you could hardly design C++ any differently, except possibly by allowing more operator overloading.

C++ scales marginally better than C. Enough to make it worthwhile in some areas. It gained its foothold precisely because of its relationship to C. Without that, it would have never, ever, ever, in a million years, ever have gotten off the ground.

It is a marginal solution at best and needs to fucking die.

Name: Anonymous 2010-08-12 0:57

This thread does not look like a /prog/ thread. Maybe it's the lack of HAXING OF ANII

Name: Anonymous 2010-08-12 1:45

game developers, systems programmers
I was a game developer before my current job, which is system programming, oddly enough.

Game developers and especially systems programmers are actually perfectly happy with C.  The only real draw to C++ is the convenience of all the libraries available for it -- mainly STL and Boost.  And the reason we are willing to tolerate C++ to get those libraries is that C++ is "good enough" performance-wise, because it doesn't fuck around with garbage collection or some virtual machine bullshit, and has been around long enough to be made efficient.

Name: Anonymous 2010-08-12 6:03

>>17
Convenience?  I'd rather call them distractions that cause you to spend a lot more time making them do the things you want them to do.

Name: Anonymous 2010-08-12 9:41

>>5
Why bother with exotic new languages when we have to take the effort to learn a new paradigm? If the new language is already similar to C++, why learn it when we could just stick with C++?
Why bother? Well the most obvious advantage is massively reduced development time.

There is a tremendous amount of boilerplate and bookkeeping to do in C++; code is at least 10 times as long as it should be, and has to be laid out illogically due to the mess of header files and templates. Compilation times are abysmal, and there is nothing remotely resembling interactive programming or a REPL. Debugging is extremely time-consuming and compiler errors are almost comically unreadable. There are no significant tools to aid in refactoring or static analysis because the language is impossible to parse.

A language without these flaws could drop the cost of embedded development and 3D middleware/engine development to 1/10th of what it costs now.

Google understands this, and this is pretty much what they are trying to accomplish with Go. The problem is it is tailored to Google, so it is useless for realtime. They call it a "systems language"; they are lying.

>>12
You probably spilled something in it. Get a spill proof one. They're like 20 bucks. Don't listen to >>13; you should be using caps lock for escape anyway.

>>13
it's a gigantic pain to create something new, migrate to it, convince others to migrate to it and then convert all the old shit to new code.
See, this is why it's even more surprising to me that game developers are set in their ways: game engine code has an extremely short shelf life. It all gets thrown away after just a few years because new technology means a whole new rendering engine, physics engine, etc. needs to be written.

Just look at id as a prime example. John Carmack is now starting a new FPS engine from scratch for the 7th time (id tech 6). Rage is not even out yet and the engine is already essentially in maintenance phase. Remember, id tech 6 will have identical gameplay to all the others, except this one uses raytracing and voxels. The last one has streaming megatextures and HDR. The one before it had per-pixel lighting and shaders. The one before that, hardware rendering.

>>4,17
I've mainly been talking about C++ because that's what almost all game development studios use. C shares many of its flaws; compared to modern languages they have a lot of similarities.

Name: Anonymous 2010-08-12 9:44

>>19
>The problem is it is tailored to Google, so it is useless for realtime. They call it a "systems language"; they are lying.

Would you kindly elaborate on this please?

Name: Anonymous 2010-08-12 9:54

D in say 5 years.

Name: Anonymous 2010-08-12 10:16

>>20
Web shit.

Name: Anonymous 2010-08-12 10:27

>>20
Certainly.

First off, it has forced garbage collection. Not only that, but the garbage collector is *terrible*. Not only THAT, but it's not possible to write a good garbage collector for Go because of the language's semantics. They allow "unsafe" raw pointers (in a futile attempt to woo C/C++ developers), which means the collector can't make bindings indirect, and so it can't move memory. That means you can't make a generational copying collector. At least Java forbids raw pointers; its collector is excellent. Moreover, because Java is on a VM, it can perform escape analysis across library boundaries, transforming many allocations into stack allocations. If I'm forced to use GC, I'll take the JVM any day.

Secondly, it gives preferential syntax to its built-in data structures. Sure you can write your own maps, but they won't be anything like the built-in one. This is a horrible design decision. One of the most important and fundamental code optimizations is the ability to write your own data structures. There are a thousand different ways of writing a map, ranging from a red-black or AVL tree to a hash-table, from purely functional to non-atomic mutating, and everything in between. You need to be able to choose based on the problem at hand. This is one thing that C++ does absolutely right; you can just change one typedef and bang, different map implementation. At least in C, there is no syntax for any data structure (aside from arrays), so it's a trivial search-and-replace or changing a couple #defines to transform code that uses one map into another. Good luck trying out a different map from the built-in one in those twenty thousand lines of Go. They literally lock you in; you need to either eschew the built-ins from the start, or live with them forever.
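The "change one typedef" point reads like this in C++ (all names here are made up for illustration): call sites are written against the alias, so switching from a hash table to std::map's red-black tree is a one-line edit.

```cpp
#include <string>
#include <unordered_map>
// Alternative, with no other changes anywhere:
//   #include <map>
//   typedef std::map<std::string, int> Counts;   // ordered tree instead

typedef std::unordered_map<std::string, int> Counts;

// Every call site depends only on the 'Counts' alias.
inline int count_word(Counts& counts, const std::string& word) {
    return ++counts[word];  // operator[] default-constructs 0 on first use
}
```

This is exactly the flexibility the post says Go's built-in map forecloses: there, the literal/index syntax belongs to the built-in alone.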

Third of all, Go has no meta-programming or preprocessor. Conditional compilation is an absolute necessity for embedded development, and is extremely useful for debugging purposes. Compile-time code generation should be a fundamental part of any modern language; since they obviously don't care for a template system, even simple substitution macros are a lot better than nothing. Google's tunnel-vision as server developers is most extremely obvious from this point. They are completely blind to the use cases.

That's all I can think of off the top of my head just now. I'm sure there are many more reasons. Go just does not feel right to me; it does not feel like a systems language. That is pure marketing BS in a futile effort to woo C/C++ developers. It is a server applications language, nothing more.

Name: Anonymous 2010-08-12 10:34

>>21
There is no way. D is *massive*. It's even more complicated than C++, if you can believe such a thing; it just has far fewer gotchas and idiosyncrasies. It's just a little bit more consistent. Almost no one actually wants to use it because it really is pointless.

There is no need for such a huge feature list. I've heard it said that the reason D is so complicated is because it's written by someone who likes to implement compiler features. Walter Bright basically just couldn't stop himself; he felt like he needed to replicate every single C++ feature with identical semantics in three different ways, otherwise no one would take D seriously.

I honestly think that in 5 years, D and the C++ standard that follows C++0x (which will be well into the standardization process by then, and which is supposed to feature garbage collection) will be essentially the exact same language.

Name: Anonymous 2010-08-12 10:54

In C, the declaration

    int* a, b

Stopped reading right there. Bad habits should be pruned, not facilitated.
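For anyone who missed the gotcha being sneered at: in a C (or C++) declaration the `*` binds to each declarator, not to the type, so only the first name is a pointer. A compile-time check makes it concrete:

```cpp
#include <type_traits>

int* a, b;  // reads as int (*a), (b): 'a' is int*, but 'b' is plain int

// Compile-time confirmation of the gotcha.
static_assert(std::is_same<decltype(a), int*>::value, "a is a pointer");
static_assert(std::is_same<decltype(b), int>::value,  "b is just an int");
```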

Name: Anonymous 2010-08-12 11:26

GO will replace Cpp. http://golang.org/

Because (1) it can be run interpreted or compiled, so development is fast and compiled code is even faster; (2) it gets rid of most flaws of Cpp yet still is sufficiently similar to Cpp; and (3) it is developed and will be pushed by Google.

Name: Anonymous 2010-08-12 11:36

First off, it has forced garbage collection. Not only that, but the garbage collector is *terrible*. [...] If I'm forced to use GC, I'll take the JVM any day.
For a bit of perspective, this is a bad argument against Go, even the elided portions. You say it can't be fixed and mention you'd prefer Java, but a strong argument can be made that Go was developed with Android (among other things) in mind, because of the troubles with the JVM -- it does offer a safe alternative, and unsafe pointers were a necessary part of that. It is also intended to replace Python in-house (it's a shame they didn't consider an extension language instead, and treat Android separately.)

Sure, Go isn't any more suitable for your purposes, but in the grand scheme it fills a particular need.

Secondly, it gives preferential syntax to its built-in data structures.
This and other related decisions are pretty bad. I lost interest in programming in Go after bumping into this kind of problem way too many times. I'd still consider it for some things, but in general I wouldn't recommend it.

Third of all, Go has no meta-programming or preprocessor. Conditional compilation is an absolute necessity for embedded development, and is extremely useful for debugging purposes.
I'm glad they tried to avoid the need for this, but I'm sad they consider themselves successful.

Name: Anonymous 2010-08-12 12:10

>>23
They allow "unsafe" raw pointers (in a futile attempt to woo C/C++ developers), which means the collector can't make bindings indirect, and so it can't move memory.

First of all, while indirect bindings were all the rage in 1970, no one uses this approach anymore; you might want to refresh your knowledge of modern GC implementation techniques.

Then, the problem with pointers remains, but might be eased depending on the exact wording in the language specification. If they follow C in declaring that pointer arithmetic is defined only within the limits of one allocation unit (a single structure or an array in automatic storage, or a memory block returned by malloc) and additionally forbid reinterpret casts from pointers to integers and vice versa, then they can move memory all the way they want at small constant price in performance.

Another approach would be to follow C#, which also provides raw pointers, but to obtain a pointer to a managed object you must use the ``fixed'' construct which makes the object temporarily pinned, and also is lexically scoped which prevents leaking pinned memory.

Name: Anonymous 2010-08-12 12:32

GC MY MEMORY MAPPED ADDRESSES

Name: Anonymous 2010-08-12 14:02

>>26
GO will replace Cpp.
On your hard drive maybe.

Name: Anonymous 2010-08-12 14:40

>>30
dohoho!

Name: Anonymous 2010-08-12 17:19

>>29
It is also intended to replace Python in-house
[citation needed]. I call bullshit on this, for a lot of reasons, but probably the most important is that Go is a whole lot less powerful and more limited than Python. It is not suitable as a replacement. If they want something as powerful they need a dialect of Lisp, or Ruby, or a standalone JavaScript, or something. They could have just pulled V8 out of the browser, scrapped the shitty API, and started with that instead.

Another big reason is that there is absolutely no point to switching away from Python. If they are trying to make it fast, they could just switch to a different interpreter like PyPy, or write their own from scratch. If they really want micro-threading, why wouldn't they just switch to Stackless?

The reason they have not done this is because they have a massive volume of C code locked into CPython's ABI. This is why they are trying so hard to make CPython fast with Unladen Swallow (the most underachieving optimization project in history.)

>>28
you might want to refresh your knowledge of modern GC implementation techniques.
This is almost certainly true; I stopped caring about it a while ago because Java is just as bad as the JVM is good. The JVM is also (currently) ill-equipped for the dynamic languages that are being built on it, like Clojure or Jython. Seemed like a whole lot of wasted effort.

Another approach would be to follow C#, which also provides raw pointers, but to obtain a pointer to a managed object you must use the ``fixed'' construct which makes the object temporarily pinned, and also is lexically scoped which prevents leaking pinned memory.
That would be the ideal way to do it. Unfortunately, unless they want to make a major backwards-incompatible change, it's too late to do this.

I'm still not convinced that it's even possible to make a copying collector for Go. It just allows too much unsafe shit. Even if you could, the developers have *zero* interest in doing it; they just say "we hope to have a fast collector someday", as though they expect it to magically pop out of the ether. The current collector does not even retain object layout information, so it can mistake regular ints for pointers and improperly retain memory. Boehm's has built-in facilities to supply layout information! Why are they not using it!?

Bleh. I did not want to derail this thread into a discussion about Go.

Name: Anonymous 2010-08-12 17:51

>>32
If they want something as powerful they need a dialect of Lisp
( ≖‿≖)

Name: Anonymous 2010-08-12 18:05

Didn't you guys get the memo?

Object Pascal is going to replace C++ within a couple of years

Name: Anonymous 2010-08-12 18:38

>>32
[citation needed]
This comes from the Go team. It's strange, the Android angle is the one I thought I'd have to argue over if anything. When Go was presented it was on the heels of the edict against using Python at Google. The Go team had presented Go, explicitly, as an alternative to Python and it was the only actual concrete justification made of Go's existence that I am aware of. What they hadn't detailed was an Android strategy, but supporting ARM on day 1 makes it a bit obvious, not to mention no one ever being contradicted by the Go team when they considered this aspect in conversations the team participated in.

Anyway, if you want a citation I believe it was mentioned in the webcasts. If not, I would search the mailing list or maybe see if there are FreeNode/#go irc logs somewhere. There's also this from the Go blog (http://blog.golang.org/2010/05/go-at-io-frequently-asked-questions.html), which doesn't mention Python, but does mention that they're actually using it:
Go is ready and stable now. We are pleased to report that Google is using Go for some production systems, and they are performing well.

As for your questions about the unsuitability of Python, I would direct you to the Go mailing list. There is a lot of substance there on the matter, especially in the older posts. I don't remember most of it, but your objections sound all too familiar.

I don't think Go's concurrency is all that great either, by the way, Go does not avoid any of the gotchas present in the use of standard coroutines, nor does it provide any power beyond them (the use of channels notwithstanding.) Go does do a great job of making the gotchas seem like they're not there, which is very astonishing to the person who learns concurrency from Go.

[Stuff about GC.]
I appreciate your pessimism, but you have a habit of saying "can't" when you seem to mean "won't." Most of the problems are solved in one way or another, but like you say, not in Go--even when suitable implementations are right there.

The optimist would point out that they're researching a new collector, based on an existing collector which is covered by patents.

Bleh. I did not want to derail this thread into a discussion about Go.
It's no worse than the thread's premise: bitch about C++. It's more interesting to bitch about Go. Besides, by not taking C++ seriously enough we're giving it an even harder time.

If you want a serious thread which is on topic I challenge you to present a suitable replacement systems language, or a description thereof. Anything else is just complaining that C or C++ sucks, or is good enough or bitching about how slow Python is(n't). The Go aspect is at least a conversation that hasn't already been had (and done to death) years ago, unlike all of the others you will find in this thread.

Name: Anonymous 2010-08-12 18:43

>>24
C++0x doesn't have GC, and it'll be standardized by October 2011. The draft has already been finalized and has been voted on. They're just performing "bug fixes" on the specification now to remove ambiguities.

Name: Anonymous 2010-08-12 19:09

and it'll be standardized by October 2011
It's cute that you actually think that

Name: Anonymous 2010-08-12 20:48

>>36
C++0x doesn't have GC
That's why I said the standard *AFTER* C++0x. It will have GC, and will also have things like proper module support (so we can finally get rid of header files), scoped macros (which will probably end up extremely similar to mixins from D), and more. I am telling you, these two languages are converging. In ten years they will be semantically nearly identical. (I also think this is why D will never gain any traction; its useful features will just be added to C++, and people will put up with C++'s warts for compatibility with existing libraries.)

>>35
The optimist would point out that they're researching a new collector, based on an existing collector which is covered by patents.
I'll believe it when I see it. Maybe I'm too pessimistic, but Go has already made enough horrible design decisions to make me extremely skeptical of such promises.

It's no worse than the thread's premise: bitch about C++. [...] I challenge you to present a suitable replacement systems language, or a description thereof. Anything else is just complaining that C or C++ sucks
That's not what this thread is about at all. I'm not here to point out C++'s faults; the very first sentence in this thread shows that I'm going on the assumption that we already all know its faults.

This thread is about why no one cares to replace it. Essentially no language research is being done in this field. The only language I know of that even attempts at being a replacement is D (we've already established that Go is a lie), but D does not solve C++'s biggest problems, such as verbosity and development speed. D is just a band-aid; it's far too similar to C++.

I challenge you to present a suitable replacement systems language, or a description thereof

Again, this isn't really what this thread is about, but I'll try to talk a bit about what features I would like.

First off, I would think a replacement language could start with full type inference. I would like to just leave out types entirely, with code looking something like JavaScript. Type annotations could be added where desired with a simple predicate, like @int. That alone would be the biggest step towards cleaning up some of this verbosity and development speed. While we're at it, I'd like implicit type parametrization. I shouldn't have to declare that something is a template. If I leave off type annotations, it's a template. I could also do with struct and enum inference. I'll just declare a struct and leave it empty; treat it as a hashtable in the meantime. Python does this: you use fields on an object as needed, and when you want to make it fast, you declare slots.

Next, the language should make references more implicit. A lot of time in C/C++ is spent dealing with addresses, pointers, references, etc. Just look at how thick C++ code is with ampersands and asterisks. This could be automated in an obvious fashion, but still available when I need to override the obvious defaults. I think C# does this well: you have value types and object types which behave in a straightforward manner, but the ref keyword lets you make references of value types when needed.

It should also make things like RAII simpler and more obvious. I shouldn't need to write a destructor for *anything* unless I need to close a system resource, like a file handle or socket; instead I should just annotate which pointers are "smart pointers", i.e. owned and freed via RAII, and the compiler fills everything in. C++ can sort of fill it in for you if you use only smart pointers everywhere, but smart pointer syntax is horrible, and it has problems such as automatically inlining implicit destructors. That's very bad. I don't really know to what extent D can solve this, but I see a whole lot of destructors in D code.
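What that paragraph asks for is close to what C++0x's unique_ptr already approximates, for all its syntactic weight: the "owned" annotation is the member's type, and the compiler-generated destructor does the freeing, so no destructor is written. Type names below are made up for illustration.

```cpp
#include <memory>

struct VertexBuffer { int bytes; };

// No hand-written ~Mesh(): the implicitly generated destructor
// releases the buffer through unique_ptr. The ownership annotation
// *is* the member's declared type.
struct Mesh {
    std::unique_ptr<VertexBuffer> vertices;
};

inline bool build_and_drop() {
    Mesh m;
    m.vertices.reset(new VertexBuffer{128});
    return m.vertices->bytes == 128;
}  // m goes out of scope here; the buffer is freed automatically
```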

I would like garbage collection to be used as the default, and once in the optimization stages, I'd like to be able to transform a GC'd program into a manually managed program just by reasoning about object lifetimes and adding appropriate type annotations. D is the only language I've ever heard of that attempts this: you declare variables with the scope keyword and the memory lives there, instantly RAII. This is somewhat limited however; there aren't smart pointers, so it has to be a stack or field allocation to be RAII.

I'd like a much more powerful type system for proving things at compile time. For instance, why the FUCK do all these languages still have null pointers? They should not have null at all; instead, they should have tagged unions. If you want a variable to possibly be null, you make it a tagged union with the null type; the compiler then forces you to check for null wherever you dereference it. There is no overhead for this; it compiles to the same damn machine code, so there is no reason not to do this. Why can't I statically prove other things about code, like that memory is never leaked? This could be done easily in C++ for example if new returned a unique_ptr<>, and did not ever let you reset a unique_ptr<> or create one from a raw pointer. All pointers would then be trapped inside exactly one unique_ptr<>, forcing them to be cleaned up. These are about the simplest possible improvements to the type system; you can go a whole lot farther than this.
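The tagged-union point can even be hand-rolled in C++0x terms (all names hypothetical): the only path to the value is a branch on the tag, and the representation is still a single pointer, so the generated code matches a manual null check, as the post claims.

```cpp
// Hypothetical sketch of "nullable is a tagged union": there is no
// way to reach the value except through the branch, and the layout
// is one pointer -- null itself doubles as the tag.
template <typename T>
struct Nullable {
    T* ptr;

    // Callers must supply both arms; dereferencing without checking
    // simply has no spelling in this interface.
    template <typename Some, typename None>
    auto match(Some some, None none) -> decltype(none()) {
        return ptr ? some(*ptr) : none();
    }
};
```

A caller writes `n.match([](int v){ return v; }, []{ return -1; })`, and the compiler enforces that the null case is handled at every use site.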

When I compare my productivity and code quality between developing in Python and C++, these are the most obvious issues that cause such a massive discrepancy. I'm sure I could think of more.

Name: Anonymous 2010-08-12 22:40

>>38
I'll believe it when I see it. Maybe I'm too pessimistic, ...
Oh, I don't blame you. I think they still intend to make good on that, and I believe it's completely possible for them to do so. I don't have perfect faith that they will actually see it through, though.

That's not what this thread is about at all. I'm not here to point out C++'s faults;
Yes, but about the road to hell, again:
I challenge you to present a suitable replacement systems language, or a description thereof. Anything else is just complaining that C or C++ sucks
In a vacuum you'll only get the complaints. Barring some very light commenting on C++0x (not a great start), Go is the only thing discussed that hasn't already been done to death.

full type inference
implicit type parametrization

Have you looked at Clay? http://tachyon.in/clay/ I wish I knew more about it, but there are (still) no docs yet. The tagline is "Efficient, concise, generic - pick any three." Doesn't look half bad considering.

the language should make references more implicit
I disagree here. I will admit to being very C-minded and this is probably a symptom of that. If you already have by-ref and by-value types then it's no big deal, but switching over to that model for no other reason is probably not great. I'll pretend you said something about safety and I'll drop it, deal?

I'd like to be able to transform a GC'd program into a manually managed program just by reasoning about object lifetimes and adding appropriate type annotations
I'm not clear on this. Do you mean the compiler should perform analysis and insert malloc/free or equivalent calls in your code automatically?

About null pointers, I'm not quite convinced on the performance comment. If you can prove certain things you can eliminate the need to perform checks in certain cases, however. This is typical in manually checked pointers (where 'proof' is on account of "I'm the programmer and I reckon it"), but if you want to meet that with automatically checked pointers, you need to prove it automatically. Again: safety. Then again: systems.

Why can't I statically prove other things about code, like that memory is never leaked? ...
That's a tall order. Your solution doesn't add up to me: doesn't that work at run time and require a trace? How would that be done statically when the problem is non-deterministic in many (most?) running programs?

Name: Anonymous 2010-08-13 3:06

>>39
I've looked briefly at Clay; I did think about it when posting that. It is very interesting but it has some problems. The syntax for some things is terrible; postfix ^ is used for dereferencing, so foo^.bar is for accessing struct fields. The fact that it primarily uses statements instead of expressions, e.g. no conditional or comma operators, is kind of a throwback to Fortran and COBOL. This makes it verbose and difficult for functional/concurrent programming. And I really would like a GC during experimental development, as long as I can pull it out later.

Basically it doesn't look fun. It looks like Fortran without type declarations. I will probably try writing a serious program in it at some point soon to get a feel for how it works.

I disagree here. I will admit to being very C-minded and this is probably a symptom of that. If you already have by-ref and by-value types then it's no big deal, but switching over to that model for no other reason is probably not great. I'll pretend you said something about safety and I'll drop it, deal?
No deal, I actually do want to talk about it! I will fully admit that this is the most controversial of my points. I'm really not 100% behind it myself; mostly I just want to believe that it's possible to do it without sacrificing anything.

I just find that a lot of my time in C and C++ is spent not just writing syntax for pointers, but *thinking* about it. If I take a pointer to a struct as a function argument, I refer to its fields through dereferencing (->). But if I instead declare this struct on the stack, now I access it with regular member syntax (.). Now why exactly does it matter where this object lives? Why do I need to think about where it lives when I access it? Why isn't its location in memory a mere part of its declaration, nothing more?
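The irritation in one screenful (trivial example, made-up names): three functions computing the same thing, with the spelling of the field access dictated by the storage decision rather than by anything meaningful.

```cpp
#include <cassert>

struct Vec { float x, y; };

// Same struct, same fields, three spellings -- purely because of how
// the object happens to be passed.
float sum_by_value(Vec v)        { return v.x + v.y; }   // copy: dot
float sum_by_ptr(const Vec* v)   { return v->x + v->y; } // pointer: arrow
float sum_by_ref(const Vec& v)   { return v.x + v.y; }   // reference: dot again
```

C++ references already show that the arrow is unnecessary; the argument here is just to finish the job and make location a property of the declaration only.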

Many languages *partially* solve this in a variety of ways. For instance in Python, everything is a binding to an object. Writing "x += 5" creates a new object which is the sum of x and 5, and then binds x to the new object. It does not actually modify any objects though, because the number objects themselves are immutable. This happy restriction means the compiler is free to optimize it by copying the numbers directly by value instead of allocating them from the heap. (I'm not sure if CPython actually does this, but CPython sucks. Haskell probably does this.) The unfortunate downside is that there is no way to have a value modified by reference; single-element lists are a common workaround in Python.

Scheme is similar in that everything is a binding, except that numbers are not immutable. So you can modify them by reference. However, someone might set! on your int, so you have to always pass it as a reference. Someone might keep the reference expecting it to be modified later, so you have to allocate it from the heap and let it be garbage collected. The only way to optimize this into by-value semantics is through whole-program analysis in order to prove that the value is not changed or retained.

Clearly I'm not the only one who thinks pointer syntax is a pain. Just look at Apple's libraries like CoreFoundation. For most objects, stack allocations are banned (because structs are hidden), and all types have a typedef'd pointer with "Ref" on the end, which you only get through Create and Release methods. Thus it is always a reference. A few types are value types (raw structs), such as CGRect. These are *never* passed as a pointer; they are always copied by value. I don't think I've ever seen a single *, &, or -> when dealing with Apple's C code. They are essentially emulating the syntax-free semantic distinction between by-value and by-reference types.
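An imitation of that pattern, with invented names (`FooRef`, `FooCreate`, etc. are not real CoreFoundation API, just the same shape): the struct is hidden behind a `Ref` typedef, so all the pointer syntax lives inside the library and callers never write `*`, `&`, or `->`.

```cpp
#include <cassert>
#include <cstdlib>

// Hidden struct: client code never sees its layout, so stack allocation
// is impossible and "Ref" is the only handle.
struct OpaqueFoo { int value; };
typedef struct OpaqueFoo* FooRef;

FooRef FooCreate(int v) {
    FooRef f = (FooRef)std::malloc(sizeof(struct OpaqueFoo));
    f->value = v;                    // arrow syntax confined to the library
    return f;
}
int  FooGetValue(FooRef f) { return f->value; }
void FooRelease(FooRef f)  { std::free(f); }
```

Client code then reads like a reference-typed language: `FooRef f = FooCreate(42); FooGetValue(f); FooRelease(f);` with no visible pointer syntax anywhere.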

If Apple gets rid of pointer syntax in C with some typedefs, clearly it's worth it to do so as a fundamental part of any new language. Isn't it?

I'm not clear on this. Do you mean the compiler should perform analysis and insert malloc/free or equivalent calls in your code automatically?

No, I mean specifying the memory location of objects through annotations. For example, the scope keyword in D means the object should be allocated directly in the current struct or stack frame. You can just add this to a variable and the object is "scoped". Or you can make a class manually memory managed by overriding operator new and operator delete. The point is that you do not need to significantly change code in order to remove the garbage collector; it is not as though you are rewriting the program, just annotating it. Unfortunately this is very limited in D. It's hard to allow an object to be allocated and constructed from different allocators. There are no smart pointers to clean up this sort of special memory with RAII.
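The C++ version of that "annotate, don't rewrite" idea looks roughly like this (the pool is faked with a counter for illustration): switching a class from the default allocator to a custom one touches only the class definition, not any of the code that uses it.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Stand-in for a pool allocator's bookkeeping; a real version would
// carve Particles out of a preallocated arena.
static int live_particles = 0;

struct Particle {
    float x, y;
    static void* operator new(std::size_t n) {
        ++live_particles;
        return std::malloc(n);
    }
    static void operator delete(void* p) {
        --live_particles;
        std::free(p);
    }
};
```

Every `new Particle` in the program now goes through the custom allocator without being rewritten, which is the annotation-style workflow being asked for, minus the GC-by-default starting point.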

About null pointers, I'm not quite convinced on the performance comment. If you can prove certain things you can eliminate the need to perform checks in certain cases, however.

The idea is that 99% of the time, you would need to do the check for nil anyway (otherwise why would it be a union with nil?), so you're not losing performance. You also aren't losing space; any decent compiler of course handles a tagged union between a reference and the nil type just by using the zero address for nil. If you know for a fact that a possibly-nil pointer is actually not nil, but the compiler is unable to verify it, it could supply a simple construct like __assume(x != nil). The MSVC compiler actually already supports this in C++ for optimization purposes, so it's not much of a stretch.

That's a tall order.
Doesn't seem that tall. What are the holes in my reasoning? Couldn't returning safe unique_ptrs from new solve this problem, barring reinterpret_casts and such?

In fact, I will try it. The next C++ project I do, I hereby resolve to turn on sepplesox, and exclusively use a wrapper to new for allocation which returns unique_ptrs (and if possible, I will disable reset() and all of unique_ptrs' constructors, or perhaps write my own version of it.) We'll see how that turns out.
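For the record, the wrapper I have in mind is about three lines (`make_owned` is my own name for it; this same shape later appears in the standard library, but I'm not assuming that here):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Every allocation funnels through this, so a raw owning pointer never
// escapes: the new'd pointer is trapped in exactly one unique_ptr.
template <typename T, typename... Args>
std::unique_ptr<T> make_owned(Args&&... args) {
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```

Then the house rule is simply "never write `new` outside this function," which is at least greppable, even if the compiler won't enforce it for you.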
