
Programming Language to Replace C++

Name: Anonymous 2010-08-11 21:49

I think we can all agree that C++ is a terrible language. So why is it still around?

When talking to most C++ users (game developers, systems programmers), I've found that most seem to recognize C++'s faults, but they don't really care. They aren't even the slightest bit interested in a new language that might solve its problems, even one that gives them all the power of C++ with none of the downsides. You can't even get them to look at something new.

Why is that? Why does everyone just 'live with it' without wanting to improve the situation?

Name: Anonymous 2010-08-11 21:51

>game developers, systems programmers
There's the problem right there. Neither of these two groups has any interest in finding and developing better tools.

Name: Anonymous 2010-08-11 22:08

Well that's depressing. I am both, and I can't stand C or C++, but there is almost no other choice if you want to write a 3D engine or embedded real-time system.

It just strikes me as odd because there are so many game developers and embedded developers, but all language advancements focus around web and application development (where they don't give a shit about GC pauses, and don't give you any raw control for "safety"), or totally academic pursuits (like Haskell, full of great ideas but nearly unusable for typical applications.)

The few new languages that are trying to replace C++ are trying too hard to cater to the C++ mentality, so they are basically band-aid versions of C++. D is a good example: massively complicated, semantically identical and syntactically nearly identical to C++. What's the point?

Name: Anonymous 2010-08-11 22:16

>>1,3
I find your position incomplete. First, treat C and C++ individually rather than lumping them together. Second, perhaps you can spell out what you'd want in the language for your next 3D engine or embedded system that is too obfuscated or outright impossible in C or in C++.

Name: Anonymous 2010-08-11 22:18

There isn't much point when:
1) We're already familiar with it - faults and all - our systems are mostly complete and working in C++
2) It's well entrenched with libraries
3) newbies everywhere have a superficial understanding of C++

Why bother with exotic new languages when we'd have to put in the effort to learn a new paradigm? And if the new language is already similar to C++, why learn it when we could just stick with C++?

P.S. My favourite language is Haskell and esolang is Grass.

Name: Anonymous 2010-08-11 22:19

>>3
This may surprise you, but the Apache web server version 3 is being written in Haskell.

Name: Anonymous 2010-08-11 22:43

>>2
That's not so true. No group, on the whole, is looking to replace its working solutions (programming languages included) by inventing better ones; game developers, however, have used everything from assembler to Lisp to Lua. It's simply a C++-heavy environment because performance and abstraction are equally important, given the need for industrial flash (usually in the form of 3D performance) and short development cycles.

Systems programmers have fewer excursions, but are more inventive. Take a look at BitC, Rust and Go. None of these are flavours of C, so I'll brace for the argument by foregone conclusion. Meanwhile, even if you need to build an OS kernel (with loader) from the ground up BitC intends to accommodate you. If you don't require that in your definition of systems language, the others will blip on the radar.

>>3
>where they don't give a shit about GC pauses
Some of them do. Go at least pays lip service to this concern, but I don't know what the current state of their GC is. malloc also gives you GC pauses, by the way.

The advantage of doing it manually is that you will never call malloc while handling a real time event. You probably avoid it because malloc isn't reentrant or just because you were told not to, but the real advantage of manual GC is that you can control the timing, and a happy coincidence often prevents you from writing correct programs with bad timing. So if a language will allow you to suspend auto GC, there's no need to complain.

Name: Anonymous 2010-08-11 22:53

>>6
0/10

>>3
Exactly, GC isn't viable for people trying to get the most out of the hardware, not even incremental GC with real-time constraints. And yes, you can point out that old-school developers used to say the same thing about C or other higher-level languages... why use that when you can program straight in assembly language to get the most out of the hardware?

The thing is, C++ fills the void by providing the right mix of low-level features that let you program right down to the metal, along with higher-level constructs and abstractions that facilitate managing change in large-scale, million-plus-LoC projects.

You don't pay for what you don't use in C++ either. Don't want the overhead of exception handling and stack unwinding? You can turn that off at the compiler.

Want full control over memory management? You can overload the new and delete operators, or call object constructors and destructors explicitly to operate on a given region of memory--and you have the choice of using either the program heap (or a custom heap of your construction) or the program stack to allocate your objects from.

Want to go all out with multi-threading? Not a problem, you can drop down to assembly language to access your hardware's atomic CAS instructions, or build a fully NUMA aware scheduling engine to drive your thread pools coupled with a task dependency graph.

Yeah, the language has its warts, but nothing else comes close.

Name: Anonymous 2010-08-11 23:02

>>7
malloc is generally reentrant in modern operating systems and C standard library implementations. And the pause you get with malloc isn't a "GC pause"; it's the cost of acquiring a global lock and searching for a block of memory large enough to accommodate the allocation. And this cost is generally fixed for small allocations under a given size, since things like pool allocators are employed.

Perhaps you should read about how malloc is implemented before comparing it to GC.

http://en.wikipedia.org/wiki/Malloc#Implementations

Name: Anonymous 2010-08-11 23:25

>>9
malloc is GC; just because it's not automatic doesn't mean it's not GC. And yes, modern implementations minimize the pause, same with auto GC. Note that even if malloc itself is made reentrant, that does compromise portability. Regardless, it has no bearing on the main point that keeping it outside of realtime events keeps things GC pause free.

Name: Anonymous 2010-08-11 23:37

>>10
>malloc is GC
I absolutely dare you to post that statement on say the LinuX Kernel developer's mailing list. You'll be the laughing stock of everyone for the neXt 10 minutes until everyone forgets you ever eXisted.

It is one of the dumbest things I've ever heard a so-called programmer claim.

You obviously grew up thinking GC is the end-all-be-all of memory management, and you attempt to shoe-horn the idea of memory management under the paradigm of GC, because that's all you know.

Also, my 'p' and 'X' keys just stopped working, I've had to copy and paste them. Going to restart now, but I fear my keyboard may be on its way out.

Name: Anonymous 2010-08-11 23:45

>>11
Restarting did nothing, Im fucked, my aostrohe key doesnt work either. Time for a new keyboard that doesnt suck anus.

Name: Anonymous 2010-08-12 0:02

>>12
Get a Filco or a HHKB, they make typing fun.  Sort of.  Well, you get a big red escape key with them, so at least vim is fun.

>>1
The same reason people didn't migrate to Plan 9, the same reason the whole internet runs on Flash, the same reason we're still on IPv4 and the same reason I won't hire a professional to clean up the 7+ gallons of vomit staining my carpet: because the existing solution is just barely good enough, and it's a gigantic pain to create something new, migrate to it, convince others to migrate to it and then convert all the old shit to new code.

Some of those causes can be attributed to laziness, but in some cases there is something of a point: have you seen the source code for some of the bigger C++ programs out there?  It's like people have built a gigantic upside-down pyramid out of badly aligned, dry-rotted wooden boards and rusty nails, and they're holding it together with a combination of duct tape and a few puny supports.  Every time they hit compile they know there's a 50% chance the whole thing will come crashing down, but until it's totally gone, it's still less work and lower cost compared to rewriting from scratch.

Large companies pushing their interests in the language don't help, either.  Microsoft has a lot invested in DirectX and VC++, which means they push neophytes towards the language.

Name: Anonymous 2010-08-12 0:31

>>11
>I absolutely dare you to post that statement on say the LinuX Kernel developer's mailing list.
That is a strange appeal to authority. If they were to respond on the topic it's a symptom of something bad. Do you disagree?

My points stand even if I concede on terminology, which I'm not inclined to do, but you just want to shit everything up with terminology. You don't really have a point to make outside of terminology or you would have made it. >>9 states malloc doesn't have a GC pause, arguing that it's doing certain other things instead. It happens that those things turn out to be typical GC things (ironically, these things are more typical in the expressly automatic case, because malloc isn't required to conform to the particular strategy characterized.)

>You obviously grew up thinking GC is the end-all-be-all of memory management,
Nothing of the sort. I like using C for the reasons I've already brought up earlier with malloc. Why is it all or nothing, us vs. them with you? You don't have to suffer from cognitive dissonance on every single decision you make, you know.

Name: Anonymous 2010-08-12 0:40

>>8
That is a complete fucking farce and if you knew C++ you'd recognize it immediately. C++ could hardly be designed differently if your goal was to obscure as much as possible what's actually going on, except possibly to allow more operator overloading.

C++ scales marginally better than C. Enough to make it worthwhile in some areas. It gained its foothold precisely because of its relationship to C. Without that, it would have never, ever, ever, in a million years, ever have gotten off the ground.

It is a marginal solution at best and needs to fucking die.

Name: Anonymous 2010-08-12 0:57

This thread does not look like a /prog/ thread. Maybe it's the lack of HAXING OF ANII

Name: Anonymous 2010-08-12 1:45

>game developers, systems programmers
I was a game developer before my current job, which is system programming, oddly enough.

Game developers and especially systems programmers are actually perfectly happy with C.  The only real draw to C++ is the convenience of all the libraries available for it -- mainly STL and Boost.  And the reason we are willing to tolerate C++ to get those libraries is that C++ is "good enough" performance-wise, because it doesn't fuck around with garbage collection or some virtual machine bullshit, and has been around long enough to be made efficient.

Name: Anonymous 2010-08-12 6:03

>>17
Convenience? I'd rather call them distractions that cause you to spend a lot more time making them do the things you want them to do.

Name: Anonymous 2010-08-12 9:41

>>5
>Why bother with exotic new languages when we'd have to put in the effort to learn a new paradigm? And if the new language is already similar to C++, why learn it when we could just stick with C++?
Why bother? Well the most obvious advantage is massively reduced development time.

There is a tremendous amount of boilerplate and bookkeeping to do in C++; code is at least 10 times as long as it should be, and has to be laid out illogically due to the mess of header files and templates. Compilation times are abysmal, and there is nothing remotely resembling interactive programming or a REPL. Debugging is extremely time-consuming and compiler errors are almost comically unreadable. There are no significant tools to aid in refactoring or static analysis because the language is impossible to parse.

A language without these flaws could drop the cost of embedded development and 3D middleware/engine development to 1/10th of what it costs now.

Google understands this, and this is pretty much what they are trying to accomplish with Go. The problem is it is tailored to Google, so it is useless for realtime. They call it a "systems language"; they are lying.

>>12
You probably spilled something in it. Get a spill-proof one. They're like 20 bucks. Don't listen to >>13; you should be using caps lock for escape anyway.

>>13
>it's a gigantic pain to create something new, migrate to it, convince others to migrate to it and then convert all the old shit to new code.
See, this is why it's even more surprising to me that game developers are set in their ways: game engine code has an extremely short shelf life. It all gets thrown away after just a few years because new technology means a whole new rendering engine, physics engine, etc. needs to be written.

Just look at id as a prime example. John Carmack is now starting a new FPS engine from scratch for the 7th time (id tech 6). Rage is not even out yet and the engine is already essentially in maintenance phase. Remember, id tech 6 will have identical gameplay to all the others, except this one uses raytracing and voxels. The last one has streaming megatextures and HDR. The one before it had per-pixel lighting and shaders. The one before that, hardware rendering.

>>4,17
I've mainly been talking about C++ because that's what almost all game development studios use. C shares many of its flaws; compared to modern languages they have a lot of similarities.

Name: Anonymous 2010-08-12 9:44

>>19
>The problem is it is tailored to Google, so it is useless for realtime. They call it a "systems language"; they are lying.

Would you kindly elaborate on this please?

Name: Anonymous 2010-08-12 9:54

D in say 5 years.

Name: Anonymous 2010-08-12 10:16

>>20
Web shit.

Name: Anonymous 2010-08-12 10:27

>>20
Certainly.

First off, it has forced garbage collection. Not only that, but the garbage collector is *terrible*. Not only THAT, but it's not possible to write a good garbage collector for Go because of the language's semantics. They allow "unsafe" raw pointers (in a futile attempt to woo C/C++ developers), which means the collector can't make bindings indirect, and so it can't move memory. That means you can't make a generational copying collector. At least Java forbids raw pointers; its collector is excellent. Moreover, because Java is on a VM, it can perform escape analysis across library boundaries, transforming many allocations into stack allocations. If I'm forced to use GC, I'll take the JVM any day.

Secondly, it gives preferential syntax to its built-in data structures. Sure you can write your own maps, but they won't be anything like the built-in one. This is a horrible design decision. One of the most important and fundamental code optimizations is the ability to write your own data structures. There are a thousand different ways of writing a map, ranging from a red-black or AVL tree to a hash-table, from purely functional to non-atomic mutating, and everything in between. You need to be able to choose based on the problem at hand. This is one thing that C++ does absolutely right; you can just change one typedef and bang, different map implementation. At least in C, there is no syntax for any data structure (aside from arrays), so it's a trivial search-and-replace or changing a couple #defines to transform code that uses one map into another. Good luck trying out a different map from the built-in one in those twenty thousand lines of Go. They literally lock you in; you need to either eschew the built-ins from the start, or live with them forever.

Third of all, Go has no meta-programming or preprocessor. Conditional compilation is an absolute necessity for embedded development, and is extremely useful for debugging purposes. Compile-time code generation should be a fundamental part of any modern language; since they obviously don't care for a template system, even simple substitution macros are a lot better than nothing. Google's tunnel-vision as server developers is most extremely obvious from this point. They are completely blind to the use cases.

That's all I can think of off the top of my head just now. I'm sure there are many more reasons. Go just does not feel right to me; it does not feel like a systems language. That is pure marketing BS in a futile effort to woo C/C++ developers. It is a server applications language, nothing more.

Name: Anonymous 2010-08-12 10:34

>>21
There is no way. D is *massive*. It's even more complicated than C++, if you can believe such a thing; it just has far fewer gotchas and idiosyncrasies. It's just a little bit more consistent. Almost no one actually wants to use it because it really is pointless.

There is no need for such a huge feature list. I've heard it said that the reason D is so complicated is because it's written by someone who likes to implement compiler features. Walter Bright basically just couldn't stop himself; he felt like he needed to replicate every single C++ feature with identical semantics in three different ways, otherwise no one would take D seriously.

I honestly think that in 5 years, D and the C++ standard that follows C++0x (which will be well into the standardization process by then, and which is supposed to feature garbage collection) will be essentially the exact same language.

Name: Anonymous 2010-08-12 10:54

In C, the declaration

    int* a, b

Stopped reading right there. Bad habits should be pruned, not facilitated.

Name: Anonymous 2010-08-12 11:26

GO will replace Cpp. http://golang.org/

Because (1) it can be run interpreted or compiled, so development is fast and compiled code is even faster; (2) it gets rid of most flaws of Cpp yet still is sufficiently similar to Cpp; and (3) it is developed and will be pushed by Google.

Name: Anonymous 2010-08-12 11:36

>First off, it has forced garbage collection. Not only that, but the garbage collector is *terrible*. [...] If I'm forced to use GC, I'll take the JVM any day.
For a bit of perspective, this is a bad argument against Go, even the elided portions. You say it can't be fixed and mention you'd prefer Java, but a strong argument can be made that Go was developed with Android (among other things) in mind, because of the troubles with the JVM -- it does offer a safe alternative, and unsafe pointers were a necessary part of that. It is also intended to replace Python in-house (it's a shame they didn't consider an extension language instead, and treat Android separately.)

Sure, Go isn't any more suitable for your purposes, but in the grand scheme it fills a particular need.

>Secondly, it gives preferential syntax to its built-in data structures.
This and other related decisions are pretty bad. I lost interest in programming in Go after bumping into this kind of problem way too many times. I'd still consider it for some things, but in general I wouldn't recommend it.

>Third of all, Go has no meta-programming or preprocessor. Conditional compilation is an absolute necessity for embedded development, and is extremely useful for debugging purposes.
I'm glad they tried to avoid the need for this, but I'm sad they consider themselves successful.

Name: Anonymous 2010-08-12 12:10

>>23
>They allow "unsafe" raw pointers (in a futile attempt to woo C/C++ developers), which means the collector can't make bindings indirect, and so it can't move memory.

First of all, while indirect bindings were all the rage in the 1970s, no one uses this approach anymore; you might want to refresh your knowledge of modern GC implementation techniques.

Then, the problem with pointers remains, but might be eased depending on the exact wording in the language specification. If they follow C in declaring that pointer arithmetic is defined only within the limits of one allocation unit (a single structure or an array in automatic storage, or a memory block returned by malloc) and additionally forbid reinterpret casts from pointers to integers and vice versa, then they can move memory all they want at a small constant price in performance.

Another approach would be to follow C#, which also provides raw pointers, but to obtain a pointer to a managed object you must use the "fixed" construct, which makes the object temporarily pinned, and is also lexically scoped, which prevents leaking pinned memory.

Name: Anonymous 2010-08-12 12:32

GC MY MEMORY MAPPED ADDRESSES

Name: Anonymous 2010-08-12 14:02

>>26
>GO will replace Cpp.
On your hard drive maybe.

Name: Anonymous 2010-08-12 14:40

>>30
dohoho!

Name: Anonymous 2010-08-12 17:19

>>29
>It is also intended to replace Python in-house
[citation needed]. I call bullshit on this, for a lot of reasons, but probably the most important is that Go is a whole lot less powerful and more limited than Python. It is not suitable as a replacement. If they want something as powerful they need a dialect of Lisp, or Ruby, or a standalone JavaScript, or something. They could have just pulled V8 out of the browser, scrapped the shitty API, and started with that instead.

Another big reason is that there is absolutely no point to switching away from Python. If they are trying to make it fast, they could just switch to a different interpreter like PyPy, or write their own from scratch. If they really want micro-threading, why wouldn't they just switch to Stackless?

The reason they have not done this is because they have a massive volume of C code locked into CPython's ABI. This is why they are trying so hard to make CPython fast with Unladen Swallow (the most underachieving optimization project in history.)

>>28
>you might want to refresh your knowledge of modern GC implementation techniques.
This is almost certainly true; I stopped caring about it a while ago because Java is just as bad as the JVM is good. The JVM is also (currently) ill-equipped for the dynamic languages that are being built on it, like Clojure or Jython. Seemed like a whole lot of wasted effort.

>Another approach would be to follow C#, which also provides raw pointers, but to obtain a pointer to a managed object you must use the "fixed" construct, which makes the object temporarily pinned, and is also lexically scoped, which prevents leaking pinned memory.
That would be the ideal way to do it. Unfortunately, unless they want to make a major backwards-incompatible change, it's too late to do this.

I'm still not convinced that it's even possible to make a copying collector for Go. It just allows too much unsafe shit. Even if you could, the developers have *zero* interest in doing it; they just say "we hope to have a fast collector someday", as though they expect it to magically pop out of the ether. The current collector does not even retain object layout information, so it can mistake regular ints for pointers and improperly retain memory. Boehm's collector has built-in facilities to supply layout information! Why are they not using them!?

Bleh. I did not want to derail this thread into a discussion about Go.

Name: Anonymous 2010-08-12 17:51

>>32
>If they want something as powerful they need a dialect of Lisp
( ≖‿≖)

Name: Anonymous 2010-08-12 18:05

Didn't you guys get the memo?

Object Pascal is going to replace C++ within a couple of years

Name: Anonymous 2010-08-12 18:38

>>32
>[citation needed]
This comes from the Go team. It's strange, the Android angle is the one I thought I'd have to argue over if anything. When Go was presented it was on the heels of the edict against using Python at Google. The Go team had presented Go, explicitly, as an alternative to Python and it was the only actual concrete justification made of Go's existence that I am aware of. What they hadn't detailed was an Android strategy, but supporting ARM on day 1 makes it a bit obvious, not to mention no one ever being contradicted by the Go team when they considered this aspect in conversations the team participated in.

Anyway, if you want a citation I believe it was mentioned in the webcasts. If not, I would search the mailing list or maybe see if there are FreeNode/#go irc logs somewhere. There's also this from the Go blog (http://blog.golang.org/2010/05/go-at-io-frequently-asked-questions.html), which doesn't mention Python, but does mention that they're actually using it:
Go is ready and stable now. We are pleased to report that Google is using Go for some production systems, and they are performing well.

As for your questions about the unsuitability of Python, I would direct you to the Go mailing list. There is a lot of substance there on the matter, especially in the older posts. I don't remember most of it, but your objections sound all too familiar.

I don't think Go's concurrency is all that great either, by the way. Go does not avoid any of the gotchas present in the use of standard coroutines, nor does it provide any power beyond them (the use of channels notwithstanding.) Go does do a great job of making the gotchas seem like they're not there, which sets up a nasty surprise for anyone who learns concurrency from Go.

>[Stuff about GC.]
I appreciate your pessimism, but you have a habit of saying "can't" when you seem to mean "won't." Most of the problems are solved in one way or another, but like you say, not in Go--even when suitable implementations are right there.

The optimist would point out that they're researching a new collector, based on an existing collector which is covered by patents.

>Bleh. I did not want to derail this thread into a discussion about Go.
It's no worse than the thread's premise: bitch about C++. It's more interesting to bitch about Go. Besides, by not taking C++ seriously enough we're giving it an even harder time.

If you want a serious thread which is on topic I challenge you to present a suitable replacement systems language, or a description thereof. Anything else is just complaining that C or C++ sucks, or is good enough or bitching about how slow Python is(n't). The Go aspect is at least a conversation that hasn't already been had (and done to death) years ago, unlike all of the others you will find in this thread.

Name: Anonymous 2010-08-12 18:43

>>24
C++0x doesn't have GC, and it'll be standardized by October 2011. The draft has already been finalized and has been voted on. They're just performing "bug fixes" on the specification now to remove ambiguities.

Name: Anonymous 2010-08-12 19:09

>and it'll be standardized by October 2011
It's cute that you actually think that

Name: Anonymous 2010-08-12 20:48

>>36
>C++0x doesn't have GC
That's why I said the standard *AFTER* C++0x. It will have GC, and will also have things like proper module support (so we can finally get rid of header files), scoped macros (which will probably end up extremely similar to mixins from D), and more. I am telling you, these two languages are converging. In ten years they will be semantically nearly identical. (I also think this is why D will never gain any traction; its useful features will just be added to C++, and people will put up with C++'s warts for compatibility with existing libraries.)

>>35
>The optimist would point out that they're researching a new collector, based on an existing collector which is covered by patents.
I'll believe it when I see it. Maybe I'm too pessimistic, but Go has already made enough horrible design decisions to make me extremely skeptical of such promises.

>It's no worse than the thread's premise: bitch about C++. [...] I challenge you to present a suitable replacement systems language, or a description thereof. Anything else is just complaining that C or C++ sucks
That's not what this thread is about at all. I'm not here to point out C++'s faults; the very first sentence in this thread shows that I'm going on the assumption that we already all know its faults.

This thread is about why no one cares to replace it. Essentially no language research is being done in this field. The only language I know of that even attempts at being a replacement is D (we've already established that Go is a lie), but D does not solve C++'s biggest problems, such as verbosity and development speed. D is just a band-aid; it's far too similar to C++.

>I challenge you to present a suitable replacement systems language, or a description thereof

Again, this isn't really what this thread is about, but I'll try to talk a bit about what features I would like.

First off, I would think a replacement language could start with full type inference. I would like to just leave out types entirely, with code looking something like JavaScript. Type annotations could be added where desired with a simple predicate, like @int. That alone would be the biggest step towards cleaning up some of this verbosity and development speed. While we're at it, I'd like implicit type parametrization. I shouldn't have to declare that something is a template. If I leave off type annotations, it's a template. I could also do with struct and enum inference. I'll just declare a struct and leave it empty; treat it as a hashtable in the meantime. Python does this: you use fields on an object as needed, and when you want to make it fast, you declare slots.

Next, the language should make references more implicit. A lot of time in C/C++ is spent dealing with addresses, pointers, references, etc. Just look at how thick C++ code is with ampersands and asterisks. This could be automated in an obvious fashion, but still available when I need to override the obvious defaults. I think C# does this well: you have value types and object types which behave in a straightforward manner, but the ref keyword lets you make references of value types when needed.

It should also make things like RAII simpler and more obvious. I shouldn't need to write a destructor for *anything* unless I need to close a system resource, like a file handle or socket; instead I should just annotate which pointers are "smart pointers", i.e. owned and freed via RAII, and the compiler fills everything in. C++ can sort of fill it in for you if you use only smart pointers everywhere, but smart pointer syntax is horrible, and it has problems such as automatically inlining implicit destructors. That's very bad. I don't really know to what extent D can solve this, but I see a whole lot of destructors in D code.

I would like garbage collection to be used as the default, and once in the optimization stages, I'd like to be able to transform a GC'd program into a manually managed program just by reasoning about object lifetimes and adding appropriate type annotations. D is the only language I've ever heard of that attempts this: you declare variables with the scope keyword and the memory lives there, instantly RAII. This is somewhat limited however; there aren't smart pointers, so it has to be a stack or field allocation to be RAII.

I'd like a much more powerful type system for proving things at compile time. For instance, why the FUCK do all these languages still have null pointers? They should not have null at all; instead, they should have tagged unions. If you want a variable to possibly be null, you make it a tagged union with the null type; the compiler then forces you to check for null wherever you dereference it. There is no overhead for this; it compiles to the same damn machine code, so there is no reason not to do this. Why can't I statically prove other things about code, like that memory is never leaked? This could be done easily in C++ for example if new returned a unique_ptr<>, and did not ever let you reset a unique_ptr<> or create one from a raw pointer. All pointers would then be trapped inside exactly one unique_ptr<>, forcing them to be cleaned up. These are about the simplest possible improvements to the type system; you can go a whole lot farther than this.

When I compare my productivity and code quality between developing in Python and C++, these are the most obvious issues that cause such a massive discrepancy. I'm sure I could think of more.

Name: Anonymous 2010-08-12 22:40

>>38
I'll believe it when I see it. Maybe I'm too pessimistic, ...
Oh I don't blame you. I think they still intend to make good on that, and I believe it's completely possible for them to do so. I don't have perfect faith that they will actually see it through, though.

That's not what this thread is about at all. I'm not here to point out C++'s faults;
Yes, but about the road to hell, again:
I challenge you to present a suitable replacement systems language, or a description thereof. Anything else is just complaining that C or C++ sucks
In a vacuum you'll only get the complaints. Barring some very light commenting on C++0x (not a great start), Go is the only thing discussed that hasn't already been done to death.

full type inference
implicit type parametrization

Have you looked at Clay? http://tachyon.in/clay/ I wish I knew more about it, but there are (still) no docs yet. The tagline is Efficient, concise, generic - Pick any three. Doesn't look half bad considering.

the language should make references more implicit
I disagree here. I will admit to being very C-minded and this is probably a symptom of that. If you already have by-ref and by-value types then it's no big deal, but switching over to that model for no other reason is probably not great. I'll pretend you said something about safety and I'll drop it, deal?

I'd like to be able to transform a GC'd program into a manually managed program just by reasoning about object lifetimes and adding appropriate type annotations
I'm not clear on this. Do you mean the compiler should perform analysis and insert malloc/free or equivalent calls in your code automatically?

About null pointers, I'm not quite convinced on the performance comment. If you can prove certain things you can eliminate the need to perform checks in certain cases, however. This is typical in manually checked pointers (where 'proof' is on account of "I'm the programmer and I reckon it"), but if you want to meet that with automatically checked pointers, you need to prove it automatically. Again: safety. Then again: systems.

Why can't I statically prove other things about code, like that memory is never leaked? ...
That's a tall order. Your solution doesn't add up to me: doesn't that work at run time and require a trace? How would that be done statically when the problem is non-deterministic in many (most?) running programs?

Name: Anonymous 2010-08-13 3:06

>>39
I've looked briefly at Clay; I did think about it when posting that. It is very interesting but it has some problems. The syntax for some things is terrible; postfix ^ is used for dereferencing, so foo^.bar is for accessing struct fields. The fact that it primarily uses statements instead of expressions, e.g. no conditional or comma operators, is kind of a throwback to Fortran and COBOL. This makes it verbose and difficult for functional/concurrent programming. And I really would like a GC during experimental development, as long as I can pull it out later.

Basically it doesn't look fun. It looks like Fortran without type declarations. I will probably try writing a serious program in it at some point soon to get a feel for how it works.

I disagree here. I will admit to being very C-minded and this is probably a symptom of that. If you already have by-ref and by-value types then it's no big deal, but switching over to that model for no other reason is probably not great. I'll pretend you said something about safety and I'll drop it, deal?
No deal, I actually do want to talk about it! I will fully admit that this is the most controversial of my points. I'm really not 100% behind it myself; mostly I just want to believe that it's possible to do it without sacrificing anything.

I just find that a lot of my time in C and C++ is spent not just writing syntax for pointers, but *thinking* about it. If I take a pointer to a struct as a function argument, I refer to its fields through dereferencing (->). But if I instead declare this struct on the stack, now I access it with regular member syntax (.). Now why exactly does it matter where this object lives? Why do I need to think about where it lives when I access it? Why isn't its location in memory a mere part of its declaration, nothing more?

Many languages *partially* solve this in a variety of ways. For instance in Python, everything is a binding to an object. Writing "x += 5" creates a new object which is the sum of x and 5, and then binds x to the new object. It does not actually modify any objects though, because the number objects themselves are immutable. This happy restriction means the compiler is free to optimize it by copying the numbers directly by value instead of allocating them from the heap. (I'm not sure if CPython actually does this, but CPython sucks. Haskell probably does this.) The unfortunate downside is that there is no way to have a value modified by reference; single-element lists are a common workaround in Python.

Scheme is similar in that everything is a binding, except that numbers are not immutable. So you can modify them by reference. However, someone might set! on your int, so you have to always pass it as a reference. Someone might keep the reference expecting it to be modified later, so you have to allocate it from the heap and let it be garbage collected. The only way to optimize this into by-value semantics is through whole-program analysis in order to prove that the value is not changed or retained.

Clearly I'm not the only one who thinks pointer syntax is a pain. Just look at Apple's libraries like CoreFoundation. For most objects, stack allocations are banned (because structs are hidden), and all types have a typedef'd pointer with "Ref" on the end, which you only get through Create and Release methods. Thus it is always a reference. A few types are value types (raw structs), such as CGRect. These are *never* passed as a pointer; they are always copied by value. I don't think I've ever seen a single *, &, or -> when dealing with Apple's C code. They are essentially emulating the syntax-free semantic distinction between by-value and by-reference types.

If Apple gets rid of pointer syntax in C with some typedefs, clearly it's worth it to do so as a fundamental part of any new language. Isn't it?

I'm not clear on this. Do you mean the compiler should perform analysis and insert malloc/free or equivalent calls in your code automatically?

No, I mean specifying the memory location of objects through annotations. For example, the scope keyword in D means the object should be allocated directly in the current struct or stack frame. You can just add this to a variable and the object is "scoped". Or you can make a class manually memory managed by overriding operator new and operator delete. The point is that you do not need to significantly change code in order to remove the garbage collector; it is not as though you are rewriting the program, just annotating it. Unfortunately this is very limited in D. It's hard to allow an object to be allocated and constructed from different allocators. There are no smart pointers to clean up this sort of special memory with RAII.

About null pointers, I'm not quite convinced on the performance comment. If you can prove certain things you can eliminate the need to perform checks in certain cases, however.

The idea is that 99% of the time, you would need to do the check for nil anyway (otherwise why would it be a union with nil?), so you're not losing performance. You also aren't losing space; any decent compiler of course handles a tagged union between a reference and the nil type just by using the zero address for nil. If you know for a fact that a possibly-nil pointer is actually not nil, but the compiler is unable to verify it, it could supply a simple construct like __assume(x != nil). The MSVC compiler actually already supports this in C++ for optimization purposes, so it's not much of a stretch.

That's a tall order.
Doesn't seem that tall. What are the holes in my reasoning? Couldn't returning safe unique_ptrs from new solve this problem, barring reinterpret_casts and such?

In fact, I will try it. The next C++ project I do, I hereby resolve to turn on sepplesox, and exclusively use a wrapper to new for allocation which returns unique_ptrs (and if possible, I will disable reset() and all of unique_ptrs' constructors, or perhaps write my own version of it.) We'll see how that turns out.

Name: Anonymous 2010-08-13 9:05

>>38
This thread is about why no one cares to replace it. Essentially no language research is being done in this field.
But this is wrong, and it has been pointed out many times; people are looking for a better systems language.

Name: Anonymous 2010-08-13 9:41

>>38
That's why I said the standard *AFTER* C++0x. It will have GC, and will also have things like proper module support (so we can finally get rid of header files), scoped macros (which will probably end up extremely similar to mixins from D), and more.
They're already scraping the bottom of the barrel for syntax. By the time they've added all that, idiomatic C++ will resemble obfuscated Perl.

Name: Anonymous 2010-08-13 10:18

mostly I just want to believe that it's possible to do it without sacrificing anything.
I prefer to deal with this sort of thing myself, so I think there is a sacrifice. Putting it in the type gets you part way, but that will quickly fall apart in (say) C's type system.

The point is that you do not need to significantly change code in order to remove the garbage collector; it is not as though you are rewriting the program, just annotating it
The way you put it originally made me think you meant the compiler should refactor the program with calls to memory management facilities inserted instead of giving responsibility to the programmer.

My only argument is that some allocations would require refactoring to deal with outside of GC, which is able to handle runtime non-deterministic de/allocations which outlive their creation context. Anything that can be solved simply by annotating could probably be handled by the compiler, couldn't it?

The idea is that 99% of the time, you would need to do the check for nil anyway
I don't buy that. Nullable pointers show up out of their initialization context probably at least 50% of the time. So far so good, but it's not true that every time one is received or passed out of context it is in danger of being null. A good deal of my code checks these once and never has to worry about it again. A lot of my C never checks (because there is no non-nullable type, and in these instances a problem would preclude creation.) Without analysis a compiler couldn't know when this would be okay.

I would go the other way and say that 99% of the automatically inserted checks, barring this kind of optimization, are provably unnecessary. The real figure is probably much better, but I doubt the real-world optimized case approaches parity with the manually checked case (limited to the cases where the manual checking is indeed correct.)

Somewhere there is a paper about all of this, with statistics. The paper will say that the analysis is possible up to the point (or perhaps surpassing) of program complexity that a human can be expected to deal with, according to some metric or estimate. Whether that kind of analyzer is found in implementation is perhaps another story.

Doesn't seem that tall. What are the holes in my reasoning?
I don't fully understand your reasoning, but it sounds like you're pimping a static solution to memory leaks in the face of dynamic allocation. I'm probably mistaken here, but if that is what you mean then I'd say it's impossible.

Name: Anonymous 2010-08-13 10:43

It's probably time to bump this.

Name: Anonymous 2010-08-13 10:55

>>43
My only argument is that some allocations would require refactoring to deal with outside of GC, which is able to handle runtime non-deterministic de/allocations which outlive their creation context.
Correct. This is not most allocations however. I don't think it's even close to most. But of course I have no data to back up this claim, since this hypothetical language does not exist.

Anything that can be solved simply by annotating could probably be handled by the compiler, couldn't it?
No. Manually freeing memory inherently creates unsafe code. You have to figure out what the lifetime of your object needs to be in order to ensure that it is not accessed through old references after it has been freed. It is impossible for a compiler to solve this in the general case; the ol' halting problem and all that. That's why GCs are so ubiquitous in modern languages: they allow you to statically prove that memory is never accessed after it is freed.

I would go the other way and say that 99% of the automatically inserted checks, barring this kind of optimization, are provably unnecessary.
The compiler does not automatically insert any checks whatsoever. That is not what a static type system and static analysis do. The compiler only forces YOU to MANUALLY add checks in places where you need to verify that a pointer isn't null.

These should be few and far between. I think you have misunderstood my 99%: I meant you would be adding it anyway in 99% of places where the compiler would force you to do it; of course not 99% of places where you dereference a pointer!

Your point about a program analyser is not relevant. No whole-program analyser is needed for this. Each function can be compiled individually, in a vacuum, separate from all the others; it only needs to be properly typed based on whether it accepts null. Nice is a variant of Java which does this: if a function argument might be null, you just prepend a question mark on the type of the variable. If you do this, the function needs to check that it is not null before dereferencing it. If you do not, then the responsibility falls on the caller, so you don't need any checks; the compiler will prove for you that it will not be null. This is a syntactically convenient special case of tagged unions from more powerful languages.

Honestly it's pretty mind boggling to me that you are defending null pointers. It's the famed "billion dollar mistake". I can't help but feel that you aren't properly understanding the issue. Maybe I'm the one who isn't understanding the issue.

Name: Anonymous 2010-08-13 11:10

the solution.....PL/I

Name: Anonymous 2010-08-13 11:19

>>34
Pascal vs C by Brian Kernighan
http://www.lysator.liu.se/c/bwk-on-pascal.html

Name: Anonymous 2010-08-13 11:40

>>45
My question would be: how would you resolve the null pointer problem, creating a solution that does not inherently result in the same problems or type-testing requirements a null pointer necessarily incurs?

Name: Anonymous 2010-08-13 12:34

>>45
No. Manually freeing memory inherently creates unsafe code.
Whoa, back up. I thought the object was to tell the GC to piss off? In any event, analysis can ensure that inserted deallocations, which are exactly equivalent to manual deallocation except that they are automatically inserted, are perfectly safe. (And at some point you have to concede that if manual deallocation is inherently unsafe it doesn't improve the GC case either, which is ultimately informed by a programmer as to when it can free memory. Pedantry, I know, but the point keeps things in perspective.)

As far as the general case goes, annotations don't work either. Where analysis fails, annotations can't inform the compiler or GC very well at all--this is what I was trying to bring up. If you have a paper that says otherwise I would really like to see it.

The compile does not automatically insert any checks whatsoever.
After rereading I see I've misread your comment about this. You wouldn't really have to perform the checks; your code could just crash at roughly the same time and place, for a slightly different reason--unless it does actually force you, in which case it either forces you to do it at every dereference or performs some amount of analysis to alleviate the need for it. In either of these cases it might as well insert the checks. For my purposes, your ?var example simply informs analysis.

Honestly it's pretty mind boggling to me that you are defending null pointers. It's the famed "billion dollar mistake".
I know this is a topic that has been debated back and forth forever, but it's not a big safety concern for me--nullables have simply not posed a significant problem to me, and they usually exist in languages that have far more dangerous facilities. My only strong feelings about it are a) it's not nearly as big of a deal (either way) as people tend to make of it, and b) I don't prefer it because it usually relies on exception handling to deal with eg. failed initialization.

As an aside, Go's solution to exceptional cases is a nice try (http://blog.golang.org/2010/08/defer-panic-and-recover.html). I don't think they quite got it, but it does get exception handling out of my face. Combined with the multiple return facility, you can choose to treat errors exceptionally or you can anticipate them. The very best decision Go has made, in my opinion, is not to propagate exceptions out of the standard library. Sadly they used the qualifier "convention" and not "standard" or "requirement" or something even stronger. Personally I would never let an exception escape, be it the standard library or supplied.

Name: Anonymous 2010-08-16 1:25

>>49
In any event, analysis can ensure that inserted deallocations, which are exactly equivalent to manual deallocation except that they are automatically inserted, are perfectly safe.
Not in all cases. Maybe in most cases, yes, and it can certainly insert those automatically. But some of them you are going to need to tell it explicitly what their lifetime is. That was my point; manual memory management inherently creates unsafe code because it's *not possible* for the compiler to statically verify that all of your memory deallocations are safe. This is the same as the halting problem. The compiler can't verify it, therefore the compiler can't automatically do it.

As far as the general case goes, annotations don't work either. Where analysis fails, annotations can't inform the compiler or GC very well at all
I suppose I should clarify what I mean by annotations. They aren't "suggestions" to "help" the compiler; they are explicit instructions. FREE THIS HERE. When this variable goes out of scope, free whatever it is pointing to. That's exactly what the "scope" annotation does in D. This is what I am talking about.

As an aside, Go's solution to exceptional cases is a nice try. I don't think they quite got it, but it does get exception handling out of my face.
Give me a break. Go has exceptions. They just don't fucking call them that, because they are trying to avoid the C programmer prejudice against them. Other than that, they are *identical*. defer is merely a convenient syntactic wrapper to a finally block.

Name: Anonymous 2010-08-16 1:48

When this variable goes out of scope, free whatever it is pointing to. That's exactly what the "scope" annotation does in D. This is what I am talking about.
I remember someone at my university pointing out a relationship like this and wondering, then, why it isn't common practice to auto-deallocate variables once scope changes. So it may be a naive question but: why would automatic deallocation based on scope change - am I right in simplifying this as a change to a program's call stack? - be a bad idea?

Name: Anonymous 2010-08-16 2:05

>>51
Because this is too dangerous. It has serious potential to bite you in the ass if you need an object to outlive its scope but forget to make it non-scope.

Name: Anonymous 2010-08-16 2:19

>>50
That was my point; manual memory management inherently creates unsafe code because it's *not possible* for the compiler to statically verify that all of your memory deallocations are safe.
I was pointing out that there exists provably safe memory management which is equivalent to the manual case. This against your argument that manual memory management is inherently unsafe. I was not speaking on memory management in general, and in fact pointed that out at least once.

FREE THIS HERE. When this variable goes out of scope, free whatever it is pointing to.
Unless the compiler can ignore these annotations at its own discretion you've just added an unverified call to free(). How is this purported to be safe while the library call is inherently unsafe? I'm sure there's some subtle aspect of D that intends to make a difference here but I wish you'd be upfront about it.

Give me a break. Go has exceptions.
I never said otherwise. By "out of my face" I mean there is no extra level of indentation. The exceptional case logic is delineated separately, right where it should be. Erlang does an even better job here, but Go can't really adopt its mechanisms.

Other than that, they are *identical*.
Identical to what? C++'s exceptions? No. Erlang? Not a chance. There are some nuances in there, and it's not just syntax. I'm quite confident that there are meaningful if subtle semantic differences that makes Go's model identical to nothing else in existence.

Why are you being so ignorant all of a sudden? You weren't like this before.

Name: Anonymous 2010-08-16 5:31

>>52
You could say that about x in y language. If you forget to do a thing, then it's your problem, not the language's.

Name: Anonymous 2010-08-16 8:08

>>53
How is this purported to be safe
It's not purported to be safe!

You are not understanding what I am saying here. I don't care if it's provably safe; I will reason about whether it is correct myself.

The problem is, if it isn't PROVABLY safe, then the compiler can't AUTOMATICALLY resolve it for you. This means you have to either manage the memory manually or use a garbage collector. A garbage collector will check *at runtime* whether an object needs to be freed, which is the only way to automatically and safely handle memory; that is, as >>51 was wondering, why most modern languages use them.

Remember, you originally said that if all I want is annotations for memory management, then a compiler could just annotate it automatically. I said no, because it couldn't do it *safely*; it couldn't prove that its optimizations were correct. Halting problem.

I was pointing out that there exists provably safe memory management which is equivalent to the manual case.
There is certainly provably safe memory management in restricted subsets of the language. "All allocations are scoped on the stack, and you cannot make a reference that doesn't live on the stack. Therefore all references go out of scope before whatever they point to." Provably safe memory management -- except this sort of limited case is not actually useful for writing real-world programs. I'm not sure if this is what you meant or not.

There are some nuances in there, and it's not just syntax.
Yeah, yeah. They are *extremely* semantically similar. I'm sure you could come up with some minor difference, so fine, they aren't identical. (For the record, not having to indent code is not a semantic difference.)

Many years ago I wrote a macro in C++ that did exactly what defer does in Go. Here, I just rewrote it. Give it a try:

#include <stdio.h>

#define defer3(f, line) struct DEFER##line {~DEFER##line() {f;}} defer##line
#define defer2(f, line) defer3(f, line) /* extra indirection so __LINE__ expands before pasting */
#define defer(f) defer2(f, __LINE__)

int main() {
  printf("a\n");
  defer(printf("e\n"));
  printf("b\n");
  defer(printf("d\n"));
  printf("c\n");
}


Of course Go reinvents this as a built-in, since they refuse to support any form of RAII, or something like Python's with statement (among many other things.)

I'm very sorry if I'm ignorant of Go's exceptions, but they did announce the feature a whole twelve days ago.

Name: 55 2010-08-16 8:12

Let me correct myself there, it does *almost* exactly what Go does. Need to be precise in here! My macro defers at the end of the block (or scope), not the end of the function. This is in my opinion much safer and more useful (in the same way that VLAs are much safer and more useful than alloca()), but whatever, I'm not a Go programmer.

Name: Anonymous 2010-08-16 11:25

Plain and simple.

ActionScript 2.0

Name: Anonymous 2010-08-16 11:41

Most programmers are a bunch of babies.

Despite SICP being one of the foremost standards on methodology and implementation standards, people can't handle it because, "IT HAS PARENTHESES OH NO"

It's gone far enough that Abelson has said he's glad to see the SICP class go because something like "people don't need to know the basics anymore". This excuse always comes out when stupid people become entrenched in a field, and the focus is no longer actually on understanding the area.

Look at areas like mathematics. The majority of people think mathematics is strict rule based things that have no practical use or interpretation. The top tier mathematicians understand math differently (neurological data easily demonstrates this), with even the majority of educated mathematicians still thinking math is some rule based thing that has no use or practical interpretation. This view has heavily pervaded almost every scientific field, it's the reason why negative numbers didn't practically exist until the Renaissance, and why medicine now is practiced from a disease driven, rather than a malfunction driven approach. For a "recent" specific example, some Australian colleges have taught less ANATOMY to their doctors, and focused more on people skills.

Name: Anonymous 2010-08-16 15:11

>>55
It's not purported to be safe!
Oh, well that makes more sense then. Somehow you had given me the impression that the annotations were something other than manual memory management.

I don't care if it's provably safe; I will reason about whether it is correct myself.
One moment you're going on about the inherent unsafety of manual memory management--which isn't always true, or even meaningful--the next you're saying you don't care. I take it there is more to your point which is deep between the lines, but I just can't find it. I'm trying really hard to keep up the perception that you are not just trying to posture D above the rest, but by now I have to admit that doubt is casting a very long shadow over the affair.

[...] Halting problem.
As I mentioned above, you'd given me the impression that the annotations were something other than equivalent to manual management. Anyway this 'halting problem' thing was cute the first time, but you don't need to repeat it constantly like you just heard about it last week. I'm not trying to shit on you, I just haven't been impressed by anyone making ineloquent equivalences to the halting problem in a very long time.

There is certainly provably safe memory management in restricted subsets of the language.
Your examples are quite limited and don't cover even what naive static analysis can accomplish. I was also thinking specifically of heap allocations--stack allocations limited to non-escaping references don't need any management; they live and die quite organically with program flow. So you're not really talking about manual memory management at all.

I'm very sorry if I'm ignorant of Go's exceptions, but they did announce the feature a whole twelve days ago.
I don't care that you don't know, but I do care that you don't know and yet try to speak authoritatively on the subject.

It was part of the language from very early on. The announcement isn't an announcement, it's something they figured interesting enough to blog about--almost everything on the Go blog has been around within a month of the initial language announcement. IIRC, defer() has been in since day 1 and panic/recover were discussed and implemented shortly after.

On semantics, since you don't have much interest in Go I don't want to get into the similarities and differences between its defer/panic/recover stuff and whatever language's model you happen to be thinking of at any given time. Semantics aside, the syntactical advantage is huge in my opinion--which is all I was trying to point out when you went after me on semantics. Not cool.

Name: Anonymous 2010-08-16 18:28

>>59
I'm trying really hard to keep up the perception that you are not just trying to posture D above the rest but by now I have to admit that doubt is casting a very long shadow over the affair.
Hardly. I've mentioned D often because it's the only major attempt at replacing C++ in embedded and game development (there are a few toy languages out there, but they are still experimental at this stage). I actually hate D, mostly because it's even more complicated than C++. It does not make development any easier or better; it just gives you slightly less chance of shooting yourself in the foot with some obscure C++ gotcha.

Your examples are quite limited and don't cover even what naive static analysis can accomplish. I was also thinking specifically of heap allocations--stack allocations limited to non-escaping references don't need any management, they live and die quite organically with program flow. So really you're not really talking about manual memory management at all.
True enough. Compilers like Stalin, or JIT compilers like the JVM, can eliminate many (even most) allocations through the kind of static analysis you're talking about. The point is that it's simply not possible to handle all allocations in general. All of these use a garbage collector as a backup strategy. You seem to be ignoring this rather major issue, which is why I keep mentioning the halting problem. If it isn't possible to do all of them, then it's rather useless for embedded, isn't it? Even a handful of garbage-collected objects means you have to scan the whole heap, so you still have most of the downsides of a garbage collector.

I think we've pretty much come to a point where we agree on most things now (except for interest in Go, heh.) I can't help but feel disappointed in this thread. Not the posts or posters; more in the general outcome. We just ended up quibbling over minor issues in different languages. I don't know what I expected. Bleh.

Name: Anonymous 2010-08-16 19:47

>>60
Hardly. I've mentioned D often because it's the only major attempt at replacing C++
Fair enough. It just seemed like your awe of its GC was preventing you from seeing how it equates to memory management in general.

All of these use a garbage collector as a backup strategy. You seem to be ignoring this rather major issue
I tried to make it clear that I was speaking about the compile-time solvable cases. I think I was very explicit about this at one point. I was speaking about them for two different reasons, the first being about equivalence to a great amount of manual management, and the second regarding the confusion viz. D's MM annotations.

I would like to mention that while I don't think there is a serious problem with manual memory management, if your management strategy cannot, in principle, be proven correct (perhaps with certain allowances for exceptional cases), then it is not one you should use. I don't expect an analyzer to reproduce any given management strategy chosen by the programmer, or even solve for complex provable cases. I don't even expect a prover to try to verify everything that could, in principle, be verified... or even that it could prove with a higher bailout. What I mean is that the extent of what is provable at compile time is far greater than you seem to let on.

At the same time, it is not practical to automatically verify it in detail (even with a great deal of help from the programmer, but for different reasons), but I take it to be the programmer's responsibility to reason about the management strategy and find it sound. If you find that's too much work, or you are unable to do it reliably enough, or just find it undesirable for whatever reason, use a GC. I won't even call you lazy. The point is that putting faith in a GC, an analyzer, or your sensibilities + the guy reviewing your code is largely about confidence. We tend to have more when we can verify it, but as you've been insisting a lot, analysis can't solve the general case. So you might make a mistake, or the analyzer might have a bug, or the GC might be pure reference counting and never perform satisfactorily on your incestuous data structure. (If safety by abstention is admissible, don't ever call free() and all manual management is inherently 'safe'.)

I don't know what I expected. Bleh.
This is about the best you can do here. Personally I'm not much interested in "replacing C++", because it sounds like a demand for something C++ programmers would find appealing, but I am more interested in PL in general. For solving the problems C++ and others are traditionally applied to, I have a list of languages I am following. None of them resemble C++ very much.

we agree on most things now (except for interest in Go, heh.)
My interest in Go isn't really in Go per se. There are some interesting things going on and I'd be remiss if I ignored them.

Name: Anonymous 2010-08-16 22:26

Name: Anonymous 2010-08-16 23:28

How would you guys rate Objective-C? It appears to me that Obj-C tries to do what D is doing, which is to implement more of a Smalltalk-style object orientation. The only things I know about Obj-C are that it contains C as a subset and that the OO in Obj-C is supposed to be slower than that of C++. What's your opinion on Obj-C?

Name: Anonymous 2010-08-17 0:11

Objective-C is closer to Java or C#/.NET than it is to D, minus the virtual machine in Java or .NET.

Cocoa has the same breadth and scope as the Java or .NET libraries.

I'm not a big fan of it, I'd rather program in C++.

Name: Anonymous 2010-08-17 1:16

Except Java is designed to be code for a virtual machine and C# is a full fledged language that happens to be written primarily for a virtual machine.

Java doesn't even come close to the scope of the .NET libraries. Cocoa and other Apple APIs like the iFag library are shit-eatingly stupid for an OO library because they rely on functional methods.

And C++ is fucking garbage.

Name: Anonymous 2010-08-17 2:17

>>63
Objective-C is a real superset of C (as opposed to an almost-superset, like C++) that adds objects with dynamic dispatch evocative of Smalltalk.  Version 2.0 adds garbage collection and a bunch of syntactic sugar.  Objective-C is not only way older than D, it's almost as old as C++ (and C++ was way worse back then).

The Cocoa libraries are only a little bit like the .NET or Java libraries.  The Cocoa libraries are geared almost exclusively towards making desktop / mobile applications, so you won't find an equivalent to, e.g., ASP or JSP for Cocoa.  The Cocoa libraries also assume that you already have a working standard C library, whereas the Java platform and .NET platform start from the ground up.

Cocoa is the best library for writing desktop/mobile UIs bar none.  Qt has a good reputation, but if you've seen the kind of preprocessing they have to do to get a decent UI API in C++, you'll just chuckle.  Of course, most sane people don't use Objective C for non-UI parts of their code, except those eccentric developers that only target Apple platforms.

For big, cross-platform apps go ahead and write your code in C++ and then make the GUI on the Mac/iPhone use Objective-C.  Yes, they are completely interoperable (no, you can't subclass classes from one in the other, but nobody wants to anyway).  This way you can keep your cross-platform C++ code without having to try to write a UI in C++, which, as every C++ UI library ever demonstrates, is a total pain.

Name: Anonymous 2010-08-17 5:00

>>63
You're pretty much correct. As >>66 says, Obj-C is a true superset of C which adds object-orientation using Smalltalk-like syntax. Other than that it's a lot like Java, but it is more dynamic and does not have a JIT, and it's susceptible to memory corruption. Wikipedia has a long writeup if you want to see what it looks like.

It has a lot of bad warts, and while it is used everywhere on Macs, it is used absolutely nowhere outside of Macs.  I wouldn't use it if I didn't have to.

>>65
Except Java is designed to be code for a virtual machine and C# is a full fledged language that happens to be written primarily for a virtual machine.
Nah, Java was designed for enterprise coders. It is almost featureless for this reason: it is so that you can hire a bunch of bad coders, and they can actually get something working without doing too much damage. It is extremely unproductive for anyone who actually knows what they're doing.

>>66
Yes, they are completely interoperable
To elaborate a bit, they are very interoperable (you can mix them freely in the same source file), but their features do not correspond to each other. They have separate class hierarchies: a C++ object is not an Objective-C object, you cannot subclass one from the other, you cannot template Objective-C classes or methods, etc. They have separate exception handling stacks: the try/catch from one cannot catch exceptions from the other, Objective-C exceptions don't unwind the stack, etc. That last one is particularly dangerous; definitely do not throw exceptions across language boundaries.

And lastly, if you thought C++ error messages were confusing, try Objective-C++; since even C++ is undecidable, Objective-C++ is downright hilarious. One of my favorite compiler errors is "Confused by earlier errors; bailing out." That one is especially fun when it's the only compiler error you get.

For OpenGL games (on iPhone or Mac OS X), virtually all developers just wrap the Objective-C they are forced to use and write everything in C/C++.

Name: Anonymous 2010-08-17 5:06

>>66
(C++ was way worse back then)

you've got to be kidding, right?

Name: Anonymous 2010-08-17 9:10

Sure has been a lot of Apple nonsense on /prog/ lately.

Name: Anonymous 2010-08-17 9:41

>>67
Objective-C and C++ exceptions have been compatible since Objective-C 2.0 was introduced.

Name: Anonymous 2010-08-17 14:42

>>68
No, it's true.  C++ was compiled with Cfront back in 1986, when Objective C was introduced.  Cfront was a godawful piece of shit that turned C++ code into C code, and it had a lot of weird corner cases, many of which made it into the C++ spec.

Name: Anonymous 2010-08-17 23:54

>>70
http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/ObjectiveC/Articles/ocCPlusPlus.html#//apple_ref/doc/uid/TP30001163-CH10-SW2

In addition, multi-language exception handling is not supported. That is, an exception thrown in Objective-C code cannot be caught in C++ code and, conversely, an exception thrown in C++ code cannot be caught in Objective-C code.
What isn't mentioned here is also that Obj-C exceptions will not unwind the stack; no C++ destructors will be run. I'm pretty sure that Obj-C @throw is still a wrapper to longjmp() even in 2.0.

You simply cannot throw exceptions across language boundaries. Don't do it.

Name: Anonymous 2010-08-18 11:14

>>72
http://developer.apple.com/mac/library/releasenotes/Cocoa/RN-ObjectiveC/index.html#//apple_ref/doc/uid/TP40004309-CH1-DontLinkElementID_11

In 64-bit, the implementation of Objective-C exceptions has been rewritten. The new system provides "zero-cost" try blocks and interoperability with C++.
"Zero-cost" try blocks incur no time penalty when entering an @try block, unlike 32-bit which must call setjmp()  and other additional bookkeeping. On the other hand, actually throwing an exception is much more expensive. For best performance in 64-bit, exceptions should be thrown only in exceptional cases.
In 64-bit, C++ exceptions and Objective-C exceptions are interoperable. In particular, C++ destructors and Objective-C @finally blocks are honored when unwinding any exception, and default catch clauses—catch (...) and @catch (...)—are able to catch and re-throw any exception.


So I was wrong in that it's not a feature of 2.0 but the new 64-bit runtime.

Name: Anonymous 2010-08-18 16:05

Aren't there mailing lists for these kinds of discussions? Or reddits? Or overflows? Or anywhere elses?

Name: Anonymous 2010-08-18 16:19

>>74
Hey, you. Yes, you. Fuck you!

Name: Anonymous 2010-08-18 17:45

>>73
Hey, that's interesting. Good news there, except, of course, it doesn't work on iPhone. *sigh*

Name: Anonymous 2010-08-18 21:48

>>74
Die in a fire.  This is probably the most informative thread on the front page.

Name: Anonymous 2010-08-19 4:17

>>77
This is probably the most informative thread on the front page.
Then why did you not bump it?

Name: Anonymous 2010-08-19 4:26

>>78
Didn't you hear? This is /prog/. We sage interesting discussions and only age spam and trolls.

Name: Anonymous 2010-08-19 9:09

>>77
If you want something really informative, you should check out other places instead of turning /prog/ into them.

Just sayin'.

Name: Anonymous 2010-08-19 14:53

C99 is all you need.

Name: Anonymous 2010-08-19 16:10

>>79
That's not true. There is rarely any discussion in threads beyond the front page. It's not like /prog/ is popular enough to really need it anyway.

Name: Anonymous 2010-08-19 17:43

I think this thread is spent, sadly.

Name: Anonymous 2010-08-21 13:30

Trying to spark more discussion here...

>>7
malloc also gives you GC pauses, by the way.
I hate it so much when people say this. malloc() does not have "GC pauses", but yes, it can be unpredictably slow. We get that. That's why we're smart enough to not call malloc() in the middle of our rendering loop.

In most languages with forced GC, you simply do not have the means to perform any non-trivial computation without invoking the garbage collector. In Java, simple container classes like Point2D, which should be value types, are allocated on the heap. JIT can elide *most* of these with escape analysis, but no promises, and no feedback either. Functional languages are absolutely the worst for this; in something like Haskell, seemingly innocuous function calls allocate thunks for lazy evaluation, or reconstruct lists as they process them.

Many embedded apps have very predictable malloc() performance because they never call free(). They allocate all needed memory on startup, and then work strictly within those bounds. When I did game development on mobile platforms before iPhone (BREW, Symbian) this is how we coded our games. You have to in order to make it safe, especially since different phones can have such wildly different available memory; this way you know that as long as the app starts, it will never run out of memory.

>>7
BitC
BitC is sort of neat... but I'm not excited about it, for a bunch of reasons. They seem confused about what syntax they want to use for it. They started out with Lisp syntax, but with no Lisp features (e.g. macros). Now they are trying to transition to a very ML-like or Haskell-like syntax. It also forces garbage collection, but at least it has the means to avoid causing garbage collection, such as refs to value types.

Most importantly it appears stagnant. Based on the bar on the left, nothing has changed since November 2008. The language is not in a usable state.

Name: Anonymous 2010-08-21 14:35

>>85
When I did game development on mobile platforms
Pff. I refuse to believe anyone on /prog/ knows how to program, let alone do it for a living. Enterprise bullshit sure, but not programming.

Name: Anonymous 2010-08-21 15:08

>>85
I hate it so much when people say this. malloc() does not have "GC pauses", but yes, it can be unpredictably slow. We get that. That's why we're smart enough to not call malloc() in the middle of our rendering loop.

>>7
The advantage of doing it manually is that you will never call malloc while handling a real time event.

Might want to calm that jerky knee at least long enough to read the following paragraph.

Name: Anonymous 2010-08-21 15:39

Malloc is slow.
People use either alloca or (usually) preallocated static buffers and manage them themselves (SSE memcpy/memmove, DMA hacks, etc.).

Name: Anonymous 2010-08-21 16:37

I'm sick of this bullshit about malloc() being slow. Who the fuck spouts this bullshit?

The malloc() on my system takes less than 0.05 microseconds (that's less than 100 cycles) for random allocations of 0-128KB. There are some "long" pauses here and there, and by "long" I mean less-than-a-microsecond long. The overhead is also ludicrously low, for example allocating 10 million times 2 bytes (discarding the pointers) consumes less than 21MB of virtual space. I've never seen the overhead exceed 10%.

I'm not saying malloc() is the final solution for memory allocation, but goddamn, it's supposed to be a convenient tool and not something that makes you roll your own shit on top. If yours doesn't work properly, replace it.

Name: Anonymous 2010-08-21 17:22

>>88
Taking advantage of hardware features is a hack?

Name: Anonymous 2010-08-21 17:27

>>89
100 cycles is a long time.

Name: Anonymous 2010-08-21 18:46

>>91
It's shorter than the time it takes for me to ejaculate.

Name: Anonymous 2010-08-22 0:02

>>90
I don't see normal programs resorting to DMA for speed.

Name: Anonymous 2010-08-22 0:44

>>91
So how long should it take to safely allocate memory?

Name: Anonymous 2010-08-22 1:19

>>94
It's not a case of "how long should it take" -- it just shouldn't be done in a real time event.

Name: Anonymous 2010-11-14 2:20

Name: Anonymous 2010-12-26 21:11

Name: Anonymous 2011-01-31 20:39

<-- check em dubz

Name: Anonymous 2011-02-18 17:41

dubz
Don't change these.