>>10
Given a node structure with ( parent, left, right ), the following is a possibly correct prefix traversal of a binary search tree, written so that every recursive call is a tail call and can therefore be optimized to not use the stack.
Similar forms exist for in-order and post-order traversals (you move the call to func around).
def _iterate( curr, last, func ):
    if not curr: return None
    if curr.parent == last:
        func( curr )
        if curr.left:
            return _iterate( curr.left, curr, func )
        elif curr.right:
            return _iterate( curr.right, curr, func )
        else:
            return _iterate( curr.parent, curr, func )
    elif curr.left and curr.left == last:
        if curr.right:
            return _iterate( curr.right, curr, func )
        else:
            return _iterate( curr.parent, curr, func )
    else:
        return _iterate( curr.parent, curr, func )
elif curr.left and curr.left == last:
if curr.right:
return _iterate( curr.right, curr, func )
else:
return _iterate( curr.parent, curr, func )
else:
return _iterate( curr.parent, curr, func )
Obviously unoptimized, but you get the drift. Since the traversal (when viewed as an undirected graph) goes across each edge exactly twice (once in each direction), we can store the state of the traversal simply as which edge we're at, and which direction we're heading. This can be represented as two node pointers (the ones which compose the edge) ordered respecting the current step's 'direction'.
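Here's that edge/direction idea written out as an explicit loop, so the state really is nothing but the ordered pair of pointers. This is a sketch using a hypothetical Node class with parent/left/right, not the function above verbatim:

```python
class Node:
    # minimal hypothetical node: parent pointer plus left/right children
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.left = None
        self.right = None

def iterate(root, func):
    # traversal state = the current edge, as (last, curr) ordered by direction
    last, curr = None, root
    while curr is not None:
        if curr.parent == last:                  # heading down: visit on arrival
            func(curr)
            if curr.left:
                last, curr = curr, curr.left
            elif curr.right:
                last, curr = curr, curr.right
            else:
                last, curr = curr, curr.parent
        elif curr.left is not None and curr.left == last:  # back from the left
            if curr.right:
                last, curr = curr, curr.right
            else:
                last, curr = curr, curr.parent
        else:                                    # back from the right: keep rising
            last, curr = curr, curr.parent
```

The loop terminates when it tries to step above the root, i.e. when curr becomes None.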
For more details please refer to Structure and Interpretation of Computer Programs.
>>14
Whether or not you can do that depends on how you implement the tree structure. I always use linked nodes which don't allow you to simply traverse a list (since you have to traverse the tree to generate the list to traverse, making the solution paradoxically recursive).
If you store the tree in a flattened state, however (which incidentally takes less memory at the cost of somewhat painful insertions/deletions if the tree must be re-balanced) it's fairly trivial to treat the flattened entries as a list.
Alternatively, use a separate view of the entire data structure to effectively cache the tree as a list (though this has relatively significant memory overhead and again, makes insertions/deletions painful).
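A minimal sketch of the flattened layout, assuming the usual heap-style indexing (node i's children at 2i+1 and 2i+2, parent at (i-1)//2, None marking an absent slot) -- the point being that no per-node parent pointer is stored:

```python
def children(tree, i):
    # child positions are computed from the index, not stored
    l, r = 2 * i + 1, 2 * i + 2
    return (l if l < len(tree) else None,
            r if r < len(tree) else None)

def parent(i):
    # likewise the parent is implicit in the position
    return (i - 1) // 2 if i > 0 else None

def visit_all(tree, func):
    # "treating the flattened entries as a list" is just iterating the array
    for value in tree:
        if value is not None:
            func(value)
```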
Name:
Anonymous2008-08-05 6:47
>>4
Oh! Oh! Maybe you could help me design an algorithm for calculating an average of values over a random path (loops allowed) in a graph. The path always ends with a leaf.
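Something like this is what I have so far (nodes as dicts with a numeric 'value' and a 'next' list of successors, a leaf being a node with no successors), but I'm not sure it's right:

```python
import random

def random_path_average(start, rng=random):
    # walk randomly (loops allowed) until we hit a leaf, averaging the
    # values seen along the way; assumes the walk eventually reaches a leaf
    total, count = 0, 0
    node = start
    while True:
        total += node['value']
        count += 1
        if not node['next']:          # leaf: the path ends here
            return total / count
        node = rng.choice(node['next'])
```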
Name:
Anonymous2008-08-05 7:11
>>17
Are you trying to suggest that's impossible to do iteratively? If so, I would like to suggest you return to Computer Science I.
Name:
Anonymous2008-08-05 7:26
>>15
I'm not sure how you can say "less overhead" when you store the parent in every node.
>>19
Don't underestimate the overhead of function calls.
Name:
Anonymous2008-08-05 7:44
>>21
The overhead of function calls (memorywise, anyway) will be gone once the traversal completes.
Whereas a binary tree with parents stored will always have that wasted space.
If we wanted to conserve both memory and processor time, implementing the traversal iteratively with a stack would be best, since the stack would disappear after the traversal and would only be as long as the height of the tree, whereas storing the parent means an extra value for every node.
I suppose you could make the argument that using a stack would increase the number of cache misses since it's separate from the binary tree, but that's more of a real-world issue and we seem to be talking more along the lines of an idealized computer. Same with the memory allocation and fragmentation associated with creating and using a stack.
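Something like this, say (hypothetical Node with just left/right -- note that no parent pointer is needed at all):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(root, func):
    # the explicit stack stands in for the call stack; it never holds more
    # than about one node per level of the tree
    stack = [root] if root else []
    while stack:
        node = stack.pop()
        func(node)
        if node.right:          # push right first so the left subtree is visited first
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
```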
Name:
Anonymous2008-08-05 7:47
>>18
I'm not saying it's impossible but with some algorithms it's:
A) Fucking hard
B) imposes more overhead than recursion
etc.
>>19 Obviously unoptimized
You can do it just fine without storing a pointer to the parent if you use a flat tree implementation (such that the parent can be inferred from the node's position in the list and the location of the root node). Using an implicit data structure uses less memory, which has the obvious effect of taking more time (specifically, for insertions/removals) because you have to shuffle nodes around rather than swapping pointers.
But any iterative approach to tree traversal requires you to be able to resolve a node's parent -- naturally this is an advantage for a stack-based traversal.
Ultimately, it really depends on your usage scenario. If you're doing mostly read operations, you can store the tree in an array and not have the "parent" overhead (a recursive approach effectively stores the parent pointers on the stack at runtime). If you've got a nice big stack, that works fine, but you're almost always going to have a larger heap, which is where the iterative traversal is throwing its "parent overhead".
Name:
Anonymous2008-08-05 16:49
recursion is fucking sweet.
Name:
Anonymous2008-08-05 18:48
fucking is sweet recursion.
Name:
Anonymous2008-08-05 19:59
In order to understand recursion, you must first understand recursion.
Name:
Anonymous2008-08-05 21:00
>>28
... or know someone who understands recursion.
Name:
Anonymous2008-08-05 21:50
Recursive loop style makes the loop conditions clear and cleans up flow control
Name:
Anonymous2008-08-05 22:20
>>30
And clogs up the stack anywhere outside the happy world of toy languages.
Name:
Anonymous2008-08-05 22:23
You can always make it obvious using a goto-like recur
>>31
Uh, GCC supports tail-call optimization. You can verify this yourself by compiling with "gcc -O2 -S" and looking at the assembly of a C tail-recursive function.
Name:
Anonymous2008-08-05 22:36
Actually, the compiler's job is to compile your code according to the rules it is supposed to follow. So suck it.
Name:
Anonymous2008-08-06 2:19
>>31
The languages where it clogs up the stack are the toy languages.
>>36
That's why tail call optimization needs to be one of the rules.
Name:
Anonymous2008-08-06 3:21
Recursion works well in the early phases of development because it's more natural in many cases. And many times it's acceptable to leave a function recursive. But recursion makes it difficult to take advantage of destructive operations and thus wastes memory and speed via needless function calls (and yes, function calls are cheap in lisps but they do add up). That's why when you're optimizing, you replace recursive functions with iterative counterparts. That's how real Lisp coders (which excludes most of you SICP worshiping faggots) do things, anyways. They get a prototype working asap, then replace recursion with iteration for performance (in terms of both time & space).
And tail recursion isn't a 'feature' or anything to brag about. If Scheme didn't have it, it'd be completely useless; you couldn't even implement a for loop (without macros stolen from other lisps) because continuations absolutely depend on tail calls.
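What I mean is that with tail calls, a for loop is just a tail-recursive function. In Python syntax (which, to be clear, does not actually eliminate tail calls):

```python
def for_loop(i, n, body):
    # loop from i to n-1 by tail-calling ourselves; with TCO this would
    # run in constant stack space
    if i >= n:
        return
    body(i)
    return for_loop(i + 1, n, body)   # tail call: the last thing we do
```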
Name:
Anonymous2008-08-06 3:55
>>38
A compiler can optimize a tail recursion far better than it can optimize your low-level implementation decisions of how the iteration should work.
>>39
I don't think so Tim. Optimizing tail recursion can get rid of the overhead but it won't magically turn your recursive algorithm into an iterative one. I know because Sussman told me so.
Name:
Anonymous2008-08-06 9:08
In OCaml tail recursive is often way faster than iterative, so go suck a fuck you fags.
An iterative solution is often completely different from the recursive solution. >>15 is a shitty iterative prefix tree traversal -- the recursive variant looks like this:
def traverse( node, func ):
    if not node: return
    func( node )
    traverse( node.left, func )
    traverse( node.right, func )
It doesn't use tail recursion. It can't use tail recursion. In order to avoid raping the fuck out of your stack, you must use an iterative approach like the one above.
Tail recursion is the same goddamn thing as an iterative approach -- any tail-recursive function can be wrapped into a loop which does the same thing (and this is what any self-respecting compiler does for you), and any iterative loop can be restructured into a tail-recursive function. A true recursive function (qsort, tree traversal, etc) is one that is not tail recursive but branches out.
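For instance (a made-up example; Python itself won't do the rewrite for you, but the mechanical correspondence is the point):

```python
def length_rec(lst, acc=0):
    if not lst:
        return acc
    return length_rec(lst[1:], acc + 1)   # tail call: nothing happens after it

def length_loop(lst, acc=0):
    # the compiler's rewrite: the tail call becomes a parameter rebinding
    # plus a jump back to the top
    while lst:
        lst, acc = lst[1:], acc + 1
    return acc
```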
Name:
Anonymous2008-08-06 10:42
>>43
The Erlang VM seems to get along just fine with optimizing deep recursive loops without the typical "tail recursive only!" constraints.