>>16
Most dynamic arrays are implemented so that they never over-commit more than double the memory (i.e., they double in capacity every time they grow).
I think it's totally reasonable to assume that the average case for an append is appending two lists of roughly the same size.
So no, I'd say the average case requires O(n) time, not constant.
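A rough sketch of that point (hypothetical Python; the `concat` function and the copy counter are mine, just for illustration): no matter what growth strategy the array uses for single pushes, concatenating two arrays of combined length n has to move n elements into a fresh buffer.

```python
# Hypothetical sketch: concatenating two dynamic arrays copies every
# element of both into a new contiguous buffer -- O(n) in the combined
# length, regardless of any doubling strategy for one-at-a-time appends.
def concat(xs, ys):
    out = [None] * (len(xs) + len(ys))  # one allocation for the result
    copies = 0
    for i, v in enumerate(xs):
        out[i] = v
        copies += 1
    for j, v in enumerate(ys):
        out[len(xs) + j] = v
        copies += 1
    return out, copies

result, copies = concat([1, 2, 3], [4, 5, 6])
# copies == 6: linear in the combined length
```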
Also, one of the basic idioms in Lisp is decomposing lists into function arguments with "apply". I guess you could come up with a hack for a dynamic-array implementation, but it would not be nearly as straightforward as "the list is now the argument stack; call the function."
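For anyone who hasn't used apply: Python's closest analogue is argument unpacking (a sketch; the `volume` function is a made-up example):

```python
# Lisp: (apply volume dims) -- the list itself becomes the argument list.
# Python's nearest equivalent is * unpacking.
def volume(length, width, height):
    return length * width * height

dims = [2, 3, 4]
volume(*dims)  # same as (apply volume dims); evaluates to 24
```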
Also, here's a purely functional example: a function that takes a list and returns a new list twice as long, with each member repeated twice in a row: (a b c) --> (a a b b c c). (It's in a sort of lispy/haskelly pseudocode.)
(define (double-each a)
  (when a
    (append (list (car a) (car a))
            (double-each (cdr a)))))
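Translated to Python (a sketch; plain Python lists stand in for Lisp lists, slicing stands in for cdr):

```python
def double_each(a):
    # (a b c) -> (a a b b c c): pair up the head, recurse on the tail
    if not a:
        return []
    return [a[0], a[0]] + double_each(a[1:])

double_each(['a', 'b', 'c'])  # -> ['a', 'a', 'b', 'b', 'c', 'c']
```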
With dynamic arrays, this would first make a bunch of small arrays, and then go about appending them together in the worst way possible, because the "head" of the growing result moves /backward/ each time: every append re-copies the entire suffix built so far.
Gross, right? With cells you just move pointers around. Kills the cache, but at least the time complexity is 1) good and 2) easy to reason about.
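A minimal cons-cell sketch of "just moving pointers around" (hypothetical Python, with a 2-tuple `(head, rest)` as the cell): each step allocates exactly two new cells that point at the already-built tail, which is shared, never copied.

```python
# Cons cells as 2-tuples: (head, rest), with None as the empty list.
def cons(head, rest):
    return (head, rest)

def double_each(a):
    if a is None:
        return None
    # two fresh cells point at the recursively built tail -- no copying
    return cons(a[0], cons(a[0], double_each(a[1])))

lst = cons('a', cons('b', cons('c', None)))  # the list (a b c)
doubled = double_each(lst)                   # the list (a a b b c c)
```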
And Clojure's implementation is based on the JVM. I think it's safe to say that we are talking about some kind of typical bit of modern hardware, and not the JVM.