As a general rule, should C programs use linked lists or dynamic arrays?
Name:
Anonymous2013-03-19 4:08
For certain information, like indexed data that has to be queried, you should use some kind of tree. Red-black trees are quite simple to implement and give O(log n) for all operations. No, a hash table won't suffice here.
Because on a tree you can also do the query "give me all values between 3 and 5", which is much more natural on a tree than on a hash table.
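A minimal sketch of that range query, using a plain unbalanced BST for brevity (a red-black tree only adds rebalancing on insert; the query code is identical, and all names here are made up for illustration):

```c
#include <stdlib.h>

/* Plain BST node; a red-black tree would add a color field. */
struct node {
    int key;
    struct node *left, *right;
};

struct node *insert(struct node *root, int key) {
    if (!root) {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

/* Collect every key in [lo, hi] into out (in sorted order),
   pruning subtrees that cannot contain matches -- the operation
   a hash table cannot do. Returns the new element count. */
size_t range_query(const struct node *root, int lo, int hi,
                   int *out, size_t n) {
    if (!root)
        return n;
    if (root->key > lo)
        n = range_query(root->left, lo, hi, out, n);
    if (root->key >= lo && root->key <= hi)
        out[n++] = root->key;
    if (root->key < hi)
        n = range_query(root->right, lo, hi, out, n);
    return n;
}
```

Only the subtrees that can overlap [lo, hi] are visited, so the query costs O(log n + k) for k results on a balanced tree.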
Name:
Anonymous2013-03-19 5:56
>>39 Sure they seem like O(1) but in real world conditions they are terrible.
Insertion to the start of a linked list *is* O(1). O(1) describes a rate of growth, not a duration.
>>39 Sure they seem like O(1) but in real world conditions they are terrible. They take more memory. They destroy cache.
You might need to read this so it doesn't sound like you have absolutely no clue what you're talking about. https://en.wikipedia.org/wiki/Computational_complexity_theory
Name:
Anonymous2013-03-19 6:10
>>42
Malloc implementations under heavy fragmentation would struggle to stay close to O(1). But >>39-sama seemed to be talking about constant factors for memory use and time.
>>45
Not sure what you're on about. Are you delusional?
Meanwhile you could implement dynamic arrays as linked lists of large chunks of memory: struct dyn_arr_chunk { struct dyn_arr_chunk *next; int data[(PAGESIZE - sizeof(struct dyn_arr_chunk *)) / sizeof(int)]; };
Then you don't need to waste your time arguing with each other.
Name:
Anonymous2013-03-19 6:21
Just curious, would you say it's better in general to do few large allocations through malloc than many small ones?
Ideally you'd allocate in multiples of page size and make sure all your data structures neatly pack into 4K chunks.
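One way to sketch the "pack into pages" idea (struct record and the helper name are invented for illustration; at runtime you'd ask sysconf(_SC_PAGESIZE) for the real page size rather than assuming 4K):

```c
#include <stddef.h>

/* Hypothetical record type: 16 bytes on a typical 64-bit platform. */
struct record {
    long id;
    double value;
};

/* How many whole records pack into one chunk of the given size.
   Allocating chunk_size bytes once replaces this many tiny mallocs,
   each of which would carry its own allocator bookkeeping overhead. */
size_t records_per_chunk(size_t chunk_size) {
    return chunk_size / sizeof(struct record);
}
```

With 16-byte records, one 4K page holds 256 of them; a single malloc(4096) then serves 256 insertions before the next allocation.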
One way to get both O(1) insertion and O(1) indexing from a linked list is to build a "dynamic" array on the first indexing operation as a cache. IIRC Firefox does this for its DOM node structures.
Maybe you should describe the use case. Basically "Firefox does this for its DOM node structures" isn't a use case. It does sound cool though, with "dynamic" in scare quotes. As if you read the Firefox source before installing.
>>49
Yes, malloc is expensive.
>>53
Generally you'd want a linked list of static arrays to implement a dynamic array so (when triggering a resize) you don't have to realloc and hope for another contiguous chunk of memory that is bigger than the current one you have.
Lookup is O(n/pagesize).
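A minimal sketch of that linked-list-of-static-arrays design (CHUNK_CAP and all names are hypothetical; in practice you'd size a chunk to a page, as suggested above):

```c
#include <stdlib.h>

#define CHUNK_CAP 1024  /* hypothetical; size this to a page in practice */

struct chunk {
    struct chunk *next;
    int data[CHUNK_CAP];
};

struct chunked_array {
    struct chunk *head, *tail;
    size_t len;
};

/* Append never moves existing elements: when the tail chunk fills
   up we just link a fresh one -- no realloc, no hoping for a bigger
   contiguous block. */
int ca_push(struct chunked_array *a, int v) {
    if (a->len % CHUNK_CAP == 0) {
        struct chunk *c = calloc(1, sizeof *c);
        if (!c)
            return -1;
        if (a->tail)
            a->tail->next = c;
        else
            a->head = c;
        a->tail = c;
    }
    a->tail->data[a->len % CHUNK_CAP] = v;
    a->len++;
    return 0;
}

/* Indexing walks i / CHUNK_CAP links, then does one array access --
   the O(n / chunk_size) lookup mentioned above. */
int ca_get(const struct chunked_array *a, size_t i) {
    const struct chunk *c = a->head;
    for (size_t skip = i / CHUNK_CAP; skip > 0; skip--)
        c = c->next;
    return c->data[i % CHUNK_CAP];
}
```

The trade-off versus a realloc'd array is exactly the one being argued about: appends are cheap and stable, but random indexing pays a pointer chase per chunk.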
>>52 One way to get both O(1) insertion and O(1) indexing from a linked list is to build a "dynamic" array on the first indexing operation as a cache. IIRC Firefox does this for its DOM node structures.
Why can't Racket or Common Lisp do that?
Name:
Anonymous2013-03-19 12:10
>>1 As a general rule, should C programs use linked lists or dynamic arrays?
How about immutable binary trees?
>>60
I don't subject myself to such abstract bullshit that only exists at compile time
Name:
Anonymous2013-03-20 12:02
>>54 you don't have to realloc and hope for another contiguous chunk of memory that is bigger than the current one you have
You don't have to hope anything. If you realloc to a block 4k bigger than you had before, and your old block's size was a multiple of 4k, realloc will very often return the same pointer. That's because the OS can just allocate a new page of memory and map it right after your old chunk.
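A quick way to observe this (the function name is made up; whether the pointer actually moves depends on the allocator, the size, and what else is on the heap, so don't rely on either outcome):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Allocate `pages` 4K pages, fill them, then grow the block by one
   page and print whether the pointer moved. Returns 1 if the old
   contents survived the realloc (they always should), 0 on
   allocation failure. */
int grow_one_page(size_t pages) {
    size_t size = pages * 4096;
    char *p = malloc(size);
    if (!p)
        return 0;
    memset(p, 0xAB, size);

    char *q = realloc(p, size + 4096);
    if (!q) {
        free(p);
        return 0;
    }
    /* Comparing the old pointer value is only a diagnostic here. */
    printf("moved: %s\n", p == q ? "no" : "yes");

    int preserved = (unsigned char)q[0] == 0xAB &&
                    (unsigned char)q[size - 1] == 0xAB;
    free(q);
    return preserved;
}
```

What realloc does guarantee is that the old contents are preserved either way; in-place growth is an optimization, not something the standard promises.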