
algorithm - Data structure name: combination array/linked list

I have come up with a data structure that combines some of the advantages of linked lists with some of the advantages of fixed-size arrays. It seems very obvious to me, and so I'd expect someone to have thought of it and named it already. Does anyone know what this is called:

Take a small fixed-size array. If the number of elements you want to put in your array is greater than the size of the array, add a new array and whatever pointers you like between the old and the new.

Thus you have:

Static array
-------------------------
|1|2|3|4|5|6|7|8|9|a|b|c|
-------------------------

Linked list
----  ----  ----  ----  ----
|1|*->|2|*->|3|*->|4|*->|5|*->NULL
----  ----  ----  ----  ----

My thing:
------------  ------------
|1|2|3|4|5|*->|6|7|8|9|a|*->NULL
------------  ------------
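In code, the node layout I have in mind looks roughly like this (the block size, the names, and the choice of C++ are arbitrary; it's just to illustrate the idea):

    #include <cstddef>

    // One node: a small fixed-size array plus a link to the next node.
    // BLOCK_SIZE is arbitrary here; in practice you'd tune it (e.g. to fit a cache line).
    template <typename T, std::size_t BLOCK_SIZE = 8>
    struct Block {
        T           items[BLOCK_SIZE];   // fixed-size storage
        std::size_t count = 0;           // number of slots actually in use
        Block*      next  = nullptr;     // next block, or nullptr at the end
    };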

Edit: For reference, this structure offers pretty poor worst-case insertion/deletion performance, and not much better average-case performance. The big advantage for my scenario is the improved cache performance for read operations.

Edit re bounty: Antal S-Z's answer was so complete and well-researched that I wanted to award them a bounty for it. Apparently Stack Overflow doesn't let me accept an answer as soon as I've offered a bounty, so I'll have to wait (admittedly I am abusing the intent of the bounty system somewhat, although it's in the name of rewarding someone for an excellent answer). Of course, if someone does manage to provide a better answer, more power to them, and they can most certainly have the bounty instead!

Edit re names: I'm not interested in what you'd call it, unless you'd call it that because that's what authorities on the subject call it. If it's a name you just came up with, I'm not interested. What I want is a name that I can look up in textbooks and on Google. (Also, here's a tip: Antal's answer is what I was looking for. If your answer isn't "unrolled linked list" and you don't have a very good reason for it, it's just plain wrong.)



1 Answer


It's called an unrolled linked list. There appear to be a couple of advantages, one in speed and one in space. First, if each node is appropriately sized (e.g., at most the size of one cache line), you get noticeably better cache performance from the improved memory locality. Second, since you have O(n/m) links, where n is the number of elements in the unrolled linked list and m is the number of elements you can store in any node, you can also save an appreciable amount of space, which is particularly noticeable if each element is small.

When constructing unrolled linked lists, implementations apparently try to leave some spare space in each node: when you try to insert into a full node, you move half of its elements out into a new node. Thus, at most one node will be less than half full. And according to what I can find (I haven't done any analysis myself), if you insert things randomly, nodes tend to actually be about three-quarters full, or even fuller if operations tend to be at the end of the list.
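To make the splitting rule concrete, here's a rough sketch of such an insert in C++ (the capacity, the names, and the int payload are illustrative, not taken from any particular implementation):

    #include <cstddef>
    #include <algorithm>

    // Sketch of an unrolled-linked-list insert that splits full nodes.
    constexpr std::size_t CAPACITY = 8;

    struct Node {
        int         items[CAPACITY];
        std::size_t count = 0;
        Node*       next  = nullptr;
    };

    // Insert 'value' at position 'pos' within 'node' (0 <= pos <= node->count).
    // If the node is already full, move the upper half of its elements into a
    // freshly allocated node first, so both nodes end up about half full.
    void insert_at(Node* node, std::size_t pos, int value) {
        if (node->count == CAPACITY) {
            Node* fresh = new Node;
            std::size_t half = CAPACITY / 2;
            // Move the upper half of the elements into the new node.
            std::copy(node->items + half, node->items + CAPACITY, fresh->items);
            fresh->count = CAPACITY - half;
            node->count  = half;
            fresh->next  = node->next;
            node->next   = fresh;
            // If the insertion point now falls in the new node, redirect there.
            if (pos > half) {
                insert_at(fresh, pos - half, value);
                return;
            }
        }
        // Shift elements right to open a gap, then write the new value.
        std::copy_backward(node->items + pos, node->items + node->count,
                           node->items + node->count + 1);
        node->items[pos] = value;
        ++node->count;
    }

Used like this, a split leaves both the old and the new node about half full before the pending element goes in.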

And as everyone else (including Wikipedia) is saying, you might want to check out skip lists. Skip lists are a nifty probabilistic data structure used to store ordered data with O(log n) expected running time for insert, delete, and find. They're implemented as a "tower" of linked lists, each layer having fewer elements the higher up it is. At the bottom there's an ordinary linked list containing all the elements; each successive layer has fewer elements, by a factor of p (usually 1/2 or 1/4).

It's built as follows. Each time an element is added to the list, it's inserted into the appropriate place in the bottom layer (this uses the "find" operation, which can also be made fast). Then, with probability p, it's also inserted into the appropriate place in the layer "above" it, creating that layer if necessary; if it was promoted, it appears in the next layer up again with probability p, and so on. To find something in this data structure, you start in the top layer and scan along; if the next element you see is too large, you drop down to the layer below and keep looking. It's sort of like a binary search. Wikipedia explains it very well, with nice diagrams. The memory usage is going to be worse, of course, and you're not going to have the improved cache performance, but a skip list is generally going to be faster.
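For a feel of how the layers fit together, here's a bare-bones C++ sketch of that scheme with p = 1/2 (the maximum level, the int keys, and all the names are arbitrary choices for illustration):

    #include <cstdlib>
    #include <vector>

    // Minimal skip-list sketch: each node carries a "tower" of forward pointers,
    // one per layer it participates in. Layer choice is probabilistic (p = 1/2).
    constexpr int MAX_LEVEL = 16;

    struct SkipNode {
        int key;
        std::vector<SkipNode*> forward;  // forward[i] = next node at layer i
        SkipNode(int k, int level) : key(k), forward(level, nullptr) {}
    };

    struct SkipList {
        SkipNode head{0, MAX_LEVEL};  // sentinel; its key is never read
        int level = 1;                // number of layers currently in use

        // Flip coins: with probability 1/2 the node also appears one layer higher.
        static int random_level() {
            int lvl = 1;
            while (lvl < MAX_LEVEL && std::rand() % 2 == 0) ++lvl;
            return lvl;
        }

        bool contains(int key) const {
            const SkipNode* x = &head;
            // Start at the top layer and drop down whenever the next key is too big.
            for (int i = level - 1; i >= 0; --i)
                while (x->forward[i] && x->forward[i]->key < key)
                    x = x->forward[i];
            x = x->forward[0];
            return x && x->key == key;
        }

        void insert(int key) {
            SkipNode* update[MAX_LEVEL];
            SkipNode* x = &head;
            // Find the predecessor of 'key' at every layer.
            for (int i = level - 1; i >= 0; --i) {
                while (x->forward[i] && x->forward[i]->key < key)
                    x = x->forward[i];
                update[i] = x;
            }
            int lvl = random_level();
            // Any brand-new layers hang directly off the head sentinel.
            for (int i = level; i < lvl; ++i) update[i] = &head;
            if (lvl > level) level = lvl;
            SkipNode* node = new SkipNode(key, lvl);
            for (int i = 0; i < lvl; ++i) {
                node->forward[i] = update[i]->forward[i];
                update[i]->forward[i] = node;
            }
        }
    };

The predecessor-finding loop in insert is the same top-down scan that contains uses, which is where the O(log n) expected cost comes from.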
