
algorithm - Merge Sort Time and Space Complexity

Let's take this implementation of Merge Sort as an example:

void mergesort(Item a[], int l, int r) {
    if (r <= l) return;
    int m = (r + l) / 2;
    mergesort(a, l, m);      /* ------------ (1) */
    mergesort(a, m + 1, r);  /* ------------ (2) */
    merge(a, l, m, r);
}
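
For reference, merge() isn't shown above; one possible array-based version (a sketch only, assuming Item can be compared with < and copied by assignment) copies the two halves into a temporary buffer and merges back:

#include <stdlib.h>

/* One possible merge for the code above: copies a[l..r] into a temporary
   buffer while merging, then copies back. The malloc'd buffer is the O(n)
   auxiliary space discussed in question (b). */
void merge(Item a[], int l, int m, int r) {
    int n = r - l + 1;
    Item *tmp = malloc(n * sizeof(Item));
    if (tmp == NULL) return;              /* allocation failure: bail out (sketch only) */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];   /* take the smaller head element */
    while (i <= m) tmp[k++] = a[i++];     /* leftovers from the left half  */
    while (j <= r) tmp[k++] = a[j++];     /* leftovers from the right half */
    for (k = 0; k < n; k++) a[l + k] = tmp[k];
    free(tmp);
}

The malloc of n Items here is exactly where the O(n) additional space mentioned in question (b) comes from.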

a) The time complexity of this merge sort is O(n lg n). Will parallelizing (1) and (2) give any practical gain? Theoretically, it appears that even after parallelizing them you would still end up with O(n lg n). But can we get any gains in practice?

b) The space complexity of this merge sort is O(n). However, if I choose to perform an in-place merge sort using linked lists (not sure if it can be done with arrays reasonably), will the space complexity become O(lg n), since you have to account for the recursion stack frames? Can we treat O(lg n) as constant, since it cannot be more than 64? I may have misunderstood this in a couple of places. What exactly is the significance of 64?

c) http://www.cprogramming.com/tutorial/computersciencetheory/sortcomp.html says merge sort requires constant space using linked lists. How? Did they treat O(lg n) as constant?

d) [Added to get more clarity] For space complexity calculations, is it fair to assume the input array or list is already in memory? When I do complexity calculations, I always count the "extra" space I need beyond the space already taken by the input. Otherwise the space complexity would always be O(n) or worse.

Question from: https://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity


1 Answer


a) Yes - in a perfect world you would still have to do log n levels of merges of sizes n, n/2, n/4, ... (or, read bottom-up, 1, 2, 4, ..., n/4, n/2, n). The merges within one level can run in parallel, but the levels themselves cannot, since each merge waits on its two halves, so even with unlimited processors the parallel running time is n + n/2 + n/4 + ... = O(n). The total work is still O(n log n). In the not-so-perfect world you don't have an infinite number of processors, and context switching and synchronization offset any potential gains.
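
For instance, one way to parallelize (1) and (2) is with OpenMP tasks. This is only a sketch: the name mergesort_parallel and the depth cutoff are made up for the example, and it reuses the mergesort/merge from the question.

#include <omp.h>

/* Hedged sketch: run the two recursive calls as OpenMP tasks.
   Compile with -fopenmp and call from inside an omp parallel / omp single region. */
void mergesort_parallel(Item a[], int l, int r, int depth) {
    if (r <= l) return;
    int m = (r + l) / 2;
    if (depth <= 0) {                     /* below the cutoff, recurse serially */
        mergesort(a, l, m);
        mergesort(a, m + 1, r);
    } else {
        #pragma omp task                  /* half (1) runs as its own task */
        mergesort_parallel(a, l, m, depth - 1);
        #pragma omp task                  /* half (2) runs concurrently */
        mergesort_parallel(a, m + 1, r, depth - 1);
        #pragma omp taskwait              /* both halves must finish before merging */
    }
    merge(a, l, m, r);                    /* merges on the critical path stay sequential */
}

The depth cutoff limits how many tasks get created; without it, the task-creation and synchronization overhead is exactly the cost mentioned above that eats into the gains.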

b) Total space complexity is always Ω(n), since you have to store the elements somewhere. The additional space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, list-based implementations need additional space for the list pointers, so the O(1) figure only really holds if you already have the list in memory.

Edit: if you also count the stack frames, it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.
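
If you want to see the O(log n) stack term concretely, you can instrument the recursion with a depth counter. The names max_depth and mergesort_depth below are illustrative, not part of the question's code, and it reuses the question's Item and merge:

/* Illustrative instrumentation: track the deepest recursion level reached. */
static int max_depth = 0;

void mergesort_depth(Item a[], int l, int r, int depth) {
    if (depth > max_depth) max_depth = depth;   /* record the stack depth */
    if (r <= l) return;
    int m = (r + l) / 2;
    mergesort_depth(a, l, m, depth + 1);
    mergesort_depth(a, m + 1, r, depth + 1);
    merge(a, l, m, r);
}

For n = 1,000,000 elements, max_depth comes out around 20 (⌈log2 n⌉), and since n can never exceed 2^64 on a 64-bit machine, the depth never exceeds roughly 64 - presumably the significance of the 64 in question (b).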

c) Lists only need some pointers changed during the merge process. That requires constant additional memory.
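
For example, here is a sketch of merging two already-sorted singly linked lists by relinking. The node struct and merge_lists name are just for illustration, not from the question:

/* Illustrative only: merge two sorted singly linked lists in place.
   No nodes are allocated; only next pointers change, so the extra memory is O(1). */
struct node {
    int key;
    struct node *next;
};

struct node *merge_lists(struct node *x, struct node *y) {
    struct node head = {0, NULL};         /* dummy head to simplify the loop */
    struct node *tail = &head;
    while (x != NULL && y != NULL) {
        if (x->key <= y->key) { tail->next = x; x = x->next; }
        else                  { tail->next = y; y = y->next; }
        tail = tail->next;
    }
    tail->next = (x != NULL) ? x : y;     /* append whichever list is left over */
    return head.next;
}

Only next pointers are rewritten and no nodes are allocated, which is what the constant-additional-space claim for lists refers to; the recursion stack (or the list-splitting step) is accounted for separately.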

d) That's why, in merge-sort complexity analysis, people mention the "additional space requirement" or something similar. It's obvious that you have to store the elements somewhere, but it's always better to say "additional memory" to keep purists at bay.

