Thu May 03 2018

How does quicksort work? Is it really quick?


Quick Sort

As a programmer, you probably know that quicksort's average-case performance is O(n log n). What is less obvious is why it is typically two or three times faster than other O(n log n) algorithms such as merge sort and heapsort. Here, we will find out whether it is really quick or only quick by name, and how it performs.

Before diving deep, let's take a quick look at what Quicksort actually is.

So, what is Quicksort?

Quicksort is sometimes called partition-exchange sort. It is an efficient sorting algorithm that provides a systematic method for placing the elements of an array in order. It is used internally in many systems; for example, the C standard library's qsort() function is traditionally implemented as a quicksort variant. Quicksort was developed by Tony Hoare in 1959 and published two years later. It can sort items of any type for which a less-than relation is defined. Quicksort can operate in-place on an array, requiring only a small additional amount of memory to perform the sort. In the worst case it makes O(n²) comparisons, as many as selection sort always does, but this worst case rarely occurs in practice.

Logic

function quicksort(array)
    if length(array) > 1
        pivot = select any element of array            // e.g. first, middle, or random
        left = first index of array
        right = last index of array
        while left <= right
            while array[left] < pivot                  // skip items already on the correct left side
                left = left + 1
            while array[right] > pivot                 // skip items already on the correct right side
                right = right - 1
            if left <= right
                swap array[left] with array[right]     // move the out-of-place pair to the correct sides
                left = left + 1
                right = right - 1
        quicksort(array from first index to right)     // recursively sort the lower part
        quicksort(array from left to last index)       // recursively sort the upper part
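
Since the pseudocode above is language-agnostic, here is a minimal runnable C++ sketch of the same logic. The choice of the middle element as the pivot and the function names are purely illustrative; any pivot choice works.

#include <iostream>
#include <utility>
#include <vector>

// Sorts array[lo..hi] (both indices inclusive) using the partition logic above.
void quicksort(std::vector<int>& array, int lo, int hi)
{
    if (lo >= hi)                               // fewer than two elements: already sorted
        return;
    int pivot = array[lo + (hi - lo) / 2];      // "select any element": here, the middle one
    int left = lo;
    int right = hi;
    while (left <= right) {
        while (array[left] < pivot) ++left;     // skip items already on the correct left side
        while (array[right] > pivot) --right;   // skip items already on the correct right side
        if (left <= right) {
            std::swap(array[left], array[right]);
            ++left;
            --right;
        }
    }
    quicksort(array, lo, right);                // sort the lower part
    quicksort(array, left, hi);                 // sort the upper part
}

int main()
{
    std::vector<int> data{9, 4, 7, 1, 3, 8, 2};
    quicksort(data, 0, static_cast<int>(data.size()) - 1);
    for (int x : data) std::cout << x << ' ';   // prints: 1 2 3 4 7 8 9
    std::cout << '\n';
}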

Let’s find out why it is quicker than other O(n log n) sorts such as merge sort and heapsort

  • The quicksort algorithm is a particularly interesting one: it breaks a larger problem down into smaller subproblems.

  • It keeps choosing a pivot and partitioning the collection down to single-element sublists; because every pivot lands in its final position, putting the pieces back together takes no extra work, which helps it run faster than other sorts.

  • Quicksort is also a cache-friendly sorting algorithm, as it has good locality of reference when used on arrays.

  • Quicksort is also tail recursive, so the second recursive call can be eliminated with tail-call optimization, keeping stack usage low (see the sketch after this list).

  • It performs almost no unnecessary element swaps, and swapping is time-consuming. Quicksort never swaps elements that are already on the correct side of the pivot, so if your data is already largely ordered, it swaps almost nothing.

  • A good reason why quicksort is so fast in practice compared to most other O(n log n) algorithms such as heapsort is that it is relatively cache-efficient. In the cache (external-memory) model its running time is O((n/B) log(n/B)), where B is the size of a cache block. Heapsort has no such speedup: it jumps around the array and does not access memory cache-efficiently.

  • Most practical implementations of quicksort use a randomized version, which has an expected time complexity of O(n log n) (see the sketch after this list).

  • Quicksort is slightly sensitive to input that happens to be in the right order, in which case it can skip some swaps. Merge sort doesn't have any such optimization, which also makes quicksort a bit faster than merge sort.

  • The sort function used in the C++ STL is an improved version of quicksort: std::sort is typically implemented as introsort, which is basically a quicksort that keeps track of its recursion depth and falls back to heapsort when that depth grows too large.
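
To make the randomized-pivot and tail-recursion points above concrete, here is one possible C++ sketch (the name quicksort_random and the use of std::mt19937 are illustrative choices, not something from the article): the pivot is drawn at random, and after each partition the function recurses only into the smaller side and loops on the larger one, which bounds the stack depth by O(log n).

#include <random>
#include <utility>
#include <vector>

// Randomized quicksort sketch: random pivot plus "recurse into the smaller
// side, loop on the larger side", which keeps the stack depth at O(log n).
void quicksort_random(std::vector<int>& a, int lo, int hi, std::mt19937& rng)
{
    while (lo < hi) {
        std::uniform_int_distribution<int> pick(lo, hi);
        int pivot = a[pick(rng)];               // randomized pivot choice
        int left = lo, right = hi;
        while (left <= right) {                 // same partition loop as in the pseudocode
            while (a[left] < pivot) ++left;
            while (a[right] > pivot) --right;
            if (left <= right) {
                std::swap(a[left], a[right]);
                ++left;
                --right;
            }
        }
        if (right - lo < hi - left) {           // recurse into the smaller half...
            quicksort_random(a, lo, right, rng);
            lo = left;                          // ...and loop on the larger half
        } else {
            quicksort_random(a, left, hi, rng);
            hi = right;
        }
    }
}

A caller would seed the generator once, for example std::mt19937 rng{std::random_device{}()};, and pass it down so the recursion does not keep re-seeding.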

Quick Sort is better than merge sort and heapsort

Quicksort differs from merge sort and the others in a couple of ways. Quicksort sorts in place, meaning it doesn't require extra storage, whereas merge sort requires O(n) extra storage (n denoting the array size), which may be quite expensive. Quicksort's worst-case running time, O(n²), is as bad as selection sort's and insertion sort's, but its average-case running time of O(n log n) is what matters in practice, and there it beats merge sort.

Allocating and deallocating the extra space used by merge sort also increases its running time. When comparing average complexity, both sorts are O(n log n), but the constant factors differ.

The base cases are subarrays of fewer than two elements, just as in merge sort. In merge sort, you never see a subarray with no elements, but you can in quicksort if the other elements in the subarray are all less than the pivot or all greater than the pivot.

For arrays, merge sort loses because of the extra O(n) storage. Practical quicksort implementations use a randomized version with an expected time complexity of O(n log n). The worst case is still possible in the randomized version, but it is no longer triggered by a particular input pattern such as an already-sorted array, so randomized quicksort works well in practice.

Add to this the cache friendliness and tail-recursion advantages discussed above, and quicksort is usually the better choice for sorting in-memory arrays.

How does it work?

Quicksort uses divide and conquer to gain its advantages without using additional storage. It first divides a large array into two smaller sub-arrays: the low elements and the high elements. It then recursively sorts the sub-arrays.

Here is how quick sort uses divide-and-conquer.

Think of sorting a subarray array[d..f]. Divide by choosing any element of array[d..f]; call this element the pivot. Rearrange the elements of array[d..f] so that every element to the left of the pivot is less than or equal to it and every element greater than the pivot is to its right. We call this procedure partitioning. It doesn't matter what order the elements to the left of the pivot are in relative to each other, and the same holds for the elements to the right of the pivot; we only care that each element ends up on the correct side of the pivot. Generally, we choose the rightmost element of the subarray, array[f], as the pivot.
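
As an illustration, a Lomuto-style partition that follows this description (rightmost element as pivot, using the d, e, f names from the text) might look like the sketch below; it is one possible implementation, not the only way to partition. It returns e, the index where the pivot lands.

#include <utility>
#include <vector>

// Partitions array[d..f] around the rightmost element and returns the pivot's
// final index e, so that array[d..e-1] <= pivot and array[e+1..f] > pivot.
int partition(std::vector<int>& array, int d, int f)
{
    int pivot = array[f];                  // rightmost element as the pivot
    int e = d;                             // next slot for an element <= pivot
    for (int j = d; j < f; ++j) {
        if (array[j] <= pivot) {
            std::swap(array[e], array[j]); // move a small element into the left block
            ++e;
        }
    }
    std::swap(array[e], array[f]);         // drop the pivot into its final position
    return e;
}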

Conquer by recursively sorting the two subarrays (with e the pivot's final index after partitioning): array[d..e-1], all elements to the left of the pivot, which must be less than or equal to the pivot; and array[e+1..f], all elements to the right of the pivot, which must be greater than the pivot.

Combine does essentially nothing. Once the conquer step finishes its recursive sorting, the work is done: all elements to the left of the pivot, in array[d..e-1], are less than or equal to the pivot and are sorted; all elements to the right of the pivot, in array[e+1..f], are greater than the pivot and are sorted; and the pivot itself is already in its final place. That means the whole of array[d..f] is sorted.

Here is how the entire quicksort algorithm unfolds.

  • At first, pick an element, called a pivot, from the array.

  • Then partition: rearrange the array so that all elements with values less than the pivot come before it, while all elements with values greater than the pivot come after it. After this step the pivot element is in its final position.

  • After that, recursively apply the above steps to the sub-array of elements on the left side of the pivot and to the sub-array on the right side of the pivot (a compact sketch of this driver follows the list).
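
Those three steps translate into a very small recursive driver; this sketch reuses the partition function shown earlier and is, again, only one way to write it.

// Sorts array[d..f] by partitioning and then recursing on both sides of the pivot.
void quicksort(std::vector<int>& array, int d, int f)
{
    if (d < f) {                           // base case: fewer than two elements
        int e = partition(array, d, f);    // array[e] is now in its final place
        quicksort(array, d, e - 1);        // left side: elements <= pivot
        quicksort(array, e + 1, f);        // right side: elements > pivot
    }
}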

Performance

Worst case, O(n²) - The worst case occurs when the partition process always picks the greatest or smallest element as the pivot. With the partition strategy above, where the last element is always picked as the pivot, the worst case occurs when the array is already sorted in increasing or decreasing order.
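Written as a recurrence (with c some constant per-element cost), each partition in this degenerate case peels off only the pivot:

    T(n) = T(n-1) + c·n = c·n + c·(n-1) + ... + c = O(n²)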

Best case, O(n log n) with simple partitioning, or O(n) with three-way partitioning and all-equal keys - The best case occurs when the partition process always picks the median element as the pivot. In this most balanced case, each partition divides the list into two nearly equal pieces, so each recursive call processes a list of half the size. Consequently, we can make only log₂ n nested calls before we reach lists of size 1, which means the depth of the call tree is log₂ n. No two calls at the same level of the call tree process the same part of the original list, so each level of calls needs only O(n) time in total (each call has some constant overhead, but since there are only O(n) calls at each level, this is absorbed into the O(n) factor). The result is that the algorithm uses only O(n log n) time.
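In recurrence form, the balanced case is the classic

    T(n) = 2·T(n/2) + c·n  =>  T(n) = O(n log n)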

Average case, O(n log n) - To analyse the average case exactly, we would need to consider every possible permutation of the input and compute the time taken by each, which is not easy. We can get a feel for the average case by considering what happens when each partition puts n/10 of the elements in one set and 9n/10 in the other.
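Even this lopsided split keeps the recursion depth at O(log n) (it is log base 10/9 of n), so the corresponding recurrence still solves to O(n log n):

    T(n) = T(n/10) + T(9n/10) + c·n  =>  T(n) = O(n log n)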

Worst-case space complexity, O(n) auxiliary (naive) or O(log n) auxiliary (Sedgewick 1978) - The space used by quicksort depends on the version. The in-place version has a space complexity of O(log n), even in the worst case, when it is carefully implemented with the following strategies: in-place partitioning is used, and this unstable partition requires only O(1) space; after partitioning, the partition with the fewest elements is recursively sorted first, requiring at most O(log n) stack space; then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This keeps the stack depth bounded by O(log n). Quicksort with in-place, unstable partitioning uses only constant additional space before making any recursive call, and it must store a constant amount of information for each nested recursive call; since this strategy makes at most O(log n) nested calls, it uses O(log n) space.

 

Quick Sort implementation in different programming languages

Quick Sort in C

Quick Sort in C++

Quick Sort in Java

Quick Sort in Python
