# Quicksort

A divide-and-conquer sorting algorithm, well suited to most applications.

Quicksort is an important sorting algorithm that is based on the **divide-and-conquer** approach.

It divides (or partitions) the input elements according to the **value** of a particular element called the **pivot** element (also called the key element).

A **partition **is an arrangement of the array’s elements so that all the elements to the left of some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it.

Once a partition is achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two subarrays to the left and to the right of A[s] independently.

In quicksort, the entire work happens in the division stage, with no work needed to combine the solutions to the subproblems.

The algorithm for quicksort can be stated as follows. Partition function will be discussed later.
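A minimal Python sketch of this scheme (the function names and in-place calling convention are illustrative; the partition step, shown here in its Hoare-style form, is detailed in the next section):

```python
def quicksort(A, l=0, r=None):
    """Sort the subarray A[l..r] of A in place."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)   # A[s] is now in its final position
        quicksort(A, l, s - 1)   # sort the left subarray independently
        quicksort(A, s + 1, r)   # sort the right subarray independently

def partition(A, l, r):
    """Hoare-style partition around the pivot p = A[l] (see next section)."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and A[i] < p:   # left-to-right scan
            i += 1
        j -= 1
        while A[j] > p:             # right-to-left scan
            j -= 1
        if i >= j:                  # scanning indices crossed or met
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]         # place the pivot at the split position
    return j
```

For example, `quicksort(A)` on `A = [5, 3, 1, 9, 8, 2, 4, 7]` rearranges A into ascending order in place.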

**Partitioning**

Start by selecting a pivot — an element with respect to whose value we are going to divide the subarray. There are several different strategies for selecting a pivot. We use the simplest one, suggested by C.A.R. Hoare, the prominent British computer scientist who invented quicksort.

Select the subarray’s first element as pivot: p = A[l]. Now scan the subarray from both ends, comparing the subarray’s elements to the pivot.

**The left-to-right scan,** denoted by index pointer i, starts with the second element. Since we want elements smaller than the pivot to be in the left part of the subarray, this scan skips over elements that are smaller than the pivot and stops upon encountering the first element greater than or equal to the pivot.

**The right-to-left scan,** denoted by index pointer j, starts with the last element of the subarray. Since we want elements larger than the pivot to be in the right part of the subarray, this scan skips over elements that are larger than the pivot and stops upon encountering the first element smaller than or equal to the pivot.

After both scans stop, three situations may arise, depending on whether or not the scanning indices have crossed.

**Case 1:** If the scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j, respectively.

**Case 2:** If the scanning indices have crossed over, i.e., i > j, we will have partitioned the subarray after exchanging the pivot with A[j].

**Case 3:** If the scanning indices stop while pointing to the same element, i.e., i = j, the value they are pointing to must be equal to p. Thus, the subarray is partitioned, with the split position s = i = j. In an implementation, this case can be combined with Case 2.

The algorithm for partition can be stated as follows.
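The partitioning just described can be sketched in Python as follows; the three cases are marked in comments. The bounds check on i guards the left-to-right scan when the pivot is the largest element (a sentinel can be used instead), and the function name `hoare_partition` is illustrative:

```python
def hoare_partition(A, l, r):
    """Partition A[l..r] around the pivot p = A[l]; return the split position s."""
    p = A[l]
    i, j = l, r + 1
    while True:
        # left-to-right scan: skip elements smaller than the pivot,
        # stop at the first element >= p
        i += 1
        while i < r and A[i] < p:
            i += 1
        # right-to-left scan: skip elements larger than the pivot,
        # stop at the first element <= p
        j -= 1
        while A[j] > p:
            j -= 1
        if i >= j:               # Cases 2 and 3: the scans crossed or met
            break
        A[i], A[j] = A[j], A[i]  # Case 1: exchange A[i] and A[j], resume scans
    A[l], A[j] = A[j], A[l]      # exchange the pivot with A[j]
    return j                     # split position s
```

For example, partitioning `[5, 3, 1, 9, 8, 2, 4, 7]` around the pivot 5 leaves 5 at its final position, with the smaller elements to its left and the larger ones to its right.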

**Analysis**

In quicksort, the basic operation is the key comparison. Note that quicksort's running time differs across the best, worst, and average cases.

**Best Case time complexity**

The number of key comparisons made to perform a partition is **n + 1** if the scanning indices cross over and **n** if they coincide. If all the splits happen in the middle of the corresponding subarrays, we have the best case. The number of key comparisons in the best case satisfies the recurrence relation

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.

Solving this recurrence relation by the backward substitution method, or by the Master theorem, we get Cbest(n) = n log2 n ∈ Θ(n log n).
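Assuming n = 2^k, the backward substitution can be sketched as:

```latex
\begin{aligned}
C_{best}(n) &= 2\,C_{best}(n/2) + n \\
            &= 2\bigl[2\,C_{best}(n/4) + n/2\bigr] + n = 2^{2}C_{best}(n/2^{2}) + 2n \\
            &= 2^{3}C_{best}(n/2^{3}) + 3n = \cdots = 2^{k}C_{best}(1) + kn \\
            &= n \log_2 n .
\end{aligned}
```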

**Worst case time complexity**

In the worst case, all the splits are skewed to the extreme: one of the two subarrays is empty, and the size of the other is just one less than the size of the subarray being partitioned. This unfortunate situation arises, in particular, for increasing arrays. Indeed, if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the left-to-right scan stops on A[1] while the right-to-left scan goes all the way back to A[0], indicating the split at position 0.

So, after making n + 1 comparisons to reach this partition and exchanging the pivot A[0] with itself, the algorithm is left with the strictly increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing sizes continues until the last one, A[n − 2..n − 1], has been processed. The total number of key comparisons made is

Cworst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).
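As an empirical illustration of this quadratic behavior (a sketch, not part of the original analysis), the function below instruments a first-element-pivot quicksort with a comparison counter. The counter placement only approximates the text's accounting, so totals may differ from the formula by small terms:

```python
def quicksort_count(A):
    """Sort A in place with a first-element-pivot quicksort and
    return the number of key comparisons performed."""
    count = [0]

    def partition(l, r):
        p = A[l]
        i, j = l, r + 1
        while True:
            i += 1
            while i < r:                 # left-to-right scan
                count[0] += 1
                if A[i] >= p:
                    break
                i += 1
            j -= 1
            while True:                  # right-to-left scan
                count[0] += 1
                if A[j] <= p:
                    break
                j -= 1
            if i >= j:
                break
            A[i], A[j] = A[j], A[i]
        A[l], A[j] = A[j], A[l]
        return j

    def sort(l, r):
        if l < r:
            s = partition(l, r)
            sort(l, s - 1)
            sort(s + 1, r)

    sort(0, len(A) - 1)
    return count[0]
```

On an already sorted array of 100 elements, the count comes out close to 100²/2, while on a randomly shuffled one it stays near 1.39 · 100 · log2 100 ≈ 923.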

**Average Case** time complexity

Let Cavg(n) be the average number of key comparisons made by quicksort on a randomly ordered array of size n. A partition can happen in any position s (0 ≤ s ≤ n − 1) after n + 1 comparisons are made to achieve it. After the partition, the left and right subarrays will have s and n − 1 − s elements, respectively. Assuming that the partition split can happen in each position s with the same probability 1/n, we get the following recurrence relation:

Cavg(n) = (1/n) Σs=0..n−1 [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1, with Cavg(0) = 0 and Cavg(1) = 0.

Its solution, which is much trickier than the worst- and best-case analyses, turns out to be

Cavg(n) ≈ 2n ln n ≈ 1.39n log2 n.

Thus, on the average, quicksort makes only 39% more comparisons than in the best case. Moreover, its innermost loop is so efficient that it usually runs faster than merge sort on randomly ordered arrays of nontrivial sizes. This certainly justifies the name given to the algorithm by its inventor.

**Variations**

Because of quicksort’s importance, there have been persistent efforts over the years to refine the basic algorithm. Among several improvements discovered by researchers are:

- Better pivot selection methods such as randomized quicksort that uses a random element or the median-of-three method that uses the median of the leftmost, rightmost, and the middle element of the array.
- Switching to insertion sort on very small subarrays (between 5 and 15 elements for most computer systems), or not sorting small subarrays at all and finishing the algorithm with insertion sort applied to the entire nearly sorted array.
- Modifications of the partitioning algorithm, such as the three-way partition into segments smaller than, equal to, and larger than the pivot.
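The median-of-three rule from the first bullet can be sketched as follows (the helper name is illustrative). It orders the three sampled elements and then moves the median into A[l], so a first-element-pivot partition can be applied unchanged:

```python
def median_of_three(A, l, r):
    """Place the median of A[l], A[(l+r)//2], A[r] at A[l] as the pivot."""
    m = (l + r) // 2
    # sort the three sample positions so the median ends up in A[m]
    if A[m] < A[l]:
        A[l], A[m] = A[m], A[l]
    if A[r] < A[l]:
        A[l], A[r] = A[r], A[l]
    if A[r] < A[m]:
        A[m], A[r] = A[r], A[m]
    A[l], A[m] = A[m], A[l]   # move the median to A[l]
```

On a strictly increasing subarray this picks the middle element as the pivot, turning the worst-case input for a plain first-element pivot into a best-case split.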

## Limitations

- It is not stable.
- It requires a stack to store parameters of subarrays that are yet to be sorted.
- Its performance on randomly ordered arrays is sensitive not only to the implementation details of the algorithm but also to the computer architecture and the data type.
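The stack requirement in the second point can be kept to O(log n) by a standard refinement not detailed in the text: recurse on the smaller subarray first and iterate over the larger one. A sketch, assuming the first-element-pivot Hoare partition from earlier (inlined here so the example is self-contained):

```python
def quicksort_bounded(A):
    """In-place quicksort whose recursion depth is O(log n) even on
    worst-case inputs, because it only recurses into the smaller side."""

    def partition(l, r):
        p = A[l]
        i, j = l, r + 1
        while True:
            i += 1
            while i < r and A[i] < p:
                i += 1
            j -= 1
            while A[j] > p:
                j -= 1
            if i >= j:
                break
            A[i], A[j] = A[j], A[i]
        A[l], A[j] = A[j], A[l]
        return j

    def sort(l, r):
        while l < r:
            s = partition(l, r)
            if s - l < r - s:      # recurse into the smaller subarray...
                sort(l, s - 1)
                l = s + 1          # ...and loop over the larger one
            else:
                sort(s + 1, r)
                r = s - 1

    sort(0, len(A) - 1)
```

With this change an already sorted input of a few thousand elements sorts without exhausting the call stack, whereas a plain recursive version would recurse to depth n − 1.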

**Reference:** Anany Levitin, *Introduction to the Design and Analysis of Algorithms*, 3rd Edition, Pearson, 2012.