Binary search algorithm

This article is about searching a finite sorted array. For searching continuous function values, see bisection method.
Visualization of the binary search algorithm where 4 is the target value.
Class: Search algorithm
Data structure: Array
Worst case performance: O(log n)
Best case performance: O(1)
Average case performance: O(log n)
Worst case space complexity: O(1) iterative; O(log n) recursive (without tail call elimination)

In computer science, binary search, also known as half-interval search[1] or logarithmic search,[2] is a search algorithm that finds the position of a target value within a sorted array.[3][4] It works by comparing the target value to the middle element of the array; if they are not equal, the half in which the target cannot lie is eliminated, and the search is repeated on the remaining half until the target value is found or the remaining half is empty.

Binary search runs in at worst logarithmic time, making O(log n) comparisons, where n is the number of elements in the array and log is the binary logarithm, and it uses only constant (O(1)) space.[5] Although specialized data structures designed for fast searching—such as hash tables—can be searched more efficiently, binary search can often be applied to a wider range of search problems.

Although the idea is simple, implementing the algorithm correctly requires attention to some subtleties about the exit condition and the midpoint calculation.

There exist numerous variations of binary search. One variation in particular (fractional cascading) speeds up binary searches for the same value in multiple arrays.

Algorithm

Binary search works on sorted arrays. A binary search begins by comparing the middle element of the array with the target value. If the target value matches the middle element, its position in the array is returned. If the target value is less than or greater than the middle element, the search continues in the lower or upper half of the array respectively with a new middle element, eliminating the other half from consideration.[6] This method can be described either recursively or iteratively.

Procedure

Given an array A of n elements with values or records A0 ... An−1 and target value T, the following subroutine uses binary search to find the index of T in A.[6]

  1. Set L to 0 and R to n − 1.
  2. If L > R, the search terminates as unsuccessful. Otherwise, set m (the position of the middle element) to the floor of (L + R) / 2.
  3. If Am < T, set L to m + 1 and go to step 2.
  4. If Am > T, set R to m − 1 and go to step 2.
  5. If Am = T, the search is done; return m.

This iterative procedure keeps track of the search boundaries via two variables; a recursive version would keep its boundaries in its recursive calls. Some implementations may place the comparison for equality at the end of the algorithm, resulting in a faster comparison loop but costing one more iteration on average.[7]
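
Expressed in code, the iterative procedure might look as follows. This Java method is a minimal sketch of the steps above; the class and method names are illustrative choices, not taken from the cited sources.

    class BinarySearch {
        // Returns the index of target in the sorted array a, or -1 if it
        // is absent. L and R bound the remaining search space; m is the
        // middle element compared against the target (steps 2-5 above).
        static int search(int[] a, int target) {
            int l = 0;
            int r = a.length - 1;
            while (l <= r) {                 // step 2: search space nonempty
                int m = l + (r - l) / 2;     // overflow-safe midpoint
                if (a[m] < target) {
                    l = m + 1;               // step 3: discard lower half
                } else if (a[m] > target) {
                    r = m - 1;               // step 4: discard upper half
                } else {
                    return m;                // step 5: target found
                }
            }
            return -1;                       // unsuccessful search
        }
    }

The midpoint is computed as L + (R − L) / 2 rather than (L + R) / 2 to avoid the overflow pitfall discussed under Implementation issues below.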

Approximate matches

The above procedure only performs exact matches, finding the position of a target value. However, due to the ordered nature of sorted arrays, it is trivial to extend binary search to perform approximate matches. In particular, binary search can be used to compute, relative to a value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries (the number of elements between two values) can be performed with two rank queries.[8]
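
As a sketch of how such queries reduce to binary search, the following Java methods derive rank, predecessor, successor, and range-count queries from a single lower-bound search; the method names are illustrative assumptions, not standard library routines.

    class ApproximateMatches {
        // Rank of x: the number of elements smaller than x, which is also
        // the index of the first element >= x (a lower-bound search).
        static int rank(int[] a, int x) {
            int l = 0, r = a.length;         // candidates live in [l, r)
            while (l < r) {
                int m = l + (r - l) / 2;
                if (a[m] < x) l = m + 1; else r = m;
            }
            return l;
        }

        // Index of the predecessor (next-smallest element), or -1 if none.
        static int predecessor(int[] a, int x) {
            return rank(a, x) - 1;
        }

        // Index of the successor (next-largest element), or -1 if none.
        static int successor(int[] a, int x) {
            int i = rank(a, x);
            while (i < a.length && a[i] == x) i++;  // skip copies of x itself
            return i < a.length ? i : -1;
        }

        // Number of elements y with lo <= y <= hi: two rank queries.
        // (Assumes hi < Integer.MAX_VALUE so hi + 1 does not overflow.)
        static int countInRange(int[] a, int lo, int hi) {
            return rank(a, hi + 1) - rank(a, lo);
        }
    }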

Performance

A tree representing binary search. The array being searched here is [20, 30, 40, 50, 90, 100], and the target value is 40.

The performance of binary search can be analyzed by reducing the procedure to a binary comparison tree, where the root node is the middle element of the array; the middle element of the lower half is left of the root and the middle element of the upper half is right of the root. The rest of the tree is built in a similar fashion. This model represents binary search; starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration, representing the successive elimination of elements.[5][11]

The worst case is ⌊log n + 1⌋ iterations (of the comparison loop), where the ⌊ ⌋ notation denotes the floor function that rounds its argument down to an integer; for the six-element array pictured above, that is ⌊log 6 + 1⌋ = 3 iterations. This is reached when the search reaches the deepest level of the tree, equivalent to a binary search that has reduced to one element and, in each iteration, always eliminates the smaller subarray out of the two if they are not of equal size.[lower-alpha 1][11]

On average, assuming that each element is equally likely to be searched, by the time the search completes, the target value will most likely be found at the second-deepest level of the tree. This is equivalent to a binary search that completes one iteration before the worst case, reached after log n − 1 iterations. However, the tree may be unbalanced, with the deepest level partially filled, and equivalently, the array may not be divided perfectly by the search in some iterations, half of the time resulting in the smaller subarray being selected. The actual number of average iterations is slightly higher, at log n − (n − log n − 1)/n iterations.[5] In the best case, where the first middle element selected is equal to the target value, its position is returned after one iteration.[12] No search algorithm that is based solely on comparisons can exhibit better average and worst-case performance (in terms of iterations) than binary search.[11]

Each iteration of the binary search algorithm defined above makes 1.5 comparisons on average, checking whether the middle element is equal to the target value in each iteration. A variation of the algorithm instead checks for equality at the very end of the search, eliminating half a comparison from each iteration. This decreases the time taken per iteration very slightly on most computers, while guaranteeing that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only log n times, for all but enormous n, the slight increase in comparison loop efficiency does not compensate for the extra iteration. Knuth 1998 gives a value of 2^66 (more than 73 quintillion)[13] elements for this variation to be faster.[lower-alpha 2][14][15]
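
A minimal Java sketch of this deferred-equality variant follows; it makes one comparison per iteration and tests for equality only once, after the loop. This particular arrangement returns the leftmost match when the target is duplicated; Bottenbruch's version, mentioned under History, is arranged to return the rightmost.

    class DeferredEquality {
        // One comparison per iteration; equality is checked only after the
        // search space shrinks to a single candidate position.
        static int search(int[] a, int target) {
            int l = 0, r = a.length;         // candidates live in [l, r)
            while (l < r) {
                int m = l + (r - l) / 2;
                if (a[m] < target) {
                    l = m + 1;               // target, if present, lies right of m
                } else {
                    r = m;                   // target, if present, is at or left of m
                }
            }
            // l is now the leftmost position where target could reside.
            return (l < a.length && a[l] == target) ? l : -1;
        }
    }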

The iterative version of binary search only requires three extra variables, taking constant extra space. When compiled with a compiler that does not support tail call elimination, the recursive version takes space proportional to the number of iterations as extra stack space is required to store the variables.[16][17]

Fractional cascading can be used to speed up searches of the same value in multiple arrays. Where k is the number of arrays, searching each array for the target value takes O(k log n) time; fractional cascading reduces this to O(k + log n).[18]

Binary search versus other schemes

Hash tables

For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records;[19] most implementations require only amortized constant time on average.[lower-alpha 3][21] However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information a failed search provides is that the target is not present in any record.[22] Binary search is ideal for such matches, performing them in logarithmic time. In addition, all operations possible on a sorted array remain available, such as finding the smallest and largest key and performing range searches.[23]

Binary search trees

A binary search tree is a binary tree data structure that works based on the principle of binary search: the records of the tree are arranged in sorted order, and traversal of the tree is performed using a logarithmic time binary search-like algorithm. Insertion and deletion also require logarithmic time in binary search trees. This is faster than the linear time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array.[24]

However, binary search is usually more efficient for searching as binary search trees will most likely be imbalanced, resulting in slightly worse performance than binary search. This applies even to balanced binary search trees, binary search trees that balance their own nodes—as they rarely produce optimally-balanced trees—but to a lesser extent. Although unlikely, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching n comparisons.[lower-alpha 4] Binary search trees take more space than sorted arrays.[26]

Linear search

Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays.[27] If the array must first be sorted, that cost must be amortized (spread) over any searches. Sorting the array also enables efficient approximate matches and other operations.[28]

Other data structures

There exist data structures that may beat binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. However, while these operations can always be performed at least reasonably efficiently on a sorted array regardless of the keys, such data structures are usually faster only because they exploit the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time- or space-consuming for keys that lack that attribute.[23]

Variations

Uniform binary search

A variation of the algorithm forgoes the lower and upper pointers, instead storing the index of the middle element and the width of the search; i.e., the number of elements around the middle element that have not been eliminated yet. Each step reduces the width by about half. The algorithm is called the uniform binary search because the sequence of differences between the indices of successive middle elements depends only on the length of the array, not on the target value, so the differences can be precomputed and stored in a lookup table.[29]
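
The following Java method is a sketch of one uniform-style formulation (an illustration, not Knuth's exact algorithm): it tracks a base index and a shrinking width rather than two boundary pointers, and the sequence of widths depends only on the array length, which is the property that would let the probe offsets be precomputed into a lookup table.

    class UniformSearch {
        // Returns the index of target in the sorted array a, or -1 if it
        // is absent. The widths go n, ceil(n/2), ceil(n/4), ... no matter
        // what the data is, which is the "uniform" property.
        static int search(int[] a, int target) {
            int base = 0;
            int width = a.length;
            while (width > 1) {
                int half = width / 2;
                if (a[base + half - 1] < target) {
                    base += half;            // window shifts up by half
                }
                width -= half;               // same shrinkage on either branch
            }
            return (width == 1 && a[base] == target) ? base : -1;
        }
    }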

Fibonacci search

Fibonacci search is an algorithm similar to binary search that, given a finite interval containing the maximum of a unimodal function, finds a smaller such interval by exploiting the properties of the Fibonacci numbers.[30][31]

Exponential search

Main article: Exponential search

Exponential search is an algorithm intended primarily for searching in infinite lists, but it can also be applied to select the upper bound for a binary search. It starts by finding the first element whose index is a power of two and whose value is greater than the target value. Afterwards, it sets that index as the upper bound and switches to binary search. A search takes ⌊log x + 1⌋ iterations of the exponential search and at most ⌊log n⌋ iterations of the binary search, where x is the position of the target value. Only if the target value is near the beginning of the array is this variation more efficient than selecting the highest element as the upper bound.[32]
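
The following Java method sketches the idea for a finite sorted array: the bound doubles until it passes an element at least as large as the target, and a binary search then runs on the bracketed range.

    class ExponentialSearch {
        // Returns the index of target in the sorted array a, or -1 if it
        // is absent. bound is kept as a long so doubling cannot overflow.
        static int search(int[] a, int target) {
            int n = a.length;
            if (n == 0) return -1;
            long bound = 1;                  // probe indices 1, 2, 4, 8, ...
            while (bound < n && a[(int) bound] < target) {
                bound *= 2;
            }
            int l = (int) (bound / 2);       // target cannot lie below the last probe
            int r = (int) Math.min(bound, n - 1);
            while (l <= r) {                 // standard binary search on [l, r]
                int m = l + (r - l) / 2;
                if (a[m] < target) l = m + 1;
                else if (a[m] > target) r = m - 1;
                else return m;
            }
            return -1;
        }
    }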

Interpolation search

Main article: Interpolation search

Instead of merely calculating the midpoint, interpolation search attempts to calculate the position of the target value, taking into account the lowest and highest elements in the array and the length of the array. This is only possible if the array elements are numbers. It works on the basis that the midpoint is not the best guess in many cases; for example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.[33] When the distribution of the array elements is uniform or near uniform, it makes O(log log n) comparisons.[33][34][35]
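
A Java sketch of the probe calculation follows; the guard clauses handle targets outside the range of the remaining elements and the division by zero that equal endpoint values would otherwise cause.

    class InterpolationSearch {
        // Returns the index of target in the sorted array a, or -1 if it
        // is absent. Probes the position predicted by linear interpolation
        // between the end values instead of the midpoint.
        static int search(int[] a, int target) {
            int l = 0, r = a.length - 1;
            while (l <= r && a[l] <= target && target <= a[r]) {
                if (a[l] == a[r]) {          // all remaining values are equal
                    return a[l] == target ? l : -1;
                }
                // Interpolated probe; long arithmetic avoids int overflow.
                long num = ((long) target - a[l]) * (r - l);
                long den = (long) a[r] - a[l];
                int m = l + (int) (num / den);
                if (a[m] < target) l = m + 1;
                else if (a[m] > target) r = m - 1;
                else return m;
            }
            return -1;
        }
    }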

In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation, and the slower growth rate of its time complexity compensates for this only for large arrays.[33]

Fractional cascading

Main article: Fractional cascading

Fractional cascading is a technique that speeds up binary searches for the same element for both exact and approximate matching in "catalogs" (arrays of sorted elements) associated with vertices in graphs. Searching each catalog separately requires O(k log n) time, where k is the number of catalogs. Fractional cascading reduces this to O(k + log n) by storing specific information in each catalog about other catalogs.[18]

Fractional cascading was originally developed to efficiently solve various computational geometry problems, but it also has been applied elsewhere, in domains such as data mining and Internet Protocol routing.[18]

History

In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, the first ever set of lectures regarding any computer-related topic.[36] Every published binary search algorithm worked only for arrays whose length is one less than a power of two until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays.[37] In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration, as well as ensuring that the position of the rightmost element is returned if the target value is duplicated in the array.[7] The uniform binary search was presented to Donald Knuth in 1971 by A. K. Chandra of Stanford University and published in Knuth's The Art of Computer Programming.[36] In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading, a technique used to speed up binary searches in multiple arrays.[18][38][39]

Implementation issues

Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky ... — Donald Knuth[2]

When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that an astounding ninety percent failed to provide a correct solution after several hours of working on it,[40] and another study found that accurate code for it appears in only five out of twenty textbooks.[41] Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.[42]

In a practical implementation, the variables used to represent the indices will often be of fixed size, and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as (L + R) / 2, then the value of L + R may exceed the range of integers of the data type used to store the midpoint, even if L and R are within the range. This can be avoided by calculating the midpoint as L + (R − L) / 2.[43]
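
The difference is easy to demonstrate in Java, whose int arithmetic wraps on overflow; this mirrors the bug Bloch describes in the Java library, which was fixed with an equivalent unsigned-shift expression.

    class Midpoint {
        static int midOverflowing(int l, int r) {
            return (l + r) / 2;              // l + r can exceed Integer.MAX_VALUE
        }

        static int midSafe(int l, int r) {
            // r - l cannot overflow when 0 <= l <= r; an alternative fix is
            // (l + r) >>> 1, which reads the wrapped sum as unsigned.
            return l + (r - l) / 2;
        }

        public static void main(String[] args) {
            int l = 1_500_000_000, r = 2_000_000_000;
            System.out.println(midOverflowing(l, r)); // -397483648: overflowed
            System.out.println(midSafe(l, r));        // 1750000000: correct
        }
    }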

If the target value is greater than the highest value in the array, and the size of the array is the maximum representable integer, then an overflow will result if one-based instead of zero-based indexing is used. With one-based indexing, R is set to the length of the array instead of one less than the length. Eventually L will equal R after the lower elements are eliminated, while R does not change. Because the target value exceeds the highest value in the array, the next iteration will set L to R + 1. This results in an overflow because R is equal to the maximum representable integer, so L cannot hold R + 1. This can be avoided by using zero-based indexing or a larger data type for L so it can fit R + 1.[41]

An infinite loop may occur if the exit conditions for the loop—or equivalently, the recursive step—are not defined correctly. Once L exceeds R, the search has failed and the procedure must report that failure. In addition, the loop must be exited when the target element is found; in an implementation where this check is moved to the end, a check at the end for whether the search was successful must be in place instead. Bentley found that most of the programmers who failed to implement binary search correctly in his assignment made this error.[7][40]

Library support

Many languages' standard libraries include binary search routines:

  - C provides bsearch() in its standard library, as specified by POSIX.[44]
  - C++'s standard library provides binary_search(), lower_bound(), upper_bound(), and equal_range().[45]
  - Java offers binarySearch() static methods in the java.util.Arrays and java.util.Collections classes.[46][47]
  - Microsoft's .NET Framework provides a BinarySearch method on lists, such as List<T>.BinarySearch.[48]
  - Python provides the bisect module, which locates insertion points in sorted lists by bisection.[49]
  - Ruby's Array class includes a bsearch method.[50]
  - Go's sort standard library package contains the functions Search, SearchInts, SearchFloat64s, and SearchStrings.[51]
  - Apple's Foundation and Core Foundation frameworks provide binary search operations for NSArray and CFArray.[52][53]

See also

Notes and references

Notes

  1. This happens as binary search will not always divide the array perfectly. Take for example the array [0, 1, ..., 17] of eighteen elements. The first iteration will select the midpoint at index 8. On the left subarray are eight elements, but on the right are nine. If the search takes the right path, there is a higher chance that the search will make the maximum number of comparisons.[11]
  2. A formal time performance analysis by Knuth showed that the average running time of this variation is 17.5 log n + 17 units of time compared to 18 log n − 16 units for regular binary search. The time complexity for this variation grows slightly more slowly, but at the cost of higher initial complexity.[14]
  3. It is possible to perform hashing in guaranteed constant time.[20]
  4. The worst binary search tree for searching can be produced by inserting the values in sorted or near-sorted order or in an alternating lowest-highest record pattern.[25]

Citations

  1. Williams, Jr., Louis F. (1975). A modification to the half-interval search (binary search) method. Proceedings of the 14th ACM Southeast Conference. pp. 95–101. doi:10.1145/503561.503582.
  2. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Binary search".
  3. Cormen et al. 2009, p. 39.
  4. Weisstein, Eric W., "Binary Search", MathWorld.
  5. Flores, Ivan; Madpis, George (1971). "Average binary search length for dense ordered lists". CACM 14 (9): 602–603. doi:10.1145/362663.362752.
  6. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Algorithm B".
  7. Bottenbruch, Hermann (1962). "Structure and Use of ALGOL 60". Journal of the ACM 9 (2): 161–221. Procedure is described at p. 214 (§43), titled "Program for Binary Search".
  8. Sedgewick & Wayne 2011, §3.1, subsection "Rank and selection".
  9. Goldman & Goldman 2007, pp. 461–463.
  10. Sedgewick & Wayne 2011, §3.1, subsection "Range queries".
  11. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search".
  12. Chang 2003, p. 169.
  13. Sloane, Neil. Table of n, 2^n for n = 0..1000. Part of OEIS A000079. Retrieved 30 April 2016.
  14. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Exercise 23".
  15. Rolfe, Timothy J. (1997). "Analytic derivation of comparisons in binary search". ACM SIGNUM Newsletter 32 (4): 15–19. doi:10.1145/289251.289255.
  16. Alexandrescu 2010, §1.4.2.
  17. Leiss 2007, p. 154.
  18. Chazelle, Bernard; Liu, Ding (2001). Lower bounds for intersection searching and fractional cascading in higher dimension. 33rd ACM Symposium on Theory of Computing. pp. 322–329. doi:10.1145/380752.380818.
  19. Knuth 1998, §6.4 ("Hashing").
  20. Knuth 1998, §6.4 ("Hashing"), subsection "History".
  21. Dietzfelbinger, Martin; Karlin, Anna; Mehlhorn, Kurt; Meyer auf der Heide, Friedhelm; Rohnert, Hans; Tarjan, Robert E. (August 1994). "Dynamic Perfect Hashing: Upper and Lower Bounds". SIAM Journal on Computing 23 (4): 738–761. doi:10.1137/S0097539791194094.
  22. Morin, Pat. "Hash Tables" (PDF). p. 1. Retrieved 28 March 2016.
  23. Beame, Paul; Fich, Faith E. (2001). "Optimal Bounds for the Predecessor Problem and Related Problems". Journal of Computer and System Sciences 65 (1): 38–72. doi:10.1006/jcss.2002.1822.
  24. Sedgewick & Wayne 2011, §3.2 ("Binary Search Trees"), subsection "Order-based methods and deletion".
  25. Knuth 1998, §6.2.2 ("Binary tree searching"), subsection "But what about the worst case?".
  26. Sedgewick & Wayne 2011, §3.5 ("Applications"), "Which symbol-table implementation should I use?".
  27. Knuth 1998, §6.2.1 ("Searching an ordered table").
  28. Sedgewick & Wayne 2011, §3.2 ("Ordered symbol tables").
  29. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "An important variation".
  30. Kiefer, J. (1953). "Sequential Minimax Search for a Maximum". Proceedings of the American Mathematical Society 4 (3): 502–506. doi:10.2307/2032161.
  31. Hassin, Refael (1981). "On Maximizing Functions by Fibonacci Search". Fibonacci Quarterly 19: 347–351.
  32. Moffat & Turpin 2002, p. 33.
  33. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Interpolation search".
  34. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Exercise 22".
  35. Perl, Yehoshua; Itai, Alon; Avni, Haim (1978). "Interpolation search—a log log n search". CACM 21 (7): 550–553. doi:10.1145/359545.359557.
  36. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "History and bibliography".
  37. Lehmer, Derrick (1960). Teaching combinatorial tricks to a computer. Proceedings of Symposia in Applied Mathematics 10. pp. 180–181. doi:10.1090/psapm/010.
  38. Chazelle, Bernard; Guibas, Leonidas J. (1986). "Fractional cascading: I. A data structuring technique" (PDF). Algorithmica 1 (1): 133–162. doi:10.1007/BF01840440.
  39. Chazelle, Bernard; Guibas, Leonidas J. (1986). "Fractional cascading: II. Applications" (PDF). Algorithmica 1 (1): 163–191. doi:10.1007/BF01840441.
  40. Bentley 2000, §4.4 ("Principles").
  41. Pattis, Richard E. (1988). "Textbook errors in binary searching". SIGCSE Bulletin 20: 190–194. doi:10.1145/52965.53012.
  42. Bloch, Joshua (2 June 2006). "Extra, Extra – Read All About It: Nearly All Binary Searches and Mergesorts are Broken". Google Research Blog. Retrieved 21 April 2016.
  43. Ruggieri, Salvatore (2003). "On computing the semi-sum of two integers" (PDF). Information Processing Letters 87 (2): 67–71. doi:10.1016/S0020-0190(03)00263-1.
  44. "bsearch – binary search a sorted table". The Open Group Base Specifications (7th ed.). The Open Group. 2013. Retrieved 28 March 2016.
  45. Stroustrup 2013, §32.6.1 ("Binary Search").
  46. "java.util.Arrays". Java Platform Standard Edition 8 Documentation. Oracle Corporation. Retrieved 1 May 2016.
  47. "java.util.Collections". Java Platform Standard Edition 8 Documentation. Oracle Corporation. Retrieved 1 May 2016.
  48. "List<T>.BinarySearch Method (T)". Microsoft Developer Network. Retrieved 10 April 2016.
  49. "8.5. bisect — Array bisection algorithm". The Python Standard Library. Python Software Foundation. Retrieved 10 April 2016.
  50. Fitzgerald 2007, p. 152.
  51. "Package sort". The Go Programming Language. Retrieved 28 April 2016.
  52. "NSArray". Mac Developer Library. Apple Inc. Retrieved 1 May 2016.
  53. "CFArray". Mac Developer Library. Apple Inc. Retrieved 1 May 2016.

Works

External links

The Wikibook Algorithm implementation has a page on the topic of: Binary search