Suffix array

Type: Array
Invented by: Manber & Myers (1990)
Space: \mathcal{O}(n) (average and worst case)
Construction time: \mathcal{O}(n) (average and worst case)

In computer science, a suffix array is a sorted array of all suffixes of a string. It is a data structure used in, among other applications, full-text indices, data compression algorithms, and the field of bioinformatics.[1]

Suffix arrays were introduced by Manber & Myers (1990) as a simple, space-efficient alternative to suffix trees. They had been independently discovered by Gaston Gonnet in 1987 under the name PAT array (Gonnet, Baeza-Yates & Snider 1992).

Definition

Let S=S[1]S[2]...S[n] be a string and let S[i,j] denote the substring of S ranging from i to j.

The suffix array A of S is now defined to be an array of integers providing the starting positions of the suffixes of S in lexicographical order. This means that an entry A[i] contains the starting position of the i-th smallest suffix of S, and thus for all 1 < i \leq n: S[A[i-1],n] < S[A[i],n].

Example

Consider the text S=banana$ to be indexed:

i    1 2 3 4 5 6 7
S[i] b a n a n a $

The text ends with the special sentinel letter $ that is unique and lexicographically smaller than any other character. The text has the following suffixes:

Suffix i
banana$ 1
anana$ 2
nana$ 3
ana$ 4
na$ 5
a$ 6
$ 7

These suffixes can be sorted in ascending order:

Suffix i
$ 7
a$ 6
ana$ 4
anana$ 2
banana$ 1
na$ 5
nana$ 3

The suffix array A contains the starting positions of these sorted suffixes:

i    1 2 3 4 5 6 7
A[i] 7 6 4 2 1 5 3

The suffix array with the suffixes written out vertically underneath for clarity:

i    1 2 3 4 5 6 7
A[i] 7 6 4 2 1 5 3
1    $ a a a b n n
2      $ n n a a a
3        a a n $ n
4        $ n a   a
5          a n   $
6          $ a
7            $

So for example, A[3] contains the value 4, and therefore refers to the suffix starting at position 4 within S, which is the suffix ana$.
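
The example can be reproduced with a short Python sketch that sorts the suffixes directly; this is the naive construction approach discussed in the Construction Algorithms section below, with positions kept 1-based to match the tables above:

    S = "banana$"
    n = len(S)
    # Sort the 1-based starting positions by the suffix that begins there.
    A = sorted(range(1, n + 1), key=lambda i: S[i - 1:])
    print(A)             # [7, 6, 4, 2, 1, 5, 3]
    print(S[A[2] - 1:])  # the 3rd entry is 4, so this prints "ana$"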

Correspondence to suffix trees

Suffix arrays are closely related to suffix trees: a suffix array can be obtained from a suffix tree by a depth-first traversal, and conversely a suffix tree can be constructed in linear time from the suffix array combined with the LCP array.

It has been shown that every suffix tree algorithm can be systematically replaced with an algorithm that uses a suffix array enhanced with additional information (such as the LCP array) and solves the same problem in the same time complexity.[2] Advantages of suffix arrays over suffix trees include improved space requirements, simpler linear time construction algorithms (e.g., compared to Ukkonen's algorithm) and improved cache locality.[1]

Space Efficiency

Suffix arrays were introduced by Manber & Myers (1990) in order to improve on the space requirements of suffix trees: a suffix array stores n integers. Assuming an integer requires 4 bytes, a suffix array requires 4n bytes in total. This is significantly less than the 20n bytes required by a careful suffix tree implementation.[3]

However, in certain applications, the space requirements of suffix arrays may still be prohibitive. Analyzed in bits, a suffix array requires \mathcal{O}(n \log n) space, whereas the original text over an alphabet of size \sigma only requires \mathcal{O}(n \log \sigma) bits. For a human genome with \sigma = 4 and n = 3.4 \times 10^9 the suffix array would therefore occupy about 16 times more memory than the genome itself.
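
As a back-of-the-envelope check (assuming each suffix array entry is stored in \lceil \log_2 n \rceil = 32 bits and each base of the genome in \log_2 \sigma = 2 bits), the ratio is

    \frac{n \cdot 32 \text{ bits}}{n \cdot 2 \text{ bits}} = 16,

which is where the factor of about 16 comes from.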

Such discrepancies motivated a trend towards compressed suffix arrays and BWT-based compressed full-text indices such as the FM-index. These data structures require space at most the size of the text, or even less.

Construction Algorithms

A suffix tree can be built in \mathcal{O}(n) time and converted into a suffix array by a depth-first traversal, also in \mathcal{O}(n) time, so there exist algorithms that can build a suffix array in \mathcal{O}(n) time.

A naive approach to construct a suffix array is to use a comparison-based sorting algorithm. These algorithms require \mathcal{O}(n \log n) suffix comparisons, but a suffix comparison runs in \mathcal{O}(n) time, so the overall runtime of this approach is \mathcal{O}(n^2 \log n).

More advanced algorithms take advantage of the fact that the suffixes to be sorted are not arbitrary strings but related to each other. These algorithms strive to achieve linear asymptotic complexity, to require little or no working memory beyond the text and the suffix array itself, and to be fast in practice.[4]

One of the first algorithms to achieve all of these goals is the SA-IS algorithm of Nong, Zhang & Chan (2009). The algorithm is also rather simple (fewer than 100 lines of code) and can be enhanced to simultaneously construct the LCP array.[5] The SA-IS algorithm is one of the fastest known suffix array construction algorithms. A careful implementation by Yuta Mori outperforms most other linear or super-linear construction approaches.

Besides time and space requirements, suffix array construction algorithms are also differentiated by their supported alphabet: constant alphabets, where the alphabet size is bounded by a constant; integer alphabets, where characters are integers in a range depending on n; and general alphabets, where only character comparisons are allowed.[6]

Most suffix array construction algorithms are based on one of the following approaches: prefix doubling, recursion, or induced copying.[4]
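
To illustrate the prefix-doubling idea, the following sketch repeatedly sorts the suffixes by their first k characters and doubles k each round, refining the ranks until all suffixes are distinguished. Because each round uses a comparison sort, it runs in \mathcal{O}(n \log^2 n) rather than the \mathcal{O}(n \log n) achieved by Manber & Myers with radix sort; it is a simplified illustration, not their original formulation:

    def suffix_array_prefix_doubling(S):
        # Sort suffixes of S by their first k characters, doubling k each
        # round and re-ranking, until every suffix has a distinct rank.
        n = len(S)
        rank = [ord(c) for c in S]   # ranks after comparing 1 character
        sa = list(range(n))          # 0-based starting positions
        k = 1
        while True:
            # Key: rank of the first k characters, then rank of the next k.
            key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
            sa.sort(key=key)
            new_rank = [0] * n
            for j in range(1, n):
                new_rank[sa[j]] = new_rank[sa[j - 1]] + (key(sa[j]) > key(sa[j - 1]))
            rank = new_rank
            if rank[sa[-1]] == n - 1:  # all ranks distinct: fully sorted
                break
            k *= 2
        return sa

For S = "banana$" this returns [6, 5, 3, 1, 0, 4, 2], the 0-based counterpart of the array A shown in the example above.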

A well-known recursive algorithm for integer alphabets is the DC3 / skew algorithm of Kärkkäinen & Sanders (2003). It runs in linear time and has successfully been used as the basis for parallel[7] and external memory[8] suffix array construction algorithms.

Recent work by Salson et al. (2009) proposes an algorithm for updating the suffix array of a text that has been edited, instead of rebuilding a new suffix array from scratch. Although the theoretical worst-case time complexity is \mathcal{O}(n \log n), it appears to perform well in practice: experimental results by the authors showed that their implementation of dynamic suffix arrays is generally more efficient than rebuilding when considering the insertion of a reasonable number of letters into the original text.

Applications

The suffix array of a string can be used as an index to quickly locate every occurrence of a substring pattern P within the string S. Finding every occurrence of the pattern is equivalent to finding every suffix that begins with the substring. Thanks to the lexicographical ordering, these suffixes will be grouped together in the suffix array and can be found efficiently with two binary searches. The first search locates the starting position of the interval, and the second one determines the end position:

    def search(P, S, A):
        # Return (s, r) such that A[s:r] holds the starting positions of all
        # suffixes of S that begin with the pattern P (0-based indices).
        n = len(S)
        suffixAt = lambda i: S[i:]  # the suffix of S starting at position i
        # First binary search: leftmost suffix that is >= P.
        l = 0; r = n
        while l < r:
            mid = (l + r) // 2
            if P > suffixAt(A[mid]):
                l = mid + 1
            else:
                r = mid
        s = l; r = n
        # Second binary search: leftmost suffix at or after s that does not start with P.
        while l < r:
            mid = (l + r) // 2
            if suffixAt(A[mid]).startswith(P):
                l = mid + 1
            else:
                r = mid
        return (s, r)
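
For example, with the suffix array of banana$ from above (written 0-based, as assumed by the code), searching for the pattern ana yields the interval containing the two suffixes ana$ and anana$:

    S = "banana$"
    A = [6, 5, 3, 1, 0, 4, 2]           # 0-based suffix array of S
    s, r = search("ana", S, A)          # -> (2, 4)
    print([A[i] for i in range(s, r)])  # [3, 1]: suffixes "ana$" and "anana$"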

Finding the substring pattern P of length m in the string S of length n takes \mathcal{O}(m \log n) time, given that a single suffix comparison needs to compare up to m characters. Manber & Myers (1990) describe how this bound can be improved to \mathcal{O}(m + \log n) time using LCP information. The idea is that a pattern comparison does not need to re-compare certain characters when it is already known that these are part of the longest common prefix of the pattern and the current search interval. Abouelhoda, Kurtz & Ohlebusch (2004) improve the bound even further and achieve a search time of \mathcal{O}(m), as known from suffix trees.

Suffix sorting algorithms can be used to compute the Burrows–Wheeler transform (BWT). The BWT requires sorting all cyclic rotations of a string. If the string ends in a special end-of-string character that is lexicographically smaller than all other characters (i.e., $), then the order of the rows of the sorted rotation matrix corresponds to the order of the suffixes in the suffix array. The BWT can therefore be computed in linear time by first constructing a suffix array of the text and then deducing the BWT string: BWT[i] = S[A[i]-1], where S[0] is taken to wrap around to the final character S[n], i.e., the sentinel.
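
A minimal sketch of this construction, using 0-based indices so that the wrap-around falls out of Python's negative indexing (the suffix array A is assumed to be given, e.g. by one of the construction sketches above):

    def bwt_from_suffix_array(S, A):
        # BWT[i] is the character preceding the suffix that starts at A[i];
        # for the suffix starting at position 0, S[-1] wraps to the sentinel.
        return "".join(S[a - 1] for a in A)

    print(bwt_from_suffix_array("banana$", [6, 5, 3, 1, 0, 4, 2]))  # annb$aa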

Suffix arrays can also be used to look up substrings in example-based machine translation, demanding much less storage than a full phrase table as used in statistical machine translation.

Many additional applications of the suffix array require the LCP array. Some of these are detailed in the applications section of that article.
