Regular language

For natural language that is regulated, see List of language regulators.
"Kleene's theorem" redirects here. For his theorems for recursive functions, see Kleene's recursion theorem.

In theoretical computer science and formal language theory, a regular language (also called a rational language[1][2]) is a formal language that can be expressed using a regular expression, in the strict sense of the latter notion used in theoretical computer science (as opposed to the regular expression engines provided by many modern programming languages, which are augmented with features that allow recognition of languages that cannot be expressed by a classic regular expression).

Alternatively, a regular language can be defined as a language recognized by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem.[3] In the Chomsky hierarchy, regular languages are defined to be the languages that are generated by Type-3 grammars (regular grammars).

Regular languages are very useful in input parsing and programming language design.

Formal definition

The collection of regular languages over an alphabet Σ is defined recursively as follows:

  1. The empty language Ø and the empty string language {ε} are regular languages.
  2. For each a ∈ Σ, the singleton language {a} is a regular language.
  3. If A and B are regular languages, then A ∪ B (union), A·B (concatenation), and A* (Kleene star) are regular languages.
  4. No other languages over Σ are regular.

See regular expression for its syntax and semantics. Note that the above cases are in effect the defining rules of regular expressions.
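The recursive definition can be made concrete on small finite languages. The sketch below (function names are illustrative, not from any particular library) implements union, concatenation, and a length-bounded Kleene star, the three operations under which the regular languages are generated:

```python
# The three regular operations from the recursive definition, applied to
# explicit finite sets of strings. Kleene star is infinite in general,
# so we enumerate only the words up to a given length bound.

def union(K, L):
    return K | L

def concat(K, L):
    return {u + v for u in K for v in L}

def star(L, max_len):
    result = {""}            # epsilon is always in L*
    frontier = {""}
    while frontier:
        frontier = {u + v for u in frontier for v in L
                    if len(u + v) <= max_len} - result
        result |= frontier
    return result

A, B = {"a"}, {"b"}
print(union(A, B))           # {'a', 'b'}
print(concat(A, B))          # {'ab'}
print(sorted(star(A, 3)))    # ['', 'a', 'aa', 'aaa']
```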

Examples

All finite languages are regular; in particular the empty string language {ε} = Ø* is regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} that contain an even number of a's, or the language consisting of all strings of the form: several a's followed by several b's.

A simple example of a language that is not regular is the set of strings { a^n b^n | n ≥ 0 }.[4] Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and cannot remember the exact number of a's. Techniques to prove this fact rigorously are given below.
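The "even number of a's" language above can be recognized with just two states, which is one way to see that it is regular. A minimal sketch (state names are illustrative):

```python
# A two-state DFA over {a, b} accepting exactly the strings with an even
# number of a's: reading 'a' toggles the state; reading 'b' leaves it alone.
# By contrast, a language like { a^n b^n | n >= 0 } admits no finite-state
# recognizer, because no fixed number of states can count arbitrarily many a's.

def accepts_even_as(word):
    state = "even"                  # start state; also the accepting state
    for ch in word:
        if ch == "a":
            state = "odd" if state == "even" else "even"
    return state == "even"

print(accepts_even_as("abba"))   # True  (two a's)
print(accepts_even_as("ab"))     # False (one a)
```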

Equivalent formalisms

A regular language satisfies the following equivalent properties:

  1. it is the language of a regular expression (by the above definition)
  2. it is the language accepted by a nondeterministic finite automaton (NFA)[note 1][note 2]
  3. it is the language accepted by a deterministic finite automaton (DFA)[note 3][note 4]
  4. it can be generated by a regular grammar[note 5][note 6]
  5. it is the language accepted by an alternating finite automaton
  6. it can be generated by a prefix grammar
  7. it can be accepted by a read-only Turing machine
  8. it can be defined in monadic second-order logic (Büchi-Elgot-Trakhtenbrot theorem[5])
  9. it is recognized by some finite monoid M, meaning it is the preimage { w ∈ Σ* | f(w) ∈ S } of a subset S of a finite monoid M under a monoid homomorphism f: Σ* → M from the free monoid on its alphabet[note 7]
  10. the number of equivalence classes of its "syntactic relation" ~ is finite[note 8][note 9] (this number equals the number of states of the minimal deterministic finite automaton accepting L.)

Properties 9 and 10 are purely algebraic approaches to defining regular languages; a similar set of statements can be formulated for a monoid M ⊂ Σ*. In this case, equivalence over M leads to the concept of a recognizable language.

Some authors use one of the above properties other than "1." as an alternative definition of regular languages.

Some of the equivalences above, particularly those among the first four formalisms, are called Kleene's theorem in textbooks. Precisely which one (or which subset) is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs ("1." and "2." above) "Kleene's theorem".[6] Another textbook calls the equivalence of regular expressions and DFAs ("1." and "3." above) "Kleene's theorem".[7] Two other textbooks first prove the expressive equivalence of NFAs and DFAs ("2." and "3.") and then state "Kleene's theorem" as the equivalence between regular expressions and finite automata (the latter said to describe "recognizable languages").[2][8] A linguistically oriented text first equates regular grammars ("4." above) with DFAs and NFAs, calls the languages generated by (any of) these "regular", then introduces regular expressions, which it says describe "rational languages", and finally states "Kleene's theorem" as the coincidence of regular and rational languages.[9] Other authors simply define "rational expression" and "regular expression" as synonymous and do the same with "rational languages" and "regular languages".[1][2]
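The equivalence of NFAs and DFAs ("2." and "3.") is established by the powerset construction, which tracks the set of NFA states that could be active at once. A minimal sketch (the transition-table encoding and state names are illustrative):

```python
# Powerset (subset) construction: convert an NFA into an equivalent DFA whose
# states are sets of NFA states. The NFA transition function is a dict mapping
# (state, symbol) to a set of successor states.
from collections import deque

def powerset_construction(nfa_delta, start, accepting, alphabet):
    start_set = frozenset([start])
    dfa_delta, seen = {}, {start_set}
    queue = deque([start_set])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            # the DFA successor is the union of NFA successors over all of S
            T = frozenset(q for s in S for q in nfa_delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, start_set, dfa_accepting

# NFA for "binary strings whose second-to-last symbol is 1":
delta = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"},
         ("q1", "0"): {"q2"}, ("q1", "1"): {"q2"}}
dfa, s0, acc = powerset_construction(delta, "q0", {"q2"}, "01")
print(len({S for (S, _) in dfa}))   # 4 reachable DFA states
```

In the worst case the construction produces exponentially many DFA states, which is why the DFA-to-NFA direction is trivial while the converse can blow up.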

Closure properties

The regular languages are closed under various operations; that is, if the languages K and L are regular, so is the result of each of the following operations:

  1. the set-theoretic Boolean operations: union K ∪ L, intersection K ∩ L, and the complement of L, hence also relative complement K − L[10]
  2. the regular operations: union K ∪ L, concatenation K·L, and Kleene star L*[11]
  3. string homomorphisms and inverse string homomorphisms
  4. the reverse (or mirror image) of L
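Closure under intersection, for instance, is witnessed by the product construction: run the two DFAs in lockstep on pairs of states. A minimal sketch (transition tables and state names are illustrative):

```python
# Product construction: the intersection DFA's states are pairs of states,
# one from each input DFA; a pair accepts iff both components accept.
from itertools import product

def intersect_dfas(delta1, start1, acc1, delta2, start2, acc2, alphabet):
    states1 = {s for (s, _) in delta1}
    states2 = {s for (s, _) in delta2}
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for p in states1 for q in states2 for a in alphabet}
    return delta, (start1, start2), set(product(acc1, acc2))

def run(delta, start, accepting, word):
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

d1 = {("e", "a"): "o", ("o", "a"): "e", ("e", "b"): "e", ("o", "b"): "o"}  # even # of a's
d2 = {("E", "b"): "O", ("O", "b"): "E", ("E", "a"): "E", ("O", "a"): "O"}  # even # of b's
d, s, acc = intersect_dfas(d1, "e", {"e"}, d2, "E", {"E"}, "ab")
print(run(d, s, acc, "abab"))   # True: two a's and two b's
print(run(d, s, acc, "aab"))    # False: only one b
```

Union works the same way with the accepting pairs changed to those where at least one component accepts; complement simply flips a single DFA's accepting states.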

Decidability properties

Given two deterministic finite automata A and B, it is decidable whether they accept the same language.[12] As a consequence, using the above closure properties, the following problems are also decidable for arbitrarily given deterministic finite automata A and B, with accepted languages LA and LB, respectively:

  1. Containment: is LA ⊆ LB?[note 10]
  2. Disjointness: is LA ∩ LB = Ø?
  3. Emptiness: is LA = Ø?
  4. Universality: is LA = Σ*?
  5. Membership: given a word w, is w ∈ LA?

For regular expressions, the universality problem is NP-complete already for a singleton alphabet.[13] For larger alphabets, that problem is PSPACE-complete.[14][15] If regular expressions are extended to also allow a squaring operator, with "A2" denoting the same as "AA", still only regular languages can be described, but the universality problem has an exponential space lower bound,[16][17][18] and is in fact complete for exponential space with respect to polynomial-time reduction.[19]
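The equivalence test underlying these decidability results can be carried out directly by a breadth-first search over pairs of states, without explicitly building the automaton for the symmetric difference. A minimal sketch (encodings and state names are illustrative):

```python
# Deciding DFA equivalence: explore reachable pairs (p, q) of states, one
# from each DFA; the languages differ iff some reachable pair disagrees
# about acceptance (such a pair is reached by a distinguishing word).
from collections import deque

def equivalent(delta1, start1, acc1, delta2, start2, acc2, alphabet):
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        p, q = queue.popleft()
        if (p in acc1) != (q in acc2):
            return False
        for a in alphabet:
            pair = (delta1[(p, a)], delta2[(q, a)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

# Two DFAs over {a}, both counting a's modulo 2 with different state names:
d1 = {("e", "a"): "o", ("o", "a"): "e"}
d2 = {("x", "a"): "y", ("y", "a"): "x"}
print(equivalent(d1, "e", {"e"}, d2, "x", {"x"}, "a"))   # True
print(equivalent(d1, "e", {"e"}, d2, "x", {"y"}, "a"))   # False
```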

Complexity results

In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR is not contained in AC0, since REGULAR (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0.[20] On the other hand, REGULAR does not contain AC0, because the nonregular language of palindromes, and the nonregular language {0^n 1^n : n ∈ ℕ}, can both be recognized in AC0.[21]

If a language is not regular, it requires a machine with Ω(log log n) space to recognize it (where n is the input size).[22] In other words, DSPACE(o(log log n)) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.

Location in the Chomsky hierarchy

Regular language in classes of Chomsky hierarchy.

To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language such as this is not regular, one often uses the Myhill–Nerode theorem or the pumping lemma among other methods.[23]
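As an illustration of the pumping lemma mentioned above, here is a worked sketch of the standard argument for the equal-counts language; the choice of witness word a^p b^p is the usual one:

```latex
\textbf{Claim.} $L=\{\,w\in\{a,b\}^* : w\text{ contains as many }a\text{'s as }b\text{'s}\,\}$ is not regular.

\textbf{Proof sketch.} Suppose $L$ were regular, with pumping length $p$.
Consider $w=a^p b^p\in L$. Any decomposition $w=xyz$ with $|xy|\le p$ and
$|y|\ge 1$ forces $y=a^k$ for some $k\ge 1$. Pumping once gives
$xy^2z=a^{p+k}b^p$, which has more $a$'s than $b$'s, so $xy^2z\notin L$,
contradicting the pumping lemma. Hence $L$ is not regular. $\square$
```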

Important subclasses of regular languages include

  1. Finite languages, those containing only a finite number of words;[24] these are regular, since one can form a regular expression that is the union of all the words in the language.
  2. Star-free languages, those that can be described by a regular expression constructed from the empty symbol, letters, concatenation and all Boolean operators (including complementation) but no Kleene star; this class includes all finite languages.[25]

The number of words in a regular language

Let s_L(n) denote the number of words of length n in L. The ordinary generating function for L is the formal power series

S_L(z) = \sum_{n \ge 0} s_L(n) z^n \ .

The generating function of a language L is a rational function if L is regular.[26] Hence for any regular language L there exist an integer constant n_0, complex constants \lambda_1,\,\ldots,\,\lambda_k and complex polynomials p_1(x),\,\ldots,\,p_k(x) such that for every n \geq n_0 the number s_L(n) of words of length n in L is s_L(n)=p_1(n)\lambda_1^n+\dotsb+p_k(n)\lambda_k^n.[28][29][30][31]

Thus, non-regularity of certain languages L' can be proved by counting the words of a given length in L'. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length 2n in the Dyck language is equal to the Catalan number C_n\sim\frac{4^n}{n^{3/2}\sqrt{\pi}}, which is not of the form p(n)\lambda^n, witnessing the non-regularity of the Dyck language. Care must be taken since some of the eigenvalues \lambda_i could have the same magnitude. For example, the number of words of length n in the language of all even-length binary words is not of the form p(n)\lambda^n, but the numbers of words of even length and of odd length separately are of this form; the corresponding eigenvalues are 2,-2. In general, for every regular language there exists a constant d such that for all a, the number of words of length dm+a is asymptotically C_a m^{p_a} \lambda_a^m.[32]
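The coefficient s_L(n) can be computed directly from a DFA for L by iterating its transition counts, which is the transfer-matrix view behind the rationality statement above. A minimal sketch (encodings and state names are illustrative):

```python
# Counting the words of length n accepted by a DFA: propagate, for each
# state, the number of length-i paths from the start state reaching it.
# This realizes s_L(n) as an entry of the n-th power of the DFA's
# transition-count matrix.

def count_words(delta, states, start, accepting, alphabet, n):
    idx = {s: i for i, s in enumerate(states)}
    counts = [0] * len(states)
    counts[idx[start]] = 1        # one empty path, sitting at the start state
    for _ in range(n):
        nxt = [0] * len(states)
        for s in states:
            for a in alphabet:
                nxt[idx[delta[(s, a)]]] += counts[idx[s]]
        counts = nxt
    return sum(counts[idx[s]] for s in accepting)

# DFA for "even number of a's" over {a, b}:
delta = {("e", "a"): "o", ("o", "a"): "e", ("e", "b"): "e", ("o", "b"): "o"}
print(count_words(delta, ["e", "o"], "e", {"e"}, "ab", 3))   # 4
```

For this language s_L(n) = 2^{n-1} for n ≥ 1, an instance of the closed form p(n)\lambda^n with p(n) = 1/2 and \lambda = 2.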

The zeta function of a language L is[26]

\zeta_L(z) = \exp \left({ \sum_{n \ge 1} s_L(n) \frac{z^n}{n} }\right) \ .

The zeta function of a regular language is not in general rational, but that of a cyclic language is.[33][34]

Generalizations

The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton).

Rational set generalizes the notion (of regular/rational language) to monoids that are not necessarily free. Likewise, the notion of a language recognizable by a finite automaton has as its namesake the recognizable set over a monoid that is not necessarily free. Howard Straubing notes in relation to these facts that “The term "regular language" is a bit unfortunate. Papers influenced by Eilenberg's monograph[35] often use either the term "recognizable language", which refers to the behavior of automata, or "rational language", which refers to important analogies between regular expressions and rational power series. (In fact, Eilenberg defines rational and recognizable subsets of arbitrary monoids; the two notions do not, in general, coincide.) This terminology, while better motivated, never really caught on, and "regular language" is used almost universally.”[36]

Rational series is another generalization, this time in the context of a formal power series over a semiring. This approach gives rise to weighted rational expressions and weighted automata. In this algebraic context, the regular languages (corresponding to Boolean-weighted rational expressions) are usually called rational languages.[37][38] Also in this context, Kleene's theorem finds a generalization called the Kleene-Schützenberger theorem.

Notes

  1. 1. ⇒ 2. by Thompson's construction algorithm
  2. 2. ⇒ 1. by Kleene's algorithm
  3. 2. ⇒ 3. by the powerset construction
  4. 3. ⇒ 2. since every DFA is also an NFA
  5. 2. ⇒ 4. see Hopcroft, Ullman (1979), Theorem 9.2, p.219
  6. 4. ⇒ 2. see Hopcroft, Ullman (1979), Theorem 9.1, p.218
  7. 3. ⇔ 9. by the Myhill–Nerode theorem
  8. u~v is defined as: uw ∈ L if and only if vw ∈ L for all w ∈ Σ*
  9. 3. ⇔ 10. see the proof in the Syntactic monoid article, and see p.160 in Holcombe, W.M.L. (1982). Algebraic automata theory. Cambridge Studies in Advanced Mathematics 1. Cambridge University Press. ISBN 0-521-60492-3. Zbl 0489.68046.
  10. check if LA ∩ LB = LA

References

  1. Ruslan Mitkov (2003). The Oxford Handbook of Computational Linguistics. Oxford University Press. p. 754. ISBN 978-0-19-927634-9.
  2. Mark V. Lawson (2003). Finite Automata. CRC Press. pp. 98–103. ISBN 978-1-58488-255-8.
  3. Sheng Yu (1997). "Regular languages". In Grzegorz Rozenberg and Arto Salomaa. Handbook of Formal Languages: Volume 1. Word, Language, Grammar. Springer. p. 41. ISBN 978-3-540-60420-4.
  4. Eilenberg (1974), p. 16 (Example II, 2.8) and p. 25 (Example II, 5.2).
  5. M. Weyer: Chapter 12 - Decidability of S1S and S2S, p. 219, Theorem 12.26. In: Erich Grädel, Wolfgang Thomas, Thomas Wilke (Eds.): Automata, Logics, and Infinite Games: A Guide to Current Research. Lecture Notes in Computer Science 2500, Springer 2002.
  6. Robert Sedgewick; Kevin Daniel Wayne (2011). Algorithms. Addison-Wesley Professional. p. 794. ISBN 978-0-321-57351-3.
  7. Jean-Paul Allouche; Jeffrey Shallit (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. p. 129. ISBN 978-0-521-82332-6.
  8. Kenneth Rosen (2011). Discrete Mathematics and Its Applications 7th edition. McGraw-Hill Science. pp. 873–880.
  9. Horst Bunke; Alberto Sanfeliu (January 1990). Syntactic and Structural Pattern Recognition: Theory and Applications. World Scientific. p. 248. ISBN 978-9971-5-0566-0.
  10. Salomaa (1981) p.28
  11. Salomaa (1981) p.27
  12. Hopcroft, Ullman (1979), Theorem 3.8, p.64; see also Theorem 3.10, p.67
  13. Aho, Hopcroft, Ullman (1974), Exercise 10.14, p.401
  14. Hunt, H. B., III (1973), "On the time and tape complexity of languages. I", Fifth Annual ACM Symposium on Theory of Computing (Austin, Tex., 1973), Assoc. Comput. Mach., New York, pp. 10–19, MR 0421145
  15. Harry Bowen Hunt III (Aug 1973). On the Time and Tape Complexity of Languages (PDF) (Ph.D. thesis). TR. Cornell University.
  16. Hopcroft, Ullman (1979), Theorem 13.15, p.351
  17. A.R. Meyer and L.J. Stockmeyer (Oct 1972). 13th Annual IEEE Symp. on Switching and Automata Theory (PDF). pp. 125–129.
  18. L.J. Stockmeyer and A.R. Meyer (1973). Proc. 5th ann. symp. on Theory of computing (STOC) (PDF). ACM. pp. 1–9.
  19. Hopcroft, Ullman (1979), Corollary p.353
  20. Furst, M.; Saxe, J. B.; Sipser, M. (1984). "Parity, circuits, and the polynomial-time hierarchy". Math. Systems Theory 17: 13–27. doi:10.1007/bf01744431.
  21. Cook, Stephen; Nguyen, Phuong (2010). Logical foundations of proof complexity (1. publ. ed.). Ithaca, NY: Association for Symbolic Logic. p. 75. ISBN 0-521-51729-X.
  22. J. Hartmanis, P. L. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logic Design, pp. 179–190. 1965.
  23. How to prove that a language is not regular?
  24. A finite language shouldn't be confused with a (usually infinite) language generated by a finite automaton.
  25. Volker Diekert, Paul Gastin (2008). "First-order definable languages". In Jörg Flum, Erich Grädel, Thomas Wilke. Logic and automata: history and perspectives (PDF). Amsterdam University Press. ISBN 978-90-5356-576-6.
  26. Honkala, Juha (1989). "A necessary condition for the rationality of the zeta function of a regular language" (PDF). Theor. Comput. Sci. 66 (3): 341–347. doi:10.1016/0304-3975(89)90159-x. Zbl 0675.68034.
  27. Berstel & Reutenauer (2011) p.220
  28. Flajolet & Sedgweick, section V.3.1, equation (13).
  29. Proof of theorem for irreducible DFAs
  30. http://cs.stackexchange.com/a/11333/683 Proof of theorem for arbitrary DFAs
  31. Number of words of a given length in a regular language
  32. Flajolet & Sedgewick (2002) Theorem V.3
  33. Berstel, Jean; Reutenauer, Christophe (1990). "Zeta functions of formal languages". Trans. Am. Math. Soc. 321 (2): 533–546. doi:10.1090/s0002-9947-1990-0998123-x. Zbl 0797.68092.
  34. Berstel & Reutenauer (2011) p.222
  35. Samuel Eilenberg. Automata, languages, and machines. Academic Press. in two volumes "A" (1974, ISBN 9780080873749) and "B" (1976, ISBN 9780080873756), the latter with two chapters by Bret Tilson.
  36. Straubing, Howard (1994). Finite automata, formal logic, and circuit complexity. Progress in Theoretical Computer Science. Basel: Birkhäuser. p. 8. ISBN 3-7643-3719-2. Zbl 0816.68086.
  37. Berstel & Reutenauer (2011) p.47
  38. Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. p. 86. ISBN 978-0-521-84425-3. Zbl 1188.68177.


This article is issued from Wikipedia - version of Monday, February 22, 2016. The text is available under the Creative Commons Attribution/Share-Alike License; additional terms may apply for the media files.