Rewriting

For other uses, see Rewriting (disambiguation).

In mathematics, computer science, and logic, rewriting covers a wide range of (potentially non-deterministic) methods of replacing subterms of a formula with other terms. Such methods are captured by rewriting systems (also known as rewrite systems, rewrite engines[1] or reduction systems). In their most basic form, they consist of a set of objects together with relations specifying how those objects may be transformed.

Rewriting can be non-deterministic. One rule to rewrite a term could be applied in many different ways to that term, or more than one rule could be applicable. Rewriting systems then do not provide an algorithm for changing one term to another, but a set of possible rule applications. When combined with an appropriate algorithm, however, rewrite systems can be viewed as computer programs, and several declarative programming languages are based on term rewriting.

Intuitive examples

Logic

In logic, the procedure for obtaining the conjunctive normal form (CNF) of a formula can be conveniently written as a rewriting system. The rules of such a system would be:

\neg\neg A \to A (double negative elimination)
\neg(A \land B) \to \neg A \lor \neg B (De Morgan's laws)
\neg(A \lor B)  \to \neg A \land\neg B
 (A \land B) \lor C \to (A \lor C) \land (B \lor C) (Distributivity)
 A \lor (B \land C) \to (A \lor B) \land (A \lor C),

where the symbol (\to) indicates that an expression matching the left hand side of the rule can be rewritten to one formed by the right hand side, and A, B and C stand for arbitrary subformulas. In this system, each rule is chosen so that its left hand side is logically equivalent to its right hand side; consequently, every rewrite step preserves the logical interpretation of the formula.
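
As a concrete illustration, the following Python sketch applies the five rules above exhaustively to a formula represented as nested tuples. It is only a minimal sketch: the tuple representation (tagged 'not', 'and', 'or'), the function names, and the outermost-first strategy are illustrative choices for this example, not part of any standard CNF procedure or library.

```python
# A minimal sketch of the CNF rules above as a rewriting system.
# Formulas are nested tuples ('not', A), ('and', A, B), ('or', A, B);
# atoms are plain strings.  All names are illustrative.

def rewrite_once(f):
    """Try to apply one CNF rule at the root of f; return the new formula or None."""
    if isinstance(f, tuple):
        if f[0] == 'not' and isinstance(f[1], tuple):
            g = f[1]
            if g[0] == 'not':                            # double negative elimination
                return g[1]
            if g[0] == 'and':                            # De Morgan: not(A and B)
                return ('or', ('not', g[1]), ('not', g[2]))
            if g[0] == 'or':                             # De Morgan: not(A or B)
                return ('and', ('not', g[1]), ('not', g[2]))
        if f[0] == 'or':
            a, b = f[1], f[2]
            if isinstance(a, tuple) and a[0] == 'and':   # distribute 'or' over left 'and'
                return ('and', ('or', a[1], b), ('or', a[2], b))
            if isinstance(b, tuple) and b[0] == 'and':   # distribute 'or' over right 'and'
                return ('and', ('or', a, b[1]), ('or', a, b[2]))
    return None

def rewrite_anywhere(f):
    """Apply one rule at the outermost applicable position; None if f is irreducible."""
    g = rewrite_once(f)
    if g is not None:
        return g
    if isinstance(f, tuple):
        for i, sub in enumerate(f[1:], start=1):
            h = rewrite_anywhere(sub)
            if h is not None:
                return f[:i] + (h,) + f[i + 1:]
    return None

def normalize(f):
    """Rewrite until no rule applies anywhere in the formula."""
    while True:
        g = rewrite_anywhere(f)
        if g is None:
            return f
        f = g

print(normalize(('not', ('or', 'p', ('and', 'q', 'r')))))
# ('and', ('not', 'p'), ('or', ('not', 'q'), ('not', 'r')))
```

Because each rule preserves logical equivalence, the final irreducible formula is a conjunctive normal form of the input.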

Linguistics

In linguistics, rewrite rules, also called phrase structure rules, are used in some systems of generative grammar, as a means of generating the grammatically correct sentences of a language. Such a rule typically takes the form A → X, where A is a syntactic category label, such as noun phrase or sentence, and X is a sequence of such labels and/or morphemes, expressing the fact that A can be replaced by X in generating the constituent structure of a sentence. For example, the rule S → NP VP means that a sentence can consist of a noun phrase followed by a verb phrase; further rules will specify what sub-constituents a noun phrase and a verb phrase can consist of, and so on.
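
For illustration, the sketch below generates sentences from a toy grammar written in the A → X form described above; the category labels and word lists are invented for this example and carry no linguistic authority.

```python
import random

# A toy phrase structure grammar: each key can be rewritten by one of its expansions.
rules = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["dog"]],
    "V":   [["sees"], ["sleeps"]],
}

def generate(symbol):
    """Rewrite a category label until only words (terminals) remain."""
    if symbol not in rules:            # a word: nothing left to rewrite
        return [symbol]
    expansion = random.choice(rules[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate("S")))         # e.g. "the cat sees a dog"
```

Each call rewrites a category label by one of its rules until only words remain, mirroring the derivation of a constituent structure.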

Abstract rewriting systems

From the above examples, it's clear that we can think of rewriting systems in an abstract manner. We need to specify a set of objects and the rules that can be applied to transform them. The most general (unidimensional) setting of this notion is called an abstract reduction system (abbreviated ARS), although more recently authors use abstract rewriting system as well.[2] (The preference for the word "reduction" here instead of "rewriting" constitutes a departure from the uniform use of "rewriting" in the names of systems that are particularizations of ARS. Because the word "reduction" does not appear in the names of more specialized systems, in older texts reduction system is a synonym for ARS.)[3]

An ARS is simply a set A, whose elements are usually called objects, together with a binary relation on A, traditionally denoted by →, and called the reduction relation, rewrite relation[4] or just reduction.[3] This (entrenched) terminology using "reduction" is a little misleading, because the relation is not necessarily reducing some measure of the objects; this will become more apparent when we discuss string rewriting systems further in this article.

Example 1. Suppose the set of objects is T = {a, b, c} and the binary relation is given by the rules a \rightarrow b, b \rightarrow a, a \rightarrow c, and b \rightarrow c. Observe that these rules can be applied to both a and b in any fashion to get the term c. Such a property is clearly an important one. Note also, that c is, in a sense, a "simplest" term in the system, since nothing can be applied to c to transform it any further. This example leads us to define some important notions in the general setting of an ARS. First we need some basic notions and notations.[5]

Normal forms, joinability and the word problem

An object x in A is called reducible if there exists some other y in A such that x \rightarrow y; otherwise it is called irreducible or a normal form. An object y is called a normal form of x if x \stackrel{*}{\rightarrow} y, and y is irreducible. If x has a unique normal form, then this is usually denoted with x\downarrow. In example 1 above, c is a normal form, and c = a\downarrow = b\downarrow. If every object has at least one normal form, the ARS is called normalizing.

A related, but weaker notion than the existence of normal forms is that of two objects being joinable: x and y are said to be joinable if there exists some z with the property that x \stackrel{*}{\rightarrow} z \stackrel{*}{\leftarrow} y. From this definition, it's apparent one may define the joinability relation as \stackrel{*}{\rightarrow} \circ \stackrel{*}{\leftarrow}, where \circ is the composition of relations. Joinability is usually denoted, somewhat confusingly, also with \downarrow, but in this notation the down arrow is a binary relation, i.e. we write x\mathbin\downarrow y if x and y are joinable.
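
These notions are easy to make concrete for a finite ARS such as Example 1. The following brute-force sketch (all names are invented for this example) computes the objects reachable via \stackrel{*}{\rightarrow} and uses them to test for normal forms and joinability.

```python
# Example 1: objects {a, b, c}, rules a -> b, b -> a, a -> c, b -> c.
objects = {"a", "b", "c"}
step = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}

def reachable(x):
    """All y with x ->* y (reflexive-transitive closure starting from x)."""
    seen, frontier = {x}, [x]
    while frontier:
        u = frontier.pop()
        for (s, t) in step:
            if s == u and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def is_normal_form(x):
    """x is irreducible: no rule applies to it."""
    return all(s != x for (s, _) in step)

def joinable(x, y):
    """x and y are joinable: some z is reachable from both."""
    return bool(reachable(x) & reachable(y))

print([z for z in objects if is_normal_form(z)])   # ['c']
print(joinable("a", "b"))                          # True: both reduce to c
```

Here c is the only normal form, and a and b are joinable because both reduce to c.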

One of the important problems that may be formulated in an ARS is the word problem: given x and y, are they equivalent under \stackrel{*}{\leftrightarrow}? This is a very general setting for formulating the word problem for the presentation of an algebraic structure. For instance, the word problem for groups is a particular case of an ARS word problem. Central to an "easy" solution for the word problem is the existence of unique normal forms: in this case if two objects have the same normal form, then they are equivalent under \stackrel{*}{\leftrightarrow}. The word problem for an ARS is undecidable in general.

The Church–Rosser property and confluence

An ARS is said to possess the Church–Rosser property if and only if x\stackrel{*}{\leftrightarrow}y implies x\mathbin\downarrow y. In words, the Church–Rosser property means that any two equivalent objects are joinable. Alonzo Church and J. Barkley Rosser proved in 1936 that lambda calculus has this property;[6] hence the name of the property.[7] (The fact that lambda calculus has this property is also known as the Church–Rosser theorem.) In an ARS with the Church–Rosser property the word problem may be reduced to the search for a common successor. In a Church–Rosser system, an object has at most one normal form; that is, the normal form of an object is unique if it exists, but it may well not exist.

Several different properties are equivalent to the Church–Rosser property, but may be simpler to check in some particular setting. In particular, confluence is equivalent to Church–Rosser. An ARS (A,\rightarrow) is said to be:

confluent, if for all w, x, and y in A, x \stackrel{*}{\leftarrow} w \stackrel{*}{\rightarrow} y implies x\mathbin\downarrow y. Roughly speaking, confluence says that no matter how two reduction paths diverge from a common object w, they can always be brought back together at some common successor.
locally confluent, if for all w, x, and y in A, x \leftarrow w \rightarrow y implies x\mathbin\downarrow y. This property is sometimes called weak confluence.

Theorem. For an ARS the following conditions are equivalent: (i) it has the Church–Rosser property, (ii) it is confluent.[8]

Corollary.[9] In a confluent ARS, if x \stackrel{*}{\leftrightarrow} y then:

if both x and y are normal forms, then x = y;
if y is a normal form, then x \stackrel{*}{\rightarrow} y.

Because of these equivalences, a fair bit of variation in definitions is encountered in the literature. For instance, in Bezem et al. 2003 the Church–Rosser property and confluence are defined to be synonymous and identical to the definition of confluence presented here; Church–Rosser as defined here remains unnamed, but is given as an equivalent property; this departure from other texts is deliberate.[10] Because of the above corollary, in a confluent ARS one may define a normal form y of x as an irreducible y with the property that x \stackrel{*}{\leftrightarrow} y. This definition, found in Book and Otto, is equivalent to the common one given here in a confluent system, but it is more inclusive[note 1] in a non-confluent ARS.

Local confluence, on the other hand, is not equivalent to the other notions of confluence given in this section; it is strictly weaker than confluence. The relation given by a \rightarrow b, \; b \rightarrow a, \; a \rightarrow c,\; b \rightarrow d is locally confluent, but not confluent, as c and d are equivalent, but not joinable.[11]
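
The gap between the two notions can be checked mechanically for a finite ARS. The brute-force sketch below (again with invented names, in the same style as the earlier sketch) confirms that the four-rule counterexample above is locally confluent but not confluent.

```python
# Counterexample: a -> b, b -> a, a -> c, b -> d.
objects = {"a", "b", "c", "d"}
step = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "d")}

def successors(x):
    """One-step successors of x."""
    return {t for (s, t) in step if s == x}

def reachable(x):
    """All objects reachable from x in zero or more steps."""
    seen, frontier = {x}, [x]
    while frontier:
        u = frontier.pop()
        for v in successors(u) - seen:
            seen.add(v)
            frontier.append(v)
    return seen

def joinable(x, y):
    return bool(reachable(x) & reachable(y))

# Local confluence: whenever x <- w -> y (one step each), x and y are joinable.
locally_confluent = all(
    joinable(x, y)
    for w in objects for x in successors(w) for y in successors(w)
)

# Confluence: whenever x <-* w ->* y, x and y are joinable.
confluent = all(
    joinable(x, y)
    for w in objects for x in reachable(w) for y in reachable(w)
)

print(locally_confluent, confluent)   # True False
```

The peak c \leftarrow a \rightarrow b \rightarrow d shows the failure of confluence: c and d are both reachable from a but have no common successor.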

Termination and convergence

An abstract rewriting system is said to be terminating or noetherian if there is no infinite chain x_0 \rightarrow x_1 \rightarrow x_2 \rightarrow \cdots. In a terminating ARS, every object has at least one normal form, thus it is normalizing. The converse is not true. In example 1 for instance, there is an infinite rewriting chain, namely a \rightarrow b \rightarrow a \rightarrow b \rightarrow \cdots, even though the system is normalizing. A confluent and terminating ARS is called convergent. In a convergent ARS, every object has a unique normal form.
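
For a finite ARS, termination is equivalent to the absence of cycles in the reduction relation viewed as a directed graph: any infinite chain over finitely many objects must revisit some object. The following sketch (a brute-force illustration with invented names) detects the cycle a \rightarrow b \rightarrow a of Example 1 by depth-first search.

```python
# Example 1 again: a -> b, b -> a, a -> c, b -> c.  Normalizing but not terminating.
step = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}
objects = {x for edge in step for x in edge}

def terminating():
    """A finite ARS terminates iff its reduction graph has no cycle;
    here this is checked with a depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on the current path / finished
    color = {x: WHITE for x in objects}

    def dfs(u):
        color[u] = GRAY
        for (s, t) in step:
            if s == u:
                if color[t] == GRAY:              # back edge: a cycle, hence an infinite chain
                    return False
                if color[t] == WHITE and not dfs(t):
                    return False
        color[u] = BLACK
        return True

    return all(dfs(x) for x in objects if color[x] == WHITE)

print(terminating())   # False, because of the cycle a -> b -> a
```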

Theorem (Newman's Lemma): A terminating ARS is confluent if and only if it is locally confluent.

String rewriting systems

A string rewriting system (SRS), also known as a semi-Thue system, exploits the free monoid structure of the strings (words) over an alphabet to extend a rewriting relation R to all strings in the alphabet that contain left- and, respectively, right-hand sides of some rules as substrings. Formally, a semi-Thue system is a tuple (\Sigma, R) where \Sigma is a (usually finite) alphabet, and R is a binary relation between some (fixed) strings in the alphabet, called rewrite rules. The one-step rewriting relation \xrightarrow[R]{} induced by R on \Sigma^* is defined as follows: for any strings s, t \in \Sigma^*, s \xrightarrow[R]{} t if and only if there exist x, y, u, v \in \Sigma^* such that s = xuy, t = xvy, and u R v. Since \xrightarrow[R]{} is a relation on \Sigma^*, the pair (\Sigma^*, \xrightarrow[R]{}) fits the definition of an abstract rewriting system. Obviously, R is a subset of \xrightarrow[R]{}. If the relation R is symmetric, then the system is called a Thue system.
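
The one-step relation is straightforward to implement. The sketch below (function name and rule set are illustrative) enumerates, for a given string, all results of a single rewriting step; it also makes the non-determinism of rewriting visible, since one rule may apply at several positions.

```python
# One-step rewriting in a semi-Thue system (Sigma, R):
# s -> t iff s = xuy and t = xvy for some rule (u, v) in R.

def one_step(s, rules):
    """Return every string obtainable from s by a single rewriting step."""
    results = set()
    for u, v in rules:
        start = 0
        while True:
            i = s.find(u, start)
            if i == -1:
                break
            results.add(s[:i] + v + s[i + len(u):])
            start = i + 1                 # also catch overlapping occurrences of u
    return results

R = {("ab", "ba")}                        # a single rule over the alphabet {a, b}
print(one_step("abab", R))                # {'baab', 'abba'}: two positions, two results
```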

In an SRS, the reduction relation \xrightarrow[R]{*} is compatible with the monoid operation, meaning that x\xrightarrow[R]{*} y implies uxv\xrightarrow[R]{*} uyv for all strings x, y, u, v \in \Sigma^*. Similarly, the reflexive transitive symmetric closure of \xrightarrow[R]{}, denoted \overset{*}{\underset R \leftrightarrow}, is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string concatenation. The relation \overset{*}{\underset R \leftrightarrow} is called the Thue congruence generated by R. In a Thue system, i.e. if R is symmetric, the rewrite relation \xrightarrow[R]{*} coincides with the Thue congruence \overset{*}{\underset R \leftrightarrow}.

The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Since \overset{*}{\underset R \leftrightarrow} is a congruence, we can define the factor monoid \mathcal{M}_R = \Sigma^*/\overset{*}{\underset R \leftrightarrow} of the free monoid \Sigma^* by the Thue congruence in the usual manner. If a monoid \mathcal{M} is isomorphic with \mathcal{M}_R, then the semi-Thue system (\Sigma, R) is called a monoid presentation of \mathcal{M}.

We immediately get some very useful connections with other areas of algebra. For example, the alphabet {a, b} with the rules { ab → ε, ba → ε }, where ε is the empty string, is a presentation of the free group on one generator. If instead the rules are just { ab → ε }, then we obtain a presentation of the bicyclic monoid. Thus semi-Thue systems constitute a natural framework for solving the word problem for monoids and groups. In fact, every monoid has a presentation of the form (\Sigma, R), i.e. it may always be presented by a semi-Thue system, possibly over an infinite alphabet.
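
With the rules { ab → ε, ba → ε } mentioned above, each rewriting step deletes an adjacent inverse pair and therefore shortens the word, so repeated application always terminates. The sketch below (function name and reduction strategy are illustrative) reduces words over {a, b} in this way.

```python
# Rewriting with the rules { ab -> epsilon, ba -> epsilon } from the paragraph above:
# repeatedly cancel adjacent inverse pairs until no rule matches.

def reduce_word(w):
    """Apply ab -> '' and ba -> '' at the leftmost matching position until irreducible."""
    rules = [("ab", ""), ("ba", "")]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = w.find(lhs)
            if i != -1:
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
                break
    return w

print(reduce_word("aababb"))   # '' : the identity element
print(reduce_word("aaabb"))    # 'a'
```

Reading b as the inverse of a, the word aababb reduces to the empty string, i.e. to the identity of the free group on one generator.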

The word problem for a semi-Thue system is undecidable in general; this result is sometimes known as the Post-Markov theorem.[12]

Term rewriting systems

Pic.1: Schematic triangle diagram of application of a rewrite rule l \longrightarrow r at position p in a term, with matching substitution \sigma
Pic.2: Rule lhs term x*(y*z) matching in term \frac{a*((a+1)*(a+2))}{1*(2*3)}

A term rewriting system (TRS) is a rewriting system whose objects are terms, that is, expressions with nested sub-expressions. For example, the system shown under Logic above is a term rewriting system. The terms in this system are composed of the binary operators (\vee) and (\wedge) and the unary operator (\neg). Also present in the rules are variables, which each represent any possible term (though a single variable always represents the same term throughout a single rule).

In contrast to string rewriting systems, whose objects are flat sequences of symbols, the objects a term rewriting system works on, i.e. the terms, form a term algebra. A term can be visualized as a tree of symbols, the set of admitted symbols being fixed by a given signature.


Formal definition

A term rewriting rule is a pair of terms, commonly written as l \rightarrow r, to indicate that the left hand side l can be replaced by the right hand side r. A term rewriting system is a set R of such rules. A rule l \rightarrow r can be applied to a term s if the left term l matches some subterm of s, that is, if s \mid_p = l \sigma[note 2] for some position p in s and some substitution \sigma. The result term t of this rule application is then obtained as t = s[r \sigma]_p;[note 3] see picture 1. In this case, s is said to be rewritten in one step, or rewritten directly, to t by the system R, formally denoted as s \rightarrow_R t, s \xrightarrow[R]{} t, or as s \xrightarrow{R} t by some authors. If a term t_1 can be rewritten in several steps into a term t_n, that is, if t_1 \xrightarrow[R]{} t_2 \xrightarrow[R]{} \ldots \xrightarrow[R]{} t_n, the term t_1 is said to be rewritten to t_n, formally denoted as t_1 \xrightarrow[R]{+} t_n. In other words, the relation \xrightarrow[R]{+} is the transitive closure of the relation \xrightarrow[R]{}; often the notation \xrightarrow[R]{*} is also used to denote the reflexive-transitive closure of \xrightarrow[R]{}, that is, s \xrightarrow[R]{*} t if s = t or s \xrightarrow[R]{+} t.[13] A term rewriting system given by a set R of rules can be viewed as an abstract rewriting system as defined above, with terms as its objects and \xrightarrow[R]{} as its rewrite relation.

For example, x*(y*z) \rightarrow (x*y)*z is a rewrite rule, commonly used to establish a normal form with respect to the associativity of *. That rule can be applied at the numerator in the term \frac{a*((a+1)*(a+2))}{1*(2*3)} with the matching substitution \{ x \mapsto a, \; y \mapsto a+1, \; z \mapsto a+2 \}, see picture 2. [note 4] Applying that substitution to the rule's right hand side yields the term (a*(a+1))*(a+2), and replacing the numerator by that term yields \frac{(a*(a+1))*(a+2)}{1*(2*3)}, which is the result term of applying the rewrite rule. Altogether, applying the rewrite rule has achieved what is called "applying the associativity law for * to \frac{a*((a+1)*(a+2))}{1*(2*3)}" in elementary algebra. Alternatively, the rule could have been applied to the denominator of the original term, yielding \frac{a*((a+1)*(a+2))}{(1*2)*3}.
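
The steps used in this example (matching the rule's left hand side against a subterm, building the substitution \sigma, and replacing that subterm by the instantiated right hand side) can be sketched in a few lines of Python. Terms are represented here as nested tuples and variables as strings beginning with '?'; this representation and all function names are illustrative choices for the sketch, not a standard implementation.

```python
# A sketch of one-step term rewriting: match the left hand side against a subterm,
# build the substitution, and replace the subterm by the instantiated right hand side.
# Terms are ('f', arg1, ..., argn) tuples; variables are strings starting with '?'.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match(pattern, term, subst=None):
    """Return a substitution sigma with pattern*sigma == term, or None if no match."""
    subst = dict(subst or {})
    if is_var(pattern):
        if pattern in subst and subst[pattern] != term:
            return None                  # the same variable must match the same term
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    """Apply a substitution to a term."""
    if is_var(term):
        return subst.get(term, term)
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(a, subst) for a in term[1:])
    return term

def rewrite_once(term, lhs, rhs):
    """Apply the rule lhs -> rhs at the outermost matching position, if any."""
    sigma = match(lhs, term)
    if sigma is not None:
        return substitute(rhs, sigma)
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], start=1):
            new = rewrite_once(sub, lhs, rhs)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

# The associativity rule  x*(y*z) -> (x*y)*z  from the example above:
lhs = ("*", "?x", ("*", "?y", "?z"))
rhs = ("*", ("*", "?x", "?y"), "?z")

numerator = ("*", "a", ("*", ("+", "a", "1"), ("+", "a", "2")))
print(rewrite_once(numerator, lhs, rhs))
# ('*', ('*', 'a', ('+', 'a', '1')), ('+', 'a', '2'))
```

Repeatedly calling rewrite_once until it returns None drives a term to a normal form with respect to the associativity rule, in line with viewing the TRS as an abstract rewriting system.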

Termination

Beyond the general properties discussed in the section Termination and convergence above, additional subtleties must be considered for term rewriting systems.

Termination is undecidable even for systems consisting of a single rule with a linear left hand side.[14] Termination is also undecidable for systems using only unary function symbols; however, it is decidable for finite ground systems.[15]

The following term rewrite system is normalizing,[note 5] but not terminating,[note 6] and not confluent:

f(x,x) → g(x),
f(x,g(x)) → b,
h(c,x) → f(h(x,c),h(x,x)).[16]

The following two examples of terminating term rewrite systems are due to Toyama:[17]

f(0,1,x) \rightarrow f(x,x,x)

and

g(x,y) \rightarrow x,
g(x,y) \rightarrow y.

Their union is a non-terminating system, since f(g(0,1),g(0,1),g(0,1)) \rightarrow f(0,g(0,1),g(0,1)) \rightarrow f(0,1,g(0,1)) \rightarrow f(g(0,1),g(0,1),g(0,1)) \rightarrow \ldots. This result disproves a conjecture of Dershowitz,[18] who claimed that the union of two terminating term rewrite systems R_1 and R_2 is again terminating if all left hand sides of R_1 and right hand sides of R_2 are linear, and there are no "overlaps" between left hand sides of R_1 and right hand sides of R_2. All these properties are satisfied by Toyama's examples.

See Rewrite order and Path ordering (term rewriting) for ordering relations used in termination proofs for term rewriting systems.

Graph rewriting systems

Graph rewrite systems are a generalization of term rewrite systems, operating on graphs instead of (ground) terms and their corresponding tree representations.

Trace rewriting systems

Trace theory provides a means for discussing multiprocessing in more formal terms, such as via the trace monoid and the history monoid. Rewriting can be performed in trace systems as well.

Philosophy

Rewriting systems can be seen as programs that infer end-effects from a list of cause-effect relationships. In this way, rewriting systems can be considered to be automated causality provers.

See also

Notes

  1. i.e. it considers more objects as a normal form of x than our definition
  2. here, s \mid_p denotes the subterm of s rooted at position p, while l \sigma denotes the result of applying the substitution \sigma to the term l
  3. here, s[r \sigma]_p denotes the result of replacing the subterm at position p in s by the term r \sigma
  4. since applying that substitution to the rule's left hand side x*(y*z) yields the numerator a*((a+1)*(a+2))
  5. i.e. for each term, some normal form exists, e.g. h(c,c) has the normal forms b and g(b), since h(c,c) → f(h(c,c),h(c,c)) → f(h(c,c),f(h(c,c),h(c,c))) → f(h(c,c),g(h(c,c))) → b, and h(c,c) → f(h(c,c),h(c,c)) → g(h(c,c)) → ... → g(b); neither b nor g(b) can be rewritten any further, therefore the system is not confluent
  6. i.e. there are infinite derivations, e.g. h(c,c) → f(h(c,c),h(c,c)) → f(f(h(c,c),h(c,c)),h(c,c)) → f(f(f(h(c,c),h(c,c)),h(c,c)),h(c,c)) → ...

References

  1. Sculthorpe, Neil; Frisby, Nicolas; Gill, Andy (2014). "The Kansas University rewrite engine". Journal of Functional Programming 24 (04): 434–473. doi:10.1017/S0956796814000185. ISSN 0956-7968.
  2. Bezem et al., p. 7
  3. Book and Otto, p. 10
  4. Bezem et al., p. 7
  5. Baader and Nipkow, pp. 8-9
  6. Alonzo Church and J. Barkley Rosser. Some properties of conversion. Trans. AMS, 39:472-482, 1936
  7. Baader and Nipkow, p. 9
  8. Baader and Nipkow, p. 11
  9. Baader and Nipkow, p. 12
  10. Bezem et al., p. 11
  11. M. H. A. Newman (1942). "On Theories with a Combinatorial Definition of 'Equivalence'". Annals of Mathematics 43 (2): 223–243. doi:10.2307/1968867.
  12. Martin Davis et al. 1994, p. 178
  13. N. Dershowitz, J.-P. Jouannaud (1990). Jan van Leeuwen, ed. Rewrite Systems. Handbook of Theoretical Computer Science B. Elsevier. pp. 243–320.; here: Sect.2.3
  14. M. Dauchet (1989). "Simulation of Turing Machines by a Left-Linear Rewrite Rule". Proc. 3rd RTA. LNCS 355. Springer LNCS. pp. 109–120.
  15. Gerard Huet, D.S. Lankford (Mar 1978). On the Uniform Halting Problem for Term Rewriting Systems (PDF) (Technical report 283). IRIA. p. 8. Retrieved 16 June 2013.
  16. Bernhard Gramlich (Jun 1993). "Relating Innermost, Weak, Uniform, and Modular Termination of Term Rewriting Systems". In Voronkov, Andrei. Proc. International Conference on Logic Programming and Automated Reasoning (LPAR). LNAI 624. Springer. pp. 285–296. Here: Example 3.3
  17. Y. Toyama (1987). "Counterexamples to Termination for the Direct Sum of Term Rewriting Systems" (PDF). Inform. Process. Lett. 25: 141–143. doi:10.1016/0020-0190(87)90122-0.
  18. N. Dershowitz (1985). "Termination". In Jean-Pierre Jouannaud. Proc. RTA (PDF). LNCS 220. Springer. pp. 180–224.; here: p.210

Further reading

String rewriting

Ronald V. Book and Friedrich Otto, String-Rewriting Systems, Springer (1993).

Other

Franz Baader and Tobias Nipkow, Term Rewriting and All That, Cambridge University Press (1998).
Marc Bezem, Jan Willem Klop, Roel de Vrijer ("Terese"), Term Rewriting Systems, Cambridge University Press (2003).
Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, Academic Press (1994).

External links

Look up rewriting in Wiktionary, the free dictionary.