Gray code

Gray code by bit width

2-bit: 00, 01, 11, 10
3-bit: 000, 001, 011, 010, 110, 111, 101, 100
4-bit: 0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100,
       1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000

The reflected binary code (RBC), also known as Gray code after Frank Gray, is a binary numeral system where two successive values differ in only one bit (binary digit). The reflected binary code was originally designed to prevent spurious output from electromechanical switches. Today, Gray codes are widely used to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems.

Name

Gray's patent introduces the term "reflected binary code"

Bell Labs researcher Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name".[1] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".

The code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code";[2][3] one of those also lists "minimum error code" and "cyclic permutation code" among the names.[3] A 1954 patent application refers to "the Bell Telephone Gray code".[4]

Motivation

Many devices indicate position by closing and opening switches. If such a device uses natural binary code, positions 3 and 4 are next to each other, yet all three bits of their binary representations differ:

Decimal Binary
... ...
3 011
4 100
... ...

The problem with natural binary codes is that, with physical, mechanical switches, it is very unlikely that switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce, the transition might look like 011 — 001 — 101 — 100. When the switches appear to be in position 001, the observer cannot tell if that is the "real" position 001, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic, then the sequential system may store a false value.

The reflected binary code solves this problem by changing only one switch at a time, so there is never any ambiguity of position:

Decimal Binary Gray Gray as decimal
0 000 000 0
1 001 001 1
2 010 011 3
3 011 010 2
4 100 110 6
5 101 111 7
6 110 101 5
7 111 100 4

Notice that the Gray code for decimal 7 rolls over to decimal 0 with only one switch change. This is called the "cyclic" property of a Gray code. In the standard Gray coding the least significant bit follows a repetitive pattern of 2 on, 2 off ( … 11001100 … ); the next digit a pattern of 4 on, 4 off; and so forth.

More formally, a Gray code is a code assigning to each of a contiguous set of integers, or to each member of a circular list, a word of symbols such that each two adjacent code words differ by one symbol. These codes are also known as single-distance codes, reflecting the Hamming distance of 1 between adjacent codes. There can be more than one Gray code for a given word length, but the term was first applied to a particular binary code for the non-negative integers, the binary-reflected Gray code, or BRGC, the three-bit version of which is shown above.
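
As a quick check of these two properties, the following C sketch (an illustration, not part of the original text) walks the 3-bit code tabulated above, taken from the "Gray as decimal" column, and counts how many bits change between each codeword and the next, including the wrap-around from the last codeword back to the first.

#include <stdio.h>

int main(void)
{
    /* The "Gray as decimal" column of the table above. */
    static const unsigned gray[8] = { 0, 1, 3, 2, 6, 7, 5, 4 };

    for (unsigned i = 0; i < 8; i++) {
        unsigned next = gray[(i + 1) % 8];          /* wraps around at the end */
        unsigned diff = gray[i] ^ next;             /* bits that differ        */
        unsigned bits_changed = 0;
        for (unsigned b = 0; b < 3; b++)
            bits_changed += (diff >> b) & 1u;
        printf("%u -> %u: %u bit(s) change\n", gray[i], next, bits_changed);
    }
    return 0;
}

Every line printed reports exactly one changed bit, confirming both the single-distance and the cyclic property for this code.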

History and practical application

Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The French engineer Émile Baudot used Gray codes in telegraphy in 1878.[5] He received the French Legion of Honor medal for his work. The Gray code is sometimes attributed, incorrectly,[6] to Elisha Gray (in Principles of Pulse Code Modulation, K. W. Cattermole,[7] for example).

Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube-based apparatus. The method and apparatus were patented in 1953 and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code.[8]

Part of front page of Gray's patent, showing PCM tube (10) with reflected binary code in plate (15)

The use of his eponymous codes that Gray was most interested in was to minimize the effect of error in the conversion of analog signals to digital; his codes are still used today for this purpose, and others.

Position encoders

Rotary encoder for angle-measuring devices marked in 3-bit binary-reflected Gray code (BRGC)
A Gray code absolute rotary encoder with 13 tracks. At the top can be seen the housing, interrupter disk, and light source; at the bottom can be seen the sensing element and support components.

Gray codes are used in position encoders (linear encoders and rotary encoders), in preference to straightforward binary encoding. This avoids the possibility that, when several bits change in the binary representation of an angle, a misread will result from some of the bits changing before others. Originally, the code pattern was electrically conductive, supported (in a rotary encoder) by an insulating disk. Each track had its own stationary metal spring contact; one more contact made the connection to the pattern. That common contact was connected by the pattern to whichever of the track contacts were resting on the conductive pattern. However, sliding contacts wear out and need maintenance, which favors optical encoders.

Regardless of the care in aligning the contacts, and accuracy of the pattern, a natural-binary code would have errors at specific disk positions, because it is impossible to make all bits change at exactly the same time as the disk rotates. The same is true of an optical encoder; transitions between opaque and transparent cannot be made to happen simultaneously for certain exact positions. Rotary encoders benefit from the cyclic nature of Gray codes, because consecutive positions of the sequence differ by only one bit. This means that, for a transition from state A to state B, timing mismatches can only affect when the A-to-B transition occurs, rather than inserting one or more (up to N − 1 for an N-bit codeword) false intermediate states, as would occur if a standard binary code were used.

Towers of Hanoi

The binary-reflected Gray code can also be used to serve as a solution guide for the Towers of Hanoi problem, as well as the classical Chinese rings puzzle, a sequential mechanical puzzle mechanism.[6] It also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension.
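
The connection to the Towers of Hanoi can be made concrete with a short C sketch (an illustration, not taken from the original text; the formula gray(i) = i ^ (i >> 1) used here is derived in the construction section below). The bit that flips between consecutive Gray codewords names the disk to move at each step of the minimal solution, with bit k read as disk k and 0 being the smallest disk.

#include <stdio.h>

/* Step through the 2^n entries of the binary-reflected Gray code and report
 * which bit flips at each step; this is the order in which disks are moved
 * by the standard minimal Towers of Hanoi solution for n disks. */
void hanoi_moves_from_gray(unsigned n)
{
    for (unsigned i = 1; i < (1u << n); i++) {
        unsigned prev = (i - 1) ^ ((i - 1) >> 1);   /* Gray code of i - 1     */
        unsigned curr = i ^ (i >> 1);               /* Gray code of i         */
        unsigned diff = prev ^ curr;                /* exactly one bit is set */
        unsigned disk = 0;
        while (((diff >> disk) & 1u) == 0)          /* position of that bit   */
            disk++;
        printf("move %2u: disk %u\n", i, disk);
    }
}

(Which peg each disk moves to is not determined by the code itself; the sketch only reproduces the order in which disks move.)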

Genetic algorithms

Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms. They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.

Karnaugh maps

Gray codes are also used in labelling the axes of Karnaugh maps.[9]

Error correction

In modern digital communications, Gray codes play an important role in error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise.

Communication between clock domains

Main article: clock domain crossing

Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered to be operating in different "clock domains". This practice is fundamental to the design of large chips that operate with many different clock frequencies.

Cycling through states with minimal effort

If a system has to cycle through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.

Gray code counters and arithmetic

A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains.[10] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.

Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however in this case the source of the Gray code is an electronic counter. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the conversion circuit may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous.

A Gray code counter was patented in 1962 (US3020481), and there have been many others since. In recent times a Gray code counter can be implemented as a state machine in Verilog. To produce the next count value, some combinational logic is needed that increments the current count value stored in Gray code. Probably the most obvious way to increment a Gray code number is to convert it into ordinary binary code, add one to it with a standard binary adder, and then convert the result back to Gray code, as sketched below. This approach was discussed in a 1996 paper[11] and then subsequently patented by someone else in 1998 (US5754614). Other methods of counting in Gray code are discussed in a report by R. W. Doran, including taking the output from the first latches of the master-slave flip-flops in a binary ripple counter.[12]
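
A minimal C sketch of that increment strategy (an illustration, not a quoted implementation; the conversions use the same formulas as the functions in the section "Converting to and from Gray code" below):

static unsigned int gray_to_binary(unsigned int g)
{
    unsigned int mask;
    for (mask = g >> 1; mask != 0; mask >>= 1)
        g ^= mask;                            /* XOR each bit with all more significant bits */
    return g;
}

static unsigned int gray_increment(unsigned int g)
{
    unsigned int b = gray_to_binary(g) + 1;   /* Gray -> binary, then add one */
    return b ^ (b >> 1);                      /* binary -> Gray               */
}

In a hardware counter the same dataflow would be implemented as combinational logic between registers; the C version only illustrates the arithmetic.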

Perhaps the most common electronic counter with the "only one bit changes at a time" property is the Johnson counter.

Constructing an n-bit Gray code

4-bit Gray code permutation

The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), concatenating the original list with the reversed list, prefixing the entries in the original list with a binary 0, and then prefixing the entries in the reflected list with a binary 1.[6] For example, generating the n = 3 list from the n = 2 list:

2-bit list:                 00, 01, 11, 10
Reflected:                  10, 11, 01, 00
Prefix old entries with 0:  000, 001, 011, 010
Prefix new entries with 1:  110, 111, 101, 100
Concatenated:               000, 001, 011, 010, 110, 111, 101, 100
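
The reflect-and-prefix construction can be written directly in C. The sketch below is an illustration, assuming C99 (the function name print_brgc is arbitrary); codewords are kept as integers, so "prefixing the reflected entries with 1" becomes setting the new top bit on the copies.

#include <stdio.h>

/* Print the n-bit binary-reflected Gray code list, built by repeatedly
 * appending the reflection of the list built so far with the new top bit set. */
void print_brgc(unsigned n)
{
    unsigned size = 1u << n;
    unsigned code[size];                       /* C99 variable-length array         */
    code[0] = 0;                               /* the zero-bit code: one empty word */
    for (unsigned bits = 1, len = 1; bits <= n; bits++, len <<= 1)
        for (unsigned i = 0; i < len; i++)     /* reflect and set bit (bits - 1)    */
            code[len + i] = code[len - 1 - i] | (1u << (bits - 1));
    for (unsigned i = 0; i < size; i++) {
        for (int b = (int)n - 1; b >= 0; b--)
            putchar(((code[i] >> b) & 1u) ? '1' : '0');
        putchar('\n');
    }
}

For n = 3 this prints 000, 001, 011, 010, 110, 111, 101, 100, matching the list above.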

The one-bit Gray code is G1 = (0, 1). This can be thought of as built recursively as above from a zero-bit Gray code G0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating Gn+1 from Gn makes several properties of the standard reflecting code clear: each Gn is a permutation of the numbers 0, ..., 2^n − 1; Gn is embedded as the first half of Gn+1; each entry differs from the previous entry in only one bit; and the last entry differs from the first in only one bit, so the code is cyclic.

These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the nth Gray code is obtained by computing n \oplus \lfloor n/2 \rfloor.

A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit, so it cannot be performed in parallel. Assuming g_i is the ith Gray-coded bit (g_0 being the most significant bit), and b_i is the ith binary-coded bit (b_0 being the most significant bit), the reverse translation can be given recursively: b_0 = g_0, and b_i = g_i \oplus b_{i-1}. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.

To construct the binary-reflected Gray code iteratively, at step 0 start with \mathrm{code}_0 = 0, and at step i > 0 find the bit position of the least significant 1 in the binary representation of i and flip the bit at that position in the previous code \mathrm{code}_{i-1} to obtain the next code \mathrm{code}_i. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... (sequence A007814 in OEIS). See find first set for efficient algorithms to compute these values.
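
A short C sketch of this iterative rule (an illustration, not part of the original text; the name print_brgc_flips is arbitrary). It uses the standard bit trick that ~i + 1 is the two's complement of i, so i & (~i + 1) isolates the least significant 1 of i:

#include <stdio.h>

void print_brgc_flips(unsigned n)
{
    unsigned code = 0;                          /* code_0 = 0 */
    printf("%u\n", code);
    for (unsigned i = 1; i < (1u << n); i++) {
        code ^= i & (~i + 1u);                  /* flip the bit at the position of
                                                   the least significant 1 in i */
        printf("%u\n", code);
    }
}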

Converting to and from Gray code

The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist.[13]

/*
 * This function converts an unsigned binary
 * number to reflected binary Gray code.
 *
 * The operator >> is shift right. The operator ^ is exclusive or.
 */
unsigned int binaryToGray(unsigned int num)
{
    return num ^ (num >> 1);
}

/*
 * This function converts a reflected binary
 * Gray code number to a binary number.
 * Each Gray code bit is exclusive-ored with all
 * more significant bits.
 */
unsigned int grayToBinary(unsigned int num)
{
    unsigned int mask;
    for (mask = num >> 1; mask != 0; mask = mask >> 1)
    {
        num = num ^ mask;
    }
    return num;
}

/*
 * A more efficient version, for Gray codes of 32 or fewer bits.
 */
unsigned int grayToBinary32(unsigned int num)
{
    num = num ^ (num >> 16);
    num = num ^ (num >> 8);
    num = num ^ (num >> 4);
    num = num ^ (num >> 2);
    num = num ^ (num >> 1);
    return num;
}

Special types of Gray codes

In practice, a "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).

n-ary Gray code

Ternary number → ternary Gray code
  0 → 000
  1 → 001
  2 → 002
 10 → 012
 11 → 010
 12 → 011
 20 → 021
 21 → 022
 22 → 020
100 → 120
101 → 121
102 → 122
110 → 102
111 → 100
112 → 101
120 → 111
121 → 112
122 → 110
200 → 210
201 → 211
202 → 212
210 → 222
211 → 220
212 → 221
220 → 201
221 → 202
222 → 200

There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n-ary Gray code, also known as a non-Boolean Gray code. As the name implies, this type of Gray code uses non-Boolean values in its encodings.

For example, a 3-ary (ternary) Gray code would use the values {0, 1, 2}. The (n, k)-Gray code is the n-ary Gray code with k digits.[14] The sequence of elements in the (3, 2)-Gray code is: {00, 01, 02, 12, 10, 11, 21, 22, 20}. The (n, k)-Gray code may be constructed recursively, like the BRGC, or iteratively. An algorithm to iteratively generate the (n, k)-Gray code is presented (in C):

// inputs: base, digits, value
// output: gray
// Convert a value to a Gray code with the given base and digits.
// Iterating through a sequence of values would result in a sequence
// of Gray codes in which only one digit changes at a time.
void to_gray(unsigned base, unsigned digits, unsigned value, unsigned gray[digits])
{ 
	unsigned baseN[digits];	// Stores the ordinary base-N number, one digit per entry
	unsigned i;		// The loop variable
 
	// Put the normal baseN number into the baseN array. For base 10, 109 
	// would be stored as [9,0,1]
	for (i = 0; i < digits; i++) {
		baseN[i] = value % base;
		value    = value / base;
	}
 
	// Convert the normal baseN number into the graycode equivalent. Note that
	// the loop starts at the most significant digit and goes down.
	unsigned shift = 0;
	while (i--) {
		// The gray digit gets shifted down by the sum of the higher
		// digits.
		gray[i] = (baseN[i] + shift) % base;
		shift = shift + base - gray[i];	// Subtract from base so shift is positive
	}
}
// EXAMPLES
// input: value = 1899, base = 10, digits = 4
// output: baseN[] = [9,9,8,1], gray[] = [0,1,7,1]
// input: value = 1900, base = 10, digits = 4
// output: baseN[] = [0,0,9,1], gray[] = [0,1,8,1]
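
A possible inverse routine is sketched below (it is not part of the original text; from_gray is a hypothetical name, and it assumes the same conventions as to_gray above, with one digit per array entry and the least significant digit at index 0). It recovers each ordinary digit by adding back, modulo the base, the sum of the more significant Gray digits:

// inputs: base, digits, gray
// output: the ordinary (non-Gray) value
unsigned from_gray(unsigned base, unsigned digits, unsigned gray[digits])
{
    unsigned baseN[digits];   // The ordinary base-N digits being recovered
    unsigned value = 0;
    unsigned shift = 0;       // Running sum (mod base) of the more significant Gray digits
    unsigned i;

    // Undo the shifting, starting from the most significant digit.
    for (i = digits; i-- > 0; ) {
        baseN[i] = (gray[i] + shift) % base;
        shift = (shift + gray[i]) % base;
    }

    // Reassemble the ordinary base-N value from its digits.
    for (i = digits; i-- > 0; )
        value = value * base + baseN[i];
    return value;
}

For the examples above, from_gray(10, 4, (unsigned[]){0, 1, 7, 1}) returns 1899 and from_gray(10, 4, (unsigned[]){0, 1, 8, 1}) returns 1900.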

There are other Gray code algorithms for (n, k)-Gray codes. The (n, k)-Gray code produced by the above algorithm is always cyclic; some algorithms, such as Guan's,[14] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.

Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.

See also Skew binary number system, a variant ternary number system where at most 2 digits change on each increment, as each increment can be done with at most one digit carry operation.

Balanced Gray code

Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity".[15] In balanced Gray codes, the numbers of changes in the different coordinate positions are as close to equal as possible. To make this more precise, let G be an R-ary complete Gray cycle having transition sequence (\delta_k); the transition counts (spectrum) of G are the collection of integers defined by

\lambda_k = |\{ j \in \mathbb{Z}_{R^n} : \delta_j = k \}| \, , \text { for } k \in \mathbb{Z}_R

A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have \lambda_k = R^n / n for all k. Clearly, when R = 2, such codes exist only if n is a power of 2. Otherwise, if n does not divide R^n evenly, it is possible to construct well-balanced codes where every transition count is either \lfloor R^n / n \rfloor or \lceil R^n / n \rceil. Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two.[16]

For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced:[15]

0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 0 
0 0 1 1 1 1 0 0 1 1 1 1 0 0 0 0 
0 0 0 0 1 1 1 1 1 0 0 1 1 1 0 0 
0 0 0 1 1 0 0 0 0 0 1 1 1 1 1 1

whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight:[15]

1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 
0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 0 0 0 
1 1 0 0 1 1 1 0 0 0 0 0 0 1 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 0 1 1 
1 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 1 
1 1 1 1 1 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 1
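
To make the transition counts concrete, the following C sketch (an illustration, not from the original text) computes the spectrum of the cyclic 4-bit binary-reflected Gray code. It prints 8, 4, 2 and 2 transitions for positions 0 through 3, in contrast with the four transitions per position achieved by the uniformly balanced 4-bit code listed above.

#include <stdio.h>

int main(void)
{
    enum { N = 4 };
    unsigned count[N] = { 0 };

    for (unsigned i = 0; i < (1u << N); i++) {
        unsigned g1   = i ^ (i >> 1);                  /* BRGC of i                */
        unsigned next = (i + 1) & ((1u << N) - 1);     /* next index, cyclically   */
        unsigned g2   = next ^ (next >> 1);            /* BRGC of the next index   */
        unsigned diff = g1 ^ g2;                       /* exactly one bit is set   */
        for (unsigned k = 0; k < N; k++)
            if ((diff >> k) & 1u)
                count[k]++;                            /* transition at position k */
    }
    for (unsigned k = 0; k < N; k++)
        printf("position %u: %u transitions\n", k, count[k]);
    return 0;
}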

We will now show a construction for well-balanced binary Gray codes which allows us to generate an n-digit balanced Gray code for every n.[17] The main principle is to inductively construct an (n + 2)-digit Gray code G' given an n-digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of G = g_0, \ldots, g_{2^n-1} into an even number L of non-empty blocks of the form

\{g_0\}, \{g_1, \ldots, g_{k_2}\}, \{g_{k_2+1}, \ldots, g_{k_3}\}, \ldots, \{g_{k_{L-2}+1}, \ldots, g_{-2}\}, \{g_{-1}\}

where k_1 = 0, k_{L-1} = -2, and k_{L} = -1 (mod 2^n). This partition induces an (n+2)-digit Gray code given by

00g_0,
00g_1, \ldots, 00g_{k_2}, 01g_{k_2}, \ldots, 01g_1, 11g_1, \ldots, 11g_{k_2},
11g_{k_2+1}, \ldots, 11g_{k_3}, 01g_{k_3}, \ldots, 01g_{k_2+1}, 00g_{k_2+1}, \ldots, 00g_{k_3}, \ldots,
00g_{-2}, 00g_{-1}, 10g_{-1}, 10g_{-2}, \ldots, 10g_0, 11g_0, 11g_{-1}, 01g_{-1}, 01g_0

If we define the transition multiplicities m_i = |\{ j : \delta_{k_j} = i, 1 \leq j \leq L \}| to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the (n + 2)-digit Gray code induced by this partition the transition spectrum \lambda'_k is


\lambda'_k = \begin{cases}
4 \lambda_k - 2 m_k, & \text{if } 0 \leq k < n \\
L, & \text{ otherwise }
\end{cases}

The delicate part of this construction is to find an adequate partitioning of a balanced n-digit Gray code such that the code induced by it remains balanced. Uniform codes can be found when R \equiv 0 \mod 4 and R^n \equiv 0 \mod n, and this construction can be extended to the R-ary case as well.[17]

Monotonic Gray codes

Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors.[18] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.

We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q_n = (V_n, E_n) into levels of vertices that have equal weight, i.e.

V_n(i) = \{ v \in V_n : v \text{ has weight } i \}

for 0 \leq i \leq n. These levels satisfy |V_n(i)| = \binom{n}{i}. Let Q_n(i) be the subgraph of Q_n induced by V_n(i) \cup V_n(i+1), and let E_n(i) be the edges in Q_n(i). A monotonic Gray code is then a Hamiltonian path in Q_n such that whenever \delta_1 \in E_n(i) comes before \delta_2 \in E_n(j) in the path, then i \leq j.

An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths P_{n,j} of length 2 \binom{n}{j} having edges in E_n(j).[18] We define P_{1,0} = (0, 1), P_{n,j} = \emptyset whenever j < 0 or j \geq n, and


P_{n+1,j} = 1P^{\pi_n}_{n,j-1}, 0P_{n,j}

otherwise. Here, \pi_n is a suitably defined permutation and P^{\pi} refers to the path P with its coordinates permuted by \pi. These paths give rise to two monotonic n-digit Gray codes G_n^{(1)} and G_n^{(2)} given by


G_n^{(1)} = P_{n,0} P_{n,1}^R P_{n,2} P_{n,3}^R \cdots \text{ and } G_n^{(2)} = P_{n,0}^R P_{n,1} P_{n,2}^R P_{n,3} \cdots

The choice of \pi_n which ensures that these codes are indeed Gray codes turns out to be \pi_n = E^{-1}(\pi_{n-1}^2). The first few values of P_{n,j} are shown in the table below.

Subpaths in the Savage–Winkler algorithm
n = 1:  j = 0: 0, 1
n = 2:  j = 0: 00, 01;  j = 1: 10, 11
n = 3:  j = 0: 000, 001;  j = 1: 100, 110, 010, 011;  j = 2: 101, 111
n = 4:  j = 0: 0000, 0001;  j = 1: 1000, 1100, 0100, 0110, 0010, 0011;  j = 2: 1010, 1011, 1001, 1101, 0101, 0111;  j = 3: 1110, 1111

These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O(n) time. The algorithm is most easily described using coroutines.

Monotonic codes have an interesting connection to the Lovász conjecture, which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q_{2n+1}(n) is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism), and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n \leq 15, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, where N is the number of vertices in the middle-level subgraph.[19]

Beckett–Gray code

Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once.[20] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue, so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first.[20] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth's Art of Computer Programming.[6] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9,500 solutions for the case n = 7 have been found.[21]

Snake-in-the-box codes

Snake-in-the-box codes, or snakes, are the sequences of nodes of induced paths in an n-dimensional hypercube graph, and coil-in-the-box codes, or coils, are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by W. H. Kautz in the late 1950s;[22] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.

Single-track Gray code

Yet another kind of Gray code is the single-track Gray code (STGC) developed by N. B. Spedding[23] and refined by Hiltgen, Paterson and Brandestini in "Single-track Gray codes" (1996).[24][25] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix, each column is a cyclic shift of the first column.[26]

The name comes from their use with rotary encoders, where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1. To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1 degree accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.

If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1 degree accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1 degree resolution" angular encoder to 8 tracks. Reducing the number of tracks still further can't be done with BRGC.

For many years, Torsten Sillke[27] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.

N. B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible.[23] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. For example, when n is itself a power of 2, n sensors can distinguish 2^n − 2n positions.[28] Hiltgen and Paterson published a paper in 2001 exhibiting a single-track Gray code with exactly 360 angular positions, constructed using 9 sensors.[25] Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors. An STGC for P = 30 and n = 5 is reproduced here:

Single-track Gray code with 5 sensors.
Single-track Gray code for 30 positions
Angle Code Angle Code Angle Code Angle Code Angle Code
0° 10000 72° 01000 144° 00100 216° 00010 288° 00001
12° 10100 84° 01010 156° 00101 228° 10010 300° 01001
24° 11100 96° 01110 168° 00111 240° 10011 312° 11001
36° 11110 108° 01111 180° 10111 252° 11011 324° 11101
48° 11010 120° 01101 192° 10110 264° 01011 336° 10101
60° 11000 132° 01100 204° 00110 276° 00011 348° 10001

Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes.[29] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes, also called De Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving.[30]
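
Both properties can be checked mechanically. The following C sketch (an illustration, not part of the original text) hard-codes the 30 codewords above, in angle order, and verifies that cyclically consecutive words differ in exactly one bit and that every bit column is a cyclic shift of the first column.

#include <stdio.h>

int main(void)
{
    /* The 30 codewords of the table above, in order of increasing angle,
     * with the leftmost printed bit stored as bit 4. */
    static const unsigned code[30] = {
        0x10, 0x14, 0x1C, 0x1E, 0x1A, 0x18,   /*   0°.. 60° */
        0x08, 0x0A, 0x0E, 0x0F, 0x0D, 0x0C,   /*  72°..132° */
        0x04, 0x05, 0x07, 0x17, 0x16, 0x06,   /* 144°..204° */
        0x02, 0x12, 0x13, 0x1B, 0x0B, 0x03,   /* 216°..276° */
        0x01, 0x09, 0x19, 0x1D, 0x15, 0x11    /* 288°..348° */
    };
    int gray_ok = 1, single_track_ok = 1;

    /* Gray property: each cyclic neighbour differs in exactly one bit. */
    for (int p = 0; p < 30; p++) {
        unsigned diff = code[p] ^ code[(p + 1) % 30];
        if (diff == 0 || (diff & (diff - 1)) != 0)     /* not exactly one bit set */
            gray_ok = 0;
    }

    /* Single-track property: every bit column is a cyclic shift of column 0. */
    for (int b = 1; b < 5; b++) {
        int found = 0;
        for (int s = 0; s < 30 && !found; s++) {
            int match = 1;
            for (int p = 0; p < 30; p++)
                if (((code[p] >> b) & 1u) != (code[(p + s) % 30] & 1u))
                    match = 0;
            found = match;
        }
        if (!found)
            single_track_ok = 0;
    }

    printf("Gray property: %s, single-track property: %s\n",
           gray_ok ? "yes" : "no", single_track_ok ? "yes" : "no");
    return 0;
}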

2-Dimensional Gray code

A Gray-coded constellation diagram for rectangular 16-QAM.

Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) between adjacent points in the constellation. In a typical encoding, the horizontally and vertically adjacent constellation points differ by a single bit, and diagonally adjacent points differ by 2 bits.[31]
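
A minimal C sketch of one such mapping (the particular bit-to-level convention here is an assumption chosen for illustration, not taken from the text or from reference [31]): each half of a 4-bit symbol selects an amplitude level through the 2-bit Gray code 00, 01, 11, 10, mapped to the levels -3, -1, +1, +3, so horizontally or vertically adjacent constellation points differ in exactly one bit.

#include <stdio.h>

/* Map a 2-bit Gray label to one of the four amplitude levels -3, -1, +1, +3. */
static int gray2_to_level(unsigned g)
{
    unsigned index = g ^ (g >> 1);     /* Gray -> binary: position of g in 00, 01, 11, 10 */
    return 2 * (int)index - 3;         /* 0, 1, 2, 3  ->  -3, -1, +1, +3                  */
}

int main(void)
{
    for (unsigned sym = 0; sym < 16; sym++) {
        int i_level = gray2_to_level(sym >> 2);    /* upper two bits choose I */
        int q_level = gray2_to_level(sym & 0x3u);  /* lower two bits choose Q */
        printf("symbol %2u -> (I, Q) = (%+d, %+d)\n", sym, i_level, q_level);
    }
    return 0;
}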

Gray isometry

The bijective mapping { 0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10 } establishes an isometry between the metric space over the finite field \mathbb{Z}_2^2 with the metric given by the Hamming distance and the metric space over the finite ring \mathbb{Z}_4 (the usual modulo arithmetic) with the metric given by the Lee distance. The mapping is suitably extended to an isometry of the Hamming spaces \mathbb{Z}_2^{2m} and \mathbb{Z}_4^m. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in \mathbb{Z}_2^2 of ring-linear codes from \mathbb{Z}_4.[32][33]
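
The single-digit case can be verified exhaustively. The following C sketch (an illustration, not from the original text) checks that the Lee distance between any two elements of Z_4 equals the Hamming distance between their images under the mapping above.

#include <stdio.h>

int main(void)
{
    /* The Gray map 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10, each image stored
     * as a 2-bit integer. */
    static const unsigned gray_map[4] = { 0u, 1u, 3u, 2u };

    for (unsigned a = 0; a < 4; a++) {
        for (unsigned b = 0; b < 4; b++) {
            unsigned d       = (a > b) ? a - b : b - a;
            unsigned lee     = (d < 4 - d) ? d : 4 - d;      /* Lee distance on Z_4       */
            unsigned x       = gray_map[a] ^ gray_map[b];
            unsigned hamming = (x & 1u) + ((x >> 1) & 1u);   /* Hamming distance on Z_2^2 */
            if (lee != hamming)
                printf("mismatch at (%u, %u)\n", a, b);      /* never reached             */
        }
    }
    printf("Lee distance on Z_4 matches Hamming distance on the Gray images\n");
    return 0;
}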

See also

References

  1. F. Gray. Pulse code communication, March 17, 1953 (filed Nov. 1947). U.S. Patent 2,632,058
  2. J. Breckman. Encoding Circuit, Jan 31, 1956 (filed Dec. 1953). U.S. Patent 2,733,432
  3. E. A. Ragland et al. Direction-Sensitive Binary Code Position Control System, Feb. 11, 1958 (filed Oct. 1953). U.S. Patent 2,823,345
  4. S. Reiner et al. Automatic Rectification System, Jun 24, 1958 (filed Jan. 1954). U.S. Patent 2,839,974
  5. Pickover, Clifford A. (2009). The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics. Sterling Publishing Company. p. 392. ISBN 9781402757969.
  6. Knuth, Donald E. "Generating all n-tuples." The Art of Computer Programming, Volume 4A: Enumeration and Backtracking, pre-fascicle 2a, October 15, 2004.
  7. Cattermole, K. W. (1969). Principles of Pulse Code Modulation. New York: American Elsevier. ISBN 0-444-19747-8.
  8. Goodall, W. M. (1951). "Television by Pulse Code Modulation". Bell Sys. Tech. J. 30: 33–49.
  9. Wakerly, John F (1994). Digital Design: Principles & Practices. New Jersey: Prentice Hall. pp. 222, 48–49. ISBN 0-13-211459-3. Note that the two page sections taken together say that K-maps are labeled with Gray code. The first section says that they are labeled with a code that changes only one bit between entries and the second section says that such a code is called Gray code.
  10. "Synchronization in Digital Logic Circuits by Ryan Donohue
  11. Mehta, H.; Owens, R. M. & Irwin, M. J. (1996). "Some issues in gray code addressing". Proceedings of the 6th Great Lakes Symposium on VLSI (GLSVLSI '96), IEEE Computer Society, p. 178.
  12. The Gray Code by R. W. Doran
  13. Henry Gordon Dietz. "The Aggregate Magic Algorithms: Gray Code Conversion"
  14. Guan, Dah-Jyh (1998). "Generalized Gray Codes with Applications". Proc. Natl. Sci. Counc. Repub. of China (A) 22: 841–848. CiteSeerX: 10.1.1.119.1344.
  15. Bhat, Girish S.; Savage, Carla D. (1996). "Balanced Gray codes". Electronic Journal of Combinatorics 3 (1): R25.
  16. Suparta, I. N. (2005). "A simple proof for the existence of exponentially balanced Gray codes". Electronic Journal of Combinatorics 12.
  17. M. Flahive and B. Bose (2007). "Balancing cyclic R-ary Gray codes". Electronic Journal of Combinatorics 14.
  18. C. D. Savage and P. Winkler (1995). "Monotone Gray codes and the middle levels problem". Journal of Combinatorial Theory, Series A 70 (2): 230–248. doi:10.1016/0097-3165(95)90091-8. ISSN 0097-3165.
  19. C. D Savage (1997). "Long cycles in the middle two levels of the Boolean lattice".
  20. Goddyn, Luis (1999). "MATH 343 Applied Discrete Math Supplementary Materials" (PDF). Dept. of Math, Simon Fraser U.
  21. Sawada, J.; Wong, D. (2007). "A Fast Algorithm to generate Beckett–Gray codes". Electronic Notes in Discrete Mathematics 29: 571–577. doi:10.1016/j.endm.2007.07.091.
  22. Kautz, W. H. (1958). "Unit-distance error-checking codes". IRE Trans. Elect. Comput. 7: 177–180.
  23. NZ 264738, Spedding, Norman Bruce, "A position encoder", published 1994. A claim is at http://www.winzurf.co.nz/Single_Track_Grey_Code_Patent/Single_track_Grey_code_encoder_patent.pdf
  24. Hiltgen, Alain P.; Paterson, Kenneth G.; Brandestini, Marco (1996). "Single-Track Gray Codes" (PDF). IEEE Transactions on Information Theory 42 (5): 1555–1561. doi:10.1109/18.532900.
  25. Hiltgen, Alain P.; Paterson, Kenneth G. (September 2001). "Single-Track Circuit Codes" (PDF). IEEE Transactions on Information Theory 47 (6): 2587–2595. doi:10.1109/18.945274. (no mention of Spedding)
  26. Etzion, Tuvi; Schwartz, Moshe (1999). "The Structure of Single-Track Gray Codes" (PDF). IEEE Transactions on Information Theory 45 (7): 2383–2396. doi:10.1109/18.796379.
  27. Torsten Sillke. "Gray-Codes with few tracks".
  28. Etzion, Tuvi; Paterson, Kenneth G. (May 1996). "Near Optimal Single-Track Gray Codes" (PDF). IEEE Transactions on Information Theory 42 (3): 779–789. doi:10.1109/18.490544.
  29. Ruskey, Frank; Weston, Mark (June 18, 2005). "A Survey of Venn Diagrams: Symmetric Diagrams". Electronic Journal of Combinatorics.
  30. Alciatore, David G.; Histand, Michael B. (1999). Mechatronics. McGraw–Hill Education – Europe. ISBN 978-0-07-131444-2.
  31. http://www.dsprelated.com/showthread/comp.dsp/96917-1.php
  32. Marcus Greferath (2009). "An Introduction to Ring-Linear Coding Theory". In Massimiliano Sala, Teo Mora, Ludovic Perret, Shojiro Sakata, Carlo Traverso. Gröbner Bases, Coding, and Cryptography. Springer Science & Business Media. p. 220. ISBN 978-3-540-93806-4.
  33. http://www.encyclopediaofmath.org/index.php/Kerdock_and_Preparata_codes


External links
