In coding theory, the BCH codes form a class of cyclic error-correcting codes that are constructed using finite fields. BCH codes were invented in 1959 by Hocquenghem, and independently in 1960 by Bose and Ray-Chaudhuri.[1] The abbreviation BCH comprises the initials of these inventors' names.
One of the key features of BCH codes is that during code design, there is a precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely, via an algebraic method known as syndrome decoding. This simplifies the design of the decoder for these codes, using small low-power electronic hardware.
BCH codes are used in applications like satellite communications,[2] compact disc players, DVDs, disk drives, solid-state drives[3] and two-dimensional bar codes.
Construction
Primitive narrow-sense BCH codes
For a given prime q and positive integers m and d with d ≤ q^m − 1, a primitive narrow-sense BCH code over the finite field GF(q) with code length n = q^m − 1 and minimum distance at least d is constructed by the following method.
Let α be a primitive element of GF(q^m). For any positive integer i, let m_i(x) be the minimal polynomial of α^i over GF(q). The generator polynomial of the BCH code is defined as the least common multiple g(x) = lcm(m_1(x), …, m_{d−1}(x)). It can be seen that g(x) is a polynomial with coefficients in GF(q) and divides x^n − 1. Therefore, the polynomial code defined by g(x) is a cyclic code.
Example
Let q = 2 and m = 4 (therefore n = 15). We will consider different values of d. There is a primitive root α in GF(16) satisfying

- α^4 + α + 1 = 0    (1)

Its minimal polynomial over GF(2) is

- m_1(x) = x^4 + x + 1.

Note that in GF(2), the equation (a + b)^2 = a^2 + b^2 holds, and therefore m_1(α^2) = m_1(α)^2 = 0. Thus α^2 is a root of m_1(x), and therefore

- m_2(x) = m_1(x) = x^4 + x + 1.
To compute m_3(x), notice that, by repeated application of (1), we have the following linear relations:

- 1 = 0·α^3 + 0·α^2 + 0·α + 1
- α^3 = 1·α^3 + 0·α^2 + 0·α + 0
- α^6 = 1·α^3 + 1·α^2 + 0·α + 0
- α^9 = 1·α^3 + 0·α^2 + 1·α + 0
- α^12 = 1·α^3 + 1·α^2 + 1·α + 1

Five right-hand sides of length four must be linearly dependent, and indeed we find the linear dependency α^12 + α^9 + α^6 + α^3 + 1 = 0. Since there is no dependency of smaller degree, the minimal polynomial of α^3 is

- m_3(x) = x^4 + x^3 + x^2 + x + 1.

Continuing in a similar manner, we find

- m_5(x) = x^2 + x + 1,
- m_7(x) = x^4 + x^3 + 1.
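These hand computations can be cross-checked by machine. The following sketch (plain Python; the GF(16) tables and the helper names gmul, poly_mul, minimal_poly are our own, not from the article) builds GF(16) from relation (1) and computes each minimal polynomial as the product of x − α^j over the conjugates j, 2j, 4j, … of the given exponent:

```python
# GF(16) built from the primitive polynomial x^4 + x + 1 (relation (1)).
PRIM = 0b10011
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ PRIM if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def gmul(a, b):
    """Multiply two GF(16) elements via the log/antilog tables."""
    return 0 if 0 in (a, b) else exp[(log[a] + log[b]) % 15]

def poly_mul(p, q):
    """Multiply polynomials over GF(16); coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gmul(a, b)  # field addition is XOR
    return r

def minimal_poly(i):
    """Minimal polynomial of alpha^i: product of (x + alpha^j) over conjugates."""
    conjugates, j = [], i % 15
    while j not in conjugates:
        conjugates.append(j)
        j = (2 * j) % 15  # Frobenius: conjugates are i, 2i, 4i, ... (mod 15)
    m = [1]
    for j in conjugates:
        m = poly_mul(m, [exp[j], 1])  # factor x + alpha^j
    return m

print(minimal_poly(1))  # [1, 1, 0, 0, 1]     ->  x^4 + x + 1
print(minimal_poly(3))  # [1, 1, 1, 1, 1]     ->  x^4 + x^3 + x^2 + x + 1
print(minimal_poly(5))  # [1, 1, 1]           ->  x^2 + x + 1
print(minimal_poly(7))  # [1, 0, 0, 1, 1]     ->  x^4 + x^3 + 1
```

The coefficients land in the subfield GF(2), as the theory predicts.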
The BCH code with d = 2, 3 has generator polynomial

- g(x) = m_1(x) = x^4 + x + 1.

It has minimal Hamming distance at least 3 and corrects up to 1 error. Since the generator polynomial is of degree 4, this code has 11 data bits and 4 checksum bits.
The BCH code with d = 4, 5 has generator polynomial

- g(x) = m_1(x) m_3(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1.

It has minimal Hamming distance at least 5 and corrects up to 2 errors. Since the generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits.
The BCH code with d = 6, 7 has generator polynomial

- g(x) = m_1(x) m_3(x) m_5(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1.

It has minimal Hamming distance at least 7 and corrects up to 3 errors. This code has 5 data bits and 10 checksum bits.
The BCH code with d = 8 and higher has generator polynomial

- g(x) = m_1(x) m_3(x) m_5(x) m_7(x) = (x^15 − 1)/(x − 1) = x^14 + x^13 + ⋯ + x + 1.

This code has minimal Hamming distance 15 and corrects 7 errors. It has 1 data bit and 14 checksum bits. In fact, this code has only two codewords: 000000000000000 and 111111111111111.
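Because the minimal polynomials involved are pairwise distinct, each least common multiple above is a plain product. A quick cross-check with GF(2) polynomials packed into Python integers (bit i holds the coefficient of x^i; the helper mul2 is ours, not from the article):

```python
def mul2(a, b):
    """Carry-less multiply: product of two GF(2) polynomials stored as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# minimal polynomials from the example, lowest bit = constant term
m1 = 0b10011   # x^4 + x + 1
m3 = 0b11111   # x^4 + x^3 + x^2 + x + 1
m5 = 0b111     # x^2 + x + 1
m7 = 0b11001   # x^4 + x^3 + 1

g5 = mul2(m1, m3)    # generator for d = 4, 5
g7 = mul2(g5, m5)    # generator for d = 6, 7
g15 = mul2(g7, m7)   # generator for d = 8 and higher

print(bin(g5))   # 0b111010001    = x^8 + x^7 + x^6 + x^4 + 1
print(bin(g7))   # 0b10100110111  = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
print(bin(g15))  # fifteen 1 bits = (x^15 + 1)/(x + 1)
```

Multiplying the last product by x + 1 recovers x^15 + 1, confirming the factorization.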
General BCH codes
General BCH codes differ from primitive narrow-sense BCH codes in two respects.
First, the requirement that α be a primitive element of GF(q^m) can be relaxed. By relaxing this requirement, the code length changes from q^m − 1 to n, the order of the element α.
Second, the consecutive roots of the generator polynomial may run from α^c, …, α^{c+d−2} instead of α, …, α^{d−1}.
Definition. Fix a finite field GF(q), where q is a prime power. Choose positive integers m, n, d, c such that 2 ≤ d ≤ n, gcd(n, q) = 1, and m is the multiplicative order of q modulo n.
As before, let α be a primitive nth root of unity in GF(q^m), and let m_i(x) be the minimal polynomial over GF(q) of α^i for all i. The generator polynomial of the BCH code is defined as the least common multiple g(x) = lcm(m_c(x), …, m_{c+d−2}(x)).
Note: if n = q^m − 1 as in the simplified definition, then gcd(n, q) is automatically 1, and the order of q modulo n is automatically m. Therefore, with c = 1 the simplified definition is indeed a special case of the general one.
Properties
1. The generator polynomial of a BCH code has degree at most (d − 1)m. Moreover, if q = 2 and c = 1, the generator polynomial has degree at most dm/2.
- Proof: each minimal polynomial m_i(x) has degree at most m. Therefore, the least common multiple of d − 1 of them has degree at most (d − 1)m. Moreover, if q = 2, then m_i(x) = m_{2i}(x) for all i. Therefore, g(x) is the least common multiple of at most d/2 minimal polynomials m_i(x) for odd indices i, each of degree at most m.
2. A BCH code has minimal Hamming distance at least d. Proof: We only give the proof in the simplified case; the general case is similar. Suppose that p(x) is a code word with fewer than d non-zero terms. Then

- p(x) = b_1 x^{k_1} + ⋯ + b_{d−1} x^{k_{d−1}},  where k_1 < k_2 < ⋯ < k_{d−1}.

Recall that α, α^2, …, α^{d−1} are roots of g(x), hence of p(x). This implies that b_1, …, b_{d−1} satisfy the following equations, for i = 1, …, d − 1:

- p(α^i) = b_1 α^{i k_1} + b_2 α^{i k_2} + ⋯ + b_{d−1} α^{i k_{d−1}} = 0.

In matrix form, we have

- [ α^{k_1}         α^{k_2}         ⋯  α^{k_{d−1}}         ] [ b_1     ]   [ 0 ]
  [ α^{2 k_1}       α^{2 k_2}       ⋯  α^{2 k_{d−1}}       ] [ b_2     ] = [ 0 ]
  [ ⋮                                                      ] [ ⋮       ]   [ ⋮ ]
  [ α^{(d−1) k_1}   α^{(d−1) k_2}   ⋯  α^{(d−1) k_{d−1}}   ] [ b_{d−1} ]   [ 0 ]

The determinant of this matrix equals

- α^{k_1 + k_2 + ⋯ + k_{d−1}} · det(V),  where V_{i,j} = α^{(i−1) k_j},

obtained by factoring α^{k_j} out of column j. The matrix V is seen to be a Vandermonde matrix, and its determinant is

- det(V) = ∏_{1 ≤ i < j ≤ d−1} (α^{k_j} − α^{k_i}),

which is non-zero, since the powers α^{k_i} are distinct. It therefore follows that b_1 = ⋯ = b_{d−1} = 0, hence p(x) = 0.
3. A BCH code is cyclic.
Proof: A polynomial code of length n is cyclic if and only if its generator polynomial divides x^n − 1. Since g(x) is the minimal polynomial with roots α^c, …, α^{c+d−2}, it suffices to check that each of α^c, …, α^{c+d−2} is a root of x^n − 1. This follows immediately from the fact that α is, by definition, an nth root of unity.
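For the length-15 example above, this divisibility can be checked directly (GF(2) polynomials as bitmasks, bit i = coefficient of x^i; the helper mod2 is our own, not from the article):

```python
def mod2(a, b):
    """Remainder of a divided by b, for GF(2) polynomials stored as bitmasks."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

x15_1 = (1 << 15) | 1  # x^15 + 1 (the same as x^15 - 1 over GF(2))
# the four generator polynomials from the example, for d = 3, 5, 7, 15
generators = [0b10011, 0b111010001, 0b10100110111, (1 << 15) - 1]
for g in generators:
    assert mod2(x15_1, g) == 0  # each generator divides x^15 + 1
print("all four generators divide x^15 + 1")
```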
Special cases
- A BCH code with c = 1 is called a narrow-sense BCH code.
- A BCH code with n = q^m − 1 is called primitive.
The generator polynomial g(x) of a BCH code has coefficients from GF(q). The same polynomial also belongs to the polynomial ring over any larger field GF(q^s), since GF(q) is a subfield of GF(q^s). In general, a cyclic code over GF(q^s) with g(x) as the generator polynomial is called a BCH code over GF(q^s). The BCH code over GF(q^m) with g(x) = (x − α^c)(x − α^{c+1}) ⋯ (x − α^{c+d−2}) as the generator polynomial is called a Reed-Solomon code. In other words, a Reed-Solomon code is a BCH code where the decoder alphabet is the same as the channel alphabet.[4]
Decoding
There are many algorithms for decoding BCH codes. The most common ones follow this general outline:
- Calculate the syndromes s_j for the received vector
- Determine the number of errors t and the error locator polynomial Λ(x) from the syndromes
- Calculate the roots of the error locator polynomial to find the error locations X_i
- Calculate the error values Y_i at those error locations
Calculate the syndromes
The received vector R(x) is the sum of the correct codeword C(x) and an unknown error vector E(x). The syndrome values are formed by considering R(x) as a polynomial and evaluating it at α^c, …, α^{c+d−2}. Thus the syndromes are[5]

- s_j = R(α^j) = C(α^j) + E(α^j)

for j = c to c + d − 2. Since the α^j are zeros of g(x), of which C(x) is a multiple, C(α^j) = 0 and s_j = E(α^j). Examining the syndrome values thus isolates the error vector so we can begin to solve for it.
If there is no error, s_j = 0 for all j. If the syndromes are all zero, then the decoding is done.
Calculate the error location polynomial
If there are nonzero syndromes, then there are errors. The decoder needs to figure out how many errors there are and where they are located.
If there is a single error, write the error polynomial as E(x) = e x^i, where i is the location of the error and e is its magnitude. Then the first two syndromes are

- s_c = e α^{c i}
- s_{c+1} = e α^{(c+1) i} = α^i s_c

so together they determine the error location, α^i = s_{c+1}/s_c, and then the magnitude, e = s_c α^{−c i} (for binary codes e is simply 1).
If there are two or more errors,

- E(x) = e_1 x^{i_1} + e_2 x^{i_2} + ⋯
It is not immediately obvious how to begin solving the resulting syndromes for the unknown error magnitudes e_k and locations i_k. Two popular algorithms for this task are the Peterson–Gorenstein–Zierler algorithm and the Berlekamp–Massey algorithm.
Peterson–Gorenstein–Zierler algorithm
Peterson's algorithm is step 2 of the generalized BCH decoding procedure. We use Peterson's algorithm to calculate the error locator polynomial coefficients λ_1, λ_2, …, λ_v of the polynomial

- Λ(x) = 1 + λ_1 x + λ_2 x^2 + ⋯ + λ_v x^v.
Now the procedure of the Peterson–Gorenstein–Zierler algorithm[6] for a given BCH code designed to correct t errors is:

- First generate the 2t syndromes s_1, s_2, …, s_{2t}.
- Next generate the S_{t×t} matrix with elements that are syndrome values:

  [ s_1   s_2     ⋯  s_t      ]
  [ s_2   s_3     ⋯  s_{t+1}  ]
  [ ⋮                         ]
  [ s_t   s_{t+1} ⋯  s_{2t−1} ]

- Generate a C_{t×1} vector with elements C = [ s_{t+1}, s_{t+2}, …, s_{2t} ]^T.
- Let Λ denote the unknown polynomial coefficients, Λ = [ λ_t, λ_{t−1}, …, λ_1 ]^T.
- Form the matrix equation S_{t×t} Λ = −C_{t×1} (over GF(2^m) the minus sign can be dropped).
- If the determinant of the matrix S_{t×t} is nonzero, then we can find the inverse of this matrix and solve for the unknown coefficients λ_1, …, λ_t.
- If det(S_{t×t}) = 0, then follow:
  if t = 0, then declare an empty error locator polynomial and stop the Peterson procedure; otherwise set t ← t − 1 and continue from the beginning of Peterson's decoding.
- After you have the values of Λ, you have the error locator polynomial.
- Stop the Peterson procedure.
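The procedure above can be sketched concretely for the binary codes of the example, over GF(16) and with Gaussian elimination standing in for the matrix inversion. All helper names (gmul, ginv, solve, pgz) are our own; this is an illustrative sketch, not an optimized decoder:

```python
# Peterson-Gorenstein-Zierler sketch over GF(16) built from x^4 + x + 1.
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ 0b10011 if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def gmul(a, b):
    return 0 if 0 in (a, b) else exp[(log[a] + log[b]) % 15]

def ginv(a):
    return exp[(15 - log[a]) % 15]

def solve(S, c):
    """Solve S x = c over GF(16) by Gauss-Jordan; return None if S is singular."""
    n = len(S)
    A = [row[:] + [c[i]] for i, row in enumerate(S)]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col]), None)
        if piv is None:
            return None  # no pivot: singular matrix
        A[col], A[piv] = A[piv], A[col]
        inv = ginv(A[col][col])
        A[col] = [gmul(inv, x) for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x ^ gmul(f, y) for x, y in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

def pgz(s, t):
    """Return [lambda_1, ..., lambda_v] from syndromes s = [s_1, ..., s_2t]."""
    v = t
    while v > 0:
        S = [[s[i + j] for j in range(v)] for i in range(v)]
        c = [s[v + i] for i in range(v)]
        sol = solve(S, c)          # sol = [lambda_v, ..., lambda_1]
        if sol is not None:
            return sol[::-1]
        v -= 1                     # singular: assume one error fewer
    return []                      # no errors located

print(pgz([0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001], 3))
# [11, 8], i.e. lambda_1 = 1011, lambda_2 = 1000
```

The syndromes fed in here are the ones computed in the decoding example below; the 3×3 system is singular, so the sketch retries with v = 2 and succeeds.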
Factor error locator polynomial
Now that you have the Λ(x) polynomial, you can find its roots in the form Λ(x) = (α^{i_1} x + 1)(α^{i_2} x + 1) ⋯ (α^{i_v} x + 1) using the Chien search algorithm. The exponents i_1, …, i_v of the primitive element α yield the positions where errors occur in the received word; hence the name 'error locator' polynomial.

The zeros of Λ(x) are X_1^{−1}, …, X_v^{−1}: the reciprocals of the error locations X_j = α^{i_j}.
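For a field as small as GF(16), a brute-force substitution of all nonzero elements does the same job as Chien search (Chien's contribution is evaluating the successive candidates incrementally rather than from scratch). A sketch with our own helper names:

```python
# GF(16) tables from x^4 + x + 1, elements as 4-bit integers.
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ 0b10011 if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def gmul(a, b):
    return 0 if 0 in (a, b) else exp[(log[a] + log[b]) % 15]

def error_locations(lam):
    """lam = [1, lambda_1, ..., lambda_v], lowest degree first.
    Return the positions i such that Lambda(alpha^-i) = 0, since the zeros
    of Lambda are the reciprocals of the error locations X_j = alpha^{i_j}."""
    locs = []
    for i in range(15):
        x = exp[(-i) % 15]          # candidate zero alpha^-i
        acc, xp = 0, 1
        for coef in lam:            # Horner would also do; this is plain evaluation
            acc ^= gmul(coef, xp)
            xp = gmul(xp, x)
        if acc == 0:
            locs.append(i)
    return locs

# error locator from the decoding example: Lambda(x) = 1000 x^2 + 1011 x + 1
print(error_locations([1, 0b1011, 0b1000]))  # [5, 13]
```

The two reported positions match the bit errors introduced in the worked example below.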
Calculate error values
Once the error locations are known, the next step is to determine the error values at those locations. The error values are then used to correct the received values at those locations to recover the original codeword.
For the case of binary BCH, this is trivial; just flip the bits for the received word at these positions, and we have the corrected code word. In the more general case, the error weights e_j can be determined by solving the linear system

- s_c = e_1 α^{c i_1} + e_2 α^{c i_2} + ⋯
- s_{c+1} = e_1 α^{(c+1) i_1} + e_2 α^{(c+1) i_2} + ⋯
- ⋮
However, there is a more efficient method known as the Forney algorithm, which is based on Lagrange interpolation. First calculate the error evaluator polynomial[7]

- Ω(x) ≡ S(x) Λ(x) mod x^{2t},  where S(x) = s_c + s_{c+1} x + ⋯ + s_{c+2t−1} x^{2t−1}.
Then evaluate the error values:[7]

- e_j = − X_j^{1−c} Ω(X_j^{−1}) / Λ'(X_j^{−1})

For narrow-sense BCH codes, c = 1, so the expression simplifies to:

- e_j = − Ω(X_j^{−1}) / Λ'(X_j^{−1})
Λ'(x) is the formal derivative of the error locator polynomial Λ(x):[7]

- Λ'(x) = Σ_{i=1}^{v} i · λ_i x^{i−1}

- In the expression above, note that i is an integer while λ_i is a field element; the operation i · λ_i designates i repeated additions of λ_i, not the field's multiplication operation.
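In characteristic 2, this convention makes the even-index terms vanish upon differentiation, so for binary BCH codes Λ'(x) keeps only the odd-index coefficients of Λ(x). A one-line sketch (coefficient lists over GF(2^m), lowest degree first; the helper name is ours):

```python
def formal_derivative(lam):
    """Formal derivative in characteristic 2: the integer factor i means
    i repeated additions (XOR), so it is 0 for even i and lam[i] for odd i."""
    return [c if i % 2 == 1 else 0 for i, c in enumerate(lam)][1:]

# error locator from the decoding example: Lambda(x) = 1000 x^2 + 1011 x + 1
print(formal_derivative([1, 0b1011, 0b1000]))  # [11, 0]: the constant 1011
```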
Decoding example
Consider the code defined above with d = 7 and generator polynomial g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 in GF(2^4). (This generator is used in the QR code.) Let the message to be transmitted be [1 1 0 1 1], or in polynomial notation, M(x) = x^4 + x^3 + x + 1. The "checksum" symbols are calculated by dividing x^10 M(x) by g(x) and taking the remainder, resulting in x^9 + x^4 + x^2 or [ 1 0 0 0 0 1 0 1 0 0 ]. These are appended to the message, so the transmitted codeword is [ 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 ].
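The checksum computation is ordinary polynomial long division over GF(2). A sketch with polynomials as bitmasks (bit i = coefficient of x^i), assuming the d = 7 generator g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1; the helper mod2 is our own:

```python
def mod2(a, b):
    """Remainder of a divided by b, for GF(2) polynomials stored as bitmasks."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

g = 0b10100110111             # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
msg = 0b11011                 # message [1 1 0 1 1] = x^4 + x^3 + x + 1
checksum = mod2(msg << 10, g)  # remainder of x^10 M(x) divided by g(x)
codeword = (msg << 10) | checksum

print(format(checksum, '010b'))   # 1000010100
print(format(codeword, '015b'))   # 110111000010100
assert mod2(codeword, g) == 0     # every codeword is a multiple of g(x)
```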
Now, imagine that there are two bit-errors in the transmission, so the received codeword is [ 1 0 0 1 1 1 0 0 0 1 1 0 1 0 0 ]. In polynomial notation:

- R(x) = C(x) + x^13 + x^5 = x^14 + x^11 + x^10 + x^9 + x^5 + x^4 + x^2
In order to correct the errors, first calculate the syndromes. Taking α = 0010, we have s_1 = R(α) = 1011, s_2 = 1001, s_3 = 1011, s_4 = 1101, s_5 = 0001, and s_6 = 1001. Next, apply the Peterson procedure by row-reducing the following augmented matrix:

- [S_{3×3} | C_{3×1}] =
  [ 1011 1001 1011 | 1101 ]      [ 0001 0000 1000 | 0111 ]
  [ 1001 1011 1101 | 0001 ]  →   [ 0000 0001 1011 | 0001 ]
  [ 1011 1101 0001 | 1001 ]      [ 0000 0000 0000 | 0000 ]
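The syndrome values can be reproduced directly from the received word (GF(16) tables built from relation (1); the helper poly_eval is ours, not from the article):

```python
# GF(16) from x^4 + x + 1, elements as 4-bit integers (b3 b2 b1 b0).
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ 0b10011 if v & 0b10000 else v)

def poly_eval(bits, e):
    """Evaluate a GF(2) polynomial (bitmask) at alpha^e in GF(16)."""
    acc = 0
    for i in range(bits.bit_length()):
        if (bits >> i) & 1:
            acc ^= exp[(i * e) % 15]   # x^i evaluated at alpha^e is alpha^(ie)
    return acc

R = 0b100111000110100   # received word [1 0 0 1 1 1 0 0 0 1 1 0 1 0 0]
print([format(poly_eval(R, j), '04b') for j in range(1, 7)])
# ['1011', '1001', '1011', '1101', '0001', '1001']
```

These match s_1 through s_6 above; note also that s_2 = s_1^2 and s_4 = s_2^2, as always holds for binary codes.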
Due to the zero row, S_{3×3} is singular, which is no surprise since only two errors were introduced into the codeword. However, the upper-left corner of the matrix is identical to [S_{2×2} | C_{2×1}], which gives rise to the solution λ_2 = 1000, λ_1 = 1011. The resulting error locator polynomial is Λ(x) = 1000 x^2 + 1011 x + 0001, which has zeros at 0100 = α^{−13} and 0111 = α^{−5}. The exponents of α correspond to the error locations. There is no need to calculate the error values in this example, as the only possible value is 1.
Citations
- ^ Reed & Chen 1999, p. 189
- ^ "Phobos Lander Coding System: Software and Analysis". Retrieved 25 February 2012.
- ^ "Sandforce SF-2500/2600 Product Brief". Retrieved 25 February 2012.
- ^ Gill unknown, p. 3
- ^ Lidl & Pilz 1999, p. 229
- ^ Gorenstein, Peterson & Zierler 1960
- ^ a b c Gill unknown, p. 47
References
Primary sources
- Hocquenghem, A. (September 1959), "Codes correcteurs d'erreurs" (in French), Chiffres (Paris) 2: 147–156
- Bose, R. C.; Ray-Chaudhuri, D. K. (March 1960), "On A Class of Error Correcting Binary Group Codes", Information and Control 3 (1): 68–79, ISSN 0890-5401
Secondary sources
- Gilbert, W. J.; Nicholson, W. K. (2004), Modern Algebra with Applications (2nd ed.), John Wiley
- Gill, John (unknown), EE387 Notes #7, Handout #28, Stanford University, pp. 42–45, retrieved April 21, 2010
- Gorenstein, Daniel; Peterson, W. Wesley; Zierler, Neal (1960), "Two-Error Correcting Bose-Chaudhuri Codes are Quasi-Perfect", Information and Control 3 (3): 291–294
- Lidl, Rudolf; Pilz, Günter (1999), Applied Abstract Algebra (2nd ed.)
- Lin, S.; Costello, D. (2004), Error Control Coding: Fundamentals and Applications, Englewood Cliffs, NJ: Prentice-Hall
- MacWilliams, F. J.; Sloane, N. J. A. (1977), The Theory of Error-Correcting Codes, New York, NY: North-Holland Publishing Company
- Reed, Irving S.; Chen, Xuemin (1999), Error-Control Coding for Data Networks, Boston, MA: Kluwer Academic Publishers, ISBN 0-7923-8528-4
- Rudra, Atri, CSE 545, Error Correcting Codes: Combinatorics, Algorithms and Applications, University at Buffalo, retrieved April 21, 2010