
BCH code


In coding theory, the BCH codes form a class of cyclic error-correcting codes that are constructed using finite fields. BCH codes were invented in 1959 by Hocquenghem, and independently in 1960 by Bose and Ray-Chaudhuri.[1] The abbreviation BCH comprises the initials of these inventors' names.

One of the key features of BCH codes is that during code design, there is a precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely, via an algebraic method known as syndrome decoding. This simplifies the design of the decoder for these codes, using small low-power electronic hardware.

BCH codes are used in applications like satellite communications,[2] compact disc players, DVDs, disk drives, solid-state drives[3] and two-dimensional bar codes.


Construction

Primitive Narrow-sense BCH Codes

For a given prime power q and positive integers m and d with d \leq q^m - 1, a primitive narrow-sense BCH code over the finite field \mathrm{GF}(q) with code length n = q^m - 1 and minimum distance at least d is constructed by the following method.

Let \alpha be a primitive element of \mathrm{GF}(q^m). For any positive integer i, let m_i(x) be the minimal polynomial of \alpha^i. The generator polynomial of the BCH code is defined as the least common multiple g(x) = {\rm lcm}(m_1(x),\ldots,m_{d-1}(x)). It can be seen that g(x) is a polynomial with coefficients in \mathrm{GF}(q) and divides x^n - 1. Therefore, the polynomial code defined by g(x) is a cyclic code.
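The construction can be illustrated concretely with a short Python sketch (a hedged illustration rather than a reference implementation), specialized to q = 2, m = 4 and d = 5, with GF(16) built from the primitive polynomial x^4 + x + 1 used in the example below. Binary polynomials are encoded as integer bitmasks (bit i is the coefficient of x^i), and the helper names are ad hoc.

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1); reduce as the product grows.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def minimal_poly(elem):
    # Minimal polynomial of elem over GF(2): the product of (x + c) over the
    # conjugates c = elem, elem^2, elem^4, ...; the coefficients land in GF(2).
    conj, c = [], elem
    while c not in conj:
        conj.append(c)
        c = gf16_mul(c, c)
    poly = [1]                       # coefficients in GF(16), lowest degree first
    for c in conj:
        poly = ([gf16_mul(c, poly[0])] +
                [poly[i - 1] ^ gf16_mul(c, poly[i]) for i in range(1, len(poly))] +
                [poly[-1]])
    return sum(bit << i for i, bit in enumerate(poly))   # return as a GF(2) bitmask

def pmul(a, b):
    # Plain polynomial multiplication over GF(2) (no reduction).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

d, alpha = 5, 0b0010
apow = {0: 0b0001}
for i in range(1, d):
    apow[i] = gf16_mul(apow[i - 1], alpha)

# lcm(m_1, ..., m_{d-1}) is the product of the distinct minimal polynomials,
# since minimal polynomials are irreducible over GF(2).
g = 1
for m in {minimal_poly(apow[i]) for i in range(1, d)}:
    g = pmul(g, m)
print(bin(g))    # 0b111010001, i.e. x^8 + x^7 + x^6 + x^4 + 1 (the d = 4,5 code below)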

Example

Let q=2 and m=4 (therefore n=15). We will consider different values of d. There is a primitive root \alpha\in GF(16) satisfying

\alpha^4+\alpha+1=0

(1)

Its minimal polynomial over GF(2) is m_1(x) = x^4+x+1. Note that in GF(2^4), the equation (a+b)^2 = a^2 + ab + ab + b^2 = a^2 + b^2 holds, and therefore m_1(\alpha^2) = m_1(\alpha)^2 = 0. Thus \alpha^2 is a root of m_1(x), and therefore

m_2(x) = m_1(x) = x^4+x+1.

To compute m_3(x), notice that, by repeated application of (1), we have the following linear relations:

  • 1 = 0\alpha^3 + 0\alpha^2 + 0\alpha + 1
  • \alpha^3 = 1\alpha^3 + 0\alpha^2 + 0\alpha + 0
  • \alpha^6 = 1\alpha^3 + 1\alpha^2 + 0\alpha + 0
  • \alpha^9 = 1\alpha^3 + 0\alpha^2 + 1\alpha + 0
  • \alpha^{12} = 1\alpha^3 + 1\alpha^2 + 1\alpha + 1

Five right-hand sides of length four must be linearly dependent, and indeed we find a linear dependency \alpha^{12}+\alpha^9+\alpha^6+\alpha^3+1=0. Since there is no dependency of smaller degree, the minimal polynomial of \alpha^3 is m_3(x) = x^4+x^3+x^2+x+1. Continuing in a similar manner, we find

m_4(x) = m_2(x) = m_1(x) = x^4+x+1,\,
m_5(x) = x^2+x+1,\,
m_6(x) = m_3(x) = x^4+x^3+x^2+x+1,\,
m_7(x) = x^4+x^3+1.\,
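The coincidences m_2(x) = m_4(x) = m_1(x) and m_6(x) = m_3(x) all come from the squaring identity noted above: \alpha^i and \alpha^{2i} always share a minimal polynomial. A tiny sketch (illustrative only) lists the cyclotomic cosets of 2 modulo 15, i.e. the exponent sets of the roots of each minimal polynomial:

n, q = 15, 2
seen, cosets = set(), []
for i in range(1, n):
    if i not in seen:
        coset, j = [], i
        while j not in coset:
            coset.append(j)
            seen.add(j)
            j = (j * q) % n          # alpha^j and alpha^{j*q} share a minimal polynomial
        cosets.append(coset)
print(cosets)
# [[1, 2, 4, 8], [3, 6, 12, 9], [5, 10], [7, 14, 13, 11]]
# -> degrees 4, 4, 2, 4, matching m_1, m_3, m_5, m_7 above.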

The BCH code with d=2,3 has generator polynomial

g(x) = m_1(x) = x^4+x+1.\,

It has minimal Hamming distance at least 3 and corrects up to 1 error. Since the generator polynomial is of degree 4, this code has 11 data bits and 4 checksum bits.

The BCH code with d=4,5 has generator polynomial

g(x) = {\rm lcm}(m_1(x),m_3(x)) = (x^4+x+1)(x^4+x^3+x^2+x+1) = x^8+x^7+x^6+x^4+1.\,

It has minimal Hamming distance at least 5 and corrects up to 2 errors. Since the generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits.

The BCH code with d=6,7 has generator polynomial

 \begin{align} g(x) & {} = {\rm lcm}(m_1(x),m_3(x),m_5(x)) \\ & {} = (x^4+x+1)(x^4+x^3+x^2+x+1)(x^2+x+1) \\ & {} = x^{10}+x^8+x^5+x^4+x^2+x+1. \end{align}

It has minimal Hamming distance at least 7 and corrects up to 3 errors. This code has 5 data bits and 10 checksum bits.

The BCH code with d=8 and higher has generator polynomial

 \begin{align} g(x) & {} = {\rm lcm}(m_1(x),m_3(x),m_5(x),m_7(x)) \\ & {} = (x^4+x+1)(x^4+x^3+x^2+x+1)(x^2+x+1)(x^4+x^3+1) \\ & {} = x^{14}+x^{13}+x^{12}+\cdots+x^2+x+1. \end{align}

This code has minimal Hamming distance 15 and corrects 7 errors. It has 1 data bit and 14 checksum bits. In fact, this code has only two codewords: 000000000000000 and 111111111111111.
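As a quick sanity check on these four generator polynomials, the following sketch (illustrative only; the bitmask encoding and helper name are not from the text) verifies that each g(x) divides x^{15} + 1 over GF(2), so each code is cyclic of length 15, and recovers the quoted numbers of data bits as k = n - \deg g(x):

def pdivmod(a, b):
    # Polynomial division over GF(2); polynomials are bitmasks (bit i = coefficient of x^i).
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q |= 1 << shift
        a ^= b << shift
    return q, a                      # quotient, remainder

n = 15
x15_plus_1 = (1 << 15) | 1           # x^15 + 1
gens = {
    3: 0b10011,                      # x^4 + x + 1
    5: 0b111010001,                  # x^8 + x^7 + x^6 + x^4 + 1
    7: 0b10100110111,                # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
    15: (1 << 15) - 1,               # x^14 + x^13 + ... + x + 1
}
for dist, g in gens.items():
    _, r = pdivmod(x15_plus_1, g)
    k = n - (g.bit_length() - 1)
    print(dist, r == 0, k)           # divisibility holds; k = 11, 7, 5, 1 respectively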

General BCH codes

General BCH codes differ from primitive narrow-sense BCH codes in two respects.

First, the requirement that \alpha be a primitive element of \mathrm{GF}(q^m) can be relaxed. By relaxing this requirement, the code length changes from q^m - 1 to \mathrm{ord}(\alpha), the order of the element \alpha.

Second, the consecutive roots of the generator polynomial may run from \alpha^c,\ldots,\alpha^{c+d-2} instead of \alpha,\ldots,\alpha^{d-1}.

Definition. Fix a finite field GF(q), where q is a prime power. Choose positive integers m,n,d,c such that 2\leq d\leq n, {\rm gcd}(n,q)=1, and m is the multiplicative order of q modulo n.

As before, let \alpha be a primitive nth root of unity in GF(q^m), and let m_i(x) be the minimal polynomial over GF(q) of \alpha^i for all i. The generator polynomial of the BCH code is defined as the least common multiple g(x) = {\rm lcm}(m_c(x),\ldots,m_{c+d-2}(x)).

Note: if n=q^m-1 as in the simplified definition, then {\rm gcd}(n,q) is automatically 1, and the order of q modulo n is automatically m. Therefore, the simplified definition is indeed a special case of the general one.

Properties

1. The generator polynomial of a BCH code has degree at most (d-1)m. Moreover, if q=2 and c=1, the generator polynomial has degree at most dm/2.

Proof: each minimal polynomial m_i(x) has degree at most m. Therefore, the least common multiple of d-1 of them has degree at most (d-1)m. Moreover, if q=2, then m_i(x) = m_{2i}(x) for all i. Therefore, g(x) is the least common multiple of at most d/2 minimal polynomials m_i(x) for odd indices i, each of degree at most m.

2. A BCH code has minimal Hamming distance at least d. Proof: We only give the proof in the simplified case; the general case is similar. Suppose that p(x) is a code word with fewer than d non-zero terms. Then

p(x) = b_1x^{k_1} + \cdots + b_{d-1}x^{k_{d-1}},\text{ where }k_1<k_2<\cdots<k_{d-1}.

Recall that \alpha^1,\ldots,\alpha^{d-1} are roots of g(x), hence of p(x). This implies that b_1,\ldots,b_{d-1} satisfy the following equations, for i=1,\ldots,d-1:

p(\alpha^i) = b_1\alpha^{ik_1} + b_2\alpha^{ik_2} + \cdots + b_{d-1}\alpha^{ik_{d-1}} = 0.

In matrix form, we have

\begin{bmatrix} \alpha^{k_1} & \alpha^{k_2} & \cdots & \alpha^{k_{d-1}} \\ \alpha^{2k_1} & \alpha^{2k_2} & \cdots & \alpha^{2k_{d-1}} \\ \vdots & \vdots && \vdots \\ \alpha^{(d-1)k_1} & \alpha^{(d-1)k_2} & \cdots & \alpha^{(d-1)k_{d-1}} \\ \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{d-1} \end{bmatrix} =  \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}

The determinant of this matrix equals

\left(\prod_{i=1}^{d-1}\alpha^{k_i}\right)\det\begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha^{k_1} & \alpha^{k_2} & \cdots & \alpha^{k_{d-1}} \\ \vdots & \vdots && \vdots \\ \alpha^{(d-2)k_1} & \alpha^{(d-2)k_2} & \cdots & \alpha^{(d-2)k_{d-1}} \\ \end{pmatrix} = \left(\prod_{i=1}^{d-1}\alpha^{k_i}\right) \det(V)

The matrix V is seen to be a Vandermonde matrix, and its determinant is

\det(V) = \prod_{1\le i<j\le d-1} (\alpha^{k_j}-\alpha^{k_i}),

which is non-zero. It therefore follows that b_1,\ldots,b_{d-1}=0, hence p(x)=0.
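An empirical check of this bound (a sketch, not part of the proof): enumerate all nonzero codewords of the d = 4,5 code from the example above, i.e. all multiples of g(x) = x^8 + x^7 + x^6 + x^4 + 1 by messages of degree below 7, and report the smallest nonzero weight.

def pmul(a, b):
    # Polynomial multiplication over GF(2); polynomials are bitmasks.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g = 0b111010001                      # x^8 + x^7 + x^6 + x^4 + 1, designed distance 5
weights = [bin(pmul(msg, g)).count('1') for msg in range(1, 1 << 7)]
print(min(weights))                  # 5: the true minimum distance meets the BCH bound here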

3. A BCH code is cyclic.

Proof: A polynomial code of length n is cyclic if and only if its generator polynomial divides x^n-1. Since g(x) is the minimal polynomial with roots \alpha^c,\ldots,\alpha^{c+d-2}, it suffices to check that each of \alpha^c,\ldots,\alpha^{c+d-2} is a root of x^n-1. This follows immediately from the fact that \alpha is, by definition, an nth root of unity.

Special cases

  • A BCH code with c=1 is called a narrow-sense BCH code.
  • A BCH code with n=q^m-1 is called primitive.

The generator polynomial g(x) of a BCH code has coefficients from \mathrm{GF}(q). The polynomial g(x) also belongs to the polynomial ring over \mathrm{GF}(q^p) as long as \mathrm{GF}(q^p) \subseteq \mathrm{GF}(q^m). In general, a cyclic code over \mathrm{GF}(q^p) with g(x) as the generator polynomial is called a BCH code over \mathrm{GF}(q^p). The BCH code over \mathrm{GF}(q^m) with g(x) as the generator polynomial is called a Reed-Solomon code. In other words, a Reed-Solomon code is a BCH code where the decoder alphabet is the same as the channel alphabet.[4]

Decoding

There are many algorithms for decoding BCH codes. The most common ones follow this general outline:

  1. Calculate the syndromes s_j for the received vector
  2. Determine the number of errors t and the error locator polynomial Λ(x) from the syndromes
  3. Calculate the roots of the error locator polynomial to find the error locations X_i
  4. Calculate the error values Y_i at those error locations

Calculate the syndromes

The received vector R is the sum of the correct codeword C and an unknown error vector E. The syndrome values are formed by considering R as a polynomial and evaluating it at \alpha^c,\ldots,\alpha^{c+d-2}. Thus the syndromes are[5]

s_j = R(\alpha^{c+j-1}) = C(\alpha^{c+j-1}) + E(\alpha^{c+j-1})

for j = 1 to d-1. Since \alpha^{c+j-1} are the zeros of g(x), of which C(x) is a multiple, C(\alpha^{c+j-1}) = 0. Examining the syndrome values thus isolates the error vector so we can begin to solve for it.

If there is no error, s_j = 0 for all j. If the syndromes are all zero, then the decoding is done.
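A small illustration of this step (a hedged sketch, not a reference decoder): the received word from the decoding example further below is evaluated at \alpha, \ldots, \alpha^6 in GF(16), with \alpha^4 = \alpha + 1, and reproduces the syndrome values quoted there.

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def poly_eval(mask, x):
    # Evaluate a GF(2)-coefficient polynomial (bitmask) at x in GF(16), by Horner's rule.
    acc = 0
    for i in range(mask.bit_length() - 1, -1, -1):
        acc = gf16_mul(acc, x) ^ ((mask >> i) & 1)
    return acc

alpha = 0b0010
R = 0b100111000110100                # x^14 + x^11 + x^10 + x^9 + x^5 + x^4 + x^2
apow = [0b0001]
for _ in range(6):
    apow.append(gf16_mul(apow[-1], alpha))
syndromes = [poly_eval(R, apow[j]) for j in range(1, 7)]   # s_1 .. s_6 (c = 1, d = 7)
print([format(s, '04b') for s in syndromes])
# ['1011', '1001', '1011', '1101', '0001', '1001']; an all-zero list would mean no errors.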

Calculate the error locator polynomial

If there are nonzero syndromes, then there are errors. The decoder needs to figure out how many errors and the location of those errors.

If there is a single error, write this as E(x) = e\,x^i, where i is the location of the error and e is its magnitude. Then the first two syndromes are

s_1 = e\,\alpha^{c\,i}
s_2 = e\,\alpha^{(c+1)\,i} = \alpha^i s_1

so together they allow us to calculate e and provide some information about i (completely determining it in the case of Reed-Solomon codes).
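For a binary narrow-sense code (c = 1 and e = 1), the quotient s_2 / s_1 = \alpha^i pins down the location directly. A toy illustration (a sketch; the chosen position i = 6 and the helper names are only for demonstration):

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def gf16_pow(a, e):
    r = 0b0001
    for _ in range(e):
        r = gf16_mul(r, a)
    return r

alpha, i = 0b0010, 6                 # pretend a single bit error at position i = 6
s1 = gf16_pow(alpha, i)              # E(alpha)   = alpha^i
s2 = gf16_pow(alpha, 2 * i)          # E(alpha^2) = alpha^(2i)
ratio = gf16_mul(s2, gf16_pow(s1, 14))    # s2 / s1, using a^(-1) = a^14 in GF(16)*
location = next(j for j in range(15) if gf16_pow(alpha, j) == ratio)
print(location)                      # 6: the error position is recovered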

If there are two or more errors,

E(x) = e_1 x^{i_1} + e_2 x^{i_2} + \cdots \,

It is not immediately obvious how to begin solving the resulting syndromes for the unknowns e_k and i_k. Two popular algorithms for this task are:

  1. Peterson–Gorenstein–Zierler algorithm
  2. Berlekamp–Massey algorithm

Peterson–Gorenstein–Zierler algorithm

Peterson's algorithm is step 2 of the generalized BCH decoding procedure. We use Peterson's algorithm to calculate the coefficients \lambda_1, \lambda_2, \dots, \lambda_{t} of the error locator polynomial

 \Lambda(x) = 1 + \lambda_1 x + \lambda_2 x^2 + \cdots + \lambda_t x^t

Now the procedure of the Peterson–Gorenstein–Zierler algorithm[6] for a given (n,k,d_\min) BCH code designed to correct t=\left\lfloor\frac{d_\min - 1}{2}\right\rfloor errors is

  • First compute the 2t syndromes s_1, \ldots, s_{2t} of the received word
  • Next generate the S_{t\times t} matrix with elements that are syndrome values
S_{t \times t}=\begin{bmatrix}s_1&s_2&s_3&\dots&s_t\\ s_2&s_3&s_4&\dots&s_{t+1}\\ s_3&s_4&s_5&\dots&s_{t+2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ s_t&s_{t+1}&s_{t+2}&\dots&s_{2t-1}\end{bmatrix}
  • Generate a C_{t \times 1} vector with elements
C_{t \times 1}=\begin{bmatrix}s_{t+1}\\ s_{t+2}\\ \vdots\\ \vdots\\ s_{2t}\end{bmatrix}
  • Let \Lambda denote the unknown polynomial coefficients, which are given by
\Lambda_{t \times 1} = \begin{bmatrix}\lambda_{t}\\ \lambda_{t-1}\\ \vdots\\ \lambda_{2}\\ \lambda_{1}\end{bmatrix}
  • Form the matrix equation
S_{t \times t} \Lambda_{t \times 1}  = C_{t \times 1\,}
  • If the matrix S_{t \times t} is nonsingular (its determinant is nonzero), then we can invert it and solve for the unknown values of \Lambda.
  • If \det(S_{t \times t}) = 0, then
       if t = 0, declare an empty error locator polynomial and stop the Peterson procedure;
       otherwise set t \leftarrow t - 1 and continue from the beginning of Peterson's decoding.
  • Once the values of \Lambda are known, you have the error locator polynomial (a worked sketch of this step is given after this list).
  • Stop the Peterson procedure.
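The following sketch (illustrative only, not the article's code) carries out this step over GF(16) by Gaussian elimination, including the fallback of shrinking t when S_{t \times t} is singular; the syndrome values are those of the decoding example further below.

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def gf16_inv(a):
    # a^14 = a^(-1) for nonzero a, since a^15 = 1 in GF(16)*.
    r = 0b0001
    for _ in range(14):
        r = gf16_mul(r, a)
    return r

def solve(S, C):
    # Solve S * lam = C over GF(16) by Gauss-Jordan elimination; return None if singular.
    t = len(C)
    M = [row[:] + [C[i]] for i, row in enumerate(S)]
    for col in range(t):
        piv = next((r for r in range(col, t) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = gf16_inv(M[col][col])
        M[col] = [gf16_mul(inv, v) for v in M[col]]
        for r in range(t):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [M[r][j] ^ gf16_mul(f, M[col][j]) for j in range(t + 1)]
    return [M[r][t] for r in range(t)]

s = [None, 0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001]   # s_1 .. s_6, 1-indexed
t, lam = 3, None
while t > 0:
    S = [[s[i + j + 1] for j in range(t)] for i in range(t)]  # S[i][j] = s_{i+j+1}
    C = [s[t + i + 1] for i in range(t)]
    lam = solve(S, C)                 # [lambda_t, ..., lambda_1] if S is nonsingular
    if lam is not None:
        break
    t -= 1                            # singular: assume fewer errors and retry
print(t, [format(v, '04b') for v in lam])
# 2 ['1000', '1011']: Lambda(x) = 1000 x^2 + 1011 x + 0001, as in the worked example below.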

Factor error locator polynomial

Now that you have the \Lambda(x) polynomial, you can find its roots in the form \Lambda(x)=(\alpha^i x + 1)(\alpha^j x + 1) \cdots (\alpha^k x + 1) using the Chien search algorithm. The exponents of the primitive element \alpha in these factors yield the positions where errors occur in the received word; hence the name 'error locator' polynomial.


The zeros of \Lambda(x) are X_1^{-1}, \ldots, X_\nu^{-1}; that is, the zeros are the reciprocals of the error locators X_j = \alpha^{i_j}.
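At this code length, a Chien search amounts to testing \Lambda(\alpha^{-j}) for every position j of the received word. A brute-force sketch (illustrative only), using the error locator polynomial found in the decoding example below:

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def gf16_pow(a, e):
    r = 0b0001
    for _ in range(e % 15):
        r = gf16_mul(r, a)
    return r

def lam_eval(lam, x):
    # Evaluate Lambda (GF(16) coefficients, lowest degree first) at x, by Horner's rule.
    acc = 0
    for c in reversed(lam):
        acc = gf16_mul(acc, x) ^ c
    return acc

alpha, n = 0b0010, 15
lam = [0b0001, 0b1011, 0b1000]       # Lambda(x) = 1 + 1011 x + 1000 x^2 from the example
# Position j is in error exactly when Lambda(alpha^(-j)) = 0.
error_positions = [j for j in range(n) if lam_eval(lam, gf16_pow(alpha, (n - j) % n)) == 0]
print(error_positions)               # [5, 13], the two flipped bits in the example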

Calculate error values

Once the error locations are known, the next step is to determine the error values at those locations. The error values are then used to correct the received values at those locations to recover the original codeword.

For the case of binary BCH, this is trivial; just flip the bits for the received word at these positions, and we have the corrected code word. In the more general case, the error weights e_j can be determined by solving the linear system

s_1 = e_1 \alpha^{c\,i_1} + e_2 \alpha^{c\,i_2} + \cdots
s_2 = e_1 \alpha^{(c + 1)\,i_1} + e_2 \alpha^{(c + 1)\,i_2} + \cdots
\vdots

However, there is a more efficient method known as the Forney algorithm, which is based on Lagrange interpolation. First calculate the error evaluator polynomial[7]

\Omega(x) = S(x)\,\Lambda(x) \pmod{x^{2t}}

where S(x) = s_1 + s_2 x + \cdots + s_{2t} x^{2t-1} is the syndrome polynomial.

Then evaluate the error values:[7]

e_j = - \frac{X_j^{1-c} \, \Omega(X_j^{-1})}{\Lambda'(X_j^{-1})}

For narrow-sense BCH codes, c = 1, so the expression simplifies to:

e_j = - \frac{\Omega(X_j^{-1})}{\Lambda'(X_j^{-1})}

Λ'(x) is the formal derivative of the error locator polynomial Λ(x):[7]

\Lambda'(x) = \sum_{i=1}^{\nu} i \cdot \lambda_i x^{i-1}
In the formal derivative, i \cdot \lambda_i denotes the sum of i copies of \lambda_i (here i is an integer, not a field element); it is not the field's multiplication operation. In characteristic 2, i \cdot \lambda_i equals \lambda_i for odd i and 0 for even i.
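A sketch of the Forney step for the narrow-sense case c = 1 (illustrative only; it uses the syndrome polynomial convention S(x) = s_1 + s_2 x + \cdots stated above). The syndromes and error locator are those of the decoding example below; since that code is binary, both error values come out as 1.

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def gf16_inv(a):
    # a^14 = a^(-1) for nonzero a in GF(16).
    r = 0b0001
    for _ in range(14):
        r = gf16_mul(r, a)
    return r

def poly_eval(coeffs, x):
    # Evaluate a polynomial with GF(16) coefficients (lowest degree first) at x.
    acc = 0
    for c in reversed(coeffs):
        acc = gf16_mul(acc, x) ^ c
    return acc

t = 2
S = [0b1011, 0b1001, 0b1011, 0b1101]   # s_1 .. s_{2t}, i.e. S(x) = s_1 + s_2 x + s_3 x^2 + s_4 x^3
Lam = [0b0001, 0b1011, 0b1000]          # Lambda(x) = 1 + lambda_1 x + lambda_2 x^2

# Omega(x) = S(x) * Lambda(x) mod x^(2t)
Omega = [0] * (2 * t)
for i, si in enumerate(S):
    for j, lj in enumerate(Lam):
        if i + j < 2 * t:
            Omega[i + j] ^= gf16_mul(si, lj)

# Formal derivative of Lambda: in characteristic 2 only the odd-degree terms survive.
dLam = [Lam[i] if i % 2 == 1 else 0 for i in range(1, len(Lam))]

for Xj in (0b1101, 0b0110):             # error locators alpha^13 and alpha^5 from the example
    Xinv = gf16_inv(Xj)
    ej = gf16_mul(poly_eval(Omega, Xinv), gf16_inv(poly_eval(dLam, Xinv)))
    print(format(ej, '04b'))            # 0001 for both errors, as expected for a binary code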

Decoding Example

Consider the d=7 code defined above, with g(x) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1 and roots in GF(2^4). (This generator polynomial is used for the format information in QR codes.) Let the message to be transmitted be [1 1 0 1 1], or in polynomial notation, M(x) = x^4 + x^3 + x + 1. The "checksum" symbols are calculated by dividing x^{10} M(x) by g(x) and taking the remainder, resulting in x^9 + x^4 + x^2 or [ 1 0 0 0 0 1 0 1 0 0 ]. These are appended to the message, so the transmitted codeword is [ 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 ].
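The encoding step just described is ordinary polynomial long division over GF(2). A short sketch (illustrative only, with bitmask-encoded polynomials):

def pmod(a, b):
    # Remainder of polynomial division over GF(2); polynomials are bitmasks.
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

g = 0b10100110111                    # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
M = 0b11011                          # x^4 + x^3 + x + 1, the message [1 1 0 1 1]
rem = pmod(M << 10, g)               # x^10 * M(x) mod g(x)
codeword = (M << 10) | rem
print(format(rem, '010b'))           # 1000010100, i.e. x^9 + x^4 + x^2
print(format(codeword, '015b'))      # 110111000010100, the transmitted codeword above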

Now, imagine that there are two bit-errors in the transmission, so the received codeword is [ 1 0 0 1 1 1 0 0 0 1 1 0 1 0 0 ]. In polynomial notation:

R(x) = C(x) + x^{13} + x^5 = x^{14} + x^{11} + x^{10} + x^9 + x^5 + x^4 + x^2

In order to correct the errors, first calculate the syndromes. Taking \alpha = 0010, we have s_1 = R(\alpha^1) = 1011, s_2 = 1001, s_3 = 1011, s_4 = 1101, s_5 = 0001, and s_6 = 1001. Next, apply the Peterson procedure by row-reducing the following augmented matrix.

\left [ S_{3 \times 3} | C_{3 \times 1} \right ] = \begin{bmatrix}s_1&s_2&s_3&s_4\\ s_2&s_3&s_4&s_5\\ s_3&s_4&s_5&s_6\end{bmatrix} = \begin{bmatrix}1011&1001&1011&1101\\ 1001&1011&1101&0001\\ 1011&1101&0001&1001\end{bmatrix} \Rightarrow \begin{bmatrix}0001&0000&1000&0111\\ 0000&0001&1011&0001\\ 0000&0000&0000&0000\end{bmatrix}

Due to the zero row, S_{3 \times 3} is singular, which is no surprise since only two errors were introduced into the codeword. However, the upper-left corner of the matrix is identical to [S_{2 \times 2} | C_{2 \times 1}], which gives rise to the solution \lambda_2 = 1000, \lambda_1 = 1011. The resulting error locator polynomial is \Lambda(x) = 1000 x^2 + 1011 x + 0001, which has zeros at 0100 = \alpha^{-13} and 0111 = \alpha^{-5}. The exponents of \alpha correspond to the error locations. There is no need to calculate the error values in this example, as the only possible value is 1.
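As a closing check (a sketch, not part of the worked example): flipping the two located bits of the received word reproduces the transmitted codeword, and every syndrome of the corrected word vanishes.

def gf16_mul(a, b, prim=0b10011):
    # Multiply in GF(16) = GF(2)[x]/(x^4 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return r

def poly_eval(mask, x):
    # Evaluate a GF(2)-coefficient polynomial (bitmask) at x in GF(16).
    acc = 0
    for i in range(mask.bit_length() - 1, -1, -1):
        acc = gf16_mul(acc, x) ^ ((mask >> i) & 1)
    return acc

R = 0b100111000110100                # received word, errors at positions 13 and 5
corrected = R ^ (1 << 13) ^ (1 << 5)
print(format(corrected, '015b'))     # 110111000010100, the original codeword
alpha, a = 0b0010, 0b0001
for j in range(1, 7):
    a = gf16_mul(a, alpha)           # a = alpha^j
    assert poly_eval(corrected, a) == 0   # all syndromes of a codeword are zero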

Citations

  1. ^ Reed & Chen 1999, p. 189
  2. ^ "Phobos Lander Coding System: Software and Analysis". Retrieved 25 February 2012.
  3. ^ "Sandforce SF-2500/2600 Product Brief". Retrieved 25 February 2012.
  4. ^ Gill unknown, p. 3
  5. ^ Lidl & Pilz 1999, p. 229
  6. ^ Gorenstein, Peterson & Zierler 1960
  7. ^ a b c Gill unknown, p. 47

