Finding Good Binary Linear Block Codes based on Hadamard Matrix and Existing Popular Codes

—Because of their algebraic structure and simple hardware implementation, linear codes, as a class of error-correcting codes, are used in a multitude of applications such as compact discs, bar codes, satellite and wireless communication, storage systems, ISBN numbers, and more. Nevertheless, designing linear codes with a high minimum Hamming distance for a given length and dimension of the code remains an open challenge in coding theory. In this work, we propose a method for constructing good binary linear codes from popular existing ones, using the Hadamard matrix. The proposed method takes advantage of the MacWilliams identity for computing the weight distribution, to overcome the problem of computing the minimum Hamming distance for larger dimensions.


I. INTRODUCTION
The basic digital communication chain includes a source, a communication channel, and a receiver. The message is sent from the source to the receiver through the channel. Unless the channel is ideal, interference corrupts the message and causes errors, which can be controlled by an error-correcting code. Thus, redundancy is added to the original message downstream of the source, and this redundancy is exploited upstream of the receiver to correct potential errors without retransmission.
In his seminal article [1], Shannon showed via his channel coding theorem the existence of error-correcting codes (ECC) that theoretically allow data to be transmitted over a channel with an arbitrarily small probability of error, whatever the noise level in the channel. However, the theorem does not specify how to construct these codes, so the design of good error-correcting codes remains an open issue in information theory [2]. Following Shannon's work, great effort has been constantly devoted to constructing error-correcting codes that totally or almost achieve the channel capacity. In this vein, Arikan developed the first codes (polar codes) with proven capacity achievability, explicit construction, and low encoding and decoding complexity [3], later implemented through multi-kernel designs [4]. This paper's inspiration comes from the coding process of polar codes.
It is difficult to construct explicitly good codes with the best properties. Therefore, working with already existing codes that have good properties is one construction alternative [5]. To determine whether a code is good enough, Markus Grassl maintains a database of bounds [6] on the minimum distance of linear block codes over GF(q), with q ≤ 9, for given length and dimension, including construction details. A code is called 'good' if its parameters achieve the current bounds.
One of the most recent methods for constructing good binary linear block codes is presented in [7]. It consists in constructing linear codes from the Hadamard matrix and Bose-Chaudhuri-Hocquenghem (BCH) codes [8]. However, this method suffers from the problem of computing the minimum Hamming distance for higher code dimensions, and it is applied only to BCH codes. In this paper, a new method to produce good binary linear block codes based on the Hadamard matrix and some popular error-correcting codes often used in coding theory [9], [10] is presented. It allows the design of many good binary linear block codes with considerable error-correcting capability. This method extends the approach of [7] to larger dimensions by exploiting the MacWilliams identity to overcome the problem of computing the minimum distance on the one hand, and validates the technique for codes other than BCH codes [8] on the other hand.
The remainder of this paper is structured as follows. In the next section, we detail some of the concepts required in this work, such as linear block codes, the dual code of a linear block code, the MacWilliams identity, and Hadamard matrices. We present a new method of searching for good binary linear codes in the third section. In the fourth section, we present the set of good binary linear block codes found by the proposed method. Finally, we give an interpretation of the results before concluding the paper.

II. NOTATION AND PRELIMINARIES
In digital transmission, binary error-correcting codes, denoted [n, k, d], can be employed to limit the incidence of word errors. The coding process converts a k-bit word into an n-bit codeword (n > k). This conversion creates a code of 2^k n-bit codewords chosen from a set of 2^n possible words. A code has three main parameters: the codeword length n, the dimension k of the coded message block, and the minimum Hamming distance d between codewords. This minimum distance ensures that a codeword will not be transformed by noise into another codeword, and it determines the error correction capability.

A. Linear Block Codes Theory
A binary linear code C is a k-dimensional vector subspace of F_2^n. The code is a set of 2^k codewords, each one a linear combination of the basis vectors, which form a k × n generator matrix G ∈ F_2^(k×n). In other words, the codeword space of the code can be obtained as follows:

c = m · G   (1)

where m = (m_0, m_1, …, m_(k−1)) is called the message to be sent, and c = (c_0, c_1, …, c_(n−1)) is the codeword produced after encoding the message m.
The one-to-one correspondence between messages and codewords is a fundamental strength of block codes; thus, a message is successfully retrieved if the decoder identifies its corresponding codeword. The minimum Hamming distance parameter of a code defines the smallest difference between two valid codewords. It is the outcome of:

d = min{ d_H(c_1, c_2) : c_1, c_2 ∈ C, c_1 ≠ c_2 }   (2)

In the case of binary linear block codes, the minimum Hamming distance is equivalent to the smallest non-zero weight of a codeword of C, where the weight w(c) of a codeword is the number of its non-zero symbols. It is defined as:

d = min{ w(c) : c ∈ C, c ≠ 0 }   (3)

Another way to define a linear code is to use a matrix H ∈ F_2^((n−k)×n) called the parity-check matrix, which yields:

c · H^T = 0, for every c ∈ C   (4)

So, for each linear block code C(n, k, d) defined by its generator matrix G, whose rows form a basis of a linear vector subspace, another linear block code exists. It is called the dual code C⊥, of length n and dimension (n − k); it is the vector space consisting of all vectors (codewords) orthogonal to the codewords of the linear code C. Two n-tuples u and v are orthogonal if their inner product is zero:

u · v = Σ_(i=1)^(n) u_i v_i = 0   (5)

If G = [I_k | A] is the generator matrix of a linear code C(n, k, d) in systematic form, then the generator matrix of its dual code is the parity-check matrix:

H = [A^T | I_(n−k)]   (6)
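As a concrete illustration of (1), (3), (4), and (6) — our own sketch, not taken from the paper — the snippet below builds the (7,4,3) Hamming/BCH code from a systematic generator matrix, enumerates its 2^4 codewords, reads off the minimum distance as the smallest non-zero weight, and checks orthogonality with the parity-check matrix:

```python
from itertools import product

# Systematic generator matrix G = [I_k | A] of the (7,4,3) Hamming/BCH code.
# A is one common choice of the 4x3 redundant part; equivalent forms exist.
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, r = 4, 3
G = [[int(i == j) for j in range(k)] + A[i] for i in range(k)]

# Parity-check matrix H = [A^T | I_{n-k}] as in (6).
H = [[A[i][j] for i in range(k)] + [int(j == l) for l in range(r)]
     for j in range(r)]

def encode(m, G):
    """Codeword c = m . G over GF(2), as in (1)."""
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2
            for j in range(len(G[0]))]

codewords = [encode(list(m), G) for m in product([0, 1], repeat=k)]

# Minimum distance = smallest nonzero codeword weight, as in (3).
d_min = min(sum(c) for c in codewords if any(c))
print(len(codewords), d_min)  # 16 3

# Every codeword is orthogonal to the rows of H, as in (4).
assert all(sum(c[j] * h[j] for j in range(7)) % 2 == 0
           for c in codewords for h in H)
```

Brute-force enumeration of all 2^k codewords, as done here, is exactly what becomes infeasible for the larger dimensions addressed later in the paper.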

B. Weight Distribution and MacWilliams Identity
As mentioned above, the minimum distance d is the lowest weight w(c), as defined in (3), of a nonzero codeword among all 2^k codewords of the linear code. The importance of this parameter lies in the error correction capability of the code through d = 2t + 1, where t denotes the number of errors that the code is capable of correcting. However, the minimum distance gives no information about the weights of the other codewords.
Acquiring knowledge of a code's weight distribution is essential and allows the computation of its analytical performance [11]. The weight distribution of an error-correcting code is a vector of size n + 1 whose i-th element indicates the number of codewords of weight (i − 1). Equivalently, the weight distribution can be expressed in polynomial form as follows:

W(x) = Σ_(i=0)^(n) A_i x^i   (7)

where A_i is the number of codewords with weight i, obtained by (3).
Although the weight distribution does not inherently identify a code, it provides useful information that has both practical and theoretical significance. The MacWilliams equation [12], a series of linear relations between the weight distributions of a code and its dual, is one of the most fundamental results on weight distributions.
Let C be an (n, k, d) linear code over F_2 with weight enumerator polynomial W_C(x) = Σ_(i=0)^(n) A_i x^i, and let W_(C⊥)(x) be the enumerator polynomial of the dual code C⊥. Then:

W_(C⊥)(x) = 2^(−k) (1 + x)^n W_C((1 − x)/(1 + x))   (8)
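A small numerical check of (8) — our own sketch, written in the equivalent Krawtchouk-transform form of the identity: starting from the known weight distribution of the (7,4,3) Hamming code, the transform recovers the distribution of its (7,3,4) simplex dual without enumerating the dual's codewords:

```python
from math import comb

def macwilliams(A, n, k):
    """Dual weight distribution from that of C, via the Krawtchouk form of (8):
    A'_j = 2^(-k) * sum_i A_i * K_j(i), K_j(i) = sum_s (-1)^s C(i,s) C(n-i,j-s)."""
    def K(j, i):
        return sum((-1)**s * comb(i, s) * comb(n - i, j - s)
                   for s in range(j + 1))
    return [sum(A[i] * K(j, i) for i in range(n + 1)) // 2**k
            for j in range(n + 1)]

# Weight distribution of the (7,4,3) Hamming code: A_0=1, A_3=A_4=7, A_7=1.
A = [1, 0, 0, 7, 7, 0, 0, 1]
dual = macwilliams(A, n=7, k=4)
print(dual)  # [1, 0, 0, 0, 7, 0, 0, 0] -> the (7,3,4) simplex code

# The dual's minimum distance is the index of the first nonzero entry after A'_0.
d = next(j for j in range(1, 8) if dual[j] != 0)
print(d)  # 4
```

Applying the transform twice (with the dual's dimension) returns the original distribution, reflecting that (C⊥)⊥ = C.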

C. Hadamard Matrix
A Hadamard matrix H_n is a square matrix of order n, with n a power of 2 in the Sylvester construction, and entries in {−1, +1}. Sylvester presented the first examples of these matrices in 1867 [13]; they were named Hadamard matrices in 1893 [14], after Hadamard, who generalized them to orders other than 2^m. These matrices have found many uses in telecommunications and signal processing. In fact, the use of Hadamard matrices to construct efficient error-correcting codes is one of the reasons for the increased interest in discovering new Hadamard matrix constructions.
In the binary case, we can replace the entries {−1, +1} of H_n by {1, 0}; H_(2n) is then obtained by the following recursion:

H_(2n) = H_2 ⊗ H_n, with H_2 = [1 1; 1 0]   (9)

where ⊗ denotes the Kronecker product.
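The recursion (9) can be sketched as follows (a minimal illustration in the {1, 0} representation; the function name is ours). In this representation the lower-right block of each doubling step is the complement of the previous matrix, which corresponds to negation in the {−1, +1} form:

```python
def hadamard(order):
    """Binary Sylvester-type Hadamard matrix of the given order (a power of 2),
    built by repeated Kronecker products with H_2 = [[1,1],[1,0]] as in (9)."""
    H = [[1]]
    while len(H) < order:
        # H_2 (x) H in the {1,0} representation:
        # blocks [[H, H], [H, complement(H)]].
        H = [row + row for row in H] + \
            [row + [1 - x for x in row] for row in H]
    return H

H4 = hadamard(4)
for row in H4:
    print(row)
# [1, 1, 1, 1]
# [1, 0, 1, 0]
# [1, 1, 0, 0]
# [1, 0, 0, 1]
```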
The orthogonality of the Hadamard matrix (9) guarantees that each permutation of rows or columns yields another Hadamard matrix [15].

III. NEW METHOD TO FIND GOOD BINARY LINEAR CODES
In [7], a method based on the Kronecker product between the Hadamard matrix and the redundant part of a generator matrix of a Bose-Chaudhuri-Hocquenghem (BCH) code is presented to construct good binary linear codes. It allows, from an (n, k, d) BCH code and a Hadamard matrix of order λ, the construction of good binary linear codes of a given dimension k′ < 20 and length n′ = λ(n − k). However, for higher dimensions this approach runs into the problem of computing the minimum Hamming distance, which is one of the open problems [16] in the field of information theory for large dimensions.

447 | P a g e www.ijacsa.thesai.org

So for dimensions k′ > 20, the method presented in [7] remains restricted by the capability of an ordinary computer to calculate the minimum distance of codes with dimensions greater than 20. In this work, we take advantage of the duality properties of linear block codes and the MacWilliams identity, as can be seen in figure 1 and outlined in the steps below, in order to fix this issue and validate the process by constructing good codes with high dimensions. The technique handles the minimum Hamming distance computation problem of the larger dimensions by searching for good binary linear codes through their dual codes, of small dimension, and computing the weight distribution obtained using the MacWilliams identity as described in (8). By definition, the minimum Hamming distance of a linear code corresponds to the smallest weight of its codewords, so it can be extracted directly from the weight distribution: it is the index of the first non-null element of the weight distribution (the first element excluded, because it corresponds to the all-zero codeword).
The details of the method we propose to enlarge the dimensions of the constructed good binary linear codes are developed in the following steps. Let us use:
• A: the k × (n − k) matrix extracted from a generator matrix of the popular code used, in systematic form.
• LB: lower bound, i.e. the best-known minimum distance found in pre-existing works.
• k′: dimension of the desired code to be built.
• C⊥: dual code constructed from the parity-check matrix H.
• f(): function transforming a generator matrix into a parity-check matrix.
• A_r: the matrix H_λ ⊗ A after the elimination of unnecessary rows (rows whose weight is less than LB).
Step 1: Perform the Kronecker product between H_λ and A.
Step 2: Insert into A_r the rows of the Step 1 result whose weight is not less than LB.
Step 3: Generate candidate matrices from the output of Step 2 by combining (n′ − k′) rows.
Step 4: For each matrix G from Step 3: extract the parity-check matrix H = f(G) and obtain the weight distribution of the code generated by H via the MacWilliams identity, which yields its minimum distance d′.
Step 5: If d′ ≥ LB, then add the code to the list of (n′ = λ(n − k), k′, d′) good binary linear codes.
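Steps 1–5 above can be sketched end-to-end as follows. This is our own illustrative implementation under stated assumptions — brute-force enumeration of the small dual codes, and the (7,4,3) BCH redundant part in one common systematic form — not the authors' code; it only mirrors the dual-based search idea:

```python
from itertools import combinations, product
from math import comb

def kron(H, A):
    """Kronecker product of two 0/1 matrices: entry ((i,p),(j,q)) = H[i][j]*A[p][q]."""
    return [[H[i][j] * A[p][q]
             for j in range(len(H[0])) for q in range(len(A[0]))]
            for i in range(len(H)) for p in range(len(A))]

def weight_distribution(G, n):
    """Enumerate the span of the rows of G over GF(2), counting codewords by weight."""
    dist = [0] * (n + 1)
    for m in product([0, 1], repeat=len(G)):
        c = [sum(m[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]
        dist[sum(c)] += 1
    return dist

def macwilliams(dist, n, k):
    """Dual weight distribution via the MacWilliams identity (8), Krawtchouk form."""
    K = lambda j, i: sum((-1)**s * comb(i, s) * comb(n - i, j - s)
                         for s in range(j + 1))
    return [sum(dist[i] * K(j, i) for i in range(n + 1)) // 2**k
            for j in range(n + 1)]

def search_good_codes(H, A, k_prime, LB):
    """Steps 1-5: search (n', k', d') candidates through their (n'-k')-dim duals."""
    n_prime = len(H[0]) * len(A[0])
    k_dual = n_prime - k_prime
    M = kron(H, A)                                  # Step 1
    A_r = [r for r in M if sum(r) >= LB]            # Step 2: drop light rows
    found = []
    for rows in combinations(A_r, k_dual):          # Step 3: candidate dual generators
        dist = weight_distribution(list(rows), n_prime)
        if dist[0] != 1:  # rows linearly dependent: dimension < k_dual, skip
            continue
        cand = macwilliams(dist, n_prime, k_dual)   # Step 4: candidate's distribution
        d = next(j for j in range(1, n_prime + 1) if cand[j])
        if d >= LB:                                 # Step 5
            found.append((n_prime, k_prime, d))
    return found

# Paper's example: H_4 and the redundant part of the (7,4,3) BCH code
# (parity parts of x^(3+i) mod x^3 + x + 1), searching (12, 8) candidates.
H4 = [[1, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 1]]
A7 = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
res = search_good_codes(H4, A7, k_prime=8, LB=3)
print(len(res) > 0)  # True: at least one (12, 8, d >= 3) candidate is found
```

The point of the construction is visible in the combinatorics: each candidate costs an enumeration of 2^4 dual codewords plus a MacWilliams transform, instead of 2^8 codewords of the code itself.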
Let us give an example. Consider the matrix derived from the Kronecker product between the Hadamard matrix of order λ = 4 and the redundant part matrix A extracted from the generator matrix of the (7,4,3) BCH code; n′ = λ(n − k) = 12 is then the length of the suspect codes that can be constructed. Since the minimum distance of a linear code is equal to the minimum weight of the code, and the rows of a generator matrix are themselves codewords, it is necessary to eliminate the rows whose weight is less than the lower bound (LB). Denote by A_r the matrix after the elimination of these unnecessary rows. For example, to build a code with dimension k′ = 8, we proceed to the construction of a dual code with dimension n′ − k′ = 4. In other words, it is sufficient to search in a space of size 2^4 instead of searching in a space of size 2^8. From [17], the best-known minimum distance (LB) for n′ = 12 and k′ = 8 is 3, so A_r is obtained by eliminating all rows with a weight less than 3.
By combining 4 rows of A_r as the generator matrix of a suspect (12,4,x) code, calculating its weight distribution, and applying the MacWilliams identity, the weight distribution of its (12,8) dual is obtained; its first non-null element after A_0 is A_3 = 16, which means that the minimum distance of this linear code is 3 and that it contains 16 codewords of weight 3.

IV. EXPERIMENTAL RESULTS
Three types of results are presented in this section: the first is obtained by the new method described in the previous section, the second is an extension of [7] to the Golay and Reed-Muller codes, and the third is based on the codes of the first result. All programs have been implemented in GAP via the GUAVA package over GF(2) and GF(3) [18].

A. Results Obtained using the MacWilliams Identity
With the method defined in [7], computer calculations on an Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz with 4 GB of RAM do not permit the generation of good binary linear codes with a dimension greater than 20. For such dimensions and using the same computer, the new approach allowed us to verify the validity of the concept and to find new good binary linear codes. Table 1 describes the set of good binary linear codes with larger dimensions (k′ > 20) built from BCH codes by applying the presented approach.
The work in [7] focuses on the construction of good binary linear codes from the Hadamard matrix and BCH codes. In this work, we apply the approach to other codes with good properties. Table 2 describes the good codes constructed from the (23,12) Golay code.
The applicability of the technique to Reed-Muller codes also produced satisfactory results, as shown in Table 3.

B. Good Extended and Punctured Binary Linear Codes
Extending and puncturing a code are two code construction methods [19] that maintain the code dimension k while varying its length n. Extending a code adds parity bits, which can increase the minimum distance, whereas puncturing removes bits, which can decrease it. Let C_ext be the (n + 1, k) binary linear code that is the extended code of the linear (n, k) code C. The extension is completed by adding a new coordinate (parity-check bit) to each codeword of C, so that the codeword length increases by one. Put differently, each codeword c′ = (c_1, c_2, …, c_n, c_(n+1)) of the extended code is generated by appending a coordinate to the codeword c = (c_1, c_2, …, c_n) of C, such that c_(n+1) = Σ_(i=1)^(n) c_i, where the sum is modulo-2 addition in the binary case.
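The two operations can be sketched as follows — our own illustration on the (3,1,3) repetition code, representing a code as its full codeword list (practical only for small codes):

```python
def extend(code):
    """(n, k) -> (n+1, k): append an overall parity bit to each codeword."""
    return [c + [sum(c) % 2] for c in code]

def puncture(code, pos):
    """(n, k) -> (n-1, k): delete coordinate pos from each codeword."""
    return [c[:pos] + c[pos + 1:] for c in code]

def dmin(code):
    """Minimum distance of a binary linear code = smallest nonzero weight."""
    return min(sum(c) for c in code if any(c))

# The (3,1,3) repetition code: extending raises d to 4 (the odd-weight
# codeword gains a 1), while puncturing one coordinate lowers d to 2.
C = [[0, 0, 0], [1, 1, 1]]
print(dmin(C), dmin(extend(C)), dmin(puncture(C, 0)))  # 3 4 2
```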
In this reflection, new good codes were defined by applying the extending and puncturing to the good codes mentioned in Tables 1, 2, and 3, as well as to the codes contained in related previous work [7]. Table 4 shows all the good extended and punctured binary linear codes found.

C. Interpretation
Lately, error-correcting code designers have been concerned with achieving a high code rate, defined as the ratio R = k/n of the number of information symbols k to the codeword length n, in order to take maximum advantage of the channel capacity.
In this work, the focus is on error-correcting codes with a rate greater than 0.5. Most of the constructed codes have a minimum Hamming distance equal to the lower bound, allowing us to identify them as good binary linear error-correcting codes. In some of the results above, for given n′ and k′, several codes are found with the same target minimum distance (LB) reported in [6]; the retained one is the code with the smallest number of minimum-weight codewords in its weight distribution. For the codes that did not achieve the LB, it should be mentioned that only a few codes reaching the lower bound have been reported in the literature, and that our search discovered multiple different codes with minimum distance (LB − 1).
In comparison to the results obtained in [7], the technique presented in this paper allows us to construct good binary linear codes with larger dimensions and good properties. Unlike previous research, which focused on BCH codes only, the strategy yields positive outcomes for a variety of other codes, such as Golay and Reed-Muller codes.
All of the good codes discovered in this and previous research have been validated in MAGMA [20], [21], a software system designed for solving algebra problems, which supports several coding theory constructions.
The exponential explosion of the number of possible row combinations of A_r for higher dimensions remains an obstacle to finding good codes, as observed during this work. This issue will continue to be a source of reflection in the future. The main objective of this experimentation is to demonstrate that the proposed methodology is applicable to larger dimensions as well as to codes other than those previously used.

V. CONCLUSION
In this paper, the method of constructing good linear codes from BCH codes and Hadamard matrices stated in the literature is extended to higher dimensions and to other popular codes. In this way, a set of good binary linear block codes was discovered by exploiting the duality of linear codes and the MacWilliams identity on the one hand, and by extending and puncturing the discovered results on the other. The majority of the found codes match the bounds of the existing codes in the literature. The search space for good error-correcting codes is too large for most standard search techniques. To overcome the exponential explosion of the number of combinations, genetic algorithms can be an efficient way to find good solutions in a relatively short time, and they constitute a research direction for future work.