
Myagmarbayar Nergui et al.

In this chapter, we used the lossless compression technique Huffman coding, because the watermarked image, which embeds patient text information into a retinal fundus image, must allow that information to be recovered without loss. Here, we describe Huffman coding briefly.

11.3.1. Huffman Coding

During his Ph.D. work, David A. Huffman developed the Huffman coding technique as a class assignment; the class was on information theory and was taught by Professor Robert M. Fano at the Massachusetts Institute of Technology (MIT). In 1952, Huffman published a paper entitled "A Method for the Construction of Minimum-Redundancy Codes" based on this work. Codes produced using this technique are called Huffman codes. Characters do not all occur with the same frequency, yet in an uncompressed text every character is allocated the same amount of space: one byte.

Formally, Huffman coding operates on an ensemble X = (x, Ax, Px), where x is the value of a random variable, Ax = {a1, a2, . . . , ai} is the set of possible values for x, and Px = {p1, p2, . . . , pi} is the set of their probabilities, with P(x = ai) = pi, pi > 0.

The Huffman coding algorithm produces optimal symbol codes. In Huffman coding, more probable characters are assigned short codewords and less probable characters are assigned longer codewords. Every code can be uniquely decoded by means of a binary tree, the Huffman tree, generated from the exact character frequencies of the text. A decoder can use the Huffman tree to decode the bit stream by following the paths according to the bits and emitting each character that it finds.
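As an illustration, the greedy tree construction described above can be sketched in Python. This is a minimal sketch; the function name and the sample string are our own, not from the chapter.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(text)
    # Each heap entry is (frequency, tiebreaker, tree); a tree is either a
    # symbol (leaf) or a pair of subtrees (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                       # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:                 # repeatedly merge the two rarest trees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):              # read codewords off the tree paths
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_code("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
```

Because the code is prefix-free, a decoder can walk the bit stream and emit a symbol every time an accumulated prefix matches a codeword, exactly as the chapter describes.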

11.4. Error Control Coding

In practice, communication lines, or transmission channels, and digital recordings introduce errors into the data passed through them; that is, communication lines and digital recordings are not perfect. ECCs are applied to detect and correct the errors caused by imperfect transmission media and digital recording. The working principle of an ECC is to add redundant data to transmitted messages. This redundant data can locate and correct the errors introduced into the data message. Hence, using this special characteristic of ECC schemes, the original data can be recovered from data corrupted by errors while passing through imperfect communication channels or digital recording. ECC techniques are currently utilized to secure deep-space communications in addition to storage media. ECC schemes are also applied in many communication appliances to ensure information integrity. The operations of an ECC are executed in the data-link layer to enhance the reliability of the physical transmission medium.

Reliable Transmission of Retinal Fundus Images

Transmission media and digital recording can cause different kinds of errors. Thus, the categories of ECC address different types of damage.17,19–23

ECCs are sorted as block codes or trellis codes, depending on their structure. BCH codes and RS codes are examples of block codes; convolutional codes and turbo codes are examples of trellis codes. In this study, we applied Hamming codes, BCH codes, RS codes, convolutional codes, and turbo codes to improve the quality of watermarked retinal fundus images transmitted through a communication channel and stored on storage media. Their abilities are measured by comparing the BER and SER as functions of the SNR after applying the ECC. The BER/SER is defined as the ratio of the number of bits/symbols received in error to the total number of bits/symbols passed through the channel in a given time slot. Hence, the main role of the ECC is to protect information from corruption induced by the imperfect transmission medium and imperfect digital recording. In this study, the AWGN channel model is used because of its simplicity. In addition, the signal strength varies while transmitting through channels in which the SNR alters randomly with time; this variation in signal strength is the defining feature of fading channels. We show the performance of the Hamming code, the RS code, and the BCH code over such channels in Fig. 11.7, and the performance of the convolutional and turbo codes, with code rates of 0.333 and 0.5, over the AWGN channel in Figs. 11.8 and 11.10.
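The BER measurement described above can be sketched as a Monte Carlo simulation of uncoded BPSK over an AWGN channel. This is a minimal illustration with assumed parameter names; it is not the simulation setup used in the study.

```python
import math
import random

def bpsk_awgn_ber(snr_db, n_bits=100_000, seed=1):
    """Estimate BER for uncoded BPSK over an AWGN channel by Monte Carlo.

    SNR here is taken as Eb/N0 in dB; with unit-energy symbols the noise
    standard deviation is sqrt(1 / (2 * Eb/N0)).
    """
    rng = random.Random(seed)
    ebn0 = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0        # BPSK mapping: 0 -> -1, 1 -> +1
        rx = tx + rng.gauss(0, sigma)    # add white Gaussian noise
        if (rx > 0) != bool(bit):        # hard decision at the receiver
            errors += 1
    return errors / n_bits               # bits in error / bits transmitted
```

Running the estimator at increasing SNR values shows the BER falling, which is the kind of BER-versus-SNR curve reported in Figs. 11.7–11.10; adding an ECC before the channel shifts that curve further down for the same SNR.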

11.4.1. Hamming Codes

In 1950, Richard Hamming described a class of error correcting codes that has since become popular. Hamming codes are considered an extension of simple parity codes in which multiple parity bits are applied; every parity bit is determined over a subset of the information bits in a message. Hamming codes were first applied to control errors in long-distance telephony.

Hamming codes are [n, k] linear error correcting block codes. They are commonly expressed as a function of a single integer m, where m = n - k is the number of parity or check bits. There are many Hamming codes, namely (7, 4), (15, 11), (31, 26), etc.

Here, we briefly explain the Hamming Code (15, 11).

The codeword length is n = 2^m - 1 = 2^4 - 1 = 15 bits,

The number of information bits is k = 2^m - m - 1 = 2^4 - 4 - 1 = 11 bits,

The number of parity bits is m = n - k = 15 - 11 = 4 bits,

The minimum distance of this code is dmin = 3 bits, and

The error correcting capability is t = (dmin - 1)/2 = (3 - 1)/2 = 1 bit.
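The parameter relations above can be checked with a few lines of Python; this is an illustrative sketch, and the function name is ours.

```python
def hamming_params(m):
    """Parameters of the binary Hamming code family for m parity bits."""
    n = 2**m - 1            # codeword length
    k = 2**m - m - 1        # number of information bits
    d_min = 3               # minimum distance of every Hamming code
    t = (d_min - 1) // 2    # error-correcting capability: one bit
    return n, k, t

# m = 3, 4, 5 reproduce the (7, 4), (15, 11), and (31, 26) codes.
for m in (3, 4, 5):
    print(m, hamming_params(m))
```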

For a block code, we convert a sequence of input bits s of length K into a sequence of transmitted bits t of length N, thereby adding redundancy bits to the transmitted sequence. In a linear error correcting block code, the N - K redundancy bits are linear functions of the K input bits and are called parity-check bits. The Hamming code is linear and can be written using matrices. The transmitted codeword t is found from the input sequence s by a linear operation, t = [G] · s, where [G] is the generator matrix of the Hamming code.

Hamming codes are cyclic codes and perfect codes. The decoding process of a Hamming code, which determines the codeword that was sent when errors occur in transmission or storage, is easy to carry out because of the simple form of the parity check matrix of these codes.

For the Hamming (15, 11) code, the parity check matrix is [H] = [I(n-k) . P^T], where I(n-k) is the [(n - k) × (n - k)] identity matrix and [P] is the parity submatrix:

[H] = | 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 |
      | 0 1 0 0 1 1 0 1 0 1 1 1 1 0 0 |
      | 0 0 1 0 0 1 1 0 1 0 1 1 1 1 0 |
      | 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 |


The generator matrix is [G] = [P . Ik], where Ik is the [k × k] identity matrix and [P] is the same parity submatrix:

[G] = | 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 |
      | 0 1 1 0 0 1 0 0 0 0 0 0 0 0 0 |
      | 0 0 1 1 0 0 1 0 0 0 0 0 0 0 0 |
      | 1 1 0 1 0 0 0 1 0 0 0 0 0 0 0 |
      | 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 |
      | 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 |
      | 1 1 1 0 0 0 0 0 0 0 1 0 0 0 0 |
      | 0 1 1 1 0 0 0 0 0 0 0 1 0 0 0 |
      | 1 1 1 1 0 0 0 0 0 0 0 0 1 0 0 |
      | 1 0 1 1 0 0 0 0 0 0 0 0 0 1 0 |
      | 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 |

The above two matrices are [G] and [H] in systematic form. For linear block codes, [G] and [H] must satisfy [G] · [H]^T = 0, an all-zeros matrix.

Hence, the columns of [H] consist of all 2^(n-k) - 1 = 15 nonzero (n - k)-tuples, and in systematic form the left side of the matrix is the (n - k) × (n - k) identity matrix. Thus, [G] can be found from [H] by taking the transpose of the right side of [H] and appending the k × k identity matrix; the right side of [G] is an identity matrix. The (15, 11) Hamming code has a code rate of k/n = 11/15 = 0.7333.
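A minimal sketch of how the systematic matrices above are used in practice, with mod-2 arithmetic over Python lists. The function names are ours, and this is illustrative rather than the chapter's implementation: encoding computes t = [G] · s, and decoding matches the syndrome [H] · r^T against the columns of [H] to locate a single-bit error.

```python
# 11 x 4 parity submatrix [P]; row r gives the parity contribution of
# information bit r (these rows match the matrices given above).
P = [
    [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 1, 0, 1],
    [1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1],
    [1, 1, 1, 1], [1, 0, 1, 1], [1, 0, 0, 1],
]

# [H] = [I4 | P^T], a 4 x 15 matrix built from [P].
H = [[int(i == j) for i in range(4)] + [P[r][j] for r in range(11)]
     for j in range(4)]

def encode(info):
    """Encode 11 information bits into a systematic 15-bit codeword."""
    parity = [sum(info[r] * P[r][j] for r in range(11)) % 2 for j in range(4)]
    return parity + list(info)          # layout: [parity | info]

def decode(word):
    """Correct up to one bit error and return the 11 information bits."""
    syndrome = [sum(H[j][i] * word[i] for i in range(15)) % 2
                for j in range(4)]
    if any(syndrome):
        # A nonzero syndrome equals the column of [H] at the error position.
        cols = [tuple(H[j][i] for j in range(4)) for i in range(15)]
        pos = cols.index(tuple(syndrome))
        word = list(word)
        word[pos] ^= 1                  # flip the erroneous bit
    return word[4:]                     # strip the parity bits
```

Because every column of [H] is a distinct nonzero 4-tuple, each single-bit error produces a unique syndrome, which is what makes the (15, 11) code a perfect single-error-correcting code.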

11.4.2. BCH Codes

BCH codes represent a large class of multiple random error-correcting codes. A. Hocquenghem first discovered BCH codes in 1959, and R. C. Bose and D. K. Ray-Chaudhuri discovered BCH codes independently in 1960.23 These researchers brought out the codes, but not the decoding algorithms.

BCH codes play a significant and powerful role among linear block codes; they are cyclic codes available with a wide variety of parameters. The most common binary BCH codes are characterized as follows.

For any integers m ≥ 3 and t < 2^(m-1):

Codeword length n = 2^m - 1,

Parity check bits n - k ≤ mt, and

Minimum distance dmin ≥ 2t + 1,
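The bounds above can be checked numerically for m = 4, i.e., n = 15. The listed k values are the standard (15, k) binary BCH codes and are given here only for illustration.

```python
# Check the BCH parameter relations for m = 4 (n = 2^4 - 1 = 15).
m = 4
n = 2**m - 1
for t, k in [(1, 11), (2, 7), (3, 5)]:   # standard (15, k) BCH codes
    assert n - k <= m * t                # parity bits: n - k <= m*t
    d_min = 2 * t + 1                    # guaranteed minimum distance
    print(f"(15, {k}) BCH code: t = {t}, dmin >= {d_min}")
```

Note that for t = 1 the bound gives n - k = 4 = m, recovering the (15, 11) Hamming code as the single-error-correcting member of the family.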
