- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Myagmarbayar Nergui et al.
where the number of parity bits n − k is at most mt and t is the number of errors that can be corrected. t or fewer random errors can be corrected over a span of 2^m − 1 bit positions. This ECC is called a t-error-correcting BCH code. BCH codes were later extended to nonbinary codes, which use symbols from the Galois field GF(q), by Gorenstein and Zierler.23
In 1960, Peterson investigated the decoding algorithm of binary BCH codes.23 Since this publication, Berlekamp, Massey, Chien, Forney, and many others have improved Peterson’s algorithm.23
The Hamming single-error-correcting codes are BCH codes. BCH codes perform well over a wide range of code parameters, block lengths, and code rates. For example, for the single-error-correcting (15, 11) BCH code with code rate 11/15 ≈ 0.733: m = 4, t = 1; n = 2^m − 1 = 2^4 − 1 = 15; n − k ≤ mt = 4 · 1 = 4; k = n − mt = 15 − 4 = 11; and d_min ≥ 2 · 1 + 1 = 3.
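These parameter relations can be sketched numerically. The helper below is ours (hypothetical name), and the values simply follow the bounds quoted above: n = 2^m − 1, n − k ≤ mt (taken here with equality), and designed distance d_min ≥ 2t + 1.

```python
def bch_params(m: int, t: int):
    """Parameter bounds for a primitive binary t-error-correcting BCH code."""
    n = 2**m - 1          # block length
    k = n - m * t         # dimension, assuming n - k reaches the bound mt
    d_min = 2 * t + 1     # designed (guaranteed minimum) distance
    return n, k, d_min

print(bch_params(4, 1))   # (15, 11, 3) -- the (15, 11) code from the text
```

The same call with t = 2 gives (15, 7, 5), the double-error-correcting BCH code of length 15.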
A BCH code is represented by a polynomial expression over a finite field and is defined by a specially selected generator polynomial.
Let α be a primitive element in GF(2^m). For 1 ≤ i ≤ t, let φ_{2i−1}(x) be the minimal polynomial of the field element α^{2i−1}. The degree of φ_{2i−1}(x) is m or a factor of m.
The generator polynomial g(x) of a t-error-correcting primitive BCH code of length 2^m − 1 is given by g(x) = LCM{φ_1(x), φ_3(x), . . . , φ_{2t−1}(x)}. The degree of g(x) is m · t or less; thus, the number of parity-check bits n − k of the code is at most m · t. For instance, for the (15, 11) BCH code (t = 1), the minimal polynomial of the field element α is
φ_{2t−1}(x) = φ_{2·1−1}(x) = φ_1(x) = (x − α)(x − α^2)(x − α^4)(x − α^8) = x^4 + x + 1.
The generator polynomial g(x) of this single-error-correcting primitive BCH code of length 2^4 − 1 = 15 is therefore g(x) = LCM{φ_1(x)} = x^4 + x + 1.
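As a quick sanity check, this arithmetic can be reproduced with polynomials over GF(2) stored as integer bit-vectors (bit i holding the coefficient of x^i). The sketch below — our own helper, not from the chapter — verifies that g(x) = x^4 + x + 1 divides x^15 + 1, as any generator polynomial of a length-15 cyclic code must, and then uses it to build a systematic (15, 11) codeword.

```python
def gf2_mod(a: int, g: int) -> int:
    """Remainder of a(x) divided by g(x) over GF(2); ints as coefficient bit-vectors."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)   # cancel the leading term of a(x)
    return a

G = 0b10011                       # g(x) = x^4 + x + 1

# g(x) must divide x^15 + 1 to generate a length-15 cyclic code.
assert gf2_mod((1 << 15) | 1, G) == 0

# Systematic encoding: shift the 11 message bits up by deg(g), append the remainder.
msg = 0b10110011010               # 11 arbitrary message bits
codeword = (msg << 4) ^ gf2_mod(msg << 4, G)
assert gf2_mod(codeword, G) == 0  # every valid codeword is a multiple of g(x)
```

Because every codeword is a multiple of g(x), the decoder can detect an error pattern simply by checking whether the received polynomial still reduces to zero modulo g(x).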
11.4.3. Convolutional Codes
In a communication system, a convolutional code transforms each b-bit information symbol into an r-bit output symbol. The code rate is the ratio of information bits to transmitted bits, b/r. Each output symbol is created from the last l information bits, where l, the number of information bits that can influence an output (the encoder memory plus one), is called the constraint length of the code.
Convolutional codes are generated by a shift register: the information bits pass through a linear finite-state circuit, and additional combinatorial logic performs modulo-two additions. In convolutional coding, an information frame and the previous m information frames are encoded into a single codeword frame; hence, successive frames are coupled by the encoding procedure. Codes obtained this way are called tree codes, and tree codes with the additional properties of linearity and time invariance are called convolutional codes.
In Fig. 11.4a, we have shown a rate-1/2 (b/r) encoder with a constraint length of 3. Each input bit b1 of the input sequence enters the leftmost register, and the r output bits of the encoder are generated by two generator polynomials realized by the shift register. On each step, the input bit shifts all register values to the right (b1 moves to b0, b0 moves to b−1, and so on), the output bits are formed by modulo-2 addition, and the encoder then waits for the next input bit. The encoder keeps running until all registers return to the zero state, even when there are no more input bits. The generator polynomials in Fig. 11.4a are G1 = (1, 1, 1) and G2 = (1, 0, 1).
Fig. 11.4. (a) Rate 1/2 non-recursive and non-systematic convolutional encoder with constraint length 3 and (b) convolutional coding system.
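An encoder of this kind can be sketched in a few lines of Python — a minimal illustration under the stated generators, not the authors' implementation, with function and variable names of our own choosing. The two registers hold the previous two input bits, the outputs are modulo-2 sums selected by G1 = (1, 1, 1) and G2 = (1, 0, 1), and two zero tail bits flush the registers back to the zero state.

```python
def conv_encode(bits, flush=True):
    """Rate-1/2 convolutional encoder, G1 = (1,1,1), G2 = (1,0,1), constraint length 3."""
    s1 = s2 = 0                          # shift-register contents (previous two bits)
    out = []
    for b in list(bits) + ([0, 0] if flush else []):
        out += [b ^ s1 ^ s2,             # output selected by G1 = (1, 1, 1)
                b ^ s2]                  # output selected by G2 = (1, 0, 1)
        s1, s2 = b, s1                   # shift registers to the right
    return out

print(conv_encode([1]))                  # impulse response: [1, 1, 1, 0, 1, 1]
```

Two output bits are emitted per input bit, so with the two flushing bits an N-bit message yields 2(N + 2) coded bits.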
The state transition diagram and the trellis structure are also used to describe convolutional codes. At the receiver end, convolutional codes are decoded with the Viterbi algorithm. In Fig. 11.4b, we have shown a convolutional coding system. First, the input x bits are encoded into c bits by the convolutional encoder. The c bits are then passed through a noisy channel, and r bits are received. Finally, the received r bits are decoded by the Viterbi decoder. The Viterbi algorithm computes a maximum likelihood (ML) estimate of the code bits y from the received bits r; that is, it maximizes the probability p(r|y) that r is received conditioned on the estimated code bits y, where y must be one of the allowable code sequences.
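A hard-decision Viterbi decoder for the same four-state code can be sketched as follows. This is an illustrative toy with hypothetical names, not the system described in the chapter; the hardcoded received sequence is the encoding of the information bits 1, 0, 1, 1 followed by two flushing zeros under G1 = (1, 1, 1), G2 = (1, 0, 1).

```python
def viterbi_decode(received, n_info):
    """Hard-decision ML decoding of the rate-1/2, constraint-length-3 code."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]     # path metric per state (s1, s2), start at state 0
    paths = [[], [], [], []]         # surviving input sequence leading to each state
    for t in range(len(received) // 2):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metrics[state] == INF:
                continue             # state not yet reachable
            s1, s2 = state >> 1, state & 1
            for b in (0, 1):         # hypothesize the next input bit
                # Hamming branch metric against the expected encoder outputs
                cost = ((b ^ s1 ^ s2) != r0) + ((b ^ s2) != r1)
                nxt = (b << 1) | s1  # next state after shifting b into the register
                if metrics[state] + cost < new_metrics[nxt]:
                    new_metrics[nxt] = metrics[state] + cost
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best][:n_info]      # drop the two flushing bits

rx = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]        # clean channel output
assert viterbi_decode(rx, 4) == [1, 0, 1, 1]
rx[2] ^= 1                                        # flip one channel bit
assert viterbi_decode(rx, 4) == [1, 0, 1, 1]      # single error corrected
```

Each trellis stage keeps only the lowest-metric path into each state, so the work per received pair is constant and the total cost grows linearly with the message length.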
11.4.4. RS Codes14
RS codes are block-based error-correcting codes that are widely used in communication systems and digital recording. A block of information symbols enters the RS encoder, which produces a codeword by appending redundant symbols computed from the input symbols. The codeword is modulated so that it can be efficiently transmitted or stored. The receiver obtains the attenuated and corrupted codeword from the noisy channel and passes it through the RS decoder, which detects and corrects errors introduced during transmission or storage. RS codes have been widely used, especially as maximum distance separable (MDS) codes.17,19−24
An RS code is defined as an (n, k) RS code over GF(q^m), meaning that the encoder encodes k data symbols of m bits each into an n-symbol codeword; in this case, n − k parity symbols of m bits each are added. The MDS property implies that, for given values of n and k, RS codes have the largest possible minimum distance d_min. RS codes are a subcategory of linear systematic block codes and operate on symbols of m bits (m a natural number, m ≥ 2). For any given value of m, an RS codeword contains 2^m − 1 symbols of m bits each; for instance, if m = 8, each symbol has 8 bits, each codeword has 2^8 − 1 = 255 symbols, and the arithmetic is carried out over GF(2^8). Some prominent properties of RS codes are as follows.
For an RS codeword of length n = q^m − 1 constructed from k data symbols (each of m bits), the minimum distance of the RS code is d_min = n − k + 1. The code rate of an RS code is the ratio of the number of data symbols to the number of codeword symbols, k/n, and the error-correcting capability is t = ⌊(n − k)/2⌋ symbols.
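These relations can be checked numerically; the sketch below (a hypothetical helper of ours) reproduces them for the widely used m = 8 symbol size, e.g. the classic (255, 223) code.

```python
def rs_params(m: int, k: int):
    """n, d_min, code rate, and t for an RS code over GF(2^m) with k data symbols."""
    n = 2**m - 1                 # codeword length in m-bit symbols
    d_min = n - k + 1            # MDS property: largest possible minimum distance
    t = (n - k) // 2             # symbol-error-correcting capability
    return n, d_min, k / n, t

n, d_min, rate, t = rs_params(8, 223)
print(n, d_min, t)               # 255 33 16
```

Note that t counts correctable symbols, not bits: a single corrupted symbol may carry up to m bit errors, which is why RS codes handle bursty channels well.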
11.4.5. Turbo Codes14
Shannon’s noisy channel coding theorem18 established that it is possible to communicate information over a noisy channel with an arbitrarily small probability of error as long as the rate of information transfer is less than the channel capacity C. The channel capacity depends on the signal-to-noise ratio (SNR) and the bandwidth of the channel. Furthermore, Shannon computed a lower bound on the minimum SNR required to support a given code rate with an arbitrarily small probability of error; this bound is known as the Shannon limit. An ECC is considered powerful if it performs at or near the Shannon limit.
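The dependence of capacity on bandwidth and SNR is given by Shannon's formula C = B log2(1 + SNR). The short sketch below (our notation, not from the chapter) evaluates it for a telephone-grade channel, together with the well-known ultimate Eb/N0 limit of ln 2 ≈ −1.59 dB that no code can beat as the rate tends to zero.

```python
import math

def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR) of an AWGN channel, in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3-kHz channel at 30 dB SNR:
snr = 10 ** (30 / 10)                     # 30 dB -> linear SNR of 1000
print(round(capacity_bps(3000, snr)))     # about 29902 bit/s

# Ultimate Shannon limit on Eb/N0 as the code rate tends to zero:
print(10 * math.log10(math.log(2)))       # about -1.59 dB
```
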
The development of turbo codes in 199325 improved code designs to the point where they perform nearly as well as Shannon's channel coding theorem promises. Traditional code designs, such as BCH and Reed–Muller codes, are algebraic constructions whose algebraic or topological structure ensures good distance properties and practical decoding algorithms. Turbo codes instead possess random-like properties, yet retain enough structure for an effective iterative decoding process, as originally envisaged by Shannon. At practical code rates and block lengths larger than 10^4 bits, the iterative decoder of a turbo code can achieve a bit error rate (BER) of 10^−5 at SNRs within 1 dB of the Shannon limit. Turbo codes consist of two or more simple component codes together with an interleaver. Several turbo decoder designs have been reported; for instance, soft-in–soft-out (SISO) decoders for each component code can be applied in an iterative manner.26 In this study, a simple turbo code is applied
