
- Table of Contents
- Objectives
- Embedded Microcomputer Applications
- Microcomputer and Microcontroller Architectures
- Digital Hardware Concepts
- Voltage, Current, and Resistance
- Diodes
- Transistors
- Mechanical Switches
- Transistor Switch ON
- Transistor Switch OFF
- The FET as a Logic Switch
- NMOS Logic
- CMOS Logic
- Mixed MOS
- Real Transistors Don’t Eat Q!
- Logic Symbols
- Tri-State Logic
- Timing Diagrams
- Multiplexed Bus
- Loading and Noise Margin Analysis
- The Design and Development Process
- Chapter One Problems
- Organization: von Neumann vs. Harvard
- Microprocessor/Microcontroller Basics
- Microcontroller CPU, Memory, and I/O
- Design Methodology
- The 8051 Family Microcontroller
- Processor Architecture
- Introduction to the 8051 Architecture
- 8051 Memory Organization
- 8051 CPU Hardware
- Oscillator and Timing Circuitry
- The 8051 Microcontroller Instruction Set Summary
- Direct and Register Addressing
- Indirect Addressing
- Immediate Addressing
- Generic Address Modes and Instruction Formats
- 8051 Address Modes
- The Software Development Cycle
- Software Development Tools
- Hardware Development Tools
- Chapter Two Problems
- Timing Diagram Notation Conventions
- Rise and Fall Times
- Propagation Delays
- Setup and Hold Time
- Tri-State Bus Interfacing
- Pulse Width and Clock Frequency
- Fan-Out and Loading Analysis—DC and AC
- Calculating Wiring Capacitance
- Fan-Out When CMOS Drives LSTTL
- Transmission Line Effects
- Ground Bounce
- Logic Family IC Characteristics and Interfacing
- Interfacing TTL Compatible Signals to 5 Volt CMOS
- Design Example: Noise Margin Analysis Spreadsheet
- Worst-Case Timing Analysis Example
- Chapter Three Review Problems
- Memory Taxonomy
- Secondary Memory
- Sequential Access Memory
- Direct Access Memory
- Read/Write Memories
- Read-Only Memory
- Other Memory Types
- JEDEC Memory Pin-Outs
- Device Programmers
- Memory Organization Considerations
- Parametric Considerations
- Asynchronous vs. Synchronous Memory
- Error Detection and Correction
- Error Sources
- Confidence Checks
- Memory Management
- Cache Memory
- Virtual Memory
- CPU Control Lines for Memory Interfacing
- Chapter Four Problems
- Read and Write Operations
- Address, Data, and Control Buses
- Address Spaces and Decoding
- Address Map
- Chapter Five Problems
- The Central Processing Unit (CPU)
- Memory Selection and Interfacing
- Preliminary Timing Analysis
- External Data Memory Cycles
- External Memory Data Memory Read
- External Data Memory Write
- Design Problem 1
- Design Problem 2
- Design Problem 3
- Completing the Analysis
- Chapter Six Problems
- Introduction to Programmable Logic
- Technologies: Fuse-Link, EPROM, EEPROM, and RAM Storage
- PROM as PLD
- Programmable Logic Arrays
- PAL-Style PLDs
- Design Examples
- PLD Development Tools
- Simple I/O Decoding and Interfacing Using PLDs
- IC Design Using PCs
- Chapter Seven Problems
- Direct CPU I/O Interfacing
- Port I/O for the 8051 Family
- Output Current Limitations
- Simple Input/Output Devices
- Matrix Keyboard Input
- Program-Controlled I/O Bus Interfacing
- Real-Time Processing
- Direct Memory Access (DMA)
- Burst vs. Single Cycle DMA
- Cycle Stealing
- Elementary I/O Devices and Applications
- Timing and Level Conversion Considerations
- Level Conversion
- Power Relays
- Chapter Eight Problems
- Interrupt Cycles
- Software Interrupts
- Hardware Interrupts
- Interrupt Driven Program Elements
- Critical Code Segments
- Semaphores
- Interrupt Processing Options
- Level and Edge Triggered Interrupts
- Vectored Interrupts
- Non-Vectored Interrupts
- Serial Interrupt Prioritization
- Parallel Interrupt Prioritization
- Construction Methods
- Power and Ground Planes
- Ground Problems
- Electromagnetic Compatibility
- Electrostatic Discharge Effects
- Fault Tolerance
- Hardware Development Tools
- Instrumentation Issues
- Software Development Tools
- Other Specialized Design Considerations
- Thermal Analysis and Design
- Battery Powered System Design Considerations
- Processor Performance Metrics
- Benchmarks
- Device Selection Process
- Analog Signal Conversion
- Special Proprietary Synchronous Serial Interfaces
- Unconventional Use of DRAM for Low Cost Data Storage
- Digital Signal Processing / Digital Audio Recording
- Detailed Checklist
- 1. Define Power Supply Requirements
- 2. Verify Voltage Level Compatibility
- 3. Check DC Fan-Out: Output Current Drive vs. Loading
- 4. AC (Capacitive) Output Drive vs. Capacitive Load and De-rating
- 5. Verify Worst Case Timing Conditions
- 6. Determine if Transmission Line Termination is Required
- 7. Clock Distribution
- 8. Power and Ground Distribution
- 9. Asynchronous Inputs
- 10. Guarantee Power-On Reset State
- 11. Programmable Logic Devices
- 12. Deactivate Interrupt and Other Requests on Power-Up
- 13. Electromagnetic Compatibility Issues
- 14. Manufacturing and Test Issues
- Books
- Web and FTP Sites
- Periodicals: Subscription
- Periodicals: Advertiser Supported Trade Magazines
- Index
An example is when a 16-bit CPU’s program is burned into two 8-bit memories, one containing the lower byte and one containing the upper byte of each instruction. When the bytes in the block of memory are summed, the answer is the same even if the two devices are swapped! Thus, a serious and common error would not be discovered by a simple checksum.
The CRC (cyclic redundancy code) is used to detect changes within a block of data, including changes in the order of the data. The CRC is based on a polynomial and is calculated using shifts and XOR (exclusive OR) logic to generate a number that depends on both the data values and their order. The detailed operation of a CRC is beyond the scope of this book, but it is based on the same polynomials used for generating pseudo-random numbers. CRCs are commonly used for checking blocks of data on magnetic storage devices and communication links.
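The short C sketch below illustrates the contrast described above: a simple additive checksum gives the same result when the low and high bytes of each word are swapped, while a shift-and-XOR CRC does not. The CRC-16-CCITT polynomial (0x1021) and the sample byte values are arbitrary choices for illustration, not taken from the book.

```c
#include <stdio.h>
#include <stdint.h>

/* Simple additive checksum: insensitive to byte order. */
static uint16_t checksum(const uint8_t *data, int len)
{
    uint16_t sum = 0;
    for (int i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

/* Bitwise CRC-16-CCITT using shifts and XOR: sensitive to byte order. */
static uint16_t crc16(const uint8_t *data, int len)
{
    uint16_t crc = 0xFFFF;
    for (int i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)data[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    /* The same bytes in two orders, as if the low-byte and high-byte
       program memories had been swapped. */
    uint8_t correct[] = { 0x12, 0x34, 0x56, 0x78 };
    uint8_t swapped[] = { 0x34, 0x12, 0x78, 0x56 };

    printf("checksum: %04X vs %04X (identical)\n",
           checksum(correct, 4), checksum(swapped, 4));
    printf("crc16:    %04X vs %04X (different)\n",
           crc16(correct, 4), crc16(swapped, 4));
    return 0;
}
```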
Memory Management
To understand what memory management is, it helps to consider the motivation behind its use. There are two kinds of memory management: memory address relocation and memory performance enhancement. The two are often used in conjunction, as is commonly done in personal computers. This section covers the performance enhancement aspects, while the address relocation issues are covered in Chapter Six.
Different storage technologies vary in performance and cost over many orders of magnitude. For example, semiconductor memory devices have access times many orders of magnitude faster than those of magnetic disks (nanoseconds vs. milliseconds), while magnetic disks cost several orders of magnitude less than semiconductor memory on a per-bit basis. This disparity in price and performance has led to the idea of using a small, fast memory to store the most frequently accessed subset of the data held in a larger, slower memory. This technique of buffering, often referred to as caching memory contents in a fast memory, is essentially the same whether it is applied to the memory attached to a CPU or to magnetic or optical storage. In fact, there may be several layers of caching in a given system, starting with the smallest, fastest memory closest to the CPU, followed by progressively slower but larger memories.
Memory price per byte is roughly inversely proportional to access time (the faster the memory, the higher the cost per byte), as indicated in the table below:
| Memory type | Relative size (bytes) | Relative access time (sec) | Relative cost/byte |
|---|---|---|---|
| Tape | 10¹⁰ | 10 | 1 |
| Disk | 10⁹ | 10⁻³ | 10 |
| DRAM | 10⁶ | 10⁻⁷ | 10² |
| SRAM | 10⁵ | 10⁻⁸ | 10³ |
Cache Memory
When a high-speed memory is used to give the CPU rapid access to the most frequently used portions of main memory, it is referred to as a CPU cache memory. Likewise, when main memory is used to provide rapid access to data stored on a disk, it is referred to as a disk cache.
The objective of these approaches is to maximize the likelihood that data will be found in the small, fast memory most of the time, reducing the average effective access time and minimizing the number of accesses to the large, slow memory. Fast SRAM is used as a temporary buffer (memory cache) between main memory and the CPU, and main memory DRAM is used to buffer disk data (disk cache). Most hard disk drives also contain some fast semiconductor RAM of their own to cache data as it is transferred to and from the disk.
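As a rough illustration of the benefit, the average effective access time of a two-level memory is the hit rate times the fast-memory access time plus the miss rate times the slow-memory access time. The sketch below works one example; the 10 ns cache, 100 ns DRAM, and 95% hit rate figures are assumed values for illustration, not specifications from the book.

```c
#include <stdio.h>

int main(void)
{
    double t_cache  = 10e-9;    /* assumed SRAM cache access time, 10 ns   */
    double t_main   = 100e-9;   /* assumed DRAM main memory access, 100 ns */
    double hit_rate = 0.95;     /* assumed fraction of accesses that hit   */

    /* t_avg = h * t_fast + (1 - h) * t_slow */
    double t_avg = hit_rate * t_cache + (1.0 - hit_rate) * t_main;

    printf("effective access time = %.1f ns\n", t_avg * 1e9);  /* 14.5 ns */
    return 0;
}
```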
Virtual Memory
Disk storage can be used to emulate a larger primary memory than is actually available. Demand paged virtual memory provides an apparently large primary memory by swapping pages of data between real primary memory and disk.
This is a combination of hardware, which translates logical (virtual) addresses and detects attempts to access pages that are not in primary memory, and operating system software, which moves pages as needed and determines where and when pages should be kept.
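The sketch below shows the translation step in software terms: a logical (virtual) address splits into a page number and an offset, a page table maps the page to a physical frame if it is present, and a missing page causes a fault that the operating system would service by loading the page from disk. The 4 KB page size, 16-entry table, and example mappings are hypothetical values chosen for illustration; in a real system this translation is performed by MMU hardware.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed 4 KB page size          */
#define NUM_PAGES 16u     /* assumed tiny 16-entry page table */

typedef struct {
    int      present;     /* 1 if the page is resident in primary memory */
    uint32_t frame;       /* physical frame number if present            */
} pte_t;

/* Hypothetical page table: virtual page 0 is in frame 5, page 1 is on disk. */
static pte_t page_table[NUM_PAGES] = {
    [0] = { 1, 5 },
    [1] = { 0, 0 },
};

int main(void)
{
    uint32_t vaddr  = 0x0123;             /* example virtual address */
    uint32_t page   = vaddr / PAGE_SIZE;  /* which page              */
    uint32_t offset = vaddr % PAGE_SIZE;  /* where within the page   */

    if (page < NUM_PAGES && page_table[page].present) {
        uint32_t paddr = page_table[page].frame * PAGE_SIZE + offset;
        printf("virtual 0x%04X -> physical 0x%05X\n",
               (unsigned)vaddr, (unsigned)paddr);
    } else {
        printf("page fault: OS must load page %u from disk\n", (unsigned)page);
    }
    return 0;
}
```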
When address relocation mechanisms are combined with disk caching and