
Architecture (computer science)



Architecture (computer science), a general term referring to the structure of all or part of a computer system. The term also covers the design of system software, such as the operating system (the program that controls the computer), and the combination of hardware and basic software that links the machines on a computer network. Computer architecture refers both to an entire structure and to the details needed to make it functional. Thus, computer architecture covers computer systems, microprocessors, circuits, and system programs. Typically the term does not refer to application programs, such as spreadsheets or word processors, which are required to perform a task but not to make the system run.



In designing a computer system, architects consider five major elements that make up the system's hardware: the arithmetic/logic unit, control unit, memory, input, and output. The arithmetic/logic unit performs arithmetic and compares numerical values. The control unit directs the operation of the computer by taking the user instructions and transforming them into electrical signals that the computer's circuitry can understand. The combination of the arithmetic/logic unit and the control unit is called the central processing unit (CPU). The memory stores instructions and data. The input and output sections allow the computer to receive and send data, respectively.

Different hardware architectures are required because of the specialized needs of systems and users. One user may need a system to display graphics extremely fast, while another system may have to be optimized for searching a database or conserving battery power in a laptop computer.

In addition to the hardware design, the architects must consider what software programs will operate the system. Software, such as programming languages and operating systems, makes the details of the hardware architecture invisible to the user. For example, computers that use the C programming language or a UNIX operating system may appear the same from the user's viewpoint, although they use different hardware architectures.



When a computer carries out an instruction, it proceeds through five steps. First, the control unit retrieves the instruction from memory—for example, an instruction to add two numbers. Second, the control unit decodes the instructions into electronic signals that control the computer. Third, the control unit fetches the data (the two numbers). Fourth, the arithmetic/logic unit performs the specific operation (the addition of the two numbers). Fifth, the control unit saves the result (the sum of the two numbers).
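The five steps above can be sketched as a toy fetch-decode-execute loop. This is a conceptual model only; the instruction format ("ADD", with three memory addresses) is invented for illustration and does not correspond to any real instruction set.

```python
# Toy control-unit cycle: fetch, decode, fetch operands, execute, store.
# The instruction format ("ADD", src1, src2, dest) is purely illustrative.
memory = {0: ("ADD", 10, 11, 12),   # instruction: add M[10] and M[11], store in M[12]
          10: 7, 11: 35, 12: None}  # data

def run(pc):
    instr = memory[pc]               # 1. control unit retrieves the instruction
    op, a, b, dest = instr           # 2. control unit decodes it
    x, y = memory[a], memory[b]      # 3. control unit fetches the data
    if op == "ADD":
        result = x + y               # 4. arithmetic/logic unit performs the operation
    memory[dest] = result            # 5. control unit saves the result
    return result

print(run(0))  # 42
```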

Early computers used only simple instructions because the cost of electronics capable of carrying out complex instructions was high. As this cost decreased in the 1960s, more complicated instructions became possible. Complex instructions (single instructions that specify multiple operations) can save time because they make it unnecessary for the computer to retrieve additional instructions. For example, if seven operations are combined in one instruction, then six of the steps that fetch instructions are eliminated and the computer spends less time processing that operation. Computers that combine several instructions into a single operation are called complex instruction set computers (CISC).

Most programs, however, consist mostly of simple instructions and use complex instructions only rarely. When these simple instructions run on a CISC architecture they slow down processing, because every instruction, whether simple or complex, takes longer to decode in a CISC design. An alternative strategy is to return to designs that use only simple, single-operation instruction sets and to make the most frequently used operations faster in order to increase overall performance. Computers that follow this design are called reduced instruction set computers (RISC).

RISC designs are especially fast at the numerical computations required in science, graphics, and engineering applications. CISC designs are commonly used for nonnumerical computations because they provide special instruction sets for handling character data, such as text in a word processing program. Specialized CISC architectures, called digital signal processors, exist to accelerate processing of digitized audio and video signals.



The CPU of a computer is connected to memory and to the outside world by means of either an open or a closed architecture. An open architecture can be expanded after the system has been built, usually by adding extra circuitry, such as a new microprocessor computer chip connected to the main system. The specifications of the circuitry are made public, allowing other companies to manufacture these expansion products.

Closed architectures are usually employed in specialized computers that will not require expansion—for example, computers that control microwave ovens. Some computer manufacturers have used closed architectures so that their customers can purchase expansion circuitry only from them. This allows the manufacturer to charge more and reduces the options for the consumer.



Computers communicate with other computers via networks. The simplest network is a direct connection between two computers. However, computers can also be connected over large networks, allowing users to exchange data, communicate via electronic mail, and share resources such as printers.

Computers can be connected in several ways. In a ring configuration, data are transmitted along the ring and each computer in the ring examines this data to determine if it is the intended recipient. If the data are not intended for a particular computer, the computer passes the data to the next computer in the ring. This process is repeated until the data arrive at their intended destination. A ring network allows multiple messages to be carried simultaneously, but since each message is checked by each computer, data transmission is slowed.
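The ring's pass-it-along behavior can be sketched as follows (the node names and hop counter are illustrative, not part of any real protocol):

```python
# Each node examines the message; if it is not the addressee, it passes
# the message to the next computer in the ring.
def ring_deliver(ring, sender, recipient, data):
    """Pass `data` around `ring` (a list of node names) until it reaches
    `recipient`. Returns the number of hops the message made."""
    i = ring.index(sender)
    hops = 0
    while True:
        i = (i + 1) % len(ring)    # hand the data to the next computer
        hops += 1
        if ring[i] == recipient:   # each node checks whether it is the destination
            return hops

ring = ["A", "B", "C", "D"]
print(ring_deliver(ring, "A", "D", "hello"))  # 3 hops: A -> B -> C -> D
```

Note that a message between neighbors in the "wrong" direction must traverse nearly the whole ring, which is one reason each message being checked by each computer slows transmission.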

In a bus configuration, computers are connected through a single set of wires, called a bus. One computer sends data to another by broadcasting the address of the receiver and the data over the bus. All the computers in the network look at the address simultaneously, and the intended recipient accepts the data. A bus network, unlike a ring network, allows data to be sent directly from one computer to another. However, only one computer at a time can transmit data. The others must wait to send their messages.

In a star configuration, computers are linked to a central computer called a hub. A computer sends the address of the receiver and the data to the hub, which then links the sending and receiving computers directly. A star network allows multiple messages to be sent simultaneously, but it is more costly because it uses an additional computer, the hub, to direct the data.



One problem in computer architecture is caused by the difference between the speed of the CPU and the speed at which memory supplies instructions and data. Modern CPUs can process instructions in 3 nanoseconds (3 billionths of a second). A typical memory access, however, takes 100 nanoseconds and each instruction may require multiple accesses. To compensate for this disparity, new computer chips have been designed that contain small memories, called caches, located near the CPU. Because of their proximity to the CPU and their small size, caches can supply instructions and data faster than normal memory. Cache memory stores the most frequently used instructions and data and can greatly increase efficiency.
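The benefit of a cache can be quantified with the standard effective-access-time calculation. The 3-nanosecond and 100-nanosecond figures come from the text; the 95 percent hit rate is an assumed value for illustration.

```python
# Effective access time = hit_rate * cache_time + (1 - hit_rate) * memory_time
cache_time = 3      # ns, cache access (comparable to the CPU's instruction time)
memory_time = 100   # ns, main-memory access
hit_rate = 0.95     # assumption: 95% of accesses are served by the cache

effective = hit_rate * cache_time + (1 - hit_rate) * memory_time
print(effective)    # 7.85 ns, far closer to cache speed than to memory speed
```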

Although a larger cache memory can hold more data, it also becomes slower. To compensate, computer architects employ designs with multiple caches. The design places the smallest and fastest cache nearest the CPU and locates a second larger and slower cache farther away. This arrangement allows the CPU to operate on the most frequently accessed instructions and data at top speed and to slow down only slightly when accessing the secondary cache. Using separate caches for instructions and data also allows the CPU to retrieve an instruction and data simultaneously.

Another strategy to increase speed and efficiency is the use of multiple arithmetic/logic units for simultaneous operations, called superscalar execution. In this design, instructions are acquired in groups. The control unit examines each group to see if it contains instructions that can be performed together. Some designs execute as many as six operations simultaneously. It is rare, however, to have this many instructions run together, so on average the CPU does not achieve a six-fold increase in performance.

Multiple computers are sometimes combined into single systems called parallel processors. When a machine has more than one thousand arithmetic/logic units, it is said to be massively parallel. Such machines are used primarily for numerically intensive scientific and engineering computation. Parallel machines containing as many as sixteen thousand computers have been constructed.

Computer Science



Computer Science, study of the theory, experimentation, and engineering that form the basis for the design and use of computers—devices that automatically process information. Computer science traces its roots to work done by English mathematician Charles Babbage, who first proposed a programmable mechanical calculator in 1837. Until the advent of electronic digital computers in the 1940s, computer science was not generally distinguished as being separate from mathematics and engineering. Since then it has sprouted numerous branches of research that are unique to the discipline.



Early work in the field of computer science during the late 1940s and early 1950s focused on automating the process of making calculations for use in science and engineering. Scientists and engineers developed theoretical models of computation that enabled them to analyze how efficient different approaches were in performing various calculations. Computer science overlapped considerably during this time with the branch of mathematics known as numerical analysis, which examines the accuracy and precision of calculations.

As the use of computers expanded between the 1950s and the 1970s, the focus of computer science broadened to include simplifying the use of computers through programming languages (artificial languages used to program computers) and operating systems (computer programs that provide a useful interface between a computer and a user). During this time, computer scientists were also experimenting with new applications and computer designs, creating the first computer networks, and exploring relationships between computation and thought.

In the 1970s, computer chip manufacturers began to mass produce microprocessors—the electronic circuitry that serves as the main information processing center in a computer. This new technology revolutionized the computer industry by dramatically reducing the cost of building computers and greatly increasing their processing speed. The microprocessor made possible the advent of the personal computer, which resulted in an explosion in the use of computer applications. Between the early 1970s and 1980s, computer science rapidly expanded in an effort to develop new applications for personal computers and to drive the technological advances in the computing industry. Much of the earlier research that had been done began to reach the public through personal computers, which derived most of their early software from existing concepts and systems.

Computer scientists continue to expand the frontiers of computer and information systems by pioneering the designs of more complex, reliable, and powerful computers; enabling networks of computers to efficiently exchange vast amounts of information; and seeking ways to make computers behave intelligently. As computers become an increasingly integral part of modern society, computer scientists strive to solve new problems and invent better methods of solving current problems.

The goals of computer science range from finding ways to better educate people in the use of existing computers to highly speculative research into technologies and approaches that may not be viable for decades. Underlying all of these specific goals is the desire to better the human condition today and in the future through the improved use of information.



Computer science is a combination of theory, engineering, and experimentation. In some cases, a computer scientist develops a theory, then engineers a combination of computer hardware and software based on that theory, and experimentally tests it. An example of such a theory-driven approach is the development of new software engineering tools that are then evaluated in actual use. In other cases, experimentation may result in new theory, such as the discovery that an artificial neural network exhibits behavior similar to neurons in the brain, leading to a new theory in neurophysiology.

It might seem that the predictable nature of computers makes experimentation unnecessary because the outcome of experiments should be known in advance. But when computer systems and their interactions with the natural world become sufficiently complex, unforeseen behaviors can result. Experimentation and the traditional scientific method are thus key parts of computer science.



Computer science can be divided into four main fields: software development, computer architecture (hardware), human-computer interfacing (the design of the most efficient ways for humans to use computers), and artificial intelligence (the attempt to make computers behave intelligently). Software development is concerned with creating computer programs that perform efficiently. Computer architecture is concerned with developing optimal hardware for specific computational needs. The areas of artificial intelligence (AI) and human-computer interfacing often involve the development of both software and hardware to solve specific problems.


Software Development

In developing computer software, computer scientists and engineers study various areas and techniques of software design, such as the best types of programming languages and algorithms (see below) to use in specific programs, how to efficiently store and retrieve information, and the computational limits of certain software-computer combinations. Software designers must consider many factors when developing a program. Often, program performance in one area must be sacrificed for the sake of the general performance of the software. For instance, since computers have only a limited amount of memory, software designers must limit the number of features they include in a program so that it will not require more memory than the system it is designed for can supply.

Software engineering is an area of software development in which computer scientists and engineers study methods and tools that facilitate the efficient development of correct, reliable, and robust computer programs. Research in this branch of computer science considers all the phases of the software life cycle, which begins with a formal problem specification, and progresses to the design of a solution, its implementation as a program, testing of the program, and program maintenance. Software engineers develop software tools and collections of tools called programming environments to improve the development process. For example, tools can help to manage the many components of a large program that is being written by a team of programmers.

Algorithms and data structures are the building blocks of computer programs. An algorithm is a precise step-by-step procedure for solving a problem within a finite time and using a finite amount of memory. Common algorithms include searching a collection of data, sorting data, and numerical operations such as matrix multiplication. Data structures are patterns for organizing information, and often represent relationships between data values. Some common data structures are called lists, arrays, records, stacks, queues, and trees.
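As a concrete pairing of an algorithm with a data structure, binary search locates a value in a sorted array in finite time by halving the search interval at each step:

```python
def binary_search(sorted_list, target):
    """Return the index of `target` in `sorted_list`, or -1 if absent.
    Terminates in finite time: the interval halves on every iteration."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1       # target, if present, lies in the upper half
        else:
            hi = mid - 1       # target, if present, lies in the lower half
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))  # 4
print(binary_search(data, 4))   # -1
```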

Computer scientists continue to develop new algorithms and data structures to solve new problems and improve the efficiency of existing programs. One area of theoretical research is called algorithmic complexity. Computer scientists in this field seek to develop techniques for determining the inherent efficiency of algorithms with respect to one another. Another area of theoretical research called computability theory seeks to identify the inherent limits of computation.

Software engineers use programming languages to communicate algorithms to a computer. Natural languages such as English are ambiguous—meaning that their grammatical structure and vocabulary can be interpreted in multiple ways—so they are not suited for programming. Instead, simple and unambiguous artificial languages are used. Computer scientists study ways of making programming languages more expressive, thereby simplifying programming and reducing errors. A program written in a programming language must be translated into machine language (the actual instructions that the computer follows). Computer scientists also develop better translation algorithms that produce more efficient machine language programs.

Databases and information retrieval are related fields of research. A database is an organized collection of information stored in a computer, such as a company’s customer account data. Computer scientists attempt to make it easier for users to access databases, prevent access by unauthorized users, and improve access speed. They are also interested in developing techniques to compress the data, so that more can be stored in the same amount of memory. Databases are sometimes distributed over multiple computers that update the data simultaneously, which can lead to inconsistency in the stored information. To address this problem, computer scientists also study ways of preventing inconsistency without reducing access speed.

Information retrieval is concerned with locating data in collections that are not clearly organized, such as a file of newspaper articles. Computer scientists develop algorithms for creating indexes of the data. Once the information is indexed, techniques developed for databases can be used to organize it. Data mining is a closely related field in which a large body of information is analyzed to identify patterns. For example, mining the sales records from a grocery store could identify shopping patterns to help guide the store in stocking its shelves more effectively.
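A minimal sketch of the grocery-store idea: count how often pairs of items appear together in customers' baskets. The transactions below are invented sample data.

```python
from collections import Counter
from itertools import combinations

# Invented sample transactions; each set is one customer's basket.
baskets = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1   # tally every pair bought together

# The most frequent pair suggests items to shelve near each other.
print(pair_counts.most_common(1))  # [(('bread', 'milk'), 3)]
```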

Operating systems are programs that control the overall functioning of a computer. They provide the user interface, place programs into the computer’s memory and cause it to execute them, control the computer’s input and output devices, manage the computer’s resources such as its disk space, protect the computer from unauthorized use, and keep stored data secure. Computer scientists are interested in making operating systems easier to use, more secure, and more efficient by developing new user interface designs, designing new mechanisms that allow data to be shared while preventing access to sensitive data, and developing algorithms that make more effective use of the computer’s time and memory.

The study of numerical computation involves the development of algorithms for calculations, often on large sets of data or with high precision. Because many of these computations may take days or months to execute, computer scientists are interested in making the calculations as efficient as possible. They also explore ways to increase the numerical precision of computations, which can have such effects as improving the accuracy of a weather forecast. The goals of improving efficiency and precision often conflict, with greater efficiency being obtained at the cost of precision and vice versa.

Symbolic computation involves programs that manipulate nonnumeric symbols, such as characters, words, drawings, algebraic expressions, encrypted data (data coded to prevent unauthorized access), and the parts of data structures that represent relationships between values. One unifying property of symbolic programs is that they often lack the regular patterns of processing found in many numerical computations. Such irregularities present computer scientists with special challenges in creating theoretical models of a program’s efficiency, in translating it into an efficient machine language program, and in specifying and testing its correct behavior.


Computer Architecture

Computer architecture is the design and analysis of new computer systems. Computer architects study ways of improving computers by increasing their speed, storage capacity, and reliability, and by reducing their cost and power consumption. Computer architects develop both software and hardware models to analyze the performance of existing and proposed computer designs, then use this analysis to guide development of new computers. They are often involved with the engineering of a new computer because the accuracy of their models depends on the design of the computer’s circuitry. Many computer architects are interested in developing computers that are specialized for particular applications such as image processing, signal processing, or the control of mechanical systems. The optimization of computer architecture to specific tasks often yields higher performance, lower cost, or both.


Artificial Intelligence

Artificial intelligence (AI) research seeks to enable computers and machines to mimic human intelligence and sensory processing ability, and models human behavior with computers to improve our understanding of intelligence. The many branches of AI research include machine learning, inference, cognition, knowledge representation, problem solving, case-based reasoning, natural language understanding, speech recognition, computer vision, and artificial neural networks.

A key technique developed in the study of artificial intelligence is to specify a problem as a set of states, some of which are solutions, and then search for solution states. For example, in chess, each move creates a new state. If a computer searched the states resulting from all possible sequences of moves, it could identify those that win the game. However, the number of states associated with many problems (such as the possible number of moves needed to win a chess game) is so vast that exhaustively searching them is impractical. The search process can be improved through the use of heuristics—rules that are specific to a given problem and can therefore help guide the search. For example, a chess heuristic might indicate that when a move results in checkmate, there is no point in examining alternate moves.
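The state-search idea can be sketched on a toy problem: states are integers, the moves are "add 1" and "double", and the heuristic (distance to the goal) steers the search toward solution states. The problem, moves, and heuristic are all invented for illustration; note that this greedy strategy finds a solution quickly but not necessarily the shortest one.

```python
import heapq

def best_first(start, goal):
    """Greedy best-first search: always expand the state whose heuristic
    (distance to the goal) is smallest. Returns a path of states."""
    frontier = [(abs(goal - start), start, [start])]  # (heuristic, state, path)
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)      # most promising state first
        if state == goal:
            return path                               # reached a solution state
        if state in seen or state > goal:
            continue                                  # prune revisits and overshoots
        seen.add(state)
        for nxt in (state + 1, state * 2):            # each move creates a new state
            heapq.heappush(frontier, (abs(goal - nxt), nxt, path + [nxt]))
    return None

print(best_first(1, 10))  # a path from 1 to 10, e.g. [1, 2, 4, 8, 9, 10]
```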



Another area of computer science that has found wide practical use is robotics—the design and development of computer controlled mechanical devices. Robots range in complexity from toys to automated factory assembly lines, and relieve humans from tedious, repetitive, or dangerous tasks. Robots are also employed where requirements of speed, precision, consistency, or cleanliness exceed what humans can accomplish. Roboticists—scientists involved in the field of robotics—study the many aspects of controlling robots. These aspects include modeling the robot’s physical properties, modeling its environment, planning its actions, directing its mechanisms efficiently, using sensors to provide feedback to the controlling program, and ensuring the safety of its behavior. They also study ways of simplifying the creation of control programs. One area of research seeks to provide robots with more of the dexterity and adaptability of humans, and is closely associated with AI.


Human-Computer Interfacing

Human-computer interfaces provide the means for people to use computers. An example of a human-computer interface is the keyboard, which lets humans enter commands into a computer and enter text into a specific application. The diversity of research into human-computer interfacing corresponds to the diversity of computer users and applications. However, a unifying theme is the development of better interfaces and experimental evaluation of their effectiveness. Examples include improving computer access for people with disabilities, simplifying program use, developing three-dimensional input and output devices for virtual reality, improving handwriting and speech recognition, and developing heads-up displays for aircraft instruments in which critical information such as speed, altitude, and heading are displayed on a screen in front of the pilot’s window. One area of research, called visualization, is concerned with graphically presenting large amounts of data so that people can comprehend its key properties.



Because computer science grew out of mathematics and electrical engineering, it retains many close connections to those disciplines. Theoretical computer science draws many of its approaches from mathematics and logic. Research in numerical computation overlaps with mathematics research in numerical analysis. Computer architects work closely with the electrical engineers who design the circuits of a computer.

Beyond these historical connections, there are strong ties between AI research and psychology, neurophysiology, and linguistics. Human-computer interface research also has connections with psychology. Roboticists work with both mechanical engineers and physiologists in designing new robots.

Computer science also has indirect relationships with virtually all disciplines that use computers. Applications developed in other fields often involve collaboration with computer scientists, who contribute their knowledge of algorithms, data structures, software engineering, and existing technology. In return, the computer scientists have the opportunity to observe novel applications of computers, from which they gain a deeper insight into their use. These relationships make computer science a highly interdisciplinary field of study.

Parallel Processing



Parallel Processing, computer technique in which multiple operations are carried out simultaneously. Parallelism reduces computational time. For this reason, it is used for many computationally intensive applications such as predicting economic trends or generating visual special effects for feature films.

Two common ways of accomplishing parallel processing are multiprocessing and instruction-level parallelism. Multiprocessing links several processors (computers or microprocessors, the electronic circuits that provide the computational power and control of computers) together to solve a single problem. Instruction-level parallelism uses a single computer processor that executes multiple instructions simultaneously.

If a problem is divided evenly into ten independent parts that are solved simultaneously on ten computers, then the solution requires one tenth of the time it would take on a single nonparallel computer where each part is solved in sequential order. Many large problems are easily divisible for parallel processing; however, some problems are difficult to divide because their parts are interdependent, requiring the results from another part of the problem before they can be solved.

Portions of a problem that cannot be calculated in parallel are called serial. These serial portions determine the computation time for a problem. For example, suppose a problem has nine million computations that can be done in parallel and one million computations that must be done serially. Theoretically, nine million computers could perform nine-tenths of the total computation simultaneously, leaving one-tenth of the total problem to be computed serially. Therefore, the total execution time is only one-tenth of what it would be on a single nonparallel computer, despite the additional nine million processors.
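This limit on speedup is known as Amdahl's law; the text's example works out as follows. (The 10 percent serial fraction comes from the one-million-of-ten-million split above.)

```python
# Amdahl's law: speedup when a fraction of the work cannot be parallelized.
def speedup(serial_fraction, processors):
    parallel_fraction = 1 - serial_fraction
    return 1 / (serial_fraction + parallel_fraction / processors)

# 9 million parallel + 1 million serial computations, 9 million processors:
print(round(speedup(0.1, 9_000_000), 4))  # 10.0, capped near tenfold
```

Even with nine million processors the program runs only about ten times faster than on one processor, because the serial tenth of the work dominates.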



In 1966 American electrical engineer Michael Flynn distinguished four classes of processor architecture (the design of how processors manipulate data and instructions). Data can be sent either to a computer's processor one at a time, in a single data stream, or several pieces of data can be sent at the same time, in multiple data streams. Similarly, instructions can be carried out either one at a time, in a single instruction stream, or several instructions can be carried out simultaneously, in multiple instruction streams.

Serial computers have a Single Instruction stream, Single Data stream (SISD) architecture. One piece of data is sent to one processor. For example, if 100 numbers had to be multiplied by the number 3, each number would be sent to the processor, multiplied, and the result stored; then the next number would be sent and calculated, until all 100 results were calculated. Applications that are suited for SISD architectures include those that require complex interdependent decisions, such as word processing.

A Multiple Instruction stream, Single Data stream (MISD) processor replicates a stream of data and sends it to multiple processors, each of which then executes a separate program. For example, the contents of a database could be sent simultaneously to several processors, each of which would search for a different value. Problems well-suited to MISD parallel processing include computer vision systems that extract multiple features, such as vegetation, geological features, or manufactured objects, from a single satellite image.

A Single Instruction stream, Multiple Data stream (SIMD) architecture has multiple processing elements that carry out the same instruction on separate data. For example, a SIMD machine with 100 processing elements can simultaneously multiply 100 numbers each by the number 3. SIMD processors are programmed much like SISD processors, but their operations occur on arrays of data instead of individual values. SIMD processors are therefore also known as array processors. Examples of applications that use SIMD architecture are image-enhancement processing and radar processing for air-traffic control.
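The SISD/SIMD contrast can be mimicked in ordinary Python. This is a conceptual sketch only: real SIMD hardware applies the instruction to all elements in a single step, whereas here the array-at-once view is merely expressed by `map`.

```python
data = list(range(100))

# SISD style: one instruction applied to one datum at a time.
sisd_result = []
for x in data:
    sisd_result.append(x * 3)

# SIMD style: one instruction applied across the whole array of data.
# (Conceptually a single step on an array processor.)
simd_result = list(map(lambda x: x * 3, data))

print(sisd_result == simd_result)  # True: same results, different execution model
```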

A Multiple Instruction stream, Multiple Data stream (MIMD) processor has separate instructions for each stream of data. This architecture is the most flexible, but it is also the most difficult to program because it requires additional instructions to coordinate the actions of the processors. It also can simulate any of the other architectures but with less efficiency. MIMD designs are used on complex simulations, such as projecting city growth and development patterns, and in some artificial-intelligence programs.


Parallel Communication

Another factor in parallel-processing architecture is how processors communicate with each other. One approach is to let processors share a single memory and communicate by reading each other's data. This is called shared memory. In this architecture, all the data can be accessed by any processor, but care must be taken to prevent the linked processors from inadvertently overwriting each other's results.
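The overwriting hazard, and the care needed to avoid it, can be shown with two workers updating one shared memory location. This sketch uses Python threads as stand-ins for processors; the lock provides the protection the text describes.

```python
import threading

# Two "processors" (threads) share one memory location, `total`.
# Without the lock, their read-modify-write sequences could interleave
# and overwrite each other's results; the lock serializes access.
total = 0
lock = threading.Lock()

def worker(n):
    global total
    for _ in range(n):
        with lock:          # take exclusive access to the shared memory
            total += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 20000: with the lock, no updates are lost
```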

An alternative method is to connect the processors and allow them to send messages to each other. This technique is known as message passing or distributed memory. Data are divided and stored in the memories of different processors. This makes it difficult to share information because the processors are not connected to the same memory, but it is also safer because the results cannot be overwritten.

In shared memory systems, as the number of processors increases, access to the single memory becomes difficult, and a bottleneck forms. To address this limitation, and the problem of isolated memory in distributed memory systems, distributed memory processors also can be constructed with circuitry that allows different processors to access each other's memory. This hybrid approach, known as distributed shared memory, eliminates the bottleneck and sharing problems of both architectures.



Parallel processing is more costly than serial computing because multiple processors are expensive and the speedup in computation is rarely proportional to the number of additional processors.

MIMD processors require complex programming to coordinate their actions. Finding MIMD programming errors also is complicated by time-dependent interactions between processors. For example, one processor might require the result from a second processor's memory before that processor has produced the result and put it into its memory. This results in an error that is difficult to identify.
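The time-dependent error described above can be made concrete in a small Python sketch: a consumer must not read a producer's result before it exists. An event object enforces the required ordering; removing the wait would reintroduce exactly this class of bug.

```python
# The ordering hazard made explicit: the consumer needs the producer's
# result, so it must wait until the producer has actually stored it.
import threading

result = {}
ready = threading.Event()

def producer():
    result["value"] = 42        # put the result into "memory"
    ready.set()                 # announce that it is available

def consumer(out):
    ready.wait()                # without this wait, reading result["value"]
    out.append(result["value"]) # too early could fail or use stale data

out = []
threads = [threading.Thread(target=consumer, args=(out,)),
           threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)                      # -> [42]
```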

Programs written for one parallel architecture seldom run efficiently on another. As a result, to use one program on two different parallel processors often involves a costly and time-consuming rewrite of that program.



When a parallel processor performs more than 1000 operations at a time, it is said to be massively parallel. In most cases, problems that are suited to massive parallelism involve large amounts of data, such as weather forecasting, simulating the properties of hypothetical pharmaceuticals, and code breaking. Massively parallel processors today are large and expensive, but technology soon will permit a SIMD processor with 1024 processing elements to reside on a single integrated circuit.

Researchers are finding that the serial portions of some problems can be processed in parallel, but on different architectures. For example, 90 percent of a problem may be suited to SIMD, leaving 10 percent that appears to be serial but merely requires MIMD processing. To accommodate this finding, two approaches are being explored: heterogeneous parallelism combines multiple parallel architectures, and configurable computers can change their architecture to suit each part of the problem.

In 1996 International Business Machines Corporation (IBM) challenged Garry Kasparov, the reigning world chess champion, to a chess match with a supercomputer called Deep Blue. The computer utilized 256 microprocessors in a parallel architecture to compute more than 100 million chess positions per second. Kasparov won the match with three wins, two draws, and one loss. Deep Blue was the first computer to win a game against a world champion with regulation time controls. Some experts predict these types of parallel processing machines will eventually surpass human chess playing ability, and some speculate that massive calculating power will one day substitute for intelligence. Deep Blue serves as a prototype for future computers that will be required to solve complex problems.

World Wide Web



World Wide Web (WWW), computer-based network of information resources that a user can move through by using links from one document to another. The information on the World Wide Web is spread over computers all over the world. The World Wide Web is often referred to simply as “the Web.”

Internet Topology

The Internet and the Web are each a series of interconnected computer networks. Personal computers or workstations are connected to a Local Area Network (LAN) either by a dial-up connection through a modem and standard phone line, or by being directly wired into the LAN. Other modes of data transmission that allow for connection to a network include T-1 connections and dedicated lines. Hubs link computers within a single network, bridges link networks to one another, and routers transmit data through networks and determine the best path of transmission.

The Web has become a very popular resource since it first became possible to view images and other multimedia on the Internet, a worldwide network of computers, in 1993. The Web offers a place where companies, institutions, and individuals can display information about their products, research, or their lives. Anyone with access to a computer connected to the Web can view most of that information. A small percentage of information on the Web is only accessible to subscribers or other authorized users. The Web has become a forum for many groups and a marketplace for many companies. Museums, libraries, government agencies, and schools make the Web a valuable learning and research tool by posting data and research. The Web also carries information in a wide spectrum of formats. Users can read text, view pictures, listen to sounds, and even explore interactive virtual environments on the Web.



Like all computer networks, the Web connects two types of computers, clients and servers, using a standard set of rules for communication between them. The server computers store the information resources that make up the Web, and users access those resources with client computers. A computer-based network may be a public network, such as the worldwide Internet, or a private network, such as a company’s intranet. The Web is part of the Internet. The Internet also encompasses other methods of linking computers, such as Telnet, File Transfer Protocol, and Gopher, but the Web has quickly become the most widely used part of the Internet. It differs from the other parts of the Internet in the rules that computers use to talk to each other and in the accessibility of information other than text. It is much more difficult to view pictures or other multimedia files with methods other than the Web.

Enabling client computers to display Web pages with pictures and other media was made possible by the introduction of a type of software called a browser. Each Web document contains coded information about what is on the page, how the page should look, and to which other sites the document links. The browser on the client’s computer reads this information and uses it to display the page on the client’s screen. Almost every Web page or Web document includes links, called hyperlinks, to other Web sites. Hyperlinks are a defining feature of the Web—they allow users to travel between Web documents without following a specific order or hierarchy.



When users want to access the Web, they use the Web browser on their client computer to connect to a Web server. Client computers connect to the Web in one of two ways. Client computers with dedicated access to the Web connect directly to the Web through a router (a piece of computer hardware that determines the best way to connect client and server computers) or by being part of a larger network with a direct connection to the Web. Client computers with dial-up access to the Web connect to the Web through a modem, a hardware device that translates information from the computer into signals that can travel over telephone lines. Some modems send signals over cable television lines or special high-capacity telephone lines such as Integrated Services Digital Network (ISDN) or Asymmetric Digital Subscriber Line (ADSL) lines. The client computer and the Web server use a set of rules for passing information back and forth. The Web browser knows another set of rules with which it can open and display information that reaches the client computer.

Web servers hold Web documents and the media associated with them. They can be ordinary personal computers, powerful mainframe computers, or anything in between. Any computer that a person uses to access the Web is a client, so a client can be any type of computer. The set of rules that clients and servers use to talk to each other is called a protocol. The Web, like all parts of the Internet, uses the protocol suite called TCP/IP (Transmission Control Protocol/Internet Protocol). However, each part of the Internet—such as the Web, gopher systems, and File Transfer Protocol (FTP) systems—uses a slightly different system to transfer files between clients and servers.
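The client-server exchange described above can be sketched with a toy protocol over TCP, run entirely on the local machine. The "HELLO" protocol here is invented for illustration; real Web traffic uses HTTP on top of the same TCP connections.

```python
# A minimal client-server exchange over TCP, the transport underneath
# Web protocols. Server and client run in one process via a thread.
import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    request = conn.recv(1024)          # read the client's request
    conn.sendall(b"HELLO " + request)  # reply according to our tiny protocol
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"WORLD")
reply = client.recv(1024)
print(reply)                           # -> b'HELLO WORLD'
client.close()
t.join()
server.close()
```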

The address of a Web document helps the client computer find and connect to the server that holds the page. The address of a Web page is called a Uniform Resource Locator (URL). A URL is a compound code that tells the client’s browser three things: the rules the client should use to reach the site, the Internet address that uniquely designates the server, and the location within the server’s file system for a given item. An example of a URL is http://encarta.msn.com/. The first part of the URL, http://, shows that the site is on the World Wide Web. Most browsers are also capable of retrieving files with formats from other parts of the Internet, such as gopher and FTP. Other Internet formats use different codes in the first part of their URLs—for example, gopher uses gopher:// and FTP uses ftp://. The next part of the URL, encarta.msn.com, gives the name, or unique Internet address, of the server on which the Web site is stored. Some URLs specify certain directories or files, such as http://encarta.msn.com/explore/default.asp—explore is the name of the directory in which the file default.asp is found.
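The three parts of a URL described above can be pulled apart with Python's standard urllib, using the example address from the text:

```python
# Split the example URL into the three things a browser needs:
# the protocol, the server's address, and the location on the server.
from urllib.parse import urlparse

url = "http://encarta.msn.com/explore/default.asp"
parts = urlparse(url)

print(parts.scheme)   # 'http' -> the rules the client should use
print(parts.netloc)   # 'encarta.msn.com' -> the server's Internet address
print(parts.path)     # '/explore/default.asp' -> location within the server
```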

The Web holds information in many forms, including text, graphical images, and digital media files such as video, audio, and virtual reality files. Some elements of Web pages are actually small software programs in their own right. These objects, called applets (a diminutive of application, another name for a computer program), follow a set of instructions written by the person who programmed the applet. Applets allow users to play games on the Web, search databases, perform virtual scientific experiments, and carry out many other actions.

The codes that tell the browser on the client computer how to display a Web document correspond to a set of rules called Hypertext Markup Language (HTML). Each Web document is written as plain text, and the instructions that tell the client computer how to present the document are contained within the document itself, encoded using special symbols called HTML tags. The browser knows how to interpret the HTML tags, so the document appears on the user’s screen as the document designer intended. In addition to HTML, some types of objects on the Web use their own coding. Applets, for example, are small programs written in computer programming languages such as Visual Basic and Java.
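How a browser-like program reads HTML tags can be sketched with Python's standard html.parser: it walks the tags in a document and, here, collects the hyperlink targets (href values) the way a browser collects links.

```python
# Read HTML tags the way a browser does, collecting hyperlink targets.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # <a href="..."> marks a hyperlink
            self.links.extend(v for k, v in attrs if k == "href")

page = '<html><body><a href="http://example.com">a link</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)   # -> ['http://example.com']
```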

Client-server communication, URLs, and HTML allow Web sites to incorporate hyperlinks, which users follow to navigate through the Web. Hyperlinks are often phrases in the text of the Web document that link to another Web document by providing the document’s URL when the user clicks on the phrase. The client’s browser usually differentiates between hyperlinks and ordinary text by giving the hyperlinks a different color or by underlining them. Hyperlinks allow users to jump between diverse pages on the Web in no particular order. This method of accessing information is called associative access, and scientists believe it bears a striking resemblance to the way the human brain accesses stored information. Hyperlinks make referencing information on the Web faster and easier than using most traditional printed documents.



Even though the World Wide Web is only a part of the Internet, surveys have shown that over 75 percent of Internet use is on the Web. That percentage is likely to grow in the future.

One of the most remarkable aspects of the World Wide Web is its users. They are a cross section of society. Users include students who need to find materials for a term paper, physicians who need to find out about the latest medical research, and college applicants investigating campuses or even filling out application and financial aid forms online. Other users include investors who can look up the trading history of a company’s stock and evaluate data on various commodities and mutual funds. All of this information is readily available on the Web. Users can often find graphs of a company’s financial information that show the information in several different ways.

Travelers investigating a possible trip can take virtual tours, check on airline schedules and fares, and even book a flight on the Web. Many destinations—including parks, cities, resorts, and hotels—have their own Web sites with guides and local maps. Major delivery companies also have Web sites from which customers can track their shipments, finding out where their packages are or when they were delivered.

Government agencies have Web sites where they post regulations, procedures, newsletters, and tax forms. Many elected officials—including almost all members of the United States Congress—have Web sites, where they express their views, list their achievements, and invite input from the voters. The Web also contains directories of e-mail and postal mail addresses and phone numbers.

Many merchants and publishers now do business on the Web. Web users can shop at Web sites of major bookstores, clothing sellers, and other retailers. Many major newspapers have special Web editions that are issued even more frequently than daily. The major broadcast networks use the Web to provide supplementary materials for radio and television shows, especially documentaries. Electronic journals in almost every scholarly field are now on the Web. Most museums now offer the Web user a virtual tour of their exhibits and holdings. These businesses and institutions usually use their Web sites to complement the non-Web parts of their operations. Some receive extra revenues from selling advertising space on their Web sites. Some businesses, especially publishers, provide limited information to ordinary Web users, but offer much more to users who buy a subscription.



The World Wide Web was developed by British physicist and computer scientist Timothy Berners-Lee as a project within the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Berners-Lee first began working with hypertext in the early 1980s. He proposed the Web project in 1989, and his implementation became operational at CERN in 1990; it quickly spread to universities in the rest of the world through the high-energy physics community of scholars. Groups at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign also researched and developed Web technology, releasing the first major browser, named Mosaic, in 1993. Mosaic was the first browser to come in several different versions, each designed to run on a different operating system. Operating systems are the basic software that control computers.

The architecture of the Web is amazingly straightforward. For the user, the Web is attractive to use because it is built upon a graphical user interface (GUI), a method of displaying information and controls with pictures. The Web also works on diverse types of computing equipment because it is made up of a small set of programs. This small set makes it relatively simple for programmers to write software that can translate information on the Web into a form that corresponds to a particular operating system. The Web’s methods of storing information associatively, retrieving documents with hypertext links, and naming Web sites with URLs make it a smooth extension of the rest of the Internet. This allows easy access to information between different parts of the Internet.



People continue to extend and improve World Wide Web technology. Computer scientists predict at least five ways in which the Web will be extended: new ways of searching the Web, new ways of restricting access to intellectual property, more integration of entire databases into the Web, more access to software libraries, and more and more electronic commerce.

HTML will probably continue to evolve through new versions with extended capabilities for formatting Web pages. Other complementary programming and coding systems, such as Visual Basic scripting, Virtual Reality Modeling Language (VRML), ActiveX, and JavaScript, will probably continue to gain larger roles in the Web. This will result in more powerful Web pages, capable of bringing information to users in more engaging and exciting ways.

On the hardware side, faster connections to the Web will allow users to download more information, making it practical to include more information and more complicated multimedia elements on each Web page. Software, telephone, and cable companies are planning partnerships that will allow information from the Web to travel into homes along improved telephone lines and coaxial cable such as that used for cable television. New kinds of computers, specifically designed for use with the Web, may become increasingly popular. These computers are less expensive than ordinary computers because they have fewer features, retaining only those required by the Web. Some computers even use ordinary television sets, instead of special computer monitors, to display content from the Web.

Neural Network



Neural Network, in computer science, highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain. Neural networks address problems that are often difficult for traditional computers to solve, such as speech and pattern recognition. They also provide some insight into the way the human brain works. One of the most significant strengths of neural networks is their ability to learn from a limited set of examples.

Neural networks were initially studied by computer and cognitive scientists in the late 1950s and early 1960s in an attempt to model sensory perception in biological organisms. Neural networks have been applied to many problems since they were first introduced, including pattern recognition, handwritten character recognition, speech recognition, financial and economic modeling, and next-generation computing models.

Artificial Neural Network

The neural networks that are increasingly being used in computing mimic those found in the nervous systems of vertebrates. The main characteristic of a biological neural network is that each neuron, or nerve cell, receives signals from many other neurons through its branching dendrites. The neuron produces an output signal that depends on the values of all the input signals and passes this output on to many other neurons along a branching fiber called an axon. In an artificial neural network, input signals, such as signals from a television camera’s image, fall on a layer of input nodes, or computing units. Each of these nodes is linked to several other “hidden” nodes between the input and output nodes of the network. There may be several layers of hidden nodes. Each hidden node performs a calculation on the signals reaching it and sends a corresponding output signal to other nodes. The final output is a highly processed version of the input.



Neural networks fall into two categories: artificial neural networks and biological neural networks. Artificial neural networks are modeled on the structure and functioning of biological neural networks. The most familiar biological neural network is the human brain. The human brain is composed of approximately 100 billion nerve cells called neurons that are massively interconnected. Typical neurons in the human brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections. The extensive number of neurons and their high degree of interconnectedness are part of the reason that the brains of living creatures are capable of making a vast number of calculations in a short amount of time.



Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure is immensely complex. Neurons have three main parts: a central cell body, called the soma, and two different types of branched, treelike structures that extend from the soma, called dendrites and axons. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.

Artificial neurons, like their biological counterparts, have simple structures and are designed to mimic the function of biological neurons. The main body of an artificial neuron is called a node or unit. Artificial neurons may be physically connected to one another by wires that mimic the connections between biological neurons, if, for instance, the neurons are simple integrated circuits. However, neural networks are usually simulated on traditional computers, in which case the connections between processing nodes are not physical but are instead virtual.

Artificial neurons may be either discrete or continuous. A discrete neuron sends an output signal of 1 if the sum of received signals is above a certain critical value, called a threshold value; otherwise it sends an output signal of 0. Continuous neurons are not restricted to sending output values of only 1s and 0s; instead they send an output value between 0 and 1 depending on the total amount of input that they receive—the stronger the received signal, the stronger the signal sent out from the node. Continuous neurons are the most commonly used in actual artificial neural networks.
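The two output rules can be written as small Python functions: a discrete (threshold) neuron, and a continuous neuron using the sigmoid function, a common choice that maps any input smoothly into the range 0 to 1.

```python
# A discrete neuron fires 1 or 0 depending on a threshold; a continuous
# neuron outputs a value between 0 and 1 that grows with its input.
import math

def discrete_neuron(total_input, threshold=0.0):
    return 1 if total_input > threshold else 0

def continuous_neuron(total_input):
    return 1.0 / (1.0 + math.exp(-total_input))   # the sigmoid function

print(discrete_neuron(0.7))               # -> 1
print(discrete_neuron(-0.2))              # -> 0
print(continuous_neuron(0.0))             # -> 0.5
print(round(continuous_neuron(4.0), 3))   # close to 1
```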


Artificial Neural Network Architecture

The architecture of a neural network is the specific arrangement and connections of the neurons that make up the network. One of the most common neural network architectures has three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, the final layer of the network. Sophisticated neural networks may have several hidden layers, feedback loops, and time-delay elements, which are designed to make the network as efficient as possible in discriminating relevant features or patterns from the input layer.



Neural networks differ greatly from traditional computers (for example personal computers, workstations, mainframes) in both form and function. While neural networks use a large number of simple processors to do their calculations, traditional computers generally use one or a few extremely complex processing units. Neural networks also do not have a centrally located memory, nor are they programmed with a sequence of instructions, as are all traditional computers.

 The information processing of a neural network is distributed throughout the network in the form of its processors and connections, while the memory is distributed in the form of the weights given to the various connections. The distribution of both processing capability and memory means that damage to part of the network does not necessarily result in processing dysfunction or information loss. This ability of neural networks to withstand limited damage and continue to function well is one of their greatest strengths.

Neural networks also differ greatly from traditional computers in the way they are programmed. Rather than using programs that are written as a series of instructions, as do all traditional computers, neural networks are “taught” with a limited set of training examples. The network is then able to “learn” from the initial examples to respond to information sets that it has never encountered before. The resulting values of the connection weights can be thought of as a “program.”

Neural networks are usually simulated on traditional computers. The advantage of this approach is that computers can easily be reprogrammed to change the architecture or learning rule of the simulated neural network. Since the computation in a neural network is massively parallel, the processing speed of a simulated neural network can be increased by using massively parallel computers—computers that link together hundreds or thousands of CPUs in parallel to achieve very high processing speeds.



In all biological neural networks the connections between particular dendrites and axons may be reinforced or discouraged. For example, connections may become reinforced as more signals are sent down them, and may be discouraged when signals are infrequently sent down them. The reinforcement of certain neural pathways, or dendrite-axon connections, results in a higher likelihood that a signal will be transmitted along that path, further reinforcing the pathway. Paths between neurons that are rarely used slowly atrophy, or decay, making it less likely that signals will be transmitted along them.

The role of connection strengths between neurons in the brain is crucial; scientists believe they determine, to a great extent, the way in which the brain processes the information it takes in through the senses. Neuroscientists studying the structure and function of the brain believe that various patterns of neurons firing can be associated with specific memories. In this theory, the strength of the connections between the relevant neurons determines the strength of the memory. Important information that needs to be remembered may cause the brain to constantly reinforce the pathways between the neurons that form the memory, while relatively unimportant information will not receive the same degree of reinforcement.


Connection Weights

To mimic the way in which biological neurons reinforce certain axon-dendrite pathways, the connections between artificial neurons in a neural network are given adjustable connection weights, or measures of importance. When signals are received and processed by a node, they are multiplied by a weight, added up, and then transformed by a nonlinear function. The effect of the nonlinear function is to cause the sum of the input signals to approach some value, usually +1 or 0. If the signals entering the node add up to a positive number, the node sends an output signal that approaches +1 out along all of its connections, while if the signals add up to a negative value, the node sends a signal that approaches 0. This resembles a simplified model of how a biological neuron functions—the larger the input signal, the larger the output signal.
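A single weighted node as just described can be sketched directly: inputs are multiplied by their connection weights, summed, and passed through a nonlinear (here sigmoid) function whose output approaches 1 for strongly positive sums and 0 for strongly negative ones.

```python
# One artificial node: weighted sum of inputs, then a sigmoid nonlinearity.
import math

def node_output(inputs, weights):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

print(round(node_output([1.0, 1.0], [5.0, 5.0]), 3))    # near 1
print(round(node_output([1.0, 1.0], [-5.0, -5.0]), 3))  # near 0
```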


Training Sets

Computer scientists teach neural networks by presenting them with desired input-output training sets. The input-output training sets are related patterns of data. For instance, a sample training set might consist of ten different photographs for each of ten different faces. The photographs would then be digitally entered into the input layer of the network, and the desired output would be for the network to activate a different neuron in the output layer for each face. Beginning with equal, or random, connection weights between the neurons, the photographs are entered into the input layer of the neural network and an output signal is computed and compared to the target output. Small adjustments are then made to the connection weights to reduce the difference between the actual output and the target output. Because the network will usually choose the wrong output neuron the first few times an input is entered, the input-output set is presented to the network again and further adjustments are made to the connection weights. After repeating the weight-adjustment process many times for all input-output patterns in the training set, the network learns to respond in the desired manner.

A neural network is said to have learned when it can correctly perform the tasks for which it has been trained. Neural networks are able to extract the important features and patterns of a class of training examples and generalize from these to correctly process new input data that they have not encountered before. For a neural network trained to recognize a series of photographs, generalization would be demonstrated if a new photograph presented to the network resulted in the correct output neuron being signaled.

A number of different neural network learning rules, or algorithms, exist and use various techniques to process information. Common arrangements use some sort of system to adjust the connection weights between the neurons automatically. The most widely used scheme for adjusting the connection weights is called error back-propagation, developed independently by Paul Werbos (in 1974), David Parker (in 1984/1985), and David Rumelhart, Geoffrey Hinton, and Ronald Williams (in 1986). The back-propagation learning scheme compares a neural network’s calculated output to a target output and calculates an error adjustment for each of the nodes in the network. The neural network adjusts the connection weights according to the error values assigned to each node, beginning with the connections between the last hidden layer and the output layer. After the network has made adjustments to this set of connections, it calculates error values for the previous layer and makes adjustments there. The back-propagation algorithm continues in this way, adjusting all of the connection weights between the hidden layers until it reaches the input layer. At this point it is ready to calculate another output.
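The whole training procedure can be sketched in miniature: a 2-2-1 network of sigmoid nodes learns a toy input-output training set (here, the logical OR function) by back-propagating errors from the output layer to the layer before it and nudging the connection weights downhill.

```python
# Minimal back-propagation sketch: a 2-2-1 sigmoid network learns OR.
# The output error shrinks as the connection weights are adjusted.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# training set: inputs and target outputs for the logical OR function
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# connection weights: input->hidden (2x2) and hidden->output (2), plus biases
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(2)) + b_h[j])
         for j in range(2)]
    o = sigmoid(sum(h[j] * w_ho[j] for j in range(2)) + b_o)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in examples)

before = total_error()
lr = 0.5                                     # learning rate
for _ in range(2000):
    for x, t in examples:
        h, o = forward(x)
        # error term for the output node, then for each hidden node
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        # adjust weights, output layer first, then the layer before it
        for j in range(2):
            w_ho[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_ih[i][j] -= lr * d_h[j] * x[i]
        b_o -= lr * d_o

print(before, "->", total_error())           # the error shrinks with training
```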



Neural networks have been applied to many tasks that are easy for humans to accomplish, but difficult for traditional computers. Because neural networks mimic the brain, they have shown much promise in so-called sensory processing tasks such as speech recognition, pattern recognition, and the transcription of hand-written text. In some settings, neural networks can perform as well as humans. Neural-network-based backgammon software, for example, rivals the best human players.

While traditional computers still outperform neural networks in most situations, neural networks are superior in recognizing patterns in extremely large data sets. Furthermore, because neural networks have the ability to learn from a set of examples and generalize this knowledge to new situations, they are excellent for work requiring adaptive control systems. For this reason, the United States National Aeronautics and Space Administration (NASA) has extensively studied neural networks to determine whether they might serve to control future robots sent to explore planetary bodies in our solar system. In this application, robots could be sent to other planets, such as Mars, to carry out significant and detailed exploration autonomously.

An important advantage that neural networks have over traditional computer systems is that they can sustain damage and still function properly. This design characteristic of neural networks makes them very attractive candidates for future aircraft control systems, especially in high performance military jets. Another potential use of neural networks for civilian and military use is in pattern recognition software for radar, sonar, and other remote-sensing devices.


Motherboard

Motherboard, in computer science, the main circuit board in a computer. The most important computer chips and other electronic components that give function to a computer are located on the motherboard. The motherboard is a printed circuit board that connects the various elements on it through the use of traces, or electrical pathways. The motherboard is indispensable to the computer and provides the main computing capability.

Personal computers normally have one central processing unit (CPU), or microprocessor, which is located with other chips on the motherboard. The manufacturer and model of the CPU chip carried by the motherboard is a key criterion for designating the speed and other capabilities of the computer. The CPU in many personal computers is not permanently attached to the motherboard, but is instead plugged into a socket so that it may be removed and upgraded.

Motherboards also contain important computing components, such as the basic input/output system (BIOS), which contains the basic set of instructions required to control the computer when it is first turned on; different types of memory chips such as random access memory (RAM) and cache memory; mouse, keyboard, and monitor control circuitry; and logic chips that control various parts of the computer’s function. Having as many of the key components of the computer as possible on the motherboard improves the speed and operation of the computer.

Users may expand their computer’s capability by inserting an expansion board into special expansion slots on the motherboard. Expansion slots are standard on nearly all personal computers, and the boards plugged into them can add faster speed, better graphics capabilities, communication capability with other computers, and audio and video capabilities. Expansion slots come in either half or full size, and can transfer 8 or 16 bits (the smallest units of information that a computer can process) at a time, respectively.

The pathways that carry data on the motherboard are called buses. The amount of data that can be transmitted at one time between a device, such as a printer or monitor, and the CPU affects the speed at which programs run. For this reason, buses are designed to carry as much data as possible. To work properly, expansion boards must conform to bus standards such as integrated drive electronics (IDE), Extended Industry Standard Architecture (EISA), or small computer system interface (SCSI).

Central Processing Unit



Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.




CPU Function

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
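The flow just described, fetching from RAM, decoding, computing in the ALU, and tracking progress with the program counter, can be sketched as a toy simulation. The three-part instruction format, opcodes, and register names below are invented for illustration and do not correspond to any real CPU.

```python
# Toy simulation of the fetch-decode-execute cycle described above.
# The instruction set and register names are invented for illustration.

RAM = [
    ("LOAD", "R0", 7),     # put the constant 7 in register R0
    ("LOAD", "R1", 5),     # put the constant 5 in register R1
    ("ADD", "R0", "R1"),   # R0 = R0 + R1  (the ALU's job)
    ("STORE", "R0", 100),  # copy R0 back to RAM address 100
    ("HALT",),
]
RAM += [0] * 96            # remaining memory, addresses 5 through 100

registers = {"R0": 0, "R1": 0}   # fast temporary storage in the CPU
program_counter = 0              # address of the next instruction

while True:
    instruction = RAM[program_counter]   # fetch
    program_counter += 1
    opcode = instruction[0]              # decode
    if opcode == "HALT":                 # execute
        break
    elif opcode == "LOAD":
        registers[instruction[1]] = instruction[2]
    elif opcode == "ADD":
        registers[instruction[1]] += registers[instruction[2]]
    elif opcode == "STORE":
        RAM[instruction[2]] = registers[instruction[1]]
```

When the loop halts, register R0 holds 12 and the result has been written back to RAM address 100, mirroring the round trip from memory through the ALU and back again.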


Branching Instructions

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
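A conditional branch driven by a flag can be sketched in the same toy style (the opcode names here are again invented): the loop below repeatedly subtracts, and only the negative flag set by the subtraction decides when the jump out of the loop is taken.

```python
# Sketch of conditional branching: a countdown loop built from a
# subtraction that sets a "negative" flag and a conditional jump.
# Opcode names are invented for illustration.

program = [
    ("SUB", 1),        # 0: accumulator -= 1, sets the negative flag
    ("JNEG", 4),       # 1: conditional branch: jump to 4 if flag set
    ("INC_COUNT",),    # 2: count one completed iteration
    ("JUMP", 0),       # 3: unconditional branch back to the top
    ("HALT",),         # 4:
]

def run(start_value):
    accumulator, count = start_value, 0
    negative_flag = False
    pc = 0                              # program counter
    while True:
        op = program[pc]
        pc += 1                         # default: the next instruction
        if op[0] == "HALT":
            return count
        elif op[0] == "SUB":
            accumulator -= op[1]
            negative_flag = accumulator < 0   # the tested flag
        elif op[0] == "JNEG":           # conditional branch
            if negative_flag:
                pc = op[1]
        elif op[0] == "INC_COUNT":
            count += 1
        elif op[0] == "JUMP":           # unconditional branch
            pc = op[1]
```

Calling `run(3)` loops three times before the subtraction finally goes negative and the conditional branch escapes to HALT.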


Clock Pulses

The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock pulses are measured in hertz (Hz), the number of pulses per second. For instance, a 100-megahertz (100-MHz) processor has 100 million clock pulses passing through it per second. Clock pulses are thus a measure of the speed of a processor.
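The arithmetic behind these figures is simple enough to check directly; the five-pulse task below is a made-up example.

```python
# Clock-speed arithmetic from the figures above: a 100-MHz processor
# produces 100 million pulses per second, so each pulse lasts 10 ns.

clock_rate_hz = 100_000_000                      # 100 MHz
ns_per_pulse = 1_000_000_000 / clock_rate_hz     # 10.0 nanoseconds

# A (hypothetical) task needing 5 clock periods therefore takes 50 ns,
# and the processor completes 20 million such tasks per second.
task_pulses = 5
task_ns = task_pulses * ns_per_pulse
tasks_per_second = clock_rate_hz / task_pulses
```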


Fixed-Point and Floating-Point Numbers

Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel’s Pentium chip.
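The contrast can be sketched in a few lines: a fixed-point value can be stored as a plain integer with an implied scale factor, so its arithmetic stays fast integer arithmetic, while floating point is scientific notation in code. The two-decimal-digit scale below is an arbitrary choice for illustration.

```python
# Fixed-point vs. floating-point, as described above.  A fixed-point
# value is stored as an integer with an implied scale (here, two
# digits after the decimal point), so arithmetic stays integer-fast.

SCALE = 100                        # implied decimal point: 2 digits

def to_fixed(x):
    return round(x * SCALE)        # 12.34 -> 1234

def fixed_add(a, b):
    return a + b                   # plain integer addition

def from_fixed(n):
    return n / SCALE

price = fixed_add(to_fixed(12.34), to_fixed(0.66))   # 1234 + 66 = 1300

# Floating point is scientific notation: a mantissa times a power of
# ten, trading arithmetic speed for a far wider range of values.
avogadro = 6.022e23                # 6.022 x 10^23
tiny = 9.1e-31                     # electron mass in kilograms
```

The fixed representation cannot hold a value like `avogadro` at all with this scale, which is exactly the range-versus-speed trade-off the paragraph describes.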




Early Computers

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was completed in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.


The Transistor

A solution to the problems posed by vacuum tubes came in 1947, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.


The Integrated Circuit

Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the 3 million to 5 million transistors per chip common on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 16-bit microprocessors are still common, today’s microprocessors are becoming increasingly sophisticated, with many 32-bit and even 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 500 MHz, or 500 million clock pulses per second.



The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.

Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.

Computer Memory



Computer Memory, device that stores data for use by a computer. Most memory devices represent data with the binary number system. In the binary number system, numbers are represented by sequences of the digits 0 and 1. In a computer, these numbers correspond to the on and off states of the computer’s electronic circuitry. Each binary digit is called a bit, which is the basic unit of memory in a computer. A group of eight bits is called a byte, and can represent decimal numbers ranging from 0 to 255. When these numbers are each assigned to a letter, digit, or symbol, in what is known as a character code, a byte can also represent a single character.
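These facts about bits, bytes, and character codes can be checked directly; the example uses the ASCII character code, in which the byte value 65 stands for the letter "A".

```python
# A byte is eight bits, so it can hold 2**8 = 256 distinct values
# (the decimal numbers 0 through 255).  Under a character code such
# as ASCII, one byte value stands for one character.

bits_per_byte = 8
values_per_byte = 2 ** bits_per_byte   # 256, i.e. 0 through 255

letter = chr(65)                       # 'A' in the ASCII code
code = ord("A")                        # 65
binary = format(code, "08b")           # its eight bits: '01000001'
```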

Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes. The prefixes kilo-, mega-, and giga- are taken from the metric system and mean 1 thousand, 1 million, and 1 billion, respectively. Thus, a kilobyte is approximately 1000 (1 thousand) bytes, a megabyte is approximately 1,000,000 (1 million) bytes, and a gigabyte is approximately 1,000,000,000 (1 billion) bytes. The actual numerical values of these units are slightly different because they are derived from the binary number system. The precise number of bytes in a kilobyte is 2 raised to the 10th power, or 1,024. The precise number of bytes in a megabyte is 2 raised to the 20th power, and the precise number of bytes in a gigabyte is 2 raised to the 30th power.
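The exact binary values behind these prefixes work out as follows:

```python
# The precise binary values of the memory units described above.

kilobyte = 2 ** 10    # 1,024 bytes
megabyte = 2 ** 20    # 1,048,576 bytes
gigabyte = 2 ** 30    # 1,073,741,824 bytes

# Each unit is 1,024 of the one below it, not 1,000.
ratio = megabyte // kilobyte   # 1024
```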



Computer memory may be divided into internal memory and external memory. Internal memory is memory that can be accessed directly by the central processing unit (CPU)—the main electronic circuitry within a computer that processes information. Internal memory is contained on computer chips and uses electronic circuits to store information. External memory is memory that is accessed by the CPU via slower and more complex input and output operations. External memory uses some form of inexpensive mass-storage media such as magnetic or optical media. See also Information Storage and Retrieval.

Memory can further be distinguished as being random access memory (RAM), read-only memory (ROM), or sequential memory. Information stored in RAM can be accessed in any order, and may be erased or written over depending on the specific media involved. Information stored in ROM may also be random-access, in that it may be accessed in any order, but the information recorded on ROM is permanent and cannot be erased or written over. Sequential memory is a type of memory that must be accessed in a linear order, not randomly.


Internal RAM

Random access memory is the main memory used by the CPU as it processes information. The circuits used to construct this main internal RAM can be classified as either dynamic RAM (DRAM), or static RAM (SRAM). In DRAM, the circuit for a bit consists of one transistor, which acts as a switch, and one capacitor, a device that can store charge. The bit 1 is stored in DRAM by a charged capacitor, while the bit 0 is stored in DRAM as an uncharged capacitor. To store the binary number 1 in a DRAM bit location, the transistor at that location is turned on, meaning that the switch is closed, which allows current to flow into a capacitor and charge it up. The transistor is then turned off, meaning that the switch is opened, which keeps the capacitor charged. To store a 0, charge is drained from the capacitor while the transistor is on, and then the transistor is turned off. To read a value in a DRAM bit location, a detector circuit determines whether charge is present or absent on the relevant capacitor. Because capacitors are imperfect, charge slowly leaks out of them, which results in loss of the stored data. Thus, the computer must periodically read the data out of DRAM and rewrite it by putting more charge on the capacitors, a process known as refreshing memory.
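The charge-and-refresh cycle can be modeled as a toy simulation. The leak rate and the read threshold below are arbitrary numbers chosen for illustration, not properties of any real DRAM chip.

```python
# Toy model of one DRAM cell: a capacitor whose stored charge leaks
# away and must be refreshed before it falls below the read threshold.
# The leak rate (0.9 per tick) and threshold (0.5) are arbitrary.

READ_THRESHOLD = 0.5

class DramCell:
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0   # charge or drain fully

    def read(self):
        return 1 if self.charge > READ_THRESHOLD else 0

    def leak(self, ticks):
        self.charge *= 0.9 ** ticks         # charge slowly drains away

    def refresh(self):
        self.write(self.read())             # read the bit, rewrite it

cell = DramCell()
cell.write(1)
cell.leak(5)       # charge is now 0.9**5 = 0.59: still reads as 1
cell.refresh()     # restored to full charge in time
cell.leak(10)      # 0.9**10 = 0.35 without a refresh: the bit is lost
```

A cell refreshed often enough keeps its bit indefinitely; the one above, left too long after its last refresh, reads back 0.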

In SRAM, the circuit for a bit consists of multiple transistors that continuously refresh the stored values. The computer can access data in SRAM more quickly than in DRAM, but the circuitry in SRAM draws more power. The circuitry for an SRAM bit is also larger, so an SRAM chip holds fewer bits than a DRAM chip of the same size. For this reason, SRAM is used when access speed is more important than large memory capacity or low power consumption.

The time it takes the CPU to read or write a bit to memory is particularly important to computer performance. This time is called access time. Current DRAM access times are between 60 and 80 nanoseconds (billionths of a second). SRAM access times are typically four times faster than DRAM.

The internal memory of a computer is divided into locations, each of which has a unique numerical address associated with it. In some computers an address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word. Computers also exist in which a word consists of two or eight bytes.
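The byte-versus-word distinction can be sketched with Python's standard `struct` module standing in for the hardware: the same stretch of memory can be addressed one byte at a time or as 4-byte words. The byte values below are arbitrary, and little-endian order is assumed.

```python
# Sketch of byte vs. word addressing: eight bytes of memory viewed
# either as individual bytes or as two 4-byte words (little-endian).

import struct

memory = bytearray(8)     # byte addresses 0 through 7
memory[0] = 0x78
memory[1] = 0x56
memory[2] = 0x34
memory[3] = 0x12

# Read the 4-byte word at byte address 0 ("<I" means little-endian
# unsigned 32-bit integer): the four bytes combine into 0x12345678.
word0 = struct.unpack_from("<I", memory, 0)[0]
```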

When a computer executes a read instruction, part of the instruction specifies which memory address to access. The address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits use the address to select the bits at the specified location in RAM. Their contents are sent back to the CPU over another set of wires called a data bus. Inside the CPU the data passes through circuits called the data path. In some CPUs the data path may directly perform arithmetic operations on data from memory, while in others the data must first go to high-speed memory devices within the CPU called registers.


Internal ROM

Read-only memory is the other type of internal memory. ROM memory is used to store the basic set of instructions, called the basic input-output system (BIOS), that the computer needs to run when it is first turned on. This information is permanently stored on computer chips in the form of hardwired electronic circuits.


External Memory

External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device uses materials and mechanisms similar to those used for audio tape, while an optical storage device uses lasers to store and retrieve information from a plastic disk. Magneto-optical memory devices couple optical storage and retrieval technology with a magnetic medium.


Magnetic Media

Magnetic tape is one form of external computer memory, but instead of recording a continuous signal as with analog audio tape, distinct spots are either magnetized or demagnetized on the tape, corresponding to binary 1s and 0s. Computer systems using magnetic tape storage devices employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes, and helical-scan tapes (similar to video tape).

Another form of magnetic memory uses a spinning disk coated with magnetic material. As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the surface of the disk, reading and writing magnetic spots in concentric circles called tracks.

Magnetic disks are classified as either hard or floppy, depending on the flexibility of the material from which they are made. A floppy disk is made of flexible plastic with small pieces of a magnetic material embedded in its surface. The read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of a rigid metal, with the read-write head flying just above its surface on a cushion of air to prevent wear.


Optical Media

Optical external memory uses a laser to scan a spinning reflective disk on which the presence or absence of nonreflective pits indicates 1s or 0s. This is the same technology employed in the audio compact disc (CD). Because its contents are permanently stored on the disk when it is manufactured, this medium is known as compact disc-read-only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written permanently on it by a computer.


Magneto-Optical Media

Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of the disk heating it up slightly. This allows the magnetic write-head to change the physical orientation of small grains of magnetic material (actually tiny crystals) on the surface of the disk. These tiny crystals reflect light differently depending on their orientation. By aligning the crystals in one direction a 0 can be stored, while aligning the crystals in the opposite direction stores a 1. Another, separate, low-power laser is used to read data from the disk in a way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they can be read and written to. They are, however, considerably more expensive than CD-ROMs.



Since the inception of computer memory, the capacity of both internal and external memory devices has grown steadily at a rate that leads to a quadrupling in size every three years. Computer industry analysts expect this rapid rate of growth to continue unimpeded. Computer scientists consider multigigabyte memory chips and terabyte-sized disks real possibilities. Research is also leading to new optical storage technologies with 10 to 100 times the capacity of CD-ROMs produced in the 1990s.

Some computer scientists are concerned that memory chips are approaching a limit in the amount of data they can hold. However, it is expected that transistors can be made at least four times smaller before inherent limits of physics make further reductions difficult. Scientists also expect that the dimensions of memory chips will increase by a factor of four. Current memory chips use only a single layer of circuitry, but researchers are working on ways to stack multiple layers onto one chip. Once all of these approaches are exhausted, RAM memory may reach a limit. Researchers, however, are also exploring more exotic technologies with the potential to provide even more capacity.

Access times for internal memory decreased by a factor of four from 1986 to 1996, while processors became 500 times faster. The result was a growing gap in performance between the processor and its main RAM memory. Future computers will likely have advanced data transfer capabilities that enable the CPU to access more memory faster. While current memory chips contain megabytes of RAM, future chips will likely have gigabytes of RAM.



Early electronic computers in the late 1940s and early 1950s used cathode ray tubes (CRTs), similar to computer display screens, to store data. The coating on a CRT remains lit for a short time after an electron beam strikes it. Thus, a pattern of dots could be written on the CRT, representing 1s and 0s, and then be read back for a short time before fading. Like DRAM, CRT storage had to be periodically refreshed to retain its contents. A typical CRT held 128 bytes, and the entire memory of such a computer was usually 4 kilobytes.

International Business Machines Corporation (IBM) developed magnetic core memory in the early 1950s. Magnetic core (often just called “core”) memory consisted of tiny rings of magnetic material woven into meshes of thin wires. When the computer sent a current through a pair of wires, the ring at their intersection would become magnetized either clockwise or counterclockwise (corresponding to a 0 or a 1), depending on the direction of the current. Computer manufacturers first used core memory in production computers in the 1960s, at about the same time that they began to replace vacuum tubes with transistors. Magnetic core memory was used through most of the 1960s and into the 1970s.

The next step in the development of computer memory came with the introduction of integrated circuits, which enabled multiple transistors to be placed on one chip. Computer scientists developed the first such memory when they constructed an experimental supercomputer called Illiac-IV in the late 1960s. Integrated circuit memory quickly displaced core and has been the dominant technology for internal memory ever since.



Internet
Internet, computer-based global information system. The Internet is composed of many interconnected computer networks. Each network may link tens, hundreds, or even thousands of computers, enabling them to share information with one another and to share computational resources such as powerful supercomputers and databases of information. The Internet has made it possible for people all over the world to effectively and inexpensively communicate with one another. Unlike traditional broadcasting media, such as radio and television, the Internet does not have a centralized distribution system. Instead, an individual who has Internet access can communicate directly with anyone else on the Internet, make information available to others, find information provided by others, or sell products with a minimum overhead cost.

The Internet has brought new opportunities to government, business, and education. Governments use the Internet for internal communication, distribution of information, and automated tax processing. In addition to offering goods and services online to customers, businesses use the Internet to interact with other businesses. Many individuals use the Internet for shopping, paying bills, and online banking. Educational institutions use the Internet for research and to deliver courses to students at remote sites.

The Internet’s success arises from its flexibility. Instead of restricting component networks to a particular manufacturer or particular type, Internet technology allows interconnection of any kind of computer network. No network is too large or too small, too fast or too slow to be interconnected. Thus, the Internet includes inexpensive networks that can only connect a few computers within a single room as well as expensive networks that can span a continent and connect thousands of computers. See Local Area Network.

Internet service providers (ISPs) provide Internet access to customers for a monthly fee. A customer who subscribes to an ISP’s service uses the ISP’s network to access the Internet. Because ISPs offer their services to the general public, the networks they operate are known as public access networks. In the United States, as in many countries, ISPs are private companies; in countries where telephone service is a government-regulated monopoly, the government often controls ISPs.

An organization that has many computers usually owns and operates a private network, called an intranet, that connects all the computers within the organization. To provide Internet service, the organization connects its intranet to the Internet. Unlike public access networks, intranets are restricted to provide security. Only authorized computers at the organization can connect to the intranet, and the organization restricts communication between the intranet and the global Internet. The restrictions allow computers inside the organization to exchange information but keep the information confidential and protected from outsiders.

The Internet has grown tremendously since its inception, doubling in size every 9 to 14 months. In 1981 only 213 computers were connected to the Internet. By 2000 the number had grown to more than 100 million. The current number of people who use the Internet can only be estimated. One survey found that there were 61 million Internet users worldwide at the end of 1996, 148 million at the end of 1998, and 407 million by the end of 2000. Some analysts estimate that the number of users will double again by the end of 2002.



From its inception in the 1970s until the late 1980s the Internet was a U.S. government-funded communication and research tool restricted almost exclusively to academic and military uses. As government restrictions were lifted in the early 1990s, the Internet became commercial. In 1995 the World Wide Web (WWW) replaced file transfer as the application used for most Internet traffic. The difference between the Internet and the Web is similar to the distinction between a highway system and a package delivery service that uses the highways to move cargo from one city to another: The Internet is the highway system over which Web traffic and traffic from other applications move. The Web consists of programs running on many computers that allow a user to find and display multimedia documents (documents that contain a combination of text, photographs, graphics, audio, and video). Many analysts attribute the explosion in use and popularity of the Internet to the visual nature of Web documents. By the end of 2000, Web traffic dominated the Internet—more than 80 percent of all traffic on the Internet came from the Web.

Companies, individuals, and institutions use the Internet in many ways. Companies use the Internet for electronic commerce, also called e-commerce, including advertising, selling, buying, distributing products, and providing customer service. In addition, companies use the Internet for business-to-business transactions, such as exchanging financial information and accessing complex databases. Businesses and institutions use the Internet for voice and video conferencing and other forms of communication that enable people to telecommute (work away from the office using a computer). The use of electronic mail (e-mail) speeds communication between companies, among coworkers, and among other individuals. Media and entertainment companies use the Internet for online news and weather services and to broadcast audio and video, including live radio and television programs. Online chat allows people to carry on discussions using written text. Scientists and scholars use the Internet to communicate with colleagues, perform research, distribute lecture notes and course materials to students, and publish papers and articles. Individuals use the Internet for communication, entertainment, finding information, and buying and selling goods and services.




Internet Access

The term Internet access refers to the communication between a residence or a business and an ISP that connects to the Internet. Access falls into two broad categories: dedicated and dial-up. With dedicated access, a subscriber’s computer remains directly connected to the Internet at all times by a permanent, physical connection. Most large businesses have high-capacity dedicated connections; small businesses or individuals who desire dedicated access choose technologies such as digital subscriber line (DSL) or cable modems, which both use existing wiring to lower cost. A DSL sends data across the same wires that telephone service uses, and cable modems use the same wiring that cable television uses. In each case, the electronic devices that are used to send data over the wires employ separate frequencies or channels that do not interfere with other signals on the wires. Thus, a DSL Internet connection can send data over a pair of wires at the same time the wires are being used for a telephone call, and cable modems can send data over a cable at the same time the cable is being used to receive television signals. The user usually pays a fixed monthly fee for a dedicated connection. In exchange, the company providing the connection agrees to relay data between the user’s computer and the Internet.

Dial-up is the least expensive access technology, but it is also the least convenient. To use dial-up access, a subscriber must have a telephone modem, a device that connects a computer to the telephone system and is capable of converting data into sounds and sounds back into data. The user’s ISP provides software that controls the modem. To access the Internet, the user opens the software application, which causes the dial-up modem to place a toll-free telephone call to the ISP. A modem at the ISP answers the call, and the two modems use audible tones to send data in both directions. When one of the modems is given data to send, the modem converts the data from the digital values used by computers—numbers stored as a sequence of 1s and 0s—into tones. The receiving side converts the tones back into digital values. Unlike dedicated access technologies, a dial-up modem does not use separate frequencies, so the telephone line cannot be used for regular telephone calls at the same time a dial-up modem is sending data.


How Information Travels Over the Internet

All information is transmitted across the Internet in small units of data called packets. Software on the sending computer divides a large document into many packets for transmission; software on the receiving computer regroups incoming packets into the original document. Similar to a postcard, each packet has two parts: a packet header specifying the computer to which the packet should be delivered, and a packet payload containing the data being sent. The header also specifies how the data in the packet should be combined with the data in other packets by recording which piece of a document is contained in the packet.
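The split-and-regroup process can be sketched as follows. Real packet headers carry much more than a sequence number, and the 10-byte payload size is an artificially small choice for illustration.

```python
# Sketch of packetizing a document: each packet pairs a header (here
# just a sequence number) with a payload, and the receiver uses the
# headers to put the pieces back in their original order.

PAYLOAD_SIZE = 10   # characters per packet; real packets carry more

def to_packets(document):
    return [(seq, document[i:i + PAYLOAD_SIZE])
            for seq, i in enumerate(range(0, len(document), PAYLOAD_SIZE))]

def reassemble(packets):
    # Packets may arrive in any order; the sequence numbers fix that.
    return "".join(payload for seq, payload in sorted(packets))

message = "All information travels the Internet in small packets."
packets = to_packets(message)
packets.reverse()                 # simulate out-of-order arrival
restored = reassemble(packets)    # identical to the original document
```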

A series of rules known as computer communication protocols specify how packet headers are formed and how packets are processed. The set of protocols used for the Internet is named TCP/IP after the two most important protocols in the set: the Transmission Control Protocol and the Internet Protocol. Hardware devices that connect networks in the Internet are called IP routers because they follow the IP protocol when forwarding packets. A router examines the header in each packet that arrives to determine the packet’s destination. The router either delivers the packet to the destination computer across a local network or forwards the packet to another router that is closer to the final destination. Thus, a packet travels from router to router as it passes through the Internet.
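The forwarding decision a router makes for each packet can be sketched with Python's standard ipaddress module. The routing table below is hypothetical; real routers hold many thousands of entries, but the rule is the same: choose the most specific route whose network prefix contains the destination address.

```python
# Sketch of a router's per-packet decision: longest-prefix match.
import ipaddress

# Hypothetical routing table: network prefix -> action
ROUTES = {
    ipaddress.ip_network("192.168.0.0/16"): "deliver on local network",
    ipaddress.ip_network("10.0.0.0/8"): "forward to router A",
    ipaddress.ip_network("0.0.0.0/0"): "forward to router B",  # default
}

def forward(destination: str) -> str:
    """Pick the most specific (longest-prefix) route containing
    the packet's destination address."""
    addr = ipaddress.ip_address(destination)
    matching = [net for net in ROUTES if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return ROUTES[best]
```

The catch-all 0.0.0.0/0 entry is the default route: any destination the router has no better entry for is passed toward another router closer to that destination.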

TCP/IP protocols enable the Internet to automatically detect and correct transmission problems. For example, if any network or device malfunctions, protocols detect the failure and automatically find an alternative path for packets to avoid the malfunction. Protocol software also ensures that data arrives complete and intact. If any packets are missing or damaged, protocol software on the receiving computer requests that the source resend them. Only when the data has arrived correctly does the protocol software make it available to the receiving application program, and therefore to the user.
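The receiver-side check that triggers retransmission can be sketched simply: compare the sequence numbers that arrived against those expected. This is an illustrative simplification of what TCP-like protocol software does, not the actual TCP mechanism (which uses acknowledgments and timers).

```python
# Sketch: detect which pieces of a transmission are missing or damaged
# and must be requested again from the sender.

def missing_pieces(received_seqs, total: int) -> list:
    """Sequence numbers the receiver should ask the sender to resend."""
    return sorted(set(range(total)) - set(received_seqs))

# Packets 2 and 4 of six were lost or damaged in transit
to_resend = missing_pieces([0, 1, 3, 5], total=6)
```

Only when this list is empty does the protocol software hand the complete data to the receiving application program.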


Network Names and Addresses

To be connected to the Internet, a computer must be assigned a unique number, known as its IP (Internet Protocol) address. Each packet sent over the Internet contains the IP address of the computer to which it is being sent. Intermediate routers use the address to determine how to forward the packet. Users almost never need to enter or view IP addresses directly. Instead, to make it easier for users, each computer is also assigned a domain name; protocol software automatically translates domain names into IP addresses. For example, the domain name encarta.msn.com specifies a computer owned by Microsoft (names ending in .com are assigned to computers owned by commercial companies), and protocol software translates that name into the computer’s numeric IP address. See also Domain Name System.
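The same name-to-address translation the protocol software performs is exposed by Python's standard socket library. The sketch below looks up "localhost", the conventional name every machine uses for itself, so it works without a network connection; looking up a public domain name works the same way.

```python
# Domain-name-to-IP-address translation, as protocol software does it.
import socket

# "localhost" conventionally names the local machine itself and
# translates to the reserved loopback address 127.0.0.1.
ip = socket.gethostbyname("localhost")
```

A user types the memorable name; the numeric address is what actually appears in every packet header.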

Users encounter domain names when they use applications such as the World Wide Web. Each page of information on the Web is assigned a URL (Uniform Resource Locator) that includes the domain name of the computer on which the page is located. For example, a user can enter the URL

http://encarta.msn.com/category/physcience.asp

to specify a page in the domain encarta.msn.com. Other items in the URL give further details about the page. For example, the string http specifies that a browser should use the http protocol, one of many TCP/IP protocols, to fetch the item. The string category/physcience.asp specifies a particular document.
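The pieces of a URL can be pulled apart programmatically. Python's standard urllib.parse module separates exactly the components described above: the protocol, the domain name, and the document path.

```python
# Decomposing a URL into the parts a browser acts on.
from urllib.parse import urlparse

parts = urlparse("http://encarta.msn.com/category/physcience.asp")
scheme = parts.scheme   # the protocol the browser should use
host = parts.netloc     # the domain name of the computer
path = parts.path       # the particular document on that computer
```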


Client/Server Architecture

Internet applications, such as the Web, are based on the concept of client/server architecture. In a client/server architecture, some application programs act as information providers (servers), while other application programs act as information receivers (clients). The client/server architecture is not one-to-one. That is, a single client can access many different servers, and a single server can be accessed by a number of different clients. Usually, a user runs a client application, such as a Web browser, that contacts one server at a time to obtain information. Because it only needs to access one server at a time, client software can run on almost any computer, including small handheld devices such as personal organizers and cellular telephones (these devices are sometimes called Web appliances). To supply information to others, a computer must run a server application. Although server software can run on any computer, most companies choose large, powerful computers to run server software because the company expects many clients to be in contact with its server at any given time. A faster computer enables the server program to return information with less delay.
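A minimal sketch of the client/server pattern, using TCP sockets on the local machine: one program acts as an information provider, the other contacts it and reads the reply. The request and reply contents here are invented; a real server such as a Web server follows the same shape but speaks a standard protocol like HTTP.

```python
# Minimal client/server sketch over TCP sockets.
import socket
import threading

def start_server() -> int:
    """Information provider: answer a single request, then shut down.
    Returns the port number the OS assigned."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        request = conn.recv(1024)
        conn.sendall(b"information for " + request)
        conn.close()
        srv.close()

    threading.Thread(target=handle).start()
    return port

def client_fetch(port: int, request: bytes) -> bytes:
    """Information receiver: contact one server, send one request,
    read one reply."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(request)
        return conn.recv(1024)

port = start_server()
reply = client_fetch(port, b"page1")
```

Note the asymmetry the article describes: the client code is small enough to run on almost any device, while a production server is written to handle many such clients at once.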


Electronic Mail and News Groups

Electronic mail, or e-mail, is a widely used Internet application that enables individuals or groups of individuals to quickly exchange messages, even if the users are geographically separated by large distances. A user creates an e-mail message and specifies a recipient using an e-mail address, which is a string consisting of the recipient’s login name followed by an @ (at) sign and then a domain name. E-mail software transfers the message across the Internet to the recipient’s computer, where it is placed in the specified mailbox, a file on the hard drive. The recipient uses an e-mail application to view and reply to the message, as well as to save or delete it. Because e-mail is a convenient and inexpensive form of communication, it has dramatically improved personal and business communications.
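The structure of an e-mail address described above is easy to show in code. The address used here is a made-up example.

```python
# Splitting an e-mail address into login name and domain name.

def split_address(address: str):
    """Return (login name, domain name) from a name@domain string."""
    login, _, domain = address.partition("@")
    return login, domain

login, domain = split_address("jsmith@example.com")
```

The domain part tells the e-mail software which computer holds the recipient's mailbox; the login name selects the mailbox on that computer.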

In its original form, e-mail could only be sent to recipients named by the sender, and only text messages could be sent. E-mail has since been extended in two ways that make it a much more powerful tool. Software has been invented that can automatically propagate to multiple recipients a message sent to a single address. Known as a mail gateway or list server, such software allows individuals to join or leave a mail list at any time. Such software can be used to create lists of individuals who will receive announcements about a product or service or to create online discussion groups. Of particular interest are Network News discussion groups (newsgroups) that were originally part of the Usenet network. Thousands of newsgroups exist, on an extremely wide range of subjects. Messages to a newsgroup are not sent directly to each user. Instead, an ordered list of messages is disseminated to computers around the world that run news server software. Newsgroup application software allows a user to obtain a copy of selected articles from a local news server or to use e-mail to post a new message to the newsgroup. The system makes newsgroup discussions available worldwide.

E-mail software has also been extended to allow the transfer of nontext documents, such as graphics and other images, executable computer programs, and prerecorded audio. Such documents, appended to an e-mail message, are called attachments. The standard used for encoding attachments is known as Multipurpose Internet Mail Extensions (MIME). Because the Internet e-mail system only transfers printable text, MIME software encodes each document using printable letters and digits before sending it and then decodes the item when e-mail arrives. Most significantly, MIME allows a single message to contain multiple items, allowing a sender to include a cover letter that explains each of the attachments.
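Python's standard email library implements MIME, so the encoding described above can be demonstrated directly. The addresses, subject, and attachment bytes below are invented; the point is that the binary attachment travels as printable base64 text alongside the cover letter, all in a single multipart message.

```python
# A MIME message: text cover letter plus a binary attachment,
# encoded as printable text for transfer through the e-mail system.
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "recipient@example.com"
msg["Subject"] = "Chart attached"
msg.set_content("Cover letter: the attached image shows last month's data.")
msg.add_attachment(b"\x89PNG\r\n...raw image bytes...",
                   maintype="image", subtype="png",
                   filename="chart.png")

wire_form = msg.as_string()  # what actually travels over e-mail
```

Inspecting wire_form shows a multipart/mixed container with the attachment marked Content-Transfer-Encoding: base64, which is how MIME keeps everything within printable letters and digits.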


Other Internet Applications

Although the World Wide Web is the most popular application, other Internet applications are widely used. For example, the Telnet application enables a user to interactively access a remote computer. Telnet gives the appearance that the user’s keyboard and screen are connected directly to the remote computer. For example, a businessperson who is visiting a location that has Internet access can use Telnet to contact their office computer. Doing so is faster and less expensive than using dial-up modems.

The Internet can also be used to transfer telephone calls using an application known as IP-telephony. This application requires a special phone that digitizes voice and sends it over the Internet to a second IP telephone. Another application, known as the File Transfer Protocol (FTP), is used to download files from an Internet site to a user’s computer. The FTP application is often automatically invoked when a user downloads an updated version of a piece of software. Applications such as FTP have been integrated with the World Wide Web, making them transparent so that they run automatically without requiring users to open them. When a Web browser encounters a URL that begins with ftp:// it automatically uses FTP to access the item.



Computers store all information as binary numbers. The binary number system uses two binary digits, 0 and 1, which are called bits. The amount of data that a computer network can transfer in a certain amount of time is called the bandwidth of the network and is measured in kilobits per second (kbps) or megabits per second (mbps). A kilobit is 1 thousand bits; a megabit is 1 million bits. A dial-up telephone modem can transfer data at rates up to 56 kbps; DSL and cable modem connections are much faster and can transfer at several mbps. The Internet connections used by businesses often operate at 155 mbps, and connections between routers in the heart of the Internet may operate at rates from 2,488 to 9,953 mbps (9.953 gigabits per second). The terms wideband or broadband are used to characterize networks with high capacity and to distinguish them from narrowband networks, which have low capacity.
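The arithmetic implied by these figures is worth making explicit: bandwidth is quoted in bits per second, while file sizes are usually given in bytes (8 bits each). A short sketch:

```python
# Transfer-time arithmetic: bits per second vs. bytes of data.

def transfer_seconds(size_bytes: int, rate_kbps: float) -> float:
    """Seconds to move size_bytes over a link of rate_kbps.
    1 kilobit = 1,000 bits; 1 byte = 8 bits."""
    return size_bytes * 8 / (rate_kbps * 1_000)

one_megabyte = 1_000_000
modem_time = transfer_seconds(one_megabyte, 56)    # 56 kbps dial-up
dsl_time = transfer_seconds(one_megabyte, 3_000)   # a 3 mbps DSL line
```

A 1-megabyte file that ties up a 56 kbps modem for over two minutes moves across a 3 mbps DSL line in under three seconds, which is why the dedicated technologies displaced dial-up for heavy use.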



Research on dividing information into packets and switching them from computer to computer began in the 1960s. The U.S. Department of Defense Advanced Research Projects Agency (ARPA) funded a research project that created a packet switching network known as the ARPANET. ARPA also funded research projects that produced two satellite networks. In the 1970s ARPA was faced with a dilemma: Each of its networks had advantages for some situations, but each network was incompatible with the others. ARPA focused research on ways that networks could be interconnected, and the Internet was envisioned and created to be an interconnection of networks that use TCP/IP protocols. In the early 1980s a group of academic computer scientists formed the Computer Science NETwork, which used TCP/IP protocols. Other government agencies extended the role of TCP/IP by applying it to their networks: The Department of Energy’s Magnetic Fusion Energy Network (MFENet), the High Energy Physics NETwork (HEPNET), and the National Science Foundation NETwork (NSFNET).

In the 1980s, as large commercial companies began to use TCP/IP to build private internets, ARPA investigated transmission of multimedia—audio, video, and graphics—across the Internet. Other groups investigated hypertext and created tools such as Gopher that allowed users to browse menus, which are lists of possible options. In 1989 many of these technologies were combined to create the World Wide Web. Initially designed to aid communication among physicists who worked in widely separated locations, the Web became immensely popular and eventually replaced other tools. Also during the late 1980s, the U.S. government began to lift restrictions on who could use the Internet, and commercialization of the Internet began. In the early 1990s, with users no longer restricted to the scientific or military communities, the Internet quickly expanded to include universities, companies of all sizes, libraries, public and private schools, local and state governments, individuals, and families.



Several technical challenges must be overcome if the Internet is to continue growing at the current phenomenal rate. The primary challenge is to create enough capacity to accommodate increases in traffic. Internet traffic is increasing as more people become Internet users and existing users send ever greater amounts of data. If the volume of traffic increases faster than the capacity of the network increases, congestion will occur, similar to the congestion that occurs when too many cars attempt to use a highway. To avoid congestion, researchers have developed technologies such as Dense Wave Division Multiplexing (DWDM) that transfer more bits per second across an optical fiber. The speed of routers and other packet handling equipment must also increase to accommodate growth. In the short term, researchers are developing faster electronic processors; in the long term, new technologies will be required.

Another challenge involves IP addresses. Although the original protocol design provided addresses for up to 4.29 billion individual computers, the addresses have begun to run out because they were assigned in blocks. Researchers developed technologies such as Network Address Translation (NAT) to conserve addresses. NAT allows multiple computers at a residence to “share” a single Internet address. Engineers have also planned a next-generation of IP, called IPv6, that will handle many more addresses than the current version.
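The numbers behind this challenge can be checked with Python's standard ipaddress module. The private block shown is one of the ranges reserved for exactly the NAT arrangement described: many machines behind one public address.

```python
# The IPv4 address space, and a reserved private range of the kind
# a NAT box hides behind a single public Internet address.
import ipaddress

total_addresses = 2 ** 32   # IP addresses are 32-bit numbers

# 192.168.0.0/16 is reserved for private networks; NAT translates
# between these internal addresses and one shared public address.
home_block = ipaddress.ip_network("192.168.0.0/16")
```

The 2**32 total is the "4.29 billion" figure in the text; blocks like 192.168.0.0/16 never appear on the public Internet, so every household can reuse them.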

Short, easy-to-remember domain names are also in short supply. Many domain names of the simple form www.[word].com, where [word] is a common noun or verb, are already in use. Currently, only a few endings are allowed, such as .com, .org, and .net. In the near future, additional endings will be allowed, such as .biz and .info. This will greatly expand the number of possible URLs.

Other important questions concerning Internet growth relate to government controls, especially taxation and censorship. Because the Internet has grown so rapidly, governments have had little time to pass laws that control its deployment and use, impose taxes on Internet commerce, or otherwise regulate content. Many Internet users in the United States view censorship laws as an infringement on their constitutional right to free speech. In 1996 the Congress of the United States passed the Communications Decency Act, which made it a crime to transmit indecent material over the Internet. The act resulted in an immediate outcry from users, industry experts, and civil liberties groups opposed to such censorship. In 1997 the Supreme Court of the United States declared the act unconstitutional because it violated First Amendment rights to free speech. Lawmakers responded in 1998 by passing a narrower antipornography bill, the Child Online Protection Act (COPA). COPA required commercial Web sites to ensure that children could not access material deemed harmful to minors. In 1999 a federal judge blocked COPA as well, ruling that it would dangerously restrict constitutionally protected free speech.

Increasing commercial use of the Internet has heightened security and privacy concerns. With a credit or debit card, an Internet user can order almost anything from an Internet site and have it delivered to their home or office. Companies doing business over the Internet need sophisticated security measures to protect credit card, bank account, and social security numbers from unauthorized access as they pass across the Internet. Any organization that connects its intranet to the global Internet must carefully control the access point to ensure that outsiders cannot disrupt the organization’s internal networks or gain unauthorized access to the organization’s computer systems and data. The questions of government control and Internet security will continue to be important as the Internet grows.

Microsoft Corporation



Microsoft Corporation, leading American computer software company. Microsoft develops and sells a wide variety of software products to businesses and consumers in more than 50 countries. The company’s Windows operating systems for personal computers are the most widely used operating systems in the world. Microsoft has its headquarters in Redmond, Washington.

Microsoft’s other well-known products include Word, a word processor; Excel, a spreadsheet program; Access, a database program; and PowerPoint, a program for making business presentations. These programs are sold separately and as part of Office, an integrated software suite. The company also makes BackOffice, an integrated set of server products for businesses. Microsoft’s Internet Explorer allows users to browse the World Wide Web. Among the company’s other products are reference applications; games; financial software; programming languages for software developers; input devices, such as pointing devices and keyboards; and computer-related books.

Microsoft operates The Microsoft Network (MSN), a collection of news, travel, financial, entertainment, and information Web sites. Microsoft and the National Broadcasting Company (NBC) jointly operate MSNBC, a 24-hour news, talk, and information cable-television channel and companion Web site.



Microsoft was founded in 1975 by William H. Gates III and Paul Allen. The pair had teamed up in high school through their hobby of programming on the original PDP-10 computer from the Digital Equipment Corporation. In 1975 Popular Electronics magazine featured a cover story about the Altair 8800, the first personal computer. The article inspired Gates and Allen to develop a version of the BASIC programming language for the Altair. They licensed the software to Micro Instrumentation and Telemetry Systems (MITS), the Altair’s manufacturer, and formed Microsoft (originally Micro-soft) in Albuquerque, New Mexico, to develop versions of BASIC for other computer companies. Microsoft’s early customers included fledgling hardware firms such as Apple Computer, maker of the Apple II computer; Commodore, maker of the PET computer; and Tandy Corporation, maker of the Radio Shack TRS-80 computer. In 1977 Microsoft shipped its second language product, Microsoft Fortran, and it soon released versions of BASIC for the 8080 and 8086 microprocessors.



In 1979 Gates and Allen moved the company to Bellevue, Washington, a suburb of their hometown of Seattle. (The company moved to its current headquarters in Redmond in 1986.) In 1980 International Business Machines Corporation (IBM) chose Microsoft to write the operating system for the IBM PC personal computer, to be introduced the following year. Under time pressure, Microsoft purchased 86-DOS (originally called QDOS for Quick and Dirty Operating System) from Seattle programmer Tim Paterson for $50,000, modified it, and renamed it MS-DOS (Microsoft Disk Operating System). As part of its contract with IBM, Microsoft was permitted to license the operating system to other companies. By 1984 Microsoft had licensed MS-DOS to 200 personal computer manufacturers, making MS-DOS the standard operating system for personal computers and driving Microsoft’s enormous growth in the 1980s. Allen left the company in 1983 but remained on its board of directors until 2000.



As sales of MS-DOS took off, Microsoft began to develop business applications for personal computers. In 1982 it released Multiplan, a spreadsheet program, and the following year it released a word-processing program, Microsoft Word. In 1984 Microsoft was one of the few established software companies to develop application software for the Macintosh, a personal computer developed by Apple Computer. Microsoft’s early support for the Macintosh resulted in tremendous success for its Macintosh application software, including Word, Excel, and Works (an integrated software suite). Multiplan for MS-DOS, however, faltered against the popular Lotus 1-2-3 spreadsheet program made by Lotus Development Corporation.



In 1985 Microsoft released Windows, an operating system that extended the features of MS-DOS and employed a graphical user interface. Windows 2.0, released in 1987, improved performance and offered a new visual appearance. In 1990 Microsoft released a more powerful version, Windows 3.0, which was followed by Windows 3.1 and 3.11. These versions, which came preinstalled on most new personal computers, rapidly became the most widely used operating systems. In 1990 Microsoft became the first personal-computer software company to record $1 billion in annual sales.

As Microsoft’s dominance grew in the market for personal-computer operating systems, the company was accused of monopolistic business practices. In 1990 the Federal Trade Commission (FTC) began investigating Microsoft for alleged anticompetitive practices, but it was unable to reach a decision and dropped the case. The United States Department of Justice continued the probe.

In 1991 Microsoft and IBM ended a decade of collaboration when they went separate ways on the next generation of operating systems for personal computers. IBM chose to pursue the OS/2 operating system (first released in 1987), which until then had been a joint venture with Microsoft. Microsoft chose to evolve its Windows operating system into increasingly powerful systems. In 1993 Apple lost a copyright-infringement lawsuit against Microsoft that claimed Windows illegally copied the design of the Macintosh’s graphical interface. The ruling was later upheld by an appellate court.

In 1993 Microsoft released Windows NT, an operating system for business environments. The following year the company and the Justice Department reached an agreement that called for Microsoft to change the way its operating system software was sold and licensed to computer manufacturers. In 1995 the company released Windows 95, which featured a simplified interface, multitasking, and other improvements. An estimated 7 million copies of Windows 95 were sold worldwide within seven weeks of its release.




Business Developments

In the mid-1990s Microsoft began to expand into the media, entertainment, and communications industries, launching The Microsoft Network in 1995 and MSNBC in 1996. Also in 1996 Microsoft introduced Windows CE, an operating system for handheld personal computers. In 1997 Microsoft paid $425 million to acquire WebTV Networks, a manufacturer of low-cost devices to connect televisions to the Internet. That same year Microsoft invested $1 billion in Comcast Corporation, a U.S. cable television operator, as part of an effort to expand the availability of high-speed connections to the Internet.

In June 1998 Microsoft released Windows 98, which featured integrated Internet capabilities. In the following month Gates appointed Steve Ballmer, executive vice president of Microsoft, as the company’s president, transferring to him supervision of most day-to-day business operations of the company. Gates retained the title of chairman and chief executive officer (CEO).

In 1999 Microsoft paid $5 billion to telecommunications company AT&T Corp. to use Microsoft’s Windows CE operating system in devices designed to provide consumers with integrated cable television, telephone, and high-speed Internet services. Also in 1999, the company released Windows 2000, the latest version of the Windows NT operating system. In January 2000 Gates transferred his title of CEO to Ballmer. Gates, in turn, took on the title of chief software architect to focus on the development of new products and technologies.


Legal Challenges

In late 1997 the Justice Department accused Microsoft of violating its 1994 agreement by requiring computer manufacturers that installed Windows 95 to also include Internet Explorer, Microsoft’s software for browsing the Internet. The government contended that Microsoft was illegally taking advantage of its power in the market for computer operating systems to gain control of the market for Internet browsers. In response, Microsoft argued that it should have the right to enhance the functionality of Windows by integrating Internet-related features into the operating system. Also in late 1997, computer company Sun Microsystems sued Microsoft, alleging that it had breached a contract for use of Sun’s Java universal programming language by introducing Windows-only enhancements. In November 1998 a federal district court ruled against Microsoft on an injunction filed by Sun earlier that year. The injunction forced Microsoft to revise its software to meet Sun’s Java compatibility standards. The two companies settled the case in 2001, with Microsoft agreeing to pay Sun $20 million for limited use of Java.

Microsoft temporarily settled with the Justice Department in its antitrust case in early 1998 by agreeing to allow personal computer manufacturers to offer a version of Windows 95 that did not include access to Internet Explorer. However, in May 1998 the Justice Department and 20 states filed broad antitrust suits charging Microsoft with engaging in anticompetitive conduct. The suits sought to force Microsoft to offer Windows without Internet Explorer or to include Navigator, a competing browser made by Netscape Communications Corporation. The suits also challenged some of the company’s contracts and pricing strategies.

The federal antitrust trial against Microsoft began in October 1998. Executives from Netscape, Sun, and several other computer software and hardware companies testified regarding their business deals with Microsoft. In November 1999 Judge Thomas Penfield Jackson issued his findings of fact in the antitrust case, in which he declared that Microsoft had a monopoly in the market for personal computer operating systems. In 2000 Jackson ruled that the company had violated antitrust laws by engaging in tactics that discouraged competition. He ordered Microsoft to be split into two companies: one for operating systems and another for all other businesses, including its Office software suite. He also imposed a number of interim restrictions on the company’s business practices. The judge put these penalties on hold while Microsoft appealed the decision.

In June 2001 an appeals court upheld Jackson’s findings that Microsoft had monopoly power and that the company used anticompetitive business practices to protect its Windows monopoly. However, the appeals court threw out the trial court’s ruling that Microsoft had illegally integrated Internet Explorer into Windows, returning the issue to a lower court for review under a different legal standard. The appeals court also reversed Jackson’s order to break up the company, in part because of the judge’s failure to hold a proper hearing on the remedy and in part because of comments he made to reporters outside the courtroom about the merits of the case. The court found that Jackson’s comments were improper because they created the appearance of bias, even though the court found no evidence of actual bias. The appeals court ordered that the case be assigned to a different judge to reconsider the remedy for Microsoft’s violations of antitrust law.


Browser, in computer science, a program that enables a computer to locate, download, and display documents containing text, sound, video, graphics, animation, and photographs located on computer networks. The act of viewing and moving about between documents on computer networks is called browsing. Users browse through documents on open, public-access networks called internets, or on closed networks called intranets. The largest open network is the Internet, a worldwide computer network that provides access to sites on the World Wide Web (WWW, the Web).

Browsers allow users to access Web information by locating documents on remote computers that function as Web servers. A browser downloads information over phone lines to a user’s computer through the user’s modem and then displays the information on the computer. Most browsers can display a variety of text and graphics that may be integrated into such a document, including animation, audio, and video. Examples of browsers are Netscape, Internet Explorer, and Mosaic.

Browsers can create the illusion of traveling to an actual location in virtual space (hyperspace) where the document being viewed exists. This virtual location in hyperspace is referred to as a node, or a Web site. The process of virtual travel between Web sites is called navigating.

Documents on networks are called hypertext if the media is text only, or hypermedia if the media includes graphics as well as text. Every hypertext or hypermedia document on an internet has a unique address called a uniform resource locator (URL). Hypertext documents usually contain references to other URLs that appear in bold, underlined, or colored text. The user can connect to the site indicated by the URL by clicking on it. This use of a URL within a Web site is known as a hyperlink. When the user clicks on a hyperlink, the browser moves to this next server and downloads and displays the document targeted by the link. Using this method, browsers can rapidly take users back and forth between different sites.
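How a browser finds the hyperlinks in a hypertext document can be sketched with Python's standard html.parser module. The page below is invented; the collector records the URL targeted by each link, which is what a browser follows when the user clicks.

```python
# Sketch: extract hyperlink targets (href attributes of <a> tags)
# from a hypertext document, as a browser must to follow links.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Record the URL targeted by each hyperlink in the document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs
                              if name == "href")

page = ('<p>Read <a href="http://example.com/next.html">the next '
        'page</a> or return <a href="http://example.com/">home</a>.</p>')
collector = LinkCollector()
collector.feed(page)
```

Each collected URL names the server and document to fetch next, which is how a browser moves the user from site to site.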

Common features found in browsers include the ability to automatically designate a Web site to which the browser opens with each use, the option to create directories of favorite or useful Web sites, access to search engines (programs that permit the use of key words to locate information on the Internet, an internet or an intranet), and the ability to screen out certain types of information by blocking access to certain categories of sites.

A browser’s performance depends upon the speed and efficiency of the user’s computer, the type of modem being used, and the bandwidth of the data-transmission medium (the amount of information that can be transmitted per second). Low bandwidth results in slow movement of data between source and recipient, leading to longer transmission times for documents. Browsers may also have difficulty reaching a site during times of heavy traffic on the network or because of high use of the site.

The most commonly used browsers for the Web are available for free or for a small charge and can be downloaded from the Internet. Browsers have become one of the most important tools—ranking with e-mail—for computer network users. They have provided tens of millions of people with a gateway to information and communication through the Internet.


Hypermedia, in computer science, the integration of graphics, sound, video, and animation into documents or files that are linked in an associative system of information storage and retrieval. Hypermedia files contain cross references called hyperlinks that connect to other files with related information, allowing users to easily move, or navigate, from one document to another through these associations.

Hypermedia is structured around the idea of offering a working and learning environment that parallels human thinking—that is, an environment that allows the user to make associations between topics rather than move sequentially from one to the next, as in an alphabetical list. Hypermedia topics are thus linked in a manner that allows the user to jump from subject to related subject in searching for information. For example, a hypermedia presentation on navigation might include links to such topics as astronomy, bird migration, geography, satellites, and radar. If the information is primarily in text form, the document or file is called hypertext. If video, music, animation, or other elements are included, the document is called a hypermedia document.

The World Wide Web (WWW) is a well-known hypermedia environment that users can access through the Internet. Other forms of hypermedia applications include CD-ROM encyclopedias and games. To view hypermedia documents, a user’s computer must have hardware and software that support multimedia. This usually consists of sound and video cards, speakers, and a graphical operating system such as Windows 95 or the Apple operating system. To view hypermedia documents on the Internet, a program called a browser is necessary. Browsers provide varying levels of support for the graphics, sound, and video available on the Internet. Examples of browsers are Netscape, Internet Explorer, and Mosaic.

A wide variety of computer programs are used to create different media applications on the WWW. To run these applications, a browser must be supplemented by programs called plug-ins. Plug-ins are tools that allow a computer to display or interpret the various file formats in which multimedia files exist. Plug-ins are usually available for free and can be downloaded to and stored on a computer’s hard drive.

The number of people experiencing hypermedia is growing rapidly as a result of increased exposure to the WWW, better computer and modem performance, and increases in data transmission rates.

Web Site

Web Site, in computer science, file of information located on a server connected to the World Wide Web (WWW). The WWW is a set of protocols and software that allows the global computer network called the Internet to display multimedia documents. Web sites may include text, photographs, illustrations, video, music, or computer programs. They also often include links to other sites in the form of hypertext, highlighted or colored text that the user can click on with their mouse, instructing their computer to jump to the new site.

Every web site has a specific address on the WWW, called a Uniform Resource Locator (URL). These addresses end in extensions that indicate the type of organization sponsoring the web site, for example, .gov for government agencies, .edu for academic institutions, and .com for commercial enterprises. The user’s computer must be connected to the Internet and have a special software program called a browser to retrieve and read information from a web site. Examples of browsers include Navigator from the Netscape Communications Corporation and Explorer from the Microsoft Corporation.
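The parts of a URL can be illustrated with Python’s standard urllib.parse module (a modern illustration; the address shown is invented for the example):

```python
from urllib.parse import urlparse

# Break a URL into its named parts. The hostname's final component
# (.gov, .edu, .com) hints at the sponsoring organization.
url = "http://www.example.edu/catalog/courses.html"
parts = urlparse(url)

print(parts.scheme)    # protocol used to retrieve the page
print(parts.hostname)  # the server's name on the Internet
print(parts.path)      # which file on that server

extension = parts.hostname.rsplit(".", 1)[-1]
print(extension)       # "edu" -> an academic institution
```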

The content presented on a web site usually contains hypertext and icons, pictures that also serve as links to other sites. By clicking on the hypertext or icons with their mouse, users instruct their browser program to connect to the web site specified by the URL contained in the hypertext link. These links are embedded in the web site through the use of Hypertext Markup Language (HTML), a special language that encodes the links with the correct URL.
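The way a browser locates the URL encoded inside an HTML link can be sketched with Python’s built-in html.parser module; the page fragment below is invented for illustration:

```python
from html.parser import HTMLParser

# Collect the target URL of every <a href="..."> link on a page --
# the same information a browser follows when the user clicks.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<p>See the <a href="http://www.example.com/nav.html">navigation</a> page.</p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['http://www.example.com/nav.html']
```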

Web sites generally offer an appearance that resembles the graphical user interfaces (GUI) of Microsoft’s Windows operating system, Apple’s Macintosh operating system, and other graphics based operating systems. They may include scroll bars, menus, buttons, icons, and toolbars, all of which can be activated by a mouse or other input device.

To find a web site, a user can consult an Internet reference guide or directory, or use one of the many freely available search engines, such as WebCrawler from America Online Incorporated. These engines are search and retrieval programs, of varying sophistication, that ask the user to fill out a form before executing a search of the WWW for the requested information. The user can also create a list of the URLs of frequently visited web sites. Such a list helps a user recall a URL and easily access the desired web site. Web sites are easily modified and updated, so the content of many sites changes frequently.

Computer Program



Computer Program, set of instructions that directs a computer to perform some processing function or combination of functions. For the instructions to be carried out, a computer must execute a program, that is, the computer reads the program, and then follows the steps encoded in the program in a precise order until completion. A program can be executed many different times, with each execution yielding a potentially different result depending upon the options and data that the user gives the computer.

Programs fall into two major classes: application programs and operating systems. An application program is one that carries out some function directly for a user, such as word processing or game-playing. An operating system is a program that manages the computer and the various resources and devices connected to it, such as RAM (random access memory), hard drives, monitors, keyboards, printers, and modems, so that they may be used by other programs. Examples of operating systems are DOS, Windows 95, OS/2, and UNIX.



Software designers create new programs by using special applications programs, often called utility programs or development programs. A programmer uses another type of program called a text editor to write the new program in a special notation called a programming language. With the text editor, the programmer creates a text file, which is an ordered list of instructions, also called the program source file. The individual instructions that make up the program source file are called source code. At this point, a special applications program translates the source code into machine language, or object code—a format that the operating system will recognize as a proper program and be able to execute.

Three types of applications programs translate from source code to object code: compilers, interpreters, and assemblers. The three operate differently and on different types of programming languages, but they serve the same purpose of translating from a programming language into machine language.

A compiler translates text files written in a high-level programming language—such as Fortran, C, or Pascal—from the source code to the object code all at once. This differs from the approach taken by interpreted languages such as BASIC, APL and LISP, in which a program is translated into object code statement by statement as each instruction is executed. The advantage to interpreted languages is that they can begin executing the program immediately instead of having to wait for all of the source code to be compiled. Changes can also be made to the program fairly quickly without having to wait for it to be compiled again. The disadvantage of interpreted languages is that they are slow to execute, since the entire program must be translated one instruction at a time, each time the program is run. On the other hand, compiled languages are compiled only once and thus can be executed by the computer much more quickly than interpreted languages. For this reason, compiled languages are more common and are almost always used in professional and scientific applications.
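The statement-by-statement behavior of an interpreter can be sketched in a few lines of Python; the tiny two-instruction command language here is invented purely for illustration:

```python
# A toy interpreter: each source line is translated and carried out
# immediately, one statement at a time, as an interpreter would.
def run(source):
    variables = {}
    for line in source.splitlines():
        op, name, value = line.split()
        if op == "SET":            # SET x 5  -> store a value
            variables[name] = int(value)
        elif op == "ADD":          # ADD x 3  -> add to a stored value
            variables[name] += int(value)
    return variables

program = """SET score 10
ADD score 5
ADD score 7"""
print(run(program))  # {'score': 22}
```

A compiler, by contrast, would translate all three statements into machine language before any of them ran.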

Another type of translator is the assembler, which is used for programs or parts of programs written in assembly language. Assembly language is another programming language, but it is much more similar to machine language than other types of high-level languages. In assembly language, a single statement can usually be translated into a single instruction of machine language. Today, assembly language is rarely used to write an entire program, but is instead most often used when the programmer needs to directly control some aspect of the computer’s function.
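The one-to-one translation an assembler performs can be sketched as a simple lookup table; the mnemonics and numeric opcodes below are invented, not those of any real processor:

```python
# A toy assembler: each assembly-language statement maps to exactly
# one machine instruction, represented here as (opcode, operand).
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}

def assemble(lines):
    machine_code = []
    for line in lines:
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

source = ["LOAD 7", "ADD 3", "STORE 12"]
print(assemble(source))  # [(1, 7), (2, 3), (3, 12)]
```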

Programs are often written as a set of smaller pieces, with each piece representing some aspect of the overall application program. After each piece has been compiled separately, a program called a linker combines all of the translated pieces into a single executable program.

Programs seldom work correctly the first time, so a program called a debugger is often used to help find problems called bugs. Debugging programs usually detect an event in the executing program and point the programmer back to the origin of the event in the program code.

Recent programming systems, such as Java, use a combination of approaches to create and execute programs. A compiler takes a Java source program and translates it into an intermediate form. Such intermediate programs are then transferred over the Internet into computers, where an interpreter program executes the intermediate form as an application program.



Most programs are built from just a few kinds of steps that are repeated many times in different contexts and in different combinations throughout the program. The most common step performs some computation, and then proceeds to the next step in the program, in the order specified by the programmer.

Programs often need to repeat a short series of steps many times, for instance in looking through a list of game scores and finding the highest score. Such repetitive sequences of code are called loops.
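The score-scanning loop described above might look like this in Python (the list of scores is invented for illustration):

```python
# Loop through a list of game scores to find the highest one.
scores = [310, 1250, 840, 990, 1250, 75]

highest = scores[0]
for score in scores:        # the loop body repeats once per score
    if score > highest:
        highest = score

print(highest)  # 1250
```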

One of the capabilities that makes computers so useful is their ability to make conditional decisions and perform different instructions based on the values of data being processed. If-then-else statements implement this function by testing some piece of data and then selecting one of two sequences of instructions on the basis of the result. One of the instructions in these alternatives may be a goto statement that directs the computer to select its next instruction from a different part of the program. For example, a program might compare two numbers and branch to a different part of the program depending on the result of the comparison: If x is greater than y then goto instruction #10 else continue
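Modern high-level languages express the same decision without an explicit goto; the comparison above might be written in Python as:

```python
# Test some data and select one of two sequences of instructions.
def compare(x, y):
    if x > y:
        return "x is greater"   # one branch of the program
    else:
        return "continue"       # the other branch

print(compare(8, 3))  # x is greater
print(compare(2, 9))  # continue
```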

Programs often use a specific sequence of steps more than once. Such a sequence of steps can be grouped together into a subroutine, which can then be called, or accessed, as needed in different parts of the main program. Each time a subroutine is called, the computer remembers where it was in the program when the call was made, so that it can return there upon completion of the subroutine. Preceding each call, a program can specify that different data be used by the subroutine, allowing a very general piece of code to be written once and used in multiple ways.
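A subroutine that is written once and supplied with different data on each call can be sketched as a Python function:

```python
# A general subroutine: each call supplies its own data, and control
# returns to the caller when the subroutine completes.
def average(values):
    return sum(values) / len(values)

print(average([80, 90, 100]))  # 90.0
print(average([3, 4]))         # 3.5
```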

Most programs use several varieties of subroutines. The most common of these are functions, procedures, library routines, system routines, and device drivers. Functions are short subroutines that compute some value, such as computations of angles, which the computer cannot compute with a single basic instruction. Procedures perform a more complex function, such as sorting a set of names. Library routines are subroutines that are written for use by many different programs. System routines are similar to library routines but are actually found in the operating system. They provide some service for the application programs, such as printing a line of text. Device drivers are system routines that are added to an operating system to allow the computer to communicate with a new device, such as a scanner, modem, or printer. Device drivers often have features that can be executed directly as applications programs. This allows the user to directly control the device, which is useful if, for instance, a color printer needs to be realigned to attain the best printing quality after changing an ink cartridge.



Modern computers usually store programs on some form of magnetic storage media that can be accessed randomly by the computer, such as the hard drive disk permanently located in the computer, or a portable floppy disk. Additional information on such disks, stored in directories, indicates the names of the various programs on the disk, when they were written to the disk, and where each program begins on the disk media. When a user directs the computer to execute a particular application program, the operating system looks through these directories, locates the program, and reads a copy into RAM. The operating system then directs the CPU (central processing unit) to start executing the instructions at the beginning of the program. Instructions at the beginning of the program prepare the computer to process information by locating free memory locations in RAM to hold working data, retrieving copies of the standard options and defaults the user has indicated from a disk, and drawing initial displays on the monitor.

The application program requests a copy of any information the user enters by making a call to a system routine. The operating system converts any data so entered into a standard internal form. The application then uses this information to decide what to do next—for example, perform some desired processing function such as reformatting a page of text, or obtain some additional information from another file on a disk. In either case, calls to other system routines are used to actually carry out the display of the results or the accessing of the file from the disk.

When the application reaches completion or is prompted to quit, it makes further system calls to make sure that all data that needs to be saved has been written back to disk. It then makes a final system call to the operating system indicating that it is finished. The operating system then frees up the RAM and any devices that the application was using and awaits a command from the user to start another program.



People have been storing sequences of instructions in the form of a program for several centuries. Music boxes of the 18th century and player pianos of the late 19th and early 20th centuries played musical programs stored as series of metal pins, or holes in paper, with each line (of pins or holes) representing when a note was to be played, and the pin or hole indicating what note was to be played at that time. More elaborate control of physical devices became common in the early 1800s with French inventor Joseph Marie Jacquard’s invention of the punch-card controlled weaving loom. In the process of weaving a particular pattern, various parts of the loom had to be mechanically positioned. To automate this process, Jacquard used a single paper card to represent each positioning of the loom, with holes in the card to indicate which loom actions should be done. An entire tapestry could be encoded onto a deck of such cards, with the same deck yielding the same tapestry design each time it was used. Programs of over 24,000 cards were developed and used.

The world’s first programmable machine was designed—although never fully built—by the English mathematician and inventor, Charles Babbage. This machine, called the Analytical Engine, used punch cards similar to those used in the Jacquard loom to select the specific arithmetic operation to apply at each step. Inserting a different set of cards changed the computations the machine performed. This machine had counterparts for almost everything found in modern computers, although it was mechanical rather than electrical. Construction of the Analytical Engine was never completed because the technology required to build it did not exist at the time.

The first card deck programs for the Analytical Engine were developed by British mathematician Countess Augusta Ada Lovelace, daughter of the poet Lord Byron. For this reason she is recognized as the world’s first programmer.

The modern concept of an internally stored computer program was first proposed by Hungarian-American mathematician John von Neumann in 1945. Von Neumann’s idea was to use the computer’s memory to store the program as well as the data. In this way, programs can be viewed as data and can be processed like data by other programs. This idea greatly simplifies the role of program storage and execution in computers.



The field of computer science has grown rapidly since the 1950s due to the increasing use of computers. Computer programs have undergone many changes during this time in response to user needs and advances in technology. Newer ideas in computing, such as parallel computing, distributed computing, and artificial intelligence, have radically altered the traditional concepts that once determined program form and function.

Computer scientists working in the field of parallel computing, in which multiple CPUs cooperate on the same problem at the same time, have introduced a number of new program models. In parallel computing, parts of a problem are worked on simultaneously by different processors, and this speeds up the solution of the problem. Many challenges face scientists and engineers who design programs for parallel processing computers, because of the extreme complexity of the systems and the difficulty involved in making them operate as effectively as possible.
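The basic pattern of dividing a problem into parts, working on the parts concurrently, and combining the partial results can be sketched with Python’s standard concurrent.futures module. This is only an illustration of the program model; Python threads do not achieve true processor-level parallelism, which requires independent CPUs:

```python
from concurrent.futures import ThreadPoolExecutor

# Split one large problem into parts, hand each part to a separate
# worker, then combine the partial results into a final answer.
def partial_sum(chunk):
    return sum(chunk)

numbers = list(range(1, 101))
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partial_sum, chunks))

print(sum(partials))  # 5050
```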

Another type of parallel computing called distributed computing uses CPUs from many interconnected computers to solve problems. Often the computers used to process information in a distributed computing application are connected over the Internet. Internet applications are becoming a particularly useful form of distributed computing, especially with programming languages such as Java. In such applications, a user logs onto a Web site and downloads a Java program onto their computer. When the Java program is run, it communicates with other programs at its home web site, and may also communicate with other programs running on different computers or web sites.

Research into artificial intelligence (AI) has led to several other new styles of programming. Logic programs, for example, do not consist of individual instructions for the computer to follow blindly, but instead consist of sets of rules: if x happens then do y. A special program called an inference engine uses these rules to “reason” its way to a conclusion when presented with a new problem. Applications of logic programs include automatic monitoring of complex systems, and proving mathematical theorems.
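A rule-based inference engine of the kind described above can be sketched in a few lines of Python; the rules and facts here are invented for illustration:

```python
# A toy inference engine: rules of the form "if x then y" are
# applied repeatedly until no new facts can be concluded.
rules = [
    ("rain", "wet_ground"),
    ("wet_ground", "slippery"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep reasoning until stable
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"rain"}, rules)))  # ['rain', 'slippery', 'wet_ground']
```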

A radically different approach to computing in which there is no program in the conventional sense is called a neural network. A neural network is a group of highly interconnected simple processing elements, designed to mimic the brain. Instead of having a program direct the information processing in the way that a traditional computer does, a neural network processes information depending upon the way that its processing elements are connected. Programming a neural network is accomplished by presenting it with known patterns of input and output data and adjusting the relative importance of the interconnections between the processing elements until the desired pattern matching is accomplished. Neural networks are usually simulated on traditional computers, but unlike traditional computer programs, neural networks are able to learn from their experience.
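The adjustment of connection weights from known input and output patterns can be sketched with a single artificial neuron (a perceptron) trained on the logical AND function. This is a minimal illustration of the learning idea, not a full neural network:

```python
# A single artificial neuron trained on the logical AND function.
# "Programming" here means adjusting connection weights from
# examples rather than writing explicit instructions.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def output(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

for _ in range(20):                      # repeat over the training data
    for inputs, target in examples:
        error = target - output(inputs)
        for i in range(2):               # strengthen or weaken each link
            weights[i] += rate * error * inputs[i]
        bias += rate * error

print([output(x) for x, _ in examples])  # [0, 0, 0, 1]
```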


Debugger, in computer science, a program designed to help in debugging another program by allowing the programmer to step through the program, examine data, and check conditions. There are two basic types of debuggers: machine-level and source-level. Machine-level debuggers display the actual machine instructions (disassembled into assembly language) and allow the programmer to look at registers and memory locations. Source-level debuggers let the programmer look at the original source code (C or Pascal, for example), examine variables and data structures by name, and so on.

Electronic Games



Electronic Games, software programs played for entertainment, challenge, or educational purposes. Electronic games are full of color, sound, realistic movement, and visual effects, and some even employ human actors. There are two broad classes of electronic games: video games, which are designed for specific video-game systems, handheld devices, and coin-operated arcade consoles; and computer games, which are played on personal computers.

Categories of electronic games include strategy games, sports games, adventure and exploration games, solitaire and multiplayer card games, puzzle games, fast-action arcade games, flying simulations, and versions of classic board games. Software programs that employ game-play elements to teach reading, writing, problem solving, and other basic skills are commonly referred to as edutainment.

Electronic games put to use a variety of skills. Many games, such as Tetris and Pac-Man, serve as tests of hand-eye coordination. In these games the challenge is to play as long as possible while the game gets faster or more complex. Other games, such as Super Mario Bros., are more sophisticated. They employ hand-eye coordination by challenging the player to react quickly to action on the screen, but they also test judgment and perseverance, sometimes by presenting puzzles that players must solve to move forward in the game. Strategy games ask players to make more complicated decisions that can influence the long-range course of the game. Electronic games can pit players against each other on the same terminal, across a local network, or via the Internet. Most games that require an opponent can also be played alone, however, with the computer taking on the role of opponent.



Video-game consoles, small handheld game devices, and coin-operated arcade games are special computers built exclusively for playing games. To control the games, players can use joysticks, trackballs, buttons, steering wheels (for car-racing games), light guns, or specially designed controllers that include a joystick, direction pad, and several buttons or triggers. Goggles and other kinds of virtual reality headgear can provide three-dimensional effects in specialized games. These games attempt to give the player the experience of actually being in a jungle, the cockpit of an airplane, or another setting or situation.

The first video games, which consisted of little more than a few electronic circuits in a simplified computer, appeared around 1970 as coin-operated cabinet games in taverns and pinball arcade parlors. In 1972 the Atari company introduced a game called Pong, based on table tennis. In Pong, a ball and paddles are represented by lights on the screen; the ball is set in motion, and by blocking it with the paddles, players knock it back and forth across the screen until someone misses. Pong soon became the first successful commercial video game. Arcade games have remained popular ever since.

Also in 1972 the Magnavox company introduced a home video-game machine called the Odyssey system. It used similar ball-and-paddle games in cartridge form, playable on a machine hooked up to a television. In 1977 Atari announced the release of its own home video-game machine, the Atari 2600. Many of the games played on the Atari system had originally been introduced as arcade games. The most famous included Space Invaders and Asteroids. In Space Invaders, a player has to shoot down ranks of aliens as they march down the screen. In Asteroids, a player needs to destroy asteroids before they crash into the player’s ship. The longer the player survives, the more difficult both games become. After Atari’s success with home versions of such games, other companies began to compete for shares of the fast-growing home video-game market. Major competitors included Coleco with its ColecoVision system, and Mattel with Intellivision. Some companies, particularly Activision, gained success solely by producing games for other companies’ video-game systems.

After several years of enormous growth, the home video-game business collapsed in 1983. The large number of games on offer confused consumers, and many video-game users were increasingly disappointed with the games they purchased. They soon stopped buying games altogether. Failing to see the danger the industry faced, the leading companies continued to spend millions of dollars for product development and advertising. Eventually, these companies ran out of money and left the video-game business.

Despite the decline in the home video-game business, the arcade segment of the industry continued to thrive. Pac-Man, which appeared in arcades in 1980, was one of the major sensations of the time. In this game, players maneuver a button-shaped character notched with a large mouth around a maze full of little dots. The goal is to gobble up all the dots without being touched by one of four enemies in hot pursuit. Another popular game was Frogger, in which players try to guide a frog safely across a series of obstacles, including a busy road.

In the mid-1980s, Nintendo, a Japanese company, introduced the Nintendo Entertainment System (NES). The NES touched off a new boom in home video games, due primarily to two game series: Super Mario Bros. and The Legend of Zelda. These and other games offered more advanced graphics and animation than earlier home video-game systems had, reengaging the interest of game players. Once again, other companies joined the growing home video-game market. One of the most successful was Sega, also headquartered in Japan. In the early 1990s, the rival video-game machines were Nintendo’s Super NES and Sega’s Genesis. These systems had impressive capabilities to produce realistic graphics, sound, and animation.

Throughout the 1990s Nintendo and Sega competed for dominance of the American home video-game market, and in 1995 another Japanese company, Sony, emerged as a strong competitor. Sega and Sony introduced new systems in 1995, the Sega Saturn and the Sony PlayStation. Both use games that come on CD-ROMs (compact discs). A year later, Nintendo met the challenge with the cartridge-based Nintendo 64 system, which has even greater processing power than its competitors, meaning that faster and more complex games can be created. In 1998 Sega withdrew the Saturn system from the U.S. market because of low sales.



While video-game systems are used solely for gaming, games are only one of the many uses for computers. In computer games, players can use a keyboard to type in commands or a mouse to move a cursor around the screen, and sometimes they use both. Many computer games also allow the use of a joystick or game controller.

Computer games were born in the mid-1970s, when computer scientists started to create text adventure games to be played over networks of computers at universities and research institutions. These games challenged players to reach a goal or perform a certain task, such as finding a magical jewel. At each stop along the way, the games described a situation and required the player to respond by typing in an action. Each action introduced a new situation to which the player had to react. One of the earliest such games was called Zork.

Beginning in the late 1970s, Zork and similar games were adapted for use on personal computers, which were just gaining popularity. As technology improved, programmers began to incorporate graphics into adventure games. Because relatively few people owned home computers, however, the market for computer games grew slowly until the mid-1980s. Then, more dynamic games such as Choplifter, a helicopter-adventure game produced by Broderbund, helped fuel rising sales of computers. In 1982 Microsoft Corporation released Flight Simulator, which allows players to mimic the experience of flying an airplane.

As the power of personal computers increased in the 1980s, more sophisticated games were developed. Some of the companies that produced the most popular games were Sierra On-Line, Electronic Arts, and Strategic Simulations, Inc. (SSI). A line of so-called Sim games, produced by Maxis, enabled players to create and manage cities (SimCity), biological systems (SimEarth), and other organizational structures. In the process, players learned about the relationships between the elements of the system. For example, in SimCity a player might increase the tax rate to raise money only to find that people move out of the city, thus decreasing the number of taxpayers and possibly offsetting the increase in revenue. An educational mystery game called Where in the World Is Carmen Sandiego?, by Broderbund, was introduced in the 1980s and aimed at children. The game tests players’ reasoning ability and general knowledge by requiring them to track down an elusive master criminal by compiling clues found around the world.

Computer games continued to gain popularity in the 1990s, with the introduction of more powerful and versatile personal computers and the growing use of computers in schools and homes. With the development of CD-ROM technology, games also integrated more graphics, sounds, and videos, making them more engaging for consumers. The most successful games of the 1990s included Doom (by Id Software) and Myst (by Broderbund). Doom is a violent action game in which the player is a marine charged with fighting evil creatures. In Myst, a player wanders through a fictional island world, attempting to solve puzzles and figure out clues. The game’s appeal comes from the process of exploration and discovery.

Many of the most recent games employ live actors and some of the techniques of filmmaking. Such games are like interactive movies. With the growth of the Internet in the mid-1990s, multiplayer gaming also became popular. In games played over the Internet—such as Ultima Online by Electronic Arts—dozens, hundreds, or even thousands of people can play a game at the same time. Players wander through a fictional world meeting not only computer-generated characters but characters controlled by other players as well. By the end of the 1990s the future for new computer games seemed limitless.




Games, activities or contests governed by sets of rules. People engage in games for recreation and to develop mental or physical skills.

Games come in many varieties. They may have any number of players and can be played competitively or cooperatively. They also may involve a wide range of equipment. Some games, such as chess, test players’ analytic skills. Other games, such as darts and electronic games, require hand-eye coordination. Some games are also considered sports, especially when they involve physical skill.



Games may be classified in several ways. These include the number of players required (as in solitaire games), the purpose of playing (as in gambling games), the object of the game (as in race games, to finish first), the people who play them (as in children’s games), or the place they are played (as in lawn games). Many games fall into more than one of these categories, so the most common way of classifying games is by the equipment that is required to play them.

Board games probably make up the largest category of games. They are usually played on a flat surface made of cardboard, wood, or other material. Players place the board on a table or on the floor, then sit around it to play. In most board games, pieces are placed on the board and moved around on it. Dice, cards, and other equipment can be used.

In strategy board games, pieces are placed or moved in order to capture other pieces (as in chess or checkers) or to achieve such goals as gaining territory, linking pieces to one another, or aligning pieces together. Other major groups of board games include race games (such as backgammon), word games (Scrabble), games of deduction (Clue), trivia games (Trivial Pursuit), party games (Pictionary), family games (Life), financial games (Monopoly), sports games (Strat-O-Matic Baseball), action games (Operation), and games of conflict (Risk).

Many games fall into more than one category. The board game Life, for example, has elements of race games, and Trivial Pursuit is often played at parties. Other types of board games include topical games, which can be based on currently popular movies, television programs, or books; and simulation games, which range from historical war games to civilization-building games.

Role-playing games, which can be played without boards or with playing fields drawn by hand on paper, are often considered a distinct game category. In these games, each player assumes the role of a character with particular strengths and weaknesses. Another player known as the gamemaster leads the character-players through adventures. The most famous role-playing game is Dungeons & Dragons (now called Advanced Dungeons & Dragons), which was invented in the 1970s.

Some games, such as billiards and table tennis, are played on larger surfaces than board games, typically tables with legs. These table games also require different kinds of equipment from board games. In billiards, players use a cue stick to knock balls into one another. Table tennis players use paddles to hit a light ball back and forth over a net strung across the table.

Card games require a deck of cards, and sometimes paper and pencil (or occasionally other equipment, such as a cribbage board) for keeping score. Many popular games, including poker, bridge, and rummy, call for a standard deck of 52 playing cards. Some card games, such as canasta, use more than one deck or a larger deck. And other games use a deck from which certain cards have been removed, or decks with cards designed specifically for the game.

The major kinds of card games include trick-taking games, in which players try to take (or avoid taking) specific cards; melding games, in which players try to form winning combinations with their cards; betting games, in which players wager on the outcome; and solitaire games, which are played alone. A new category, collectible card games, became an overnight sensation in 1993 with the publication of Magic: The Gathering. In Magic and similar games, players buy a starter set of cards that they use to compete against other players. They can supplement the starter kit with additional purchases of random assortments of cards.

Tile games can be similar to card games, but they use pieces made of harder materials, such as wood, plastic, or bone. Popular tile games include Mah Jongg and dominoes. Dice games involve throwing a set of dice in an attempt to achieve certain combinations or totals. Paper and pencil games use only paper and pencil. Two such games, tic-tac-toe and dots-and-boxes, are among the first games that many children learn. Target games, in which players aim at a target, are tests of hand-eye coordination. Examples of target games are marbles, horseshoe pitching, and bowling.

Electronic games (video games and computer games) grew in popularity in the late 20th century, as the power of computers increased. In most electronic games, players use a keyboard, joystick, or some other type of game controller. Video games are played on specially designed arcade machines, handheld devices, or systems that are hooked to television screens. Computer games are played on home computers. With electronic games, the computer itself can serve as the opponent, allowing people to play traditional games such as chess or bridge against the computer.



Games have been played for thousands of years and are common to all cultures. Throughout history and around the world, people have used sticks to draw simple game boards on the ground, making up rules that incorporate stones or other common objects as playing pieces. About 5000 years ago people began to make more permanent game boards from sun-dried mud or wood. One of the earliest games, called senet, was played in ancient Egypt. Like many early games, senet had religious significance. Pictures on the board squares represented different parts of the journey that the ancient Egyptians believed the soul made after death.

Some of the oldest board games may have evolved from methods of divination, or fortune-telling. The game of go, which many experts regard as the finest example of a pure strategy game, may have evolved from a method of divination practiced in China more than 3000 years ago, in which black and white pieces were cast onto a square board marked with symbols of various significance. Go also involves black and white pieces on a board, but players deliberately place them on intersections of lines while trying to surround more territory than the opponent.

Many modern games evolved over centuries. As games spread to different geographic regions, people experimented with rules, creating variants and often changing the original game forever. The name mancala applies to a group of ancient Egyptian mathematical games in which pebbles, seeds, or other objects are moved around pits scooped out of dirt or wood. As the game spread through Asia, Africa, and the Americas, players developed local variations that are still played today. Two such variations are sungka, from the Philippines, and mweso, from Uganda.

Chess, xiangqi (Chinese chess), and shogi (Japanese chess) are among the most widely played board games in the world. Although quite different, all three are believed to have evolved from a common ancestor—either a 6th-century game played in India or an earlier game played in China. Over the centuries, chess spread westward to the Middle East and into Europe, with rules changing frequently. The game also spread eastward to Korea and Japan, resulting in very different rule changes.

For most of human history, a game could not gain much popularity unless it was fairly easy for players to make their own equipment. The invention of printing (which occurred in the mid-1400s in the West) made this process easier, but it was not until the advances of the 18th-century Industrial Revolution that it became possible to mass-produce many new varieties of games. Twentieth-century technological advances such as the invention of plastic and the computer revolution led to the creation of more games, and more new kinds of games, than in all previous centuries combined.



In recent years improvements in CDs (compact discs) and in other aspects of computer technology have brought about entire new categories of games that grow more sophisticated each year. Computer adventure games, which as recently as the early 1980s consisted almost entirely of text, can now feature sophisticated graphics and movie-like animations using human actors.

In the 1990s the Internet opened up the possibility of playing games with people in all parts of the world. Internet clubs have sprung up for many kinds of games, and many of the newest computer games now come with user interfaces for online play.


E-Mail, in computer science, abbreviation of electronic mail, a method of transmitting data or text files from one computer to another over an intranet or the Internet. E-mail enables computer users to send messages and data quickly through a local area network or beyond through a nationwide or worldwide communication network. E-mail came into widespread use in the 1990s and has become a major development in business and personal communications.

E-mail users create and send messages from individual computers using commercial e-mail programs or mail-user agents (MUAs). Most of these programs have a text editor for composing messages. The user sends a message to one or more recipients by specifying destination addresses. When a user sends an e-mail message to several recipients at once, it is sometimes called broadcasting.

The address of an e-mail message includes the source and destination of the message. Different addressing conventions are used depending upon the e-mail destination. An interoffice message distributed over an intranet, or internal computer network, may have a simple scheme, such as the employee’s name, for the e-mail address. E-mail messages sent outside of an intranet are addressed according to the following convention: the first part of the address contains the user’s name, followed by the symbol @ and then the domain name, which identifies the institution or organization and ends with a suffix indicating the type of organization or the country.

A typical e-mail address might be sally@abc.com. In this example sally is the user’s name, abc is the domain name—the specific company, organization, or institution that the e-mail message is sent to or from, and the suffix com indicates the type of organization that abc belongs to—com for commercial, org for organization, edu for educational, mil for military, and gov for governmental. An e-mail message that originates outside the United States or is sent from the United States to other countries has a supplementary suffix that indicates the country of origin or destination. Examples include uk for the United Kingdom, fr for France, and au for Australia.
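The addressing convention described above can be sketched in code. The following Python function is a hypothetical helper for illustration only (real addresses can be more varied, with multi-part domain names); it splits an address of the form user@domain.suffix into its parts:

```python
def parse_address(address):
    """Split an e-mail address into its user, domain, and suffix parts.

    Illustrative only: assumes a simple user@domain.suffix form.
    """
    user, _, host = address.partition("@")   # text before and after the @ symbol
    labels = host.split(".")                 # domain labels, e.g. ["abc", "com"]
    return {"user": user, "domain": labels[0], "suffix": labels[-1]}

parse_address("sally@abc.com")
# → {"user": "sally", "domain": "abc", "suffix": "com"}
```

Applied to the example in the text, sally@abc.com yields the user sally, the domain abc, and the suffix com.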

E-mail data travels from the sender’s computer to a network tool called a message transfer agent (MTA) that, depending on the address, either delivers the message within that network of computers or sends it to another MTA for distribution over the Internet. The data file is eventually delivered to the private mailbox of the recipient, who retrieves and reads it using an e-mail program or MUA. The recipient may delete the message, store it, reply to it, or forward it to others.
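The MTA's basic routing decision, deliver locally or hand off to another MTA, can be sketched as follows. The function and message structure here are hypothetical, intended only to illustrate the branching logic described above:

```python
def route(message, local_domain):
    """Sketch of an MTA's delivery decision; names and structure are illustrative."""
    user, _, domain = message["to"].partition("@")
    if domain == local_domain:
        # The recipient belongs to this network: deliver within it.
        return f"deliver to local mailbox of {user}"
    # Otherwise, pass the message to the MTA responsible for that domain.
    return f"forward to the MTA for {domain}"

route({"to": "sally@abc.com"}, "abc.com")   # delivered within the local network
route({"to": "bob@xyz.org"}, "abc.com")     # handed off for Internet delivery
```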

Modems are important devices that have allowed for the use of e-mail beyond local area networks. Modems convert a computer’s binary language into an analog signal and transmit the signal over ordinary telephone lines. Modems may be used to send e-mail messages to any destination in the world that has modems and computers able to receive messages.

E-mail messages display technical information called headers and footers above and below the main message body. In part, headers and footers record the sender’s and recipient’s names and e-mail addresses, the times and dates of message transmission and receipt, and the subject of the message.
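Header fields of this kind can be seen by building a message with Python's standard email library; the addresses below are the hypothetical ones used earlier in this article:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sally@abc.com"        # sender's address
msg["To"] = "bob@xyz.org"            # recipient's address
msg["Subject"] = "Meeting notes"     # subject of the message
msg.set_content("See you at noon.")  # the main message body

print(msg)  # prints the header block, a blank line, then the body
```

Running this prints the header fields first, then a blank line, then the body text, which is the layout the paragraph above describes.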

In addition to the plain text contained in the body of regular e-mail messages, an increasing number of e-mail programs allow the user to send separate files attached to e-mail transmissions. This allows the user to append large text- or graphics-based files to e-mail messages.

E-mail has had a great impact on the amount of information sent worldwide. It has become an important method of transmitting information previously relayed via regular mail, telephone, courier, fax, television, and radio.


Windows, in computer science, personal computer operating system sold by Microsoft Corporation that allows users to enter commands with a point-and-click device, such as a mouse, instead of a keyboard. An operating system is a set of programs that control the basic functions of a computer. The Windows operating system provides users with a graphical user interface (GUI), which allows them to manipulate small pictures, called icons, on the computer screen to issue commands. Windows is the most widely used operating system in the world. It is an extension of and replacement for Microsoft’s Disk Operating System (MS-DOS).

The Windows GUI is designed to be a natural, or intuitive, work environment for the user. With Windows, the user can move a cursor around on the computer screen with a mouse. By pointing the cursor at icons and clicking buttons on the mouse, the user can issue commands to the computer to perform an action, such as starting a program, accessing a data file, or copying a data file. Other commands can be reached through pull-down or click-on menu items. The computer displays the active area in which the user is working as a window on the computer screen. The currently active window may overlap with other previously active windows that remain open on the screen. This type of GUI is said to include WIMP features: windows, icons, menus, and pointing device (such as a mouse).

Computer scientists at the Xerox Corporation’s Palo Alto Research Center (PARC) invented the GUI concept in the early 1970s, but this innovation was not an immediate commercial success. In 1983 Apple Computer featured a GUI in its Lisa computer. This GUI was updated and improved in its Macintosh computer, introduced in 1984.

Microsoft began its development of a GUI in 1983 as an extension of its MS-DOS operating system. Microsoft’s Windows version 1.0 first appeared in 1985. In this version, the windows were tiled, or presented next to each other rather than overlapping. Windows version 2.0, introduced in 1987, was designed to resemble IBM’s OS/2 Presentation Manager, another GUI operating system. Windows version 2.0 included the overlapping window feature. The more powerful version 3.0 of Windows, introduced in 1990, and subsequent versions 3.1 and 3.11 rapidly made Windows the market leader in operating systems for personal computers, in part because it was prepackaged on new personal computers. It also became the favored platform for software development.

In 1993 Microsoft introduced Windows NT (New Technology). The Windows NT operating system offers 32-bit multitasking, which gives a computer the ability to run several programs simultaneously, or in parallel, at high speed. This operating system competes with IBM’s OS/2 as a platform for the intensive, high-end, networked computing environments found in many businesses.

In 1995 Microsoft released a new version of Windows for personal computers called Windows 95. Windows 95 had a sleeker and simpler GUI than previous versions. It also offered 32-bit processing, efficient multitasking, network connections, and Internet access. Windows 98, released in 1998, improved upon Windows 95.

In 1996 Microsoft debuted Windows CE, a scaled-down version of the Microsoft Windows platform designed for use with handheld personal computers. Windows 2000, released at the end of 1999, combined Windows NT technology with the Windows 98 graphical user interface.

Other popular operating systems include the Macintosh System (Mac OS) from Apple Computer, Inc., OS/2 Warp from IBM, and UNIX and its variations, such as Linux.

Operating System



Operating System (OS), in computer science, the basic software that controls a computer. The operating system has three major functions: It coordinates and manipulates computer hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc, and tape; and it manages hardware errors and the loss of data.



Operating systems control different computer processes, such as running a spreadsheet program or accessing information from the computer's memory. One important process is the interpretation of commands that allow the user to communicate with the computer. Some command interpreters are text oriented, requiring commands to be typed in. Other command interpreters are graphically oriented and let the user communicate by pointing and clicking on an icon, an on-screen picture that represents a specific command. Beginners generally find graphically oriented interpreters easier to use, but many experienced computer users prefer text-oriented command interpreters because they are more powerful.
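A text-oriented command interpreter is, at its core, a loop that reads a typed command, matches it against known verbs, and acts on it. The sketch below is purely illustrative (the commands and file set are invented, not those of any real operating system):

```python
def interpret(command, files):
    """A minimal text-oriented command interpreter; commands are illustrative."""
    verb, _, arg = command.strip().partition(" ")
    if verb == "list":
        return ", ".join(sorted(files))          # show all files
    if verb == "delete" and arg in files:
        files.remove(arg)                        # act on the named file
        return f"deleted {arg}"
    return f"unknown command: {command}"

files = {"notes.txt", "budget.xls"}
interpret("list", files)               # → "budget.xls, notes.txt"
interpret("delete notes.txt", files)   # → "deleted notes.txt"
```

A graphically oriented interpreter performs the same dispatch, but the "command" arrives as a click on an icon rather than as typed text.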

Operating systems are either single-tasking or multitasking. The more primitive single-tasking operating systems can run only one process at a time. For instance, when the computer is printing a document, it cannot start another process or respond to new commands until the printing is completed.

All modern operating systems are multitasking and can run several processes simultaneously. In most computers there is only one central processing unit (CPU; the computational and control unit of the computer), so a multitasking OS creates the illusion of several processes running simultaneously on the CPU. The most common mechanism used to create this illusion is time-slice multitasking, whereby each process is run individually for a fixed period of time. If the process is not completed within the allotted time, it is suspended and another process is run. This exchanging of processes is called context switching. The OS performs the “bookkeeping” that preserves the state of a suspended process. It also has a mechanism, called a scheduler, that determines which process will be run next. The scheduler runs short processes quickly to minimize perceptible delay. The processes appear to run simultaneously because the user's sense of time is much slower than the processing speed of the computer.
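The time-slice mechanism described above can be modeled in a few lines. In this sketch (illustrative process names; real schedulers are far more elaborate), each process is a name plus an amount of remaining work; the scheduler runs each one for at most a fixed slice, then context-switches by suspending it and requeueing it:

```python
from collections import deque

def run_round_robin(processes, time_slice):
    """Simulate time-slice multitasking over (name, work_units) pairs."""
    ready = deque(processes)          # the scheduler's queue of runnable processes
    log = []                          # which process ran, and for how long
    while ready:
        name, remaining = ready.popleft()    # scheduler picks the next process
        ran = min(time_slice, remaining)
        log.append((name, ran))
        remaining -= ran
        if remaining > 0:
            ready.append((name, remaining))  # context switch: suspend and requeue
    return log

# Three "processes" with different amounts of work, 2 units per slice.
schedule = run_round_robin([("editor", 3), ("printer", 5), ("clock", 1)], 2)
```

Note how the short "clock" process finishes in its first slice while the longer ones are interleaved, which is why short processes feel responsive to the user.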

Operating systems can use virtual memory to run processes that require more main memory than is actually available. With this technique, space on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is more time-consuming than accessing main memory, however, so performance of the computer slows.
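The trade-off can be illustrated with a toy model of demand paging. Here "main memory" holds only a few pages; touching a page that is not resident counts as a page fault, the slow trip to the hard drive described above. The class and eviction policy (least recently used) are a simplified sketch, not how any particular OS implements paging:

```python
class VirtualMemory:
    """Toy model of demand paging: a small main memory backed by disk."""

    def __init__(self, frames):
        self.frames = frames      # how many pages fit in main memory
        self.resident = []        # pages in memory, least recently used first
        self.faults = 0           # slow accesses that had to go to disk

    def access(self, page):
        if page in self.resident:
            self.resident.remove(page)   # fast path: already in main memory
        else:
            self.faults += 1             # page fault: fetch from the hard drive
            if len(self.resident) == self.frames:
                self.resident.pop(0)     # evict the least recently used page
        self.resident.append(page)       # mark as most recently used

vm = VirtualMemory(frames=2)
for page in [1, 2, 1, 3, 2]:
    vm.access(page)
vm.faults   # → 4: only the second access to page 1 found its page in memory
```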



Operating systems commonly found on personal computers include UNIX, Macintosh OS, MS-DOS, OS/2, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among academic computer users. Its popularity is due in large part to the growth of the interconnected computer network known as the Internet, the software for which initially was designed for computers that ran UNIX. Variations of UNIX include SunOS (distributed by Sun Microsystems, Inc.), Xenix (distributed by Microsoft Corporation), and Linux (available for download free of charge and distributed commercially by companies such as Red Hat, Inc.). UNIX and its clones support multitasking and multiple users. The UNIX file system provides a simple means of organizing disk files and lets users protect their files from other users. The commands in UNIX are not intuitive, however, and mastering the system is difficult.

DOS (Disk Operating System) and its successor, MS-DOS, are popular operating systems among users of personal computers. The file systems of DOS and MS-DOS are similar to that of UNIX, but they are single-user and single-tasking because they were developed before personal computers became relatively powerful. A multitasking variation is OS/2, initially developed by Microsoft Corporation and International Business Machines (IBM).

Few computer users run MS-DOS or OS/2 directly. They prefer versions of UNIX or windowing systems with graphical interfaces, such as Windows or the Macintosh OS, which make computer technology more accessible. However, graphical systems generally have the disadvantage of requiring more hardware—such as faster CPUs, more memory, and higher-quality monitors—than do command-oriented operating systems.



Operating systems continue to evolve. A recently developed type of OS called a distributed operating system is designed for a connected, but independent, collection of computers that share resources such as hard drives. In a distributed OS, a process can run on any computer in the network (presumably a computer that is idle) to increase that process's performance. All basic OS functions—such as maintaining file systems, ensuring reasonable behavior, and recovering data in the event of a partial failure—become more complex in distributed systems.

Research is also being conducted that would replace the keyboard with a means of using voice or handwriting for input. Currently these types of input are imprecise because people pronounce and write words very differently, making it difficult for a computer to recognize the same input from different users. However, advances in this field have led to systems that can recognize a small number of words spoken by a variety of people. In addition, software has been developed that can be taught to recognize an individual's handwriting.

International Business Machines Corporation



International Business Machines Corporation (IBM), one of the world’s largest manufacturers of computers and a leading provider of computer-related products and services worldwide. IBM makes computer hardware, software, microprocessors, communications systems, servers, and workstations. Its products are used in business, government, science, defense, education, medicine, and space exploration. IBM has its headquarters in Armonk, New York.



The company was incorporated in 1911 as the Computing-Tabulating-Recording Company in a merger of three smaller companies. After further acquisitions, it adopted the name International Business Machines Corporation in 1924. Thomas J. Watson, who had joined the company in 1914, built the floundering firm into an industrial giant. IBM soon became the country’s largest manufacturer of time clocks and punch-card tabulators. It also developed and marketed the first electric typewriter.



IBM entered the market for digital computers in the early 1950s, after the introduction of the UNIVAC computer by rival Remington Rand in 1951. The development of IBM’s computer technology was largely funded by contracts with the U.S. government’s Atomic Energy Commission, and close parallels existed between products made for government use and those introduced by IBM into the public marketplace. In the 1960s IBM distinguished itself with two innovations: the concept of a family of computers (its System/360 family, introduced in 1964) in which the same software could be run across the entire family; and a corporate policy dictating that no customer would be allowed to fail in implementing an IBM system. This policy spawned enormous loyalty to “Big Blue,” as IBM came to be known.

IBM’s dominant position in the computer industry has led the U.S. Department of Justice to file several antitrust suits against the company. IBM lost an antitrust case in 1936, when the Supreme Court of the United States ruled that IBM and Remington Rand were unfairly controlling the punch-card market and illegally forcing customers to buy their products. In 1956 IBM settled another lawsuit filed by the Department of Justice. IBM agreed to sell its tabulating machines rather than just leasing them, to establish a competitive market for used machines. In 1982 the Justice Department abandoned a federal antitrust suit against IBM after 13 years of litigation.

From the 1960s until the 1980s IBM dominated the global market for mainframe computers, although in the 1980s IBM lost market share to other manufacturers in specialty areas such as high-performance computing. When minicomputers gained popularity in the 1970s, IBM viewed them as a threat to the mainframe market and failed to recognize their potential, opening the door for such competitors as Digital Equipment Corporation, Hewlett-Packard Company, and Data General.



In 1981 IBM introduced its first personal computer, the IBM PC, which was rapidly adopted in businesses and homes. The computer was based on the 8088 microprocessor made by Intel Corporation and the MS-DOS operating system made by Microsoft Corporation. The PC’s enormous success led to other models, including the XT and AT lines. Seeking to capture a share of the personal-computer market, other companies developed clones of the PC, known as IBM-compatibles, that could run the same software as the IBM PC. By the mid-1980s these clone computers far outsold IBM personal computers.

In the mid-1980s IBM collaborated with Microsoft to develop an operating system called OS/2 to replace the aging MS-DOS. OS/2 ran older applications written for MS-DOS and newer, OS/2-specific applications that could run concurrently with each other in a process called multitasking. IBM and Microsoft released the first version of OS/2 in 1987. In 1991 Microsoft and IBM ended their collaboration on OS/2. IBM released several new versions of the operating system throughout the 1990s, while Microsoft developed its Windows operating systems.

In the late 1980s IBM was the world’s largest producer of a full line of computers and a leading producer of office equipment, including typewriters and photocopiers. The company was also the largest manufacturer of integrated circuits. The sale of mainframe computers and related software and peripherals accounted for nearly half of IBM’s business and about 70 to 80 percent of its profits.



In the early 1990s, amid a recession in the U.S. economy, IBM reorganized itself into autonomous business units more closely aligned to the company’s markets. The company suffered record losses in 1992 and, for the first time in its history, IBM cut stock dividends (to less than half of their previous value). John F. Akers, chairman of IBM since 1985, resigned in early 1993. Louis V. Gerstner, Jr., was named chairman of the company later that year. In 1995, IBM paid $3.5 billion to acquire Lotus Development Corporation, a software company, expanding its presence in the software industry. Beginning in 1996, IBM began increasing its stock dividends as the company returned to profitability. In 1997 an IBM supercomputer known as Deep Blue defeated world chess champion Garry Kasparov in a six-game chess match. The victory was hailed as a milestone in the development of artificial intelligence.

In 1998 IBM built the world’s fastest supercomputer for the Department of Energy at the Lawrence Livermore National Laboratory. The computer is capable of 3.9 trillion calculations per second, and was developed to simulate nuclear-weapons tests. In 1999 IBM announced a $16-billion, seven-year agreement to provide Dell Computer Corporation with storage, networking, and display peripherals–the largest agreement of its kind ever. IBM also announced plans to develop products and provide support for Linux, a free version of the UNIX operating system.

Programming Language



Programming Language, in computer science, artificial language used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.
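The difference in precision can be seen in even a tiny program. The English instruction "add up the numbers from 1 to 10" leaves room for interpretation (does "to 10" include 10?), whereas in a programming language such as Python the same instruction admits exactly one reading:

```python
# "Add up the numbers from 1 to 10," stated unambiguously as a program.
total = 0
for n in range(1, 11):   # precisely the integers 1 through 10, inclusive
    total = total + n
print(total)             # prints 55
```

Every computer that runs this program produces the same result, because the grammar and meaning of each symbol are fixed by the language's rules.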
