
transform into – to convert into, to change the form of

3. NOUNS + PREPOSITIONS

an alternative to – to be an alternative to something
in agreement with – in accordance with, by agreement
in comparison with – compared with
in connection with – in connection with, with reference to
in relation to – regarding, concerning
in use – in use, being applied
intention of – intention, aim, purpose
need for – necessity, need
probability of – likelihood, probability of something
reason for – motive, grounds
use of – application, employment

4. ADJECTIVES + PREPOSITIONS

capable of – able to do something
engaged in – occupied with something
essential to – necessary for
in general – on the whole
in particular – specifically, especially
similar to – alike, in a similar manner
full of – filled with


Supplementary reading

UNIT 1 COMPUTER & COMPUTING: Famous people in computer technology development

Entrepreneur Steven Jobs and Stephen Wozniak, his engineer partner, founded a small company named Apple Computer, Inc. They introduced the Apple II computer in 1977. Its monitor supported relatively high-quality color graphics, and it had a floppy-disk drive. The machine was initially popular for running video games. In 1979 Daniel Bricklin wrote an electronic spreadsheet program called VisiCalc that ran on the Apple II.

IBM introduced its Personal Computer (PC) in 1981. As a result of competition from the makers of clones (computers that worked exactly like an IBM PC), the price of personal computers fell drastically. By the 1990s personal computers were far more powerful than the multimillion-dollar machines from the 1950s. In rapid succession computers shrank from tabletop to laptop and finally to palm-size.

The English mathematician Charles Babbage conceived the first set of operating instructions for a digital computer in the design of his “analytical engine” (1834), which was never built. The first operational stored-program computer was completed in 1949 at the University of Cambridge. The operating systems that came into wide use between 1950 and 1980 were developed mostly by private companies to operate proprietary mainframe computers and applications. The most popular of these systems, built by IBM Corporation, include MVS, DOS/VSE, and VM.

In addition to proprietary systems, open, or portable, operating systems have been developed to run computers built by other manufacturers.

Open operating systems rose to prominence during the 1980s and are now widely used to run personal computers (PCs) and workstations, which are extremely powerful PCs. The dominant operating system is the disk operating system (DOS) developed by Microsoft Corporation. Also popular is Microsoft's Windows NT, an adjunct to DOS that provides enhanced computer graphics.

Following the launch of the Altair 8800, William Henry Gates III (known as Bill Gates) called the creators of the new microcomputer, Micro Instrumentation and Telemetry Systems (MITS), offering to demonstrate an implementation of the BASIC programming language for the system. After the demonstration, MITS agreed to distribute Altair BASIC. Gates left Harvard University, moved to Albuquerque, New Mexico, where MITS was located, and founded Microsoft there on September 5, 1975. The company's first international office was founded on November 1, 1978, in Japan, titled "ASCII Microsoft" (now called "Microsoft Japan"). On January 1, 1979, the company moved from Albuquerque to a new home in Bellevue, Washington.


UNIT 1 COMPUTER & COMPUTING: CPU

The heart of a computer is the central processing unit (CPU). In addition to performing arithmetic and logic operations on data, it times and controls the rest of the system. Mainframe and supercomputer CPUs sometimes consist of several linked microchips, called microprocessors1, each of which performs a separate task, but most other computers require only a single microprocessor as a CPU.

Components known as input devices let users enter commands, data, or programs for processing by the CPU. Computer keyboards, which are much like typewriter keyboards, are the most common input devices. Information typed at the keyboard is translated into a series of binary numbers2 that the CPU can manipulate.

Most digital computers store data both internally, in what is called main memory3, and externally, on auxiliary storage units4. As a computer processes data and instructions, it temporarily stores information in main memory, which consists of random-access memory (RAM). Random access means that each byte can be stored and retrieved directly, as opposed to sequentially as on magnetic tape.
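
The difference between random and sequential access can be pictured with a small Java sketch (the class name and sizes below are invented for illustration): an array held in RAM lets any element be read directly by its index, while tape-like storage has to be read element by element from the beginning.

    public class AccessDemo {
        public static void main(String[] args) {
            byte[] ram = new byte[100_000];        // RAM-like storage: any byte reachable directly
            ram[99_999] = 42;

            byte direct = ram[99_999];             // random access: one step, by index

            // Tape-like (sequential) access: to reach the same byte, pass over all earlier ones.
            byte viaScan = 0;
            for (int i = 0; i <= 99_999; i++) {
                viaScan = ram[i];
            }

            System.out.println(direct == viaScan); // true, but the scan touched 100,000 bytes
        }
    }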

Components that let the user see or hear the results of the computer's data processing are known as output devices. The most common one is the video display terminal (VDT), or monitor, which uses a cathode-ray tube (CRT)5, nowadays out of date, or a liquid-crystal display (LCD)6 to show characters and graphics on a television-like screen.


Comments:

1microprocessor - an integrated circuit that contains all the functions of a central processing unit of a computer.

2binary number – the binary numeral system, or base-2 number system, represents numeric values using two symbols, usually 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2 (a short code illustration follows these comments). Owing to its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by all modern computers.

3main memory – the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.

4auxiliary storage unit – a device that stores data for long periods without external power; an external memory device.

5cathode-ray tube (CRT) – a high-vacuum tube, in which cathode rays produce a luminous image on a fluorescent screen, used chiefly in televisions and computer terminals.

6liquid-crystal display (LCD) – a form of visual display used in electronic devices, in which a layer of a liquid crystal is sandwiched between two transparent electrodes.
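
As a short illustration of comment 2, the Java sketch below (class name invented) converts the arbitrary decimal value 13 to its base-2 form and back using standard library calls:

    public class BinaryDemo {
        public static void main(String[] args) {
            int value = 13;

            // 13 in base 2 is 1101: 1*8 + 1*4 + 0*2 + 1*1
            String binary = Integer.toBinaryString(value);
            System.out.println(binary);              // prints 1101

            // Positional notation with a radix of 2: parse the digits back into a number.
            int back = Integer.parseInt("1101", 2);
            System.out.println(back);                // prints 13
        }
    }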


UNIT 2 SOFTWARE: UNIX

Unix (officially trademarked as UNIX, sometimes also written as Unix with small caps) is a computer operating system originally developed in 1969 by a group of AT&T employees at Bell Labs, including Ken Thompson, Dennis Ritchie, Brian Kernighan, and Douglas McIlroy. Today the term Unix is used to describe any operating system that conforms to Unix standards, meaning the core operating system operates the same as the original Unix operating system. Today's Unix systems are split into various branches, developed over time by AT&T as well as various commercial vendors and non-profit organizations.

As of 2007, the owner of the trademark is The Open Group, an industry standards consortium. Only systems fully compliant with and certified according to the Single UNIX Specification are qualified to use the trademark; others are called "Unix system-like" or "Unix-like".

During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (particularly of the BSD variant, originating from the University of California, Berkeley) by commercial startups, the most notable results of which are Solaris, HP-UX and AIX. Today, in addition to certified Unix systems such as those already mentioned, Unix-like operating systems such as Linux and BSD are commonly encountered. The term "traditional Unix" may be used to describe a Unix or an operating system that has the characteristics of either Version 7 Unix or UNIX System V.

Unix operating systems are widely used in both servers and workstations. The Unix environment and the client-server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers.

Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.

As a result, Unix became synonymous with "open systems".

Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of interprocess communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are known as the Unix philosophy.
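
As a rough sketch of the idea of stringing small tools together through pipes, the Java program below builds the shell pipeline ls | wc -l (it assumes Java 9 or later and a Unix-like system with ls and wc on the PATH; the class name is invented):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.List;

    public class PipeDemo {
        public static void main(String[] args) throws Exception {
            // Equivalent of the shell pipeline: ls | wc -l
            List<Process> pipeline = ProcessBuilder.startPipeline(List.of(
                    new ProcessBuilder("ls"),        // small tool #1: list directory entries
                    new ProcessBuilder("wc", "-l")   // small tool #2: count lines
            ));
            Process last = pipeline.get(pipeline.size() - 1);
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(last.getInputStream()))) {
                System.out.println("Entries: " + out.readLine().trim());
            }
            last.waitFor();
        }
    }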

Under Unix, the "operating system" consists of many of these utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low level" tasks that most programs share, and, perhaps most importantly, schedules access to hardware to avoid conflicts if two programs try to access the same resource or device simultaneously. To mediate such access, the kernel was given special rights on the system, leading to the division between user-space and kernel-space.

The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a "normal" computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked quite well as most I/O was "linear". However, modern systems include networking and other new devices. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse, and in the 1980s non-blocking I/O was introduced and the set of interprocess communication mechanisms was augmented (sockets, shared memory, message queues, semaphores), and functionalities such as network protocols were moved out of the kernel.


UNIT 3 PORTABLE COMPUTERS: The future of Portable Computers

Jef Raskin, a user interface and system design consultant: Future portable computers will look like tiny keyboards or writing tablets with a cable or a wireless link to your eyeglasses. Though it would seem inevitable, not all versions of the glasses will connect to the computer, car or whatever by a wireless link.

Unfortunately, batteries are still big and heavy, and will be for a while yet, so we'll have to have some wires leading to battery packs, and once you've got a wire to the battery on your belt, you might as well have a nice, reliable, and cheap wire link to your eyeballs. Head-mounted displays will ultimately become cheaper than LCD panels. If it's done right, the eyeglass display will plug into whatever gadget you have, giving at least 600 * 800 pixels of screen resolution to the smallest cell phone (displays with built-in cell phones will be common). Because the illumination can come from a single white light-emitting diode that takes a fraction of a watt to operate, lithium-ion or lithium-polymer batteries on your belt or in the keyboard unit will allow you to work for many more hours on a single charge than today's big screens permit.

The laptop will probably last through this decade, but we are exploring different alternatives, and as such it is likely that conclusions on this subject will be different in the near future. The market desperately needs to move to a more appliance-like device that is much more portable and much less power-hungry. With increasing wireless bandwidth and availability our options are going to increase. Micro-displays are advancing, thanks to rear projection TVs, at an incredible rate, making head-mounted displays more capable and more likely long term.

While we have a number of choices, for most, the laptop, with some enhancements, is likely to keep its place for at least the next five years.

Rob Enderle, a TechNewsWorld columnist, is the Principal Analyst for the Enderle Group. In the near future, most mobile devices will offer video conferencing, as they incorporate user-facing cameras. And many will also include pin projectors so their contents can be projected for easier viewing. But innovation will truly take off when battery size and capacity cease to be major hurdles for industrial designers. Wireless and, soon, over-the-air charging could take off much as wireless networking did in the past. Imagine not having to plug in your laptop, iPad, or iPhone - simply walking into your home or office will start the device charging - wirelessly. Faster chips and faster wireless networks will also eventually allow voice recognition to finally become a reality. Someday soon we will truly be able to ask our mobile devices questions and they'll reply - and perhaps discuss the answer.

What's considered portable today will most likely be considered a doorstop ten years from now. As flexible and transparent displays enter the mainstream, devices will become lighter and wearable. Early attempts to intertwine computers and clothing have failed to date because you need batteries - and no one wants to plug in their pants. But as wireless power takes hold and computers shrink and their hard-case form disappears - the options become limitless.

And of course, don't overlook the potential for the merging of humankind and computers. Scientists continually push the envelope. Even today early steps towards powering devices with the body itself have leapt forward, and they'll continue to do so at an exponential rate. So in the future, perhaps we won't carry our portable devices, maybe we'll be the device. And then the question will be, not what's the next advance in computers, but what's the next advance in humans?


UNIT 4 PROGRAMMING LANGUAGES: Java

Java is a programming language originally developed by James Gosling at Sun Microsystems and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java applications are typically compiled to byte code that can run on any Java Virtual Machine (JVM) regardless of computer architecture. Java is a general-purpose, concurrent, class-based, object-oriented language that is specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere". Java is currently one of the most popular programming languages in use, and is widely used in everything from application software to web applications.

One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any supported hardware/operating-system platform. This is achieved by compiling the Java language code to an intermediate representation called Java byte code, instead of directly to platform-specific machine code. Java byte code instructions are analogous to machine code, but are intended to be interpreted by a virtual machine (VM) written specifically for the host hardware. End-users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a Web browser for Java applets. Standardized libraries provide a generic way to access host-specific features such as graphics, threading, and networking.

Programs written in Java have a reputation for being slower and requiring more memory than those written in C. However, Java programs' execution speed improved significantly with the introduction of just-in-time compilation in 1997/1998 for Java 1.1, the addition of language features supporting better code analysis, and optimizations in the Java Virtual Machine itself, such as HotSpot becoming the default for Sun's JVM in 2000. Currently, Java code has approximately half the performance of C code. Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java in hardware instead of a software JVM, and ARM-based processors can have hardware support for executing Java byte code through their Jazelle option.

Java uses an automatic garbage collector to manage memory in the object lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, the unreachable memory becomes eligible to be freed automatically by the garbage collector.

One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on the stack, or explicitly allocated and deallocated from the heap. In the latter case the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, a memory leak occurs. If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable and/or crash. This can be partially remedied by the use of smart pointers, but these add overhead and complexity. Note that garbage collection does not prevent "logical" memory leaks, i.e. those where the memory is still referenced but never used.
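
A minimal Java sketch of these two situations (the names, such as cache, are hypothetical): the first buffer loses its last reference and becomes eligible for collection, while the second stays referenced from a static list and is a "logical" leak that the garbage collector cannot reclaim.

    import java.util.ArrayList;
    import java.util.List;

    public class GcDemo {
        // Objects referenced from this static list stay reachable for the program's lifetime.
        static final List<byte[]> cache = new ArrayList<>();

        public static void main(String[] args) {
            byte[] temporary = new byte[1_000_000];
            temporary = null;                // no references remain -> eligible for collection

            cache.add(new byte[1_000_000]);  // still referenced but never used again -> "logical" leak

            System.gc();                     // only a hint; collection may happen at any time
        }
    }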

Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java.

Java does not support C/C++ style pointer arithmetic, where object addresses and unsigned integers can be used interchangeably. This allows the garbage collector to relocate referenced objects and ensures type safety and security.

As in C++, variables of Java's primitive data types are not objects. Values of primitive types are either stored directly in fields or on the stack rather than on the heap, as is commonly the case for objects. This was a conscious decision by Java's designers for performance reasons. Because of this, Java was not considered to be a pure object-oriented programming language.
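
A small sketch of that distinction (class name invented): int is a primitive value with no methods of its own, while Integer is an ordinary object created on the heap by autoboxing.

    public class PrimitiveDemo {
        public static void main(String[] args) {
            int a = 42;                 // primitive value, stored directly in a field or on the stack
            Integer b = 42;             // wrapper object allocated on the heap (autoboxing)

            System.out.println(a + b);                      // b is unboxed; prints 84
            System.out.println(((Object) b).getClass());    // prints: class java.lang.Integer
            // a.getClass() would not compile: primitives are not objects and have no methods
        }
    }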


UNIT 5 COMPUTER NETWORKING: Global Networking Infrastructure for the 21st century

The Internet Phenomenon

The Internet has gone from near-invisibility to near-ubiquity in little more than a year. In fact, though, today's multi-billion dollar industry in Internet hardware and software is the direct descendant of strategically-motivated fundamental research begun in the 1960s with federal sponsorship. A fertile mixture of high-risk ideas, stable research funding, visionary leadership, extraordinary grass-roots cooperation, and vigorous entrepreneurship has led to an emerging Global Information Infrastructure unlike anything that has ever existed.

Although not easy to estimate with accuracy, the 1994 data communications market approached roughly $15 billion/year if one includes private line data services ($9 billion/year), local area network and bridge/router equipment ($3 billion/year), wide area network services ($1 billion/year), electronic messaging and online services ($1 billion/year), and proprietary networking software and hardware ($1 billion/year). Some of these markets show annual growth rates in the 35-50% range, and the Internet itself has doubled in size each year since 1988.

As this article is written in 1995, the Internet encompasses an estimated 50,000 networks worldwide, about half of which are in the United States. There are over 5 million computers permanently attached to the Internet [as of mid-1996 the number is between 10 and 15 million!], plus at least that many portable and desktop systems which are only intermittently online. (There were only 4 computers on the ARPANET in 1969, and only 200 on the Internet in 1983!) Traffic rates measured in the recently "retired" NSFNET backbone approached 20 trillion bytes per month and were growing at a 100% annual rate.

What triggered this phenomenon? What sustains it? How is its evolution managed? The answers to these questions have their roots in DARPA-sponsored research in the 1960s into a then-risky new approach to data communication: packet switching. The U.S. government has played a critical role in the evolution and application of advanced computer networking technology and deserves credit for stimulating wide-ranging exploration and experimentation over the course of several decades.

Evolutionary Stages: Packet Switching

Today's computer communication networks are based on a technology called packet switching. This technology, which arose from DARPA-sponsored research in the 1960s, is fundamentally different from the technology that was then employed by the telephone system or by the military messaging system (which was based on "message switching").

In a packet switching system, data to be communicated is broken into small chunks that are labeled to show where they come from and where they are to go, rather like postcards in the postal system. Like postcards, packets have a maximum length and are not necessarily reliable. Packets are forwarded from one computer to another until they arrive at their destination. If any are lost, they are re-sent by the originator. The recipient acknowledges receipt of packets to eliminate unnecessary re-transmissions.
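
The Java sketch below is not a real protocol, only a hedged illustration (Java 16+ records; all names are hypothetical) of that idea: data is cut into chunks of bounded size, and each chunk is labelled with its source, destination and sequence number so the recipient can reassemble the message and acknowledge what arrived.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public record Packet(String source, String destination, int sequenceNumber, byte[] payload) {

        static final int MAX_PAYLOAD = 1024;   // like postcards, packets have a maximum length

        // Break a message into labelled chunks; any lost chunk would be re-sent by the originator.
        static List<Packet> split(String from, String to, byte[] data) {
            List<Packet> packets = new ArrayList<>();
            for (int offset = 0, seq = 0; offset < data.length; offset += MAX_PAYLOAD, seq++) {
                byte[] chunk = Arrays.copyOfRange(data, offset,
                        Math.min(offset + MAX_PAYLOAD, data.length));
                packets.add(new Packet(from, to, seq, chunk));
            }
            return packets;
        }

        public static void main(String[] args) {
            byte[] message = new byte[3000];                    // stand-in for user data
            List<Packet> sent = split("host-a", "host-b", message);
            System.out.println(sent.size() + " packets");       // prints "3 packets"
        }
    }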

The earliest packet switching research was sponsored by the Information Processing Techniques Office of the Department of Defense Advanced Research Projects Agency, which acted as a visionary force shaping the evolution of computer networking as a tool for coherent harnessing of far-flung computing resources. The first experiments were conducted around 1966. Shortly thereafter, similar work began at the National Physical Laboratory in the UK. In 1968 DARPA developed and released a Request for Quotation for a communication system based on a set of small, interconnected computers it called "Interface Message Processors" or "IMPs."


The competition was won by Bolt Beranek and Newman (BBN), a research firm in Cambridge, MA, and by September 1969 BBN had developed and delivered the first IMP to the Network Measurement Center located at UCLA. The "ARPANET" was to touch off an explosion of networking research that continues to the present.

Apart from exercising leadership by issuing its RFQ for a system that many thought was simply not feasible (AT&T was particularly pessimistic), DARPA also set a crucial tone by making the research entirely unclassified and by engaging some of the most creative members of the computer science community who tackled this communication problem without the benefit of the experience (and hence bias) of traditional telephony groups. Even within the computer science community, though, the technical approach was not uniformly well-received, and it is to DARPA's credit that it persevered despite much advice to the contrary.

ARPANET

The ARPANET grew from four nodes in 1969 to roughly one hundred by 1975. In the course of this growth, a crucial public demonstration was held during the first International Conference on Computer Communication in October 1972. Many skeptics were converted by witnessing the responsiveness and robustness of the system. Out of that pivotal meeting came an International Network Working Group (INWG) composed of researchers who had begun to explore packet switching concepts in earnest. Several INWG participants went on to develop an international standard for packet communication known as X.25, and to lead the development of commercial packet switching in the U.S., Canada, France, and the UK, specifically for systems such as Telenet, Datapac, Experimental Packet Switching System, Transpac, and Reseau Communication par Paquet.

By mid-1975, DARPA had concluded that the ARPANET was stable and should be turned over to a separate agency for operational management. Responsibility was therefore transferred to the Defense Communications Agency (now known as the Defense Information Systems Agency).

New Packet Technologies

ARPANET was a single terrestrial network. Having seen that ARPANET was not only feasible but powerfully useful, DARPA began a series of research programs intended to extend the utility of packet switching to ships at sea and ground mobile units through the use of synchronous satellites (SATNET) and ground mobile packet radio (PRNET). These programs were begun in 1973, as was a prophetic effort known as "Internetting" which was intended to solve the problem of linking different kinds of packet networks together without requiring the users or their computers to know much about how packets moved from one network to another.

Also in the early 1970s, DARPA provided follow-on funding for a research project originated in the late 1960s by the Air Force Office of Scientific Research to explore the use of radio for a packet switched network. This effort, at the University of Hawaii, led to new mobile packet radio ideas and also to the design of the now-famous Ethernet. The Ethernet concept arose when a researcher from Xerox PARC spent a sabbatical period at the University of Hawaii and had the insight that the random access radio system could be operated on a coaxial cable, but at data rates thousands of times faster than could then be supported over the air. Ethernet has become a cornerstone of the multi-billion dollar local area network industry.

These efforts came together in 1977 when a four-network demonstration was conducted linking ARPANET, SATNET, Ethernet and the PRNET. The satellite effort, in particular, drew international involvement from participants in the UK, Norway, and later Italy and Germany.

http://www.cs.washington.edu/homes


UNIT 6 COMPUTER GRAPHICS: The concept of computer graphics

An image or picture is an artifact, usually two-dimensional, that has a similar appearance to some subject – usually a physical object or a person. Images may be two-dimensional, such as a photograph or a screen display, or three-dimensional, such as a statue. They may be captured by optical devices – such as cameras, mirrors, lenses, telescopes, microscopes, etc. – and by natural objects and phenomena, such as the human eye or water surfaces.

A digital image is a representation of a two-dimensional image using ones and zeros (binary). Depending on whether or not the image resolution is fixed, it may be of vector or raster type. Without qualifications, the term "digital image" usually refers to raster images.

A pixel is the smallest piece of information in an image. Pixels are normally arranged in a regular 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel has typically three or four components such as red, green, and blue, or cyan, magenta, yellow, and black.
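
A brief Java sketch of a pixel's colour components, using the standard BufferedImage class (the 2 x 2 image size and the particular colour values are arbitrary):

    import java.awt.image.BufferedImage;

    public class PixelDemo {
        public static void main(String[] args) {
            BufferedImage image = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);

            // Each pixel stores three colour components: red, green and blue.
            int red = 200, green = 30, blue = 120;
            int rgb = (red << 16) | (green << 8) | blue;   // pack the components into one int
            image.setRGB(0, 0, rgb);

            int stored = image.getRGB(0, 0);
            System.out.println("red   = " + ((stored >> 16) & 0xFF));
            System.out.println("green = " + ((stored >> 8) & 0xFF));
            System.out.println("blue  = " + (stored & 0xFF));
        }
    }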

Graphics are visual presentations on some surface, such as a wall, canvas, computer screen, paper, or stone to brand, inform, illustrate, or entertain. Examples are photographs, drawings, line art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, or other images. Graphics often combine text, illustration, and color. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, web site, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely, the creation of a distinctive style.

Rendering is the process of generating an image from a model, by means of computer programs. The model is a description of three dimensional objects in a strictly defined language or data structure. It would contain geometry, viewpoint, texture, lighting, and shading information. The image is a digital image or raster graphics image. The term may be by analogy with an "artist's rendering" of a scene. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output.

3D projection is a method of mapping three dimensional points to a two dimensional plane. As most current methods for displaying graphical data are based on planar two dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.

Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane. The technique is capable of producing a very high degree of photorealism; usually higher than that of typical scanline rendering methods, but at a greater computational cost.

Shading refers to depicting depth in 3D models or illustrations by varying levels of darkness. It is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading including cross hatching where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears. The term has been recently generalized to mean that shaders are applied.

Texture mapping is a method for adding detail, surface texture, or colour to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in 1974. A texture map is applied (mapped) to the surface of a shape, or polygon. This process is akin to applying patterned paper to a plain white box. Multitexturing is the use of more than one texture at a time on a polygon. Procedural textures (created from adjusting parameters of an underlying algorithm that produces an output texture), and bitmap textures (created in an image editing application) are, generally speaking, common methods of implementing texture definition from a 3D animation program, while intended placement of textures onto a model's surface often requires a technique known as UV mapping.


Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
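
A regular volumetric grid can be pictured simply as a three-dimensional array of samples; the Java sketch below (the sizes and the sample value are made up) stores one value per voxel:

    public class VoxelGridDemo {
        public static void main(String[] args) {
            int slices = 64, rows = 256, cols = 256;   // e.g. 64 scanner slices of 256 x 256 pixels
            short[][][] volume = new short[slices][rows][cols];

            // Each voxel holds a single value sampled from the area it covers.
            volume[10][128][128] = 1200;

            System.out.println(volume[10][128][128]);  // prints 1200
        }
    }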

3D modeling is the process of developing a mathematical, wireframe representation of any three-dimensional object, called a "3D model", via specialized software. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.

3D models may be created using multiple approaches: use of NURBS curves to generate accurate and smooth surface patches, polygonal mesh modeling (manipulation of faceted geometry), or polygonal mesh subdivision (advanced tessellation of polygons, resulting in smooth surfaces similar to NURBS models).

A 3D model can be displayed as a two-dimensional image through a process called 3D rendering, used in a computer simulation of physical phenomena, or animated directly for other purposes. The model can also be physically created using 3D Printing devices.

UNIT 7 MULTIMEDIA: Rich media features

The term Rich Media refers to a broad range of digital interactive media that can either be downloadable or embedded in a webpage. When downloaded, it can be used or viewed offline using media players such as Microsoft Media Player, Real Networks' RealPlayer, or Apple's QuickTime.

For distance learning through the Web, as in e-learning, rich media must be an integral part of the courseware. It should comprise animation, interactivities at various levels of sophistication, visuals and narration. These components make training programs more effective and give your company a significant return on investment (ROI).

Other components of Rich Media

File sizes: File sizes must always be small. To minimize delays in file transfer, use file formats that make the best use of Rich Media and are of good quality. These include Microsoft Media Player, GIF, JPG, RealPlayer, Apple's QuickTime, Macromedia Flash (SWF), MP3, Shockwave Audio (SWA), Animated GIF, Macromedia Authorware (AAM) and VOX.

If there are delays in self-paced interactive course programs, students can become very frustrated, as delays interfere with understanding and retention. So, file sizes must be small and easily streamable over slow modems. By streamable is meant that the movie starts to play as bits of the digital video are downloaded, before the entire download is complete.

Image formats: If you need fewer than 256 colors, sharp colors and small file sizes, the GIF format is ideal. Typical examples include screen grabs, clip art, drawings and illustrations.

JPGs are better for photographs and illustrations with over 256 colors. If you use JPGs for screen captures, it will cause blurring of colors and the file size will be larger because the colors are averaged.

Animation: Macromedia Flash is best for animation on the Web. Not only can you scale a Flash movie up from 640 x 480 to 1024 x 768, but you lose no picture quality and the file size does not increase. Alternatively, you can use the animated GIF format.

Movies and digital video: The most popular streamable formats for digital video are Windows Media Player, Apple QuickTime and RealVideo.

Sound files: Of the many formats for sound compression available, MP3 is the most popular and has excellent quality. If you use Real Audio and Windows Media Player, you can safely use MP3.

One of the best ways of using sound on the Web is to use Macromedia Flash, which internally converts WAV files to the Shockwave format. If you use Macromedia Authorware with its 800kbps plug-in, you can either use SWA (Shockwave Audio) or VOX (Voxware).

Authoring software: You could give your learners a "no plug-in" option by using Macromedia Dreamweaver with Course Builder. It creates good interactive learning, is compliant with AICC norms and can import all kinds of media.
