
1980s – Introduction of the Macintosh in 1984:

Apple was the only major microcomputer manufacturer that did not switch to producing IBM-compatibles but instead chose its own path. In 1984, it unveiled its Macintosh model, which was far superior to any of the IBM PCs or their clones in terms of user-friendliness. It used a technology called the graphical user interface (GUI) and a pointing device called a mouse. Moving the mouse moves a cursor on the screen; by moving the cursor to the appropriate words or pictures (called icons) and clicking them, the user could give commands to the computer. In this manner, the user did not need to memorize a lengthy list of commands to type into the computer.

The graphical user interface, or GUI (sometimes called WIMP, for Windows, Icons, Mouse and Pull-down menus), had been under development by various groups since the 1960s, and by 1981 Xerox had used it in its Xerox Star computer. But Xerox priced the Star too high and failed to provide critical hardware and support. Xerox never took the personal computer seriously and made very little marketing effort. As a result, Xerox perhaps missed one great opportunity (Smith and Alexander, 1988).

In December 1979, Steve Jobs was invited to visit the Xerox Palo Alto Research Center (PARC) in Silicon Valley, where Xerox was developing the technology for “the office of the future” and where applications with a graphical user interface were demonstrated. From then on, Jobs had in mind that the company’s next computer had to look like the machine he had seen at Xerox PARC. At first, he started the Lisa project with this concept in mind. But the Lisa project was a failure, and Apple put all its effort into the Macintosh project, which had started in 1979 (Lammers, 1986). In January 1984, Apple introduced the Macintosh with huge promotional efforts that included a legendary Super Bowl commercial. Priced at $2,500, it received high praise for its design, aesthetic quality and user-friendliness. Its elegant operating system was a great achievement for its time, displaying a combination of aesthetic beauty and practical engineering that was extremely rare (Guterl, 1984). But after the initial enthusiasm, sales were disappointing. The problem was the lack of a sufficient amount of software and other add-ons, a consequence of Apple’s policy of keeping the Macintosh’s architecture closed. This closed architecture meant that hardware and software developers would find it difficult to create their own Macintosh add-ons and software without close cooperation with Apple. The lack of third-party support created a problem for the Macintosh, and sales never picked up (Wallace and Erickson, 1992).

In order to reposition itself, Apple invited several of the leading software firms to develop software for the Macintosh. But the insufficient demand for Mac software (the Macintosh then held about 10 percent of the personal computer market) discouraged these software developers. The only major firm that did agree to write software for the Mac, at least for a while, was Microsoft. Microsoft had been somewhat involved in the Macintosh project since 1981, developing some minor parts of the operating system. By accepting Apple’s offer to write programs for the Macintosh, Microsoft found an environment largely insulated from the highly competitive IBM-compatible market, where its application software faced intense competition from such strong competitors as Lotus in spreadsheets and MicroPro in word processing. Later, it would be able to convert the same applications to run on the IBM-compatible PC. By 1987, Microsoft was in fact deriving half of its revenue from its Macintosh software (Veit, 1993). More importantly, working on the Macintosh gave Microsoft firsthand knowledge of graphical user interface technology, on which it based its new Windows operating system for the IBM PC, to which we turn next.

1980s – Launching of Microsoft’s Windows:

Microsoft started its own graphical user interface (GUI) project in September 1981, shortly after Bill Gates had visited Steve Jobs at Apple and seen the prototype of the Macintosh computer under development. Initially, it was estimated that the system would take six programmer-years to develop. But when version 1.0 of Windows was released in October 1985, it was estimated that the program, containing 10,000 instructions, had taken eighty programmer-years to complete (Wallace and Erickson, 1992).

Microsoft Windows was heavily based on the Macintosh user interface. On 22 November 1985, shortly after Windows was launched, Microsoft signed a licensing agreement with Apple permitting it to copy the visual characteristics of the Macintosh, thereby avoiding legal trouble over version 1.

However, although competitively priced at $99, sales of Windows 1.0 were sluggish at first because it was unbearably slow. Although a million copies were sold, most users found the system to be little more than a gimmick, and the vast majority stayed with MS-DOS. Part of the reason was that the microprocessor of the time, the Intel 80286, was not fast enough to support GUI technology. Only in the late 1980s, when the next generation of microprocessors – the Intel 386 and 486 – became available, did the GUI become much more practical to support. At that time, Microsoft introduced Windows 2.0. The popularity of Windows 2.0 also gave Microsoft the opportunity to bundle its Excel spreadsheet software and its word processing software, Word. This allowed their market share to increase considerably, and they eventually became the market leaders in their respective applications.

In April 1987, IBM and Microsoft announced their joint intention to develop a new operating system called OS/2. On 17 March 1988, Apple filed a lawsuit alleging that Microsoft’s Windows 2.0 infringed Apple’s registered audiovisual copyrights protecting the Macintosh interface. Apple argued that Microsoft’s original 1985 agreement with Apple had covered only version 1 of Windows, not version 2 (Ichbiah and Kneeper, 1991).

The lawsuit was eventually dismissed after three years. Meanwhile, Microsoft was achieving one of the most dramatic growth records of any business in the 20th century. Most of that growth came from applications software, but significant revenues were also derived from its Windows 2.0 operating system (Ichbiah and Kneeper, 1991).

The success of Windows 2.0 made Microsoft lose interest in OS/2. When OS/2 was finally launched in early 1988, Microsoft failed to provide adequate software support for it, and as a result the first version of OS/2 never took off. To the annoyance of IBM, Microsoft continued its Windows project intensely while neglecting the OS/2 project. In 1990, Microsoft introduced a new version of Windows – version 3.0.

The 1990s and Beyond – The Dominance of Microsoft in the Operating System Market and the Challenges It Faces:

On 22 May 1990, Microsoft introduced Windows 3.0 around the world with extravagant publicity and events that cost about $10 million. Windows 3.0 was well received. Microsoft continued to exploit its dominance in operating systems by bundling application software with the operating system and by taking advantage of its intimate knowledge of the operating system’s source code.

In April 1991, IBM announced a new version of OS/2 – release 2.0. The new operating system was said to have cost $1 billion to develop and was designed to replace all previous operating systems for IBM-compatible computers, including Microsoft’s own MS-DOS and Windows. However, despite the sound technical merits of OS/2, IBM continued to lose ground to Microsoft – partly because of a lack of appealing software and partly because of its failure to market the system effectively (Ichbiah and Kneeper, 1991).

The failure of OS/2 further strengthened the dominant position of Microsoft Windows. That position was reinforced again in 1995 with the release of Windows 95, which was an immediate success. Since then, Microsoft has introduced several other versions of Windows, including Windows 2000 and Windows XP.

Despite its successes and the dominant position of the Windows operating system, Microsoft faces several challenges. One is the US Department of Justice’s lawsuit charging that Microsoft used its dominant position illegally. Having lost in the District Court, Microsoft appealed, and the case is currently pending in the Appeals Court.

The other challenges have to do with the future of the operating system as such. The advent of the Internet has opened up new possibilities and challenges. First, there are open-source systems such as Linux, which is freely available online to anybody who wants to view, download or adapt it. This clearly threatens Microsoft’s dominant position. Second, the Internet may provide a platform on which the operating system becomes much less important. In this rapidly changing environment of computing and information technology, it is extremely difficult to say which direction operating systems will take.

Origin and Evolution of Network Operating Systems

Contemporary network operating systems are mostly advanced and specialized branches of POSIX-compliant software platforms and are rarely developed from scratch. The main reason for this situation is the high cost of developing a world-class operating system all the way from concept to finished product. By adopting a general-purpose OS architecture, network vendors can focus on routing-specific code, decrease time to market, and benefit from years of technology and research that went into the design of the original (donor) products.

For example, consider Table 1, which lists some operating systems for routers and their respective origins (the Generation column is explained in the following sections).

Generally speaking, network operating systems in routers can be traced to three generations of development, each with distinctively different architectural and design goals.

First-Generation OS: Monolithic Architecture

Typically, first-generation network operating systems for routers and switches were proprietary images running in a flat memory space, often directly from flash memory or ROM. While supporting multiple processes for protocols, packet handling and management, they operated using a cooperative multitasking model in which each process would run to completion or until it voluntarily relinquished the CPU.

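To make this cooperative, run-to-completion model concrete, here is a minimal illustrative sketch in C; it is not drawn from any actual vendor's code, and the task names are hypothetical. The scheduler is just a loop that calls each task function in turn and trusts it to return promptly.

    /* Minimal sketch of a first-generation, run-to-completion scheduler.
       All tasks share one flat address space; there is no preemption and
       no memory protection. Task names are hypothetical. */

    typedef void (*task_fn)(void);

    static void routing_task(void) { /* process one routing update */ }
    static void packet_task(void)  { /* handle one queued packet   */ }
    static void mgmt_task(void)    { /* poll the management port   */ }

    static task_fn tasks[] = { routing_task, packet_task, mgmt_task };

    int main(void)
    {
        for (;;) {
            /* Each task runs until it returns; a task that never returns
               starves all the others -- the classic failure mode of this
               design, discussed below. */
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();
        }
    }
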
All first-generation network operating systems shared one trait: they eliminated the risks of running full-size commercial operating systems on embedded hardware. Memory management, protection and context switching were either rudimentary or nonexistent, with the primary goals being a small footprint and speed of operation.

Nevertheless, first-generation network operating systems made networking commercially viable and were deployed on a wide range of products. The downside was that these systems were plagued with a host of problems associated with resource management and fault isolation; a single runaway process could easily consume the processor or cause the entire system to fail. Such failures were not uncommon in the data networks controlled by older software and could be triggered by software errors, rogue traffic and operator errors.

Legacy platforms of the first generation are still seen in networks worldwide, although they are gradually being pushed into the lowest end of the telecom product lines.

Second-Generation OS: Control Plane Modularity

The mid-1990s were marked by a significant increase in the use of data networks worldwide, which quickly challenged the capacity of existing networks and routers. By this time, it had become evident that embedded platforms could run full-size commercial operating systems, at least on high-end hardware, but with one catch: they could not sustain packet forwarding at satisfactory data rates. A breakthrough solution was needed. It came in the concept of a hard separation between the control and forwarding planes – an approach that became widely accepted after the success of the industry’s first application-specific integrated circuit (ASIC)-driven routing platform, the Juniper Networks M40. Forwarding packets entirely in silicon was proven to be viable, clearing the path for next-generation network operating systems, led by Juniper with its Junos OS.

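The division of labor behind this separation can be sketched as follows. This is a deliberately simplified illustration with hypothetical names, not actual router code: the control plane installs routes into a forwarding table, while the forwarding path performs only a table lookup – the operation that forwarding ASICs implement in silicon.

    #include <stdint.h>

    /* Hypothetical, simplified forwarding information base (FIB):
       maps a destination prefix to an output interface. */
    struct fib_entry {
        uint32_t prefix;      /* destination network, host byte order */
        uint32_t mask;        /* network mask */
        int      out_ifindex; /* interface to forward on */
    };

    static struct fib_entry fib[256];
    static int fib_count;

    /* Control plane: driven by routing protocols; may be slow and
       complex. It only writes the table. */
    void control_plane_install(uint32_t prefix, uint32_t mask, int ifindex)
    {
        fib[fib_count++] = (struct fib_entry){ prefix, mask, ifindex };
    }

    /* Forwarding plane: a pure table lookup with no protocol logic.
       An ASIC performs the equivalent of this function in silicon,
       which is what sustains high packet rates. */
    int forwarding_lookup(uint32_t dst_addr)
    {
        for (int i = 0; i < fib_count; i++)
            if ((dst_addr & fib[i].mask) == fib[i].prefix)
                return fib[i].out_ifindex;
        return -1; /* no route */
    }
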
Today, the original M40 routers are mostly retired, but their legacy lives on in many similar designs, and their blueprints are widely recognized in the industry as the second-generation reference architecture.

Second-generation network operating systems are free from packet switching and thus are focused on control plane functions. Unlike their first-generation counterparts, a second-generation OS can fully use the potential of multitasking, multithreading, memory management and context manipulation, all making systemwide failures less common. Most core and edge routers installed in the past few years are running second-generation operating systems, and it is these systems that are currently responsible for moving the bulk of traffic on the Internet and in corporate networks.

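One practical payoff of running on a full OS with memory protection is that each protocol daemon can live in its own process, so a crash is contained and the daemon can simply be respawned. Below is a minimal POSIX sketch of that pattern; the daemon path is hypothetical.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Illustrative supervisor: the daemon runs in its own protected
       address space, so a fault kills one process, not the whole box. */
    static void run_daemon(const char *path)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                execl(path, path, (char *)NULL); /* child becomes the daemon */
                _exit(1);                        /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);            /* block until it dies */
            fprintf(stderr, "%s exited; restarting\n", path);
            sleep(1);                            /* crude restart backoff */
        }
    }

    int main(void)
    {
        /* A real system supervises many daemons concurrently; one is
           enough to show the containment-and-respawn pattern. */
        run_daemon("/usr/sbin/routing-daemon");  /* hypothetical path */
    }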

However, the lack of a software data plane in second-generation operating systems prevents them from powering low-end devices without a separate (hardware) forwarding plane. Also, some customers cannot migrate from their older software easily because of compatibility issues and legacy features still in use.

These restrictions led to the rise of transitional (generation 1.5) OS designs, in which a first-generation monolithic image would run as a process on top of the second-generation scheduler and kernel, thus bridging legacy features with newer software concepts. The idea behind “generation 1.5” was to introduce some headroom and gradually move the functionality into the new code, while retaining feature parity with the original code base. Although interesting engineering exercises, such designs were not as feature-rich as their predecessors, nor as effective as their successors, making them of questionable value in the long term.

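In code terms, a generation 1.5 design roughly amounts to wrapping the old monolithic image, with its internal cooperative loop intact, inside a single task scheduled by the new kernel. The sketch below is only a conceptual illustration with hypothetical names: the legacy image runs as one preemptible thread of execution alongside natively written subsystems.

    #include <pthread.h>
    #include <unistd.h>

    /* Stand-in for the old monolithic image: internally it still runs
       its own cooperative task loop, unchanged. */
    static void legacy_image_main(void)
    {
        for (;;)
            sleep(1); /* imagine the first-generation scheduler loop here */
    }

    /* A new-style subsystem written directly against the modern kernel. */
    static void *new_subsystem(void *arg)
    {
        (void)arg;
        for (;;)
            sleep(1); /* preemptively scheduled modern work */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* The second-generation kernel preempts between this thread and
           the legacy image, so the old code can no longer starve the new. */
        pthread_create(&tid, NULL, new_subsystem, NULL);
        legacy_image_main(); /* the entire first-generation OS, as one task */
    }
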
Third-Generation OS: Flexibility, Scalability and Continuous Operation

Although second-generation designs were very successful, the past 10 years have brought new challenges. Increased competition has driven the need to lower operating expenses, and a coherent case has emerged for network software flexible enough to be redeployed in network devices across the larger part of the end-to-end packet path. From multiple-terabit routers to Layer 2 switches and security appliances, the “best-in-class” catchphrase can no longer justify a splintered operational experience – true “network” operating systems are clearly needed. Such systems must also achieve continuous operation, so that software failures in the routing code, as well as system upgrades, do not affect the state of the network. Meeting this challenge requires availability and convergence characteristics that go far beyond the hardware redundancy available in second-generation routers.

Another key goal of third-generation operating systems is the capability to run with zero downtime (planned and unplanned). Drawing on lessons learned from previous designs about the difficulty of moving from one OS to another, third-generation operating systems should also make the migration path completely transparent to customers. They must offer an evolutionary, rather than revolutionary, upgrade experience, unlike the retirement process typical of legacy software designs.

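One common building block for this kind of continuous operation is state checkpointing, in which the control-plane process persists enough state for a restarted or upgraded instance to resume where its predecessor stopped. The following is a highly simplified sketch of the idea; the structure and file path are hypothetical, and a production system would typically replicate state to a standby routing engine rather than a local file.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical snapshot of control-plane state. A real checkpoint
       would cover adjacencies and route tables. */
    struct checkpoint {
        uint32_t route_count;
        uint64_t last_update_seq;
    };

    static const char *ckpt_path = "/var/run/routingd.ckpt"; /* hypothetical */

    /* Called periodically by the running instance. */
    int save_state(const struct checkpoint *c)
    {
        FILE *f = fopen(ckpt_path, "wb");
        if (!f)
            return -1;
        size_t n = fwrite(c, sizeof *c, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    /* Called once by a restarted or upgraded instance, which then
       resumes service without disturbing the network. */
    int restore_state(struct checkpoint *c)
    {
        FILE *f = fopen(ckpt_path, "rb");
        if (!f)
            return -1;
        size_t n = fread(c, sizeof *c, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }
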
Basic OS Design Considerations

Choosing the right foundation (prototype) for an operating system is very important, as it has significant implications for the overall software design process and for final product quality and serviceability. This importance is why OEM