
It isn’t yet fully understood what will happen during dynamic virtualization. Definite issues have been identified, however, and we address several of them in the following sections.

Network monitoring

Current network defenses are based on physical networks. In the virtualized environment, the network is no longer physical; its configuration can actually change dynamically, which makes network monitoring difficult. To fix this problem, you must have software products that can monitor virtual networks and, ultimately, dynamic virtual networks.
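As an illustration of what such monitoring involves, here is a minimal sketch in Python that polls per-interface traffic counters so that virtual interfaces are watched alongside physical ones. It uses the psutil library; the interface names in the comments and the alert threshold are assumptions, not a product recommendation.

```python
# A minimal sketch: poll per-interface traffic counters so that virtual
# interfaces (for example, vnet0 or virbr0 on a KVM host) are watched
# alongside physical ones. The threshold below is a made-up example.
import time

import psutil  # third-party: pip install psutil

ALERT_BYTES_PER_SEC = 50_000_000  # hypothetical threshold: ~50 MB/s inbound

def poll_interfaces(interval=5.0):
    before = psutil.net_io_counters(pernic=True)
    time.sleep(interval)
    after = psutil.net_io_counters(pernic=True)
    for nic, stats in after.items():
        if nic not in before:
            # An interface appeared mid-interval; in a dynamic virtual
            # network, that event itself is worth logging.
            print(f"new interface detected: {nic}")
            continue
        rate = (stats.bytes_recv - before[nic].bytes_recv) / interval
        if rate > ALERT_BYTES_PER_SEC:
            print(f"{nic}: inbound {rate:,.0f} B/s exceeds threshold")

if __name__ == "__main__":
    while True:
        poll_interfaces()
```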

Hypervisors

Just as an OS attack is possible, a hacker can take control of a hypervisor. If the hacker gains control of the hypervisor, he gains control of everything that it controls; therefore, he could do a lot of damage. (For more details, see “Using a hypervisor in virtualization,” earlier in this chapter.)

Configuration and change management

The simple act of changing configurations or patching the software on virtual machines becomes much more complex if the software involved is locked away in virtual images, because in the virtual world, you no longer have a fixed static address to update the configuration.
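To make the problem concrete, here is a minimal sketch of one possible approach: record pending patches in a central registry keyed by image, and apply the backlog when an instance of that image boots. All the names here (ImageRecord, apply_patch, and so on) are hypothetical, not any vendor’s actual mechanism.

```python
# A minimal sketch of version-tracked configuration for virtual images.
# A dormant image has no address to push updates to, so updates are
# queued centrally and applied when an instance boots.
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    image_id: str
    config_version: int = 0
    pending_patches: list = field(default_factory=list)

REGISTRY: dict[str, ImageRecord] = {}

def apply_patch(patch: str) -> None:
    """Placeholder for the real patch mechanism."""
    print(f"applying {patch}")

def register_patch(image_id: str, patch: str) -> None:
    """Queue a patch against an image, even while it sits dormant."""
    REGISTRY.setdefault(image_id, ImageRecord(image_id)).pending_patches.append(patch)

def on_instance_boot(image_id: str) -> None:
    """When an instance of the image finally boots, apply the backlog."""
    record = REGISTRY.get(image_id)
    if record is None:
        return
    for patch in record.pending_patches:
        apply_patch(patch)
        record.config_version += 1
    record.pending_patches.clear()

register_patch("web-image-v2", "openssl-security-fix")  # image may be offline
on_instance_boot("web-image-v2")                        # patch applied at boot
```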

Perimeter security

Providing perimeter security such as firewalls in a virtual environment is a little more complicated than in a normal network, because some virtual servers are outside a firewall.

This problem may not be too hard to solve, because you can isolate the virtual resource spaces. This approach places a constraint on how provisioning is carried out, however.

Taking Virtualization into the Cloud

Virtualization, as a technique for achieving efficiency in the data center and on the desktop, is here to stay. As we indicate earlier in this chapter, virtualization is rapidly becoming a requirement for managing a data center from a service-delivery perspective. Despite the economies that virtualization provides, however, companies are seeking even better economies when they’re available.

In particular, companies have increasing interest in cloud computing (see the following section), prompted by the assumption that cloud-computing providers may achieve more effective economies of scale than can be achieved in the data center. In some contexts, this assumption is correct.

If you like, you can think of cloud computing as being the next stage of development for virtualization. The problem for the data center is that workloads are very mixed; the data center needs to execute internal transactional systems, Web transactional systems, messaging systems such as e-mail and chat, business intelligence systems, document management systems, workflow systems, and so on. With cloud computing, you can pick your spot and focus on getting efficiency from a predictable workload.

From this somewhat manual approach, you can move to industrial virtualization by making it a repeatable platform. This move requires forethought, however. What would such a platform need?

For this use of resources to be effective, you must implement a full-service management platform so that resources are safe from all forms of risk. As in traditional systems, the virtualized environment must be protected:

The virtualized services offered must be secure.

The virtualized services must be backed up and recovered as though they’re physical systems.

These resources need to have workload management, workflow, provisioning, and load balancing at the foundation to support the required type of customer experience.

Without this level of oversight, virtualization won’t deliver the cost savings that it promises.

Defining cloud computing

Based on this background, we define cloud computing as a computing model that makes IT resources such as servers, storage, middleware, and business applications available as a service to business organizations in a self-service manner.

Although all these terms are important, the key one is self-service.

In a self-service model, organizations look at their IT infrastructure not as a collection of technologies needed for a specific project, but as a single resource space. The difference between the cloud and the traditional data center is that the cloud is inherently flexible. To work in the real world, the cloud needs three things:

Virtualization: The resources that will be available in a self-service model no longer have the same kinds of constraints that they face in the corporate environment.

Automation: Automation means that the service is supported by an underlying platform that allows resources to be changed, moved, and managed without human intervention (a sketch appears below).

Standardization: Standardized processes and interfaces are required behind the scenes. Interoperability is an essential ingredient of the cloud environment.

When you bring these elements together, you have something very powerful.
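As one illustration of automation and standardization working together, here is a minimal sketch of a self-service provisioning interface: a caller requests resources from a shared pool and releases them later, with no human in the loop. The pool sizes and the ResourceGrant shape are assumptions, not any vendor’s actual API.

```python
# A minimal sketch of standardized, self-service provisioning: requests
# are granted from a shared pool without a ticket or an operator.
from dataclasses import dataclass

@dataclass
class ResourceGrant:
    grant_id: int
    cpus: int
    storage_gb: int

class ResourcePool:
    def __init__(self, cpus: int, storage_gb: int):
        self.free_cpus = cpus
        self.free_storage_gb = storage_gb
        self._next_id = 1

    def request(self, cpus: int, storage_gb: int) -> ResourceGrant:
        if cpus > self.free_cpus or storage_gb > self.free_storage_gb:
            raise RuntimeError("insufficient capacity")
        self.free_cpus -= cpus
        self.free_storage_gb -= storage_gb
        grant = ResourceGrant(self._next_id, cpus, storage_gb)
        self._next_id += 1
        return grant

    def release(self, grant: ResourceGrant) -> None:
        self.free_cpus += grant.cpus
        self.free_storage_gb += grant.storage_gb

pool = ResourcePool(cpus=512, storage_gb=100_000)
grant = pool.request(cpus=8, storage_gb=200)  # self-service: no ticket, no wait
pool.release(grant)                           # contract again when done
```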

What type of cloud services will customers subscribe to? All the services that we describe as the foundation of virtualization (refer to “Understanding Virtualization,” earlier in this chapter) are the same ones that you’d make available as part of the cloud. You want to be able to access CPU cycles, storage, networks, and applications, for example, or you may want to augment the physical environment with additional CPU cycles during a peak load. Alternatively, you may want to replace an entire data center with a virtualized data center that’s based on a virtualized environment managed by a third-party company.

Cloud computing is in its very early stages. In fact, in many situations customers aren’t even aware that they’re using a cloud. Anyone who uses Google’s Gmail service, for example, is leveraging a cloud, because Gmail runs within Google’s own cloud environment. In other situations, large corporations are experimenting with cloud computing as a potential way to transfer data center operations to a more flexible model.

Another example is Amazon.com, which sells access to CPU cycles and storage as a service of its cloud infrastructure. A customer may decide to use Amazon’s cloud to test a brand-new application before purchasing it, because renting is easier than owning.

In cloud environments, customers add CPU cycles or storage as their needs grow. They’re protected from the details, but this protection doesn’t happen by magic. The provider has to do a lot of work behind the scenes to manage this highly dynamic environment.
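Here is a minimal sketch of that behind-the-scenes work: the provider watches each customer’s utilization and grows or shrinks the allocation before the customer notices. The thresholds and step size are assumptions.

```python
# A minimal sketch of provider-side elasticity: grow a customer's
# allocation before it saturates, shrink it when it sits idle.
def rebalance(allocated_cpus: int, used_cpus: int,
              grow_at: float = 0.80, shrink_at: float = 0.30,
              step: int = 2) -> int:
    """Return the new CPU allocation for one customer."""
    utilization = used_cpus / allocated_cpus
    if utilization > grow_at:
        return allocated_cpus + step      # add capacity before it hurts
    if utilization < shrink_at and allocated_cpus > step:
        return allocated_cpus - step      # reclaim idle capacity
    return allocated_cpus

# e.g. a customer using 7 of 8 CPUs (87% busy) is quietly granted 10
print(rebalance(allocated_cpus=8, used_cpus=7))  # -> 10
```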

Using the cloud as utility computing

For decades, thinkers have talked about the day when we would move to utility computing as a normal model of managing computing resources. Computing power would be no different from electricity. When you need some extra light in a room, for example, you turn on the light switch, and the electric utility allocates more power to your house. You don’t have an electrical grid in your home, and you don’t have to acquire tools to tune the way that power is allocated to different rooms of your home. Like electrical power, computing power would be a highly managed utility.

Obviously, we’re far from that scenario right now. The typical enterprise is filled with truly heterogeneous data centers, assorted servers, desktops, mobile devices, storage, networks, applications, and vast arrays of management infrastructures and tools. In fact, you may have been told that about 85 percent of these computing resources are underused.

In addition, at least 70 percent of the budget spent on IT keeps the current systems operational rather than focusing on customer service. The advent of cloud computing is changing all that. Organizations need to reduce risk; reduce costs; and improve overall service to their customers, suppliers, and partners. Most of all, they need to focus on the service levels of the primary transactions that define the business.

IT organizations that decide to proceed with business as usual are putting their companies at risk. Also, because most IT budgets aren’t growing, meeting customer expectations and performance goals without violating the budget is imperative. In truth, the biggest problem that IT organizations have isn’t just running data centers and the associated software, but managing the environment so that it meets the required level of service.

Veiling virtualization technology from the end user

Any vendor that wants to provide cloud services to its customers has a lot to live up to. All the virtualization technology that supports these requirements is hidden from the customer. Although the customer may expect to run a wide variety of software services on the cloud, she may have little, if any, input into the underlying services.

Cloud customers see only the interface to the resources. In this self-service mode, they have the freedom to expand and contract their services at will. Vendors providing cloud services have to provide a sophisticated service-level agreement (SLA) layer between the business and the IT organization.

The vendors have a responsibility to provide management services, including a service desk to handle problems and real-time visibility of usage metrics and expenditures. For two reasons, it’s the vendor’s responsibility to provide a completely reliable level of customer service:

The customer has the freedom to move to another vendor’s cloud if he isn’t satisfied with the level of service.

Customers are using the cloud as a substitute for a data center; therefore, the cloud provider has a responsibility to both internal and external customers.

Overseeing and managing a cloud environment are complicated jobs. The provider of the cloud service must have all the management capabilities that are used in any virtualized environment. In addition, the provider must be able to monitor current use and anticipate how it may change. Therefore, the cloud environment needs to be able to provide new resources to a customer in real time. Also, a high level of management must be built into the platform. Much of that management needs to be autonomic — self-managing and self-correcting.
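A minimal sketch of such an autonomic loop, assuming hypothetical check_health and restart hooks: probe a service, restart it on failure, and escalate to a human only after repeated failures.

```python
# A minimal sketch of a self-managing, self-correcting supervision loop.
# check_health and restart are placeholders for real probes and actions.
import time

MAX_RESTARTS = 3

def supervise(service: str, check_health, restart, interval: float = 30.0):
    failures = 0
    while failures < MAX_RESTARTS:
        if check_health(service):
            failures = 0          # healthy: reset the counter
        else:
            failures += 1
            restart(service)      # self-correct without human intervention
        time.sleep(interval)
    raise RuntimeError(f"{service}: escalating to operator after "
                       f"{MAX_RESTARTS} consecutive failures")
```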

Any sophisticated customer leveraging a cloud will want an SLA with the cloud provider. That customer also needs a mechanism to ensure that service levels are being met (via a full set of service management capabilities).
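That mechanism can be as simple as comparing measured availability against the contracted target, as in this minimal sketch; the three-nines target and the sample numbers are assumptions.

```python
# A minimal sketch of SLA verification: measured availability vs. target.
def availability(uptime_minutes: float, total_minutes: float) -> float:
    return uptime_minutes / total_minutes

SLA_TARGET = 0.999  # hypothetical "three nines" contract

# e.g. 30 minutes of downtime in a 30-day month
measured = availability(uptime_minutes=43_170, total_minutes=43_200)
print(f"measured: {measured:.4%}, target: {SLA_TARGET:.1%}, "
      f"{'met' if measured >= SLA_TARGET else 'BREACHED'}")
```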

Chapter 16

IT Security and Service Management

In This Chapter

Recognizing security risks

Carrying out required security tasks

Managing user identity

Using detection and forensics programs

Coding data

Creating a security plan

Security is a fundamental requirement if you’re implementing true service management. You may think that someone else in your organization is responsible for security. Think again. Don’t leave security to an independent department somewhere in the bowels of IT. This chapter shows you how, overall, security has to be baked into service management.

Unless you’re fresh out of college, you know that before 1995, IT security wasn’t a significant problem, so very little money was spent on it. By 2004, organizations around the world were spending more than $20 billion on IT security, and that figure is expected to rise to $79 billion by the end of 2010. What happened?

Our guess is that you already know what happened. The Internet happened, letting computers connect remotely to hundreds of millions of other computers and giving lots of bad guys ample opportunity to launch a new career. The bad guys got better at breaking into IT networks, so the cost of stopping them escalated.

IT security is a very awkward area of service management for three reasons:

Almost all applications are built without any consideration for security.

IT security delivers very few benefits beyond reducing the risk of security breaches.

Measuring the success of any IT security investment is very difficult.

Before describing any IT security products or processes, we expand on these points.

Understanding the Universe of Security Risks

When software developers design a system, they typically don’t incorporate security features that might keep that system and its data more secure.

Historically, developers didn’t need to add security features, because computer operating systems had a built-in security perimeter based on login identity and permissions (rules specifying what programs users could run and what data files users could access). With the advent of networks, however, an operating system could be artificially extended to work across a network.
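A minimal sketch of that identity-and-permissions model: each login identity maps to the programs it may run and the files it may read or write. The specific rules shown are hypothetical.

```python
# A minimal sketch of login identity plus permissions: rules specifying
# what programs a user may run and what files a user may access.
PERMISSIONS = {
    "acarter": {
        "programs": {"payroll", "mail"},
        "read":     {"/data/payroll.db"},
        "write":    set(),
    },
}

def may_run(user: str, program: str) -> bool:
    return program in PERMISSIONS.get(user, {}).get("programs", set())

def may_access(user: str, path: str, mode: str) -> bool:  # mode: "read"/"write"
    return path in PERMISSIONS.get(user, {}).get(mode, set())

assert may_run("acarter", "payroll")
assert not may_access("acarter", "/data/payroll.db", "write")
```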

PCs had no security at all initially, but a password-and-permissions system was added for networkwide security based on login. In IT security circles, this system is called perimeter security because it establishes a secure perimeter around the network, the applications it runs, and the data stored within. Many of the security products that organizations deploy, such as firewalls and virtual private networks (VPNs, which are encrypted communication lines), are also perimeter-security products. They improve the security of the perimeter, which is a bit like plugging holes in the castle walls.

Currently, the IT industry faces a problem: Security approaches (including perimeter security) are becoming less effective. To understand why, you must know how security threats arise.

Inside and outside threats

About 70 percent of security breaches are caused by insiders (or by people getting help from insiders). This statistic is based on surveys of organizations that suffer breaches, but the truth is that no one is sure exactly what the figure is. Insiders rarely get caught, and proving insider involvement usually is impossible when a security attack comes from a computer outside the organization.

Nevertheless, the possibility that insiders will open a door for hackers or mount an inside attack makes it clear that perimeter security on its own will never be enough.

The outside threat is best described this way:

Hackers can be very talented engineers. They use specially designed, very sophisticated software tools to gain access and subvert systems.

Hackers can have networks of thousands of compromised PCs under their control. Such networks, called botnets, are extremely powerful.

Hackers may have channels through which they can sell an organization’s data. A whole economic ecosystem has been built around the sale of stolen data.

Some hackers have financial channels through which they can extort money with impunity.

Hackers are guns for hire and may be hired by your competitors to perform industrial sabotage.

In summary, both inside and outside threats are real and may be formidable. How do you protect against them?

Types of attacks on IT assets

The type of protection you need depends on what you’re trying to prevent. Here’s a list of bad things that can happen:

Denial-of-service (DOS) attack: Drowning some external connection service (such as a Web server) in an avalanche of traffic, thereby preventing the service from working. Normally, the aim is to extort money (“We’ll stop when you pay us”) or to damage the service out of sheer delinquency.

Resource theft: Stealing computer equipment, particularly laptops.

Firewall breach: Breaking through a firewall to access servers on the corporate network directly. Not all firewalls work perfectly, and those that do can be misconfigured.

Virus infection: Implanting a virus on some computer in the network to open a back door into the network. Many such viruses exist, and they can be planted in many ways.

Software mischief: Using password-cracking software or known security weaknesses in some software (any kind accessed via the Internet) to gain access to the network.

Social engineering: Persuading an inside user to reveal his password. Hackers sometimes call users, pretending to be the service desk, and trick them into revealing their passwords.

Data theft: Stealing any data that commands commercial value, such as financial details on customers, commercial secrets, or financial results.

Data destruction: Destroying or corrupting data in an attack.

Resource hijacking: Taking control of some of an organization’s computers to run malevolent software, such as a program that sends out spam.

Fraud: Interfering with legitimate business applications to perpetrate a fraud, such as causing money to be sent to fraudulent accounts or redirecting ordered goods to temporary pickup addresses.

You can’t block all attacks — and when we say that, we mean it. If you analyze the last four items in the preceding list, you quickly see that no simple solution can address these threats. A hacker can mount a successful attack in many ways, and unless you have an unlimited security budget, you can’t block all those efforts completely.

You can reduce the risk of a successful attack, however. Here are a few methods:

Anti-DOS technology: Neutralize DOS attacks (which are purely external threats) by investing in appropriate technology. You can use different products — both software and hardware based — depending on the kind of attack you’re trying to protect against.

Physical and personal security: Guard against resource theft by adding physical security in the office and employing personal vigilance outside the office.

Firewall maintenance: Apply the right level of diligence to maintaining firewalls.

White-listing: Stop all viruses by white-listing, which means telling the system exactly what software is allowed to run on any server in the network and blocking all other software. (For more information, see “HIPS and NIPS,” later in this chapter; a minimal sketch also appears after this list.)

Automatic login termination: Reduce the risk of password cracking by automatically terminating login attempts after a certain number of tries, as shown in the sketch after this list.
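To make the last two methods concrete, here is a minimal sketch of a software white-list check and an automatic login lockout. The allowed-software list and the three-attempt limit are assumptions; real white-listing products match binaries by cryptographic hash rather than by name.

```python
# A minimal sketch of white-listing and automatic login termination.
ALLOWED_BINARIES = {"payroll", "mail", "backup_agent"}  # hypothetical list

def may_execute(binary_name: str) -> bool:
    # Real HIPS products match cryptographic hashes, not names;
    # a name check keeps the sketch short.
    return binary_name in ALLOWED_BINARIES

MAX_ATTEMPTS = 3
_failed: dict[str, int] = {}

def try_login(user: str, password_ok: bool) -> bool:
    if _failed.get(user, 0) >= MAX_ATTEMPTS:
        return False                      # account locked: terminate attempt
    if password_ok:
        _failed[user] = 0                 # success resets the counter
        return True
    _failed[user] = _failed.get(user, 0) + 1
    return False
```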
