
F. Palmieri and S. Pardi

8.4  Architecture and Implementation Details

The solution to all the above-mentioned issues will result in a flexible and evolutionary architecture that supports cooperation between different entities (computing systems/clusters, storage, scientific instruments, etc.) within the cloud, based on a scalable framework for the dynamic and transparent configuration and interconnection of multiple types of resources for high-performance cloud-computing services over globally distributed optical network systems. To achieve this, we have to abstract and encapsulate the available network resources into manageable and dynamically provisioned entities within the cloud, in order to meet the complex demand patterns of the applications and to optimize overall network utilization. More precisely, we need to conceive a new cloud architecture that treats network resources as key resources, managed and controlled like any other resource by the cloud middleware/distributed operating system services. In such an architecture, the cloud system is modeled by using a three-layer hierarchical schema:

The infrastructure layer, providing a virtualized interface to hardware resources, such as CPU, memory, connectivity/bandwidth, and storage, and aggregating and allocating them on a totally distributed basis

The platform layer including the components that implement the cloud basic services and runtime environment, such as the cloud operating system kernel, a distributed file system (DFS), cloud input/output (I/O) facilities, computing and virtualization engine, network management, and interface modules

The application layer hosting domain-specific applications and realizing the cloud service abstraction through specific interfaces

The interfaces provided at the infrastructure layer make the platform layer almost completely independent of the underlying hardware resources, and thus bring high scalability and flexibility to the whole cloud architecture. Accordingly, the infrastructure layer can be implemented by using a public service such as Amazon EC2/S3 [1,2] or a privately owned solution such as a computing cluster or a grid.
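The three-layer decomposition above can be sketched in code. In this minimal sketch (all class and method names are illustrative, not from the chapter), the application only ever talks to the platform, and the platform only to the infrastructure interface, which is what keeps the upper layers independent of the hardware:

```python
from dataclasses import dataclass, field

@dataclass
class InfrastructureLayer:
    """Virtualized view of hardware: CPU, memory, bandwidth, storage."""
    resources: dict = field(default_factory=dict)  # e.g. {"cpu": 64, "bandwidth_gbps": 10}

    def allocate(self, kind: str, amount: float) -> bool:
        # Allocate a resource on a fully distributed basis (simplified here
        # to a single pool).
        if self.resources.get(kind, 0) >= amount:
            self.resources[kind] -= amount
            return True
        return False

@dataclass
class PlatformLayer:
    """Cloud basic services: COS kernel, DFS, I/O, network management."""
    infra: InfrastructureLayer

    def provision(self, requirements: dict) -> bool:
        # The platform only uses the infrastructure interface, never the
        # hardware itself -- this indirection is what makes it portable.
        return all(self.infra.allocate(k, v) for k, v in requirements.items())

@dataclass
class ApplicationLayer:
    """Domain-specific application accessing cloud services via the platform."""
    platform: PlatformLayer

    def deploy(self, requirements: dict) -> str:
        return "deployed" if self.platform.provision(requirements) else "rejected"

infra = InfrastructureLayer({"cpu": 64, "bandwidth_gbps": 10})
app = ApplicationLayer(PlatformLayer(infra))
print(app.deploy({"cpu": 8, "bandwidth_gbps": 2}))  # deployed
```

Because the infrastructure object can be backed by EC2/S3, a private cluster, or a grid without the upper layers noticing, swapping the backend amounts to replacing only the lowest class.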

Analogous to the operating system that manages the complexity of an individual machine, the cloud operating system (COS) handles the complexity at the platform layer and aggregates the resources available in all the data centers participating in the cloud. In particular, it runs applications on a highly unified, reliable, and efficient virtual infrastructure made up of distributed components, automatically managing them to support pre-defined service-level agreements (SLAs) in terms of availability, security, and performance assurance for the applications. It also dynamically moves applications with the same service-level expectations across on-premise or off-premise sites within the cloud to maximize operational efficiency.
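SLA-driven placement of this kind can be illustrated with a small sketch. The site names and SLA fields below are hypothetical; the point is only that the COS picks (and can re-pick) a hosting site whose measured properties still satisfy the application's SLA:

```python
from typing import Optional

# Illustrative site inventory: measured availability and latency per site.
SITES = {
    "on_premise_a":  {"availability": 0.999,  "latency_ms": 5},
    "off_premise_b": {"availability": 0.9999, "latency_ms": 20},
    "off_premise_c": {"availability": 0.99,   "latency_ms": 12},
}

def place(sla: dict) -> Optional[str]:
    """Return the lowest-latency site that still honours the SLA, or None."""
    candidates = [
        name for name, props in SITES.items()
        if props["availability"] >= sla["min_availability"]
        and props["latency_ms"] <= sla["max_latency_ms"]
    ]
    return min(candidates, key=lambda n: SITES[n]["latency_ms"], default=None)

print(place({"min_availability": 0.999, "max_latency_ms": 30}))  # on_premise_a
```

Re-running `place` whenever site metrics change gives the dynamic migration behaviour described above: if the current site drops below the SLA, the application moves to the best remaining candidate.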

A DFS platform provides a consistent view of the data to all clients, organizing them in a hierarchical name space that spans multiple naming/directory servers, and distributes them across the cloud to handle heavy loads and to remain reliable in case of failures.
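One common way to spread a hierarchical name space over several directory servers is to hash a path prefix; this is an assumption for illustration, not a mechanism the chapter specifies. Because every client computes the same deterministic mapping, all clients resolve a path to the same server, which is what keeps the view consistent:

```python
import hashlib

# Hypothetical naming servers for the sketch.
SERVERS = ["dir-srv-0", "dir-srv-1", "dir-srv-2"]

def responsible_server(path: str) -> str:
    """Map the first path component to one naming/directory server."""
    top = path.strip("/").split("/")[0] or "/"
    digest = hashlib.sha256(top.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

# All clients compute the same mapping, so the view stays consistent.
for p in ["/physics/run42/data.root", "/physics/run43/log.txt", "/astro/img.fits"]:
    print(p, "->", responsible_server(p))
```

Replicating each server's portion of the name space on a second node would then cover the failure-tolerance requirement mentioned above.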

8  Enhanced Network Support for Scalable Computing Clouds

The I/O subsystem provides data-exchange services within the same infrastructure or among different clouds by using several protocols and facilities. Such services are implemented within the network control logic, which acts as a collective broker for network connectivity requirements, keeps track of the resources and interfaces available on the cloud, and copes with all the necessary network operations by hiding the complexity of the resource-specific allocation tasks. These functions are implemented in the cloud middleware platform by relying on information models responsible for capturing the structures and relationships of the involved entities. To cope with the heterogeneity of the network infrastructure resources, we propose a new technology-independent network resource abstraction: the traffic-engineered end-to-end virtual circuit, which can be used for virtual-connection transport. Such a virtual circuit mimics a direct point-to-point connection or pipe with specific bandwidth and QoS features. The network control logic handles each connectivity request; it then coordinates the setting up of the needed tunnels between the nodes on the cloud hosting the requesting applications. This schema guarantees access to dedicated circuits, which may be requested on demand or by advance reservation to deliver more reliable and predictable network performance.
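The broker role of the network control logic can be sketched as simple bandwidth bookkeeping. All names and the site pair below are illustrative; an advance reservation would add a start/end time to the same accounting, which is omitted here:

```python
from dataclasses import dataclass

@dataclass
class Link:
    capacity_gbps: float
    reserved_gbps: float = 0.0

    @property
    def free(self) -> float:
        return self.capacity_gbps - self.reserved_gbps

class CircuitBroker:
    """Collective broker for connectivity: hides per-resource allocation."""

    def __init__(self, links: dict):
        self.links = links        # {("siteA", "siteB"): Link(...)}
        self.circuits = []        # granted virtual circuits

    def request(self, src: str, dst: str, bandwidth_gbps: float) -> bool:
        # On-demand request for a traffic-engineered end-to-end virtual
        # circuit: a "pipe" with guaranteed bandwidth between two sites.
        link = self.links.get((src, dst)) or self.links.get((dst, src))
        if link is None or link.free < bandwidth_gbps:
            return False
        link.reserved_gbps += bandwidth_gbps
        self.circuits.append((src, dst, bandwidth_gbps))
        return True

broker = CircuitBroker({("naples", "geneva"): Link(capacity_gbps=10)})
print(broker.request("naples", "geneva", 4))  # True
print(broker.request("geneva", "naples", 8))  # False: only 6 Gb/s left
```

The application never sees the link objects, only the accept/reject answer, which mirrors how the control logic hides resource-specific allocation from the requesting services.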

Finally, the user interface enables administrators and clients to monitor and manage the cloud platform and the applications running on it through specific user-friendly interfaces. It includes configuration, accounting, performance, and security-management facilities. In this domain, many open-source technologies can be considered. Web services technology is a good candidate for building such a user interface, which makes the cloud easily accessible through the network by delivering a desktop-like experience to the users (Fig. 8.1).

Fig. 8.1  The cloud-reference architecture


8.4.1  Traffic Management and Control Plane Facilities

In our proposed architectural framework, an application program running on a cloud has the view of a virtualized communication infrastructure unifying all the needed computational and storage resources into a common “virtual site” or “virtual network” abstraction, and should be able to dynamically request specific service levels (bandwidth/QoS, protection, etc.) on it. The fulfilment of such requests triggers the on-demand construction of one or more dedicated point-to-point or multipoint “virtual” circuits or pseudo-wires between the cloud sites hosting the application’s runtime resources, and is accomplished co-operatively by the network devices on the end-to-end paths between these sites. These circuits can either be dedicated layer-2 channels, realizing the abstraction of a transport network behaving as a single virtual switching device, or traffic-engineered paths with guaranteed bandwidth, delay, etc. All the involved network resources have to be defined in advance, at “virtual network” configuration time. Control-plane protocols define the procedures for handling such traffic engineering operations, i.e., immediate requests for connectivity at a guaranteed rate. The transparency and adaptability features of cloud infrastructures make support for these operations absolutely necessary in a suitable transport network, which may be a mesh of private or public shared networks, owned and managed by several co-operating service providers and/or enterprises. The underlying network must be as transparent as possible with respect to the cloud infrastructure, so that all the necessary network operations are almost totally hidden from the applications and/or virtual machines running on it. Traffic management in our model should follow a pure “peer-based” model built on MPLS/GMPLS [3,4] technology, which introduces a circuit-switching paradigm on top of the basic IP packet-switching framework.
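The "virtual network" construction step can be sketched as follows. Assuming (for illustration only; the chapter does not fix a topology) that the control plane joins the hosting sites with a full mesh of point-to-point pseudo-wires, the set of circuits to be signalled is simply every pair of sites:

```python
from itertools import combinations

def build_virtual_network(sites: list, bandwidth_gbps: float) -> list:
    """Return the pseudo-wires (site pairs + bandwidth) to be signalled.

    A full mesh of layer-2 pseudo-wires makes the transport network behave
    like a single virtual switching device connecting all the sites.
    """
    return [
        {"endpoints": pair, "bandwidth_gbps": bandwidth_gbps, "layer": 2}
        for pair in combinations(sorted(sites), 2)
    ]

wires = build_virtual_network(["naples", "geneva", "amsterdam"], 2.5)
for w in wires:
    print(w["endpoints"], w["bandwidth_gbps"], "Gb/s")
# 3 sites -> 3 pseudo-wires forming the full mesh
```

For n sites this yields n(n-1)/2 circuits, so a large virtual site would in practice favour multipoint circuits over a full mesh; both options are covered by the abstraction above.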
We consider a network built on label switching routers (LSRs), optical wavelength switches, and communication links that may be under the administrative control of several cooperating network service providers (NSPs), realizing a common transport infrastructure. The optical devices implement an intelligent all-optical core where packets are routed through the network without leaving the optical domain. The optical network and the surrounding IP networks are independent of each other, and an edge LSR interacts with its connected switching nodes only over a well-defined User-Network Interface (UNI). A subset of the routers are known to be ingress and egress points for the network traffic within the cloud; these are typically the customer edge (CE) devices directly attached to the NSP’s point-of-presence locations, or provider edge (PE) devices. CE devices need no special capabilities to map the logical connections to the remote sites – they only have to be configured as if they were connected to a single bridged network or local area network. Likewise, the NSP edge nodes and the optical switches within the core do not hold any information related to the cloud, and only transfer the tagged packets or cross-connect optical ports/wavelengths from one LSR to another in a transparent way. The key idea in such an architecture is a strict separation between the network control and forwarding planes. The space of all possible forwarding options in a network domain is partitioned into “Forwarding Equivalence Classes” (FECs). The packets
