The interaction with the network control logic is realized by a new set of cloud services created on top of the eyeOS web desktop. The authentication/authorization process is based on X.509 proxy certificates carrying the Virtual Organization Membership Service (VOMS) extension, in the gLite style. A user with the proper privileges can request a virtual circuit operation through the eyeOS network interface.
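To make this interaction concrete, the following minimal Python sketch shows what a client-side circuit request might look like. The service URL, site identifiers, and payload fields are hypothetical (the chapter specifies only the web-desktop interface, not a wire protocol); what is grounded in the text is the use of a VOMS-extended X.509 proxy certificate as the client credential, located here via the conventional gLite X509_USER_PROXY environment variable.

```python
import os
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint exposed by the eyeOS-based network service;
# the real interface is a web desktop, so the URL and fields are illustrative.
CIRCUIT_SERVICE_URL = "https://cloud.example.org/network/circuits"

# gLite-style credential: an X.509 proxy certificate carrying the VOMS
# attribute extension, conventionally pointed to by X509_USER_PROXY.
proxy_path = os.environ.get("X509_USER_PROXY", "/tmp/x509up_u%d" % os.getuid())

# Illustrative request: a 1-Gbps virtual circuit between two storage sites.
circuit_request = {
    "src_site": "monte-s-angelo",   # hypothetical site identifiers
    "dst_site": "medicine-campus",
    "bandwidth_mbps": 1000,
    "duration_hours": 2,
}

# The proxy file holds both certificate and key, so it can be passed as a
# single client-certificate bundle; the VOMS attributes travel inside it
# and are checked server-side against the required privileges.
response = requests.post(
    CIRCUIT_SERVICE_URL,
    json=circuit_request,
    cert=proxy_path,
    timeout=30,
)
response.raise_for_status()
print("Circuit granted:", response.json())
```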

8.5.2  Performance Evaluation and Results Discussion

In this section, we present some simple performance evaluation experiments conducted on our cloud prototype. They show how an application working on large data volumes distributed across the different sites of the cloud can greatly benefit from the introduction of the above-mentioned network control facilities into the cloud stack, and they demonstrate the effectiveness of the implemented architecture in providing QoS and bandwidth guarantees. To better emphasize these benefits and the improvements to application behaviour, we performed our tests under real-world extreme traffic-load conditions, working between the Monte S. Angelo Campus site, currently the largest data center in the SCoPE infrastructure, and the Medicine Campus site, which hosts the other largest storage repository available to the university's research community. More precisely, the presented results were obtained by analyzing the throughput associated with the transfer of 1-GB datasets between two EMC2 Clarion CX-3 storage systems located at the above-mentioned sites.
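As a rough illustration of how such per-file throughput statistics (average, peak, standard deviation) can be gathered, here is a generic measurement harness. It is a sketch, not the tooling actually used in the experiments, and the local file copy merely stands in for whatever tool moves the 1-GB files between the two storage systems; all names are illustrative.

```python
import shutil
import statistics
import time

# Each dataset file is 1 GB; we assume 1 GB = 1000 MB for the MB/s figures.
FILE_SIZE_MB = 1000

def transfer(src: str, dst: str) -> float:
    """Copy one file and return the observed throughput in MB/s."""
    start = time.monotonic()
    shutil.copyfile(src, dst)  # stand-in for the actual transfer tool
    return FILE_SIZE_MB / (time.monotonic() - start)

def run_trials(files: list[str], dst_dir: str) -> list[float]:
    """Transfer every file once and print the summary statistics."""
    rates = [transfer(f, f"{dst_dir}/{i}.dat") for i, f in enumerate(files)]
    print(f"average {statistics.mean(rates):6.1f} MB/s")
    print(f"peak    {max(rates):6.1f} MB/s")
    print(f"st.dev. {statistics.stdev(rates):6.1f} MB/s")
    return rates
```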

Both storage area networks are connected to their respective access switches through dedicated resource-manager nodes equipped with 1-Gbps full-duplex Ethernet interfaces. For simplicity's sake, we considered several sample transfer sessions moving more than 2 TB of experimental data. During the first data transfer, performed on the cloud without any kind of network resource reservation, the underlying routing protocol picked the best but most crowded route between the two sites (owing to the heavy utilization of the involved links at peak hours), through the main branch of the metro ring, so that we were able to transfer 500 GB of data in 4 h (averaging 30 s/file) with an average throughput of 33 MB/s (about 270 Mbit/s) and a peak rate of 86 MB/s. We also observed a standard deviation of 12.5 MB/s, owing to the noise on the link, as can be seen from the strong oscillations in Fig. 8.4. During the second test, we created a virtual point-to-point network between the two storage sites by reserving a 1-Gbps bandwidth channel for the above-mentioned data-transfer operation. This action triggered the creation of a pair of dynamic end-to-end LSPs (one for each required direction) with the required bandwidth on the involved PE nodes. In the presence of background traffic saturating the main branch, these LSPs were automatically re-routed through the secondary (and almost unused) branch to honour the required bandwidth commitment. In this case, the whole 500-GB data transfer completed in 1 h and 23 min (9.9 s/file) with an average throughput of 101 MB/s and a peak of 106 MB/s. The standard deviation also improved, reaching an acceptable value of 2.8 MB/s against the 12.5 MB/s of the best-effort case. Finally, we observed a 20-MB/s loss with respect to the theoretically achievable maximum bandwidth, due to the well-known TCP and Ethernet overheads and limits.
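A quick back-of-the-envelope check (assuming 1 GB = 1000 MB and 500 one-GB files per run) confirms that the reported times and rates are mutually consistent:

```python
# Sanity check of the reported figures; 1 GB = 1000 MB assumed throughout.
FILES, FILE_MB = 500, 1000

# Best-effort run: roughly 30 s per 1-GB file.
print(FILES * 30 / 3600)   # ~4.2 h total, matching the quoted "4 h"
print(FILE_MB / 30)        # ~33.3 MB/s average throughput
print(FILE_MB / 30 * 8)    # ~267 Mbit/s, i.e. "about 270 Mbit/s"

# Reserved 1-Gbps channel: roughly 9.9 s per file.
print(FILES * 9.9 / 60)    # ~82.5 min total, matching "1 h and 23 min"
print(FILE_MB / 9.9)       # ~101 MB/s average throughput
```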

Fig. 8.4 The best-effort transfer behaviour (per-trial throughput and its average, in MB/s, over 500 trials)

Fig. 8.5 The gain achieved through a virtual connection (average bandwidth, MB/s, over 600 trials: with a 1-Gbit/s LSP, 101 MB/s average; without an LSP, 34 MB/s average)

This simple test shows how the above-mentioned facility can be effective in optimizing data-transfer performance within the cloud. The results are even more striking in graphical form, as presented in Fig. 8.5, which concatenates a sequence of 300 file transfers on the best-effort network with another 300 transfers on the virtual 1-Gbps point-to-point network.
