
Table 21.2 Platform hardware specification comparison

                          EC2 (m1.small)                   Eucalyptus (m1.small)    Condor
OS architecture           32-bit                           32-bit                   32-bit
Compute unit              One virtual core                 One                      One physical CPU
Compute unit type         Intel 1.0–1.2 GHz 2007 Opteron   Intel 1.0 GHz 2007       Intel 2.66 GHz
                          or 2007 Xeon processor           Xeon                     dual-core processor
Number of compute units   One                              One                      Two
RAM                       1.7 GB                           256 MB                   4 GB

21.3.2.3  Eucalyptus

Eucalyptus [8] is open-source software for building cloud systems on top of conventional compute clusters, offering an API and protocols similar to EC2's. Our private cloud is built using Ubuntu Linux Server 9.04 (kernel 2.6.28-27) with Eucalyptus 1.6.1 and consists of two servers, each with two quad-core Intel Xeon E5540 processors at 2.53 GHz and 32 GB RAM. Currently, only the m1.small instance type is available, offering a maximum of 40 instances, each with one 1.0 GHz compute unit and 256 MB RAM. We are able to reuse the 32-bit image built for EC2 within this system.
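Because the API is EC2-compatible, client code written for AWS can be pointed at the private cloud largely unchanged. A minimal sketch using the boto library of that era follows; the host name, image ID and credentials are hypothetical, while port 8773 and the /services/Eucalyptus path were Eucalyptus 1.6 defaults.

# Sketch only: endpoint, keys and image ID below are placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="cloud.example.org")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,              # Eucalyptus 1.6 served plain HTTP by default
    region=region,
    port=8773,                    # default Eucalyptus web-services port
    path="/services/Eucalyptus")

# Launch the same 32-bit image used on EC2 as m1.small instances.
reservation = conn.run_instances("emi-12345678", min_count=32, max_count=32,
                                 instance_type="m1.small")
for inst in reservation.instances:
    print(inst.id, inst.state)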

The specification for nodes within the three systems is shown in Table 21.2.

21.3.3  Results

Values obtained for MCS VaR from all three systems are within tolerance of the variance–covariance (VC) VaR. The standard error is within the required 1% tolerance up to 32 nodes but falls outside it at 64 nodes, consistent with expectations based on prior work.
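To make the tolerance check concrete, here is a minimal sketch, not the authors' code: the normal P&L model, confidence level and sample sizes are illustrative assumptions, and the standard error of the VaR estimate is measured across independent replications of the whole simulation.

import numpy as np

rng = np.random.default_rng(0)

def mcs_var(n_paths, alpha=0.95, mu=0.0, sigma=1.0):
    # One MCS estimate of VaR: the alpha-quantile of simulated losses.
    losses = -rng.normal(mu, sigma, n_paths)   # loss = -P&L
    return np.quantile(losses, alpha)

# Replicate the simulation to measure the sampling error of the estimate.
reps = np.array([mcs_var(100_000) for _ in range(32)])
estimate = reps.mean()
std_err = reps.std(ddof=1) / np.sqrt(len(reps))
print(f"VaR ~ {estimate:.4f}, relative standard error {std_err / estimate:.2%}")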

We separate the start-up time from the application run-time and investigate the averages: for Condor, this gives us the average scheduling overhead; for EC2 and Eucalyptus, it gives the average image boot time. Results from this separation are shown below (Figs. 21.1–21.3).
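A minimal sketch of the bookkeeping behind this separation; the harness and the stand-in delays are assumptions, not the paper's instrumentation.

import time

t_requested = time.time()            # job submitted / instance requested
time.sleep(0.1)                      # stand-in for queuing or boot delay
t_started = time.time()              # node ready, application begins
time.sleep(0.1)                      # stand-in for the application run
t_finished = time.time()

startup = t_started - t_requested    # Condor: scheduling overhead; clouds: boot time
run_time = t_finished - t_started    # application run-time proper
print(f"start-up {startup:.2f} s, run {run_time:.2f} s")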

We obtain an average boot time for 32 virtual machines of 106 s in EC2 and 234 s in Eucalyptus, both of which are lower than the 5 min speculated in [4]. For EC2, similar boot times are obtained for all our chosen configurations, and we have found such times to be consistently achievable for morning and afternoon runs over a 7-day period. However, times for both Condor and Eucalyptus increase progressively with demand. Condor requires 76 s for 32 processes, apparently outperforming EC2, but EC2 offers better times at 64.

Once the application is ‘booted’, Eucalyptus appears to offer the best run performance: for 32 instances, Eucalyptus takes 4.1 s, EC2 (m1.small) 7.9 s and Condor 19 s (Fig. 21.4). We have also found that EC2 (c1.medium) can outperform these at 3.7 s. Coordinating the analysis in Condor using DAGMan magnifies the start-up time to around 500 s, making it particularly unfavourable.
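As a rough illustration of what DAGMan coordinates here, the sketch below generates a fan-out DAG of parallel simulation tasks feeding a single aggregation step; the submit-file names are hypothetical.

# Sketch: write a DAG of N independent VaR tasks followed by a collector.
N = 32
with open("var.dag", "w") as dag:
    for i in range(N):
        dag.write(f"JOB sim{i} var_task.sub\n")          # hypothetical submit file
        dag.write(f'VARS sim{i} task_id="{i}"\n')
    dag.write("JOB collect collect.sub\n")               # hypothetical submit file
    # Aggregate only after every simulation task has finished.
    dag.write("PARENT " + " ".join(f"sim{i}" for i in range(N)) + " CHILD collect\n")

# Submitted with: condor_submit_dag var.dag
# DAGMan itself runs as a scheduler job, layering its own submission latency
# on top of each task's queuing time.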


Fig. 21.1 Performance comparison (queuing/boot time)

Fig. 21.2 Performance comparison (application run)


Fig. 21.3 Performance comparison (total run)

Fig. 21.4 Probability of completion. To show the general trend, we excluded one outlying data point in EC2 c1.medium, which lies considerably to the right of the other data in that set

The total run time in Eucalyptus produces a similar ‘smile’ curve (Fig. 21.3) to Condor: in both systems, performance improves up to a given number of instances, then drops away as more are demanded. EC2’s total run time appears to show a slight increase at 64, but remains well within the previous range.

21.3.4  Job Completion

We consider the probability of completion of the analysis in Condor, Eucalyptus (m1.small) and EC2 (both m1.small and c1.medium) for 32 processors (Fig. 21.4). Condor manages to start all parallel tasks first, followed by EC2 (m1.small), Eucalyptus and EC2 (c1.medium). Note, however, the regression slopes: Condor shows the greatest variance in start-up time (s = 19.53), followed by EC2 m1.small (11.81), EC2 c1.medium (7.11) and Eucalyptus (5.41).
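The exact regression used is not spelled out here; one plausible reading is a least-squares line fitted to start-up time against empirical completion probability, whose slope grows with the spread of start-up times. A sketch under that assumption:

import numpy as np

def startup_spread(start_times):
    # Slope of the least-squares line t = s*p + c, where p is the empirical
    # completion probability; a steeper slope means more variable start-ups.
    t = np.sort(np.asarray(start_times, dtype=float))
    p = np.arange(1, len(t) + 1) / len(t)
    slope, _intercept = np.polyfit(p, t, 1)
    return slope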

The probability of completion of VaR on AWS is 100% after the average AMI boot time of 97 s, provided all instances have been provisioned; this may not always be the case.

We show the speed-up for each platform in Fig. 21.5 by considering the gain achieved in doubling the number of instances each time. Here, the point at which performance begins to degrade becomes apparent (Eucalyptus, 4; Condor, 8).
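Concretely, the doubling speed-up can be read as the ratio of total run times at n and 2n instances, with values below 1 marking the degradation point; a small sketch with illustrative numbers:

def doubling_speedup(total_times):
    # total_times: {instance count: total run time in seconds}.
    return {2 * n: total_times[n] / total_times[2 * n]
            for n in sorted(total_times) if 2 * n in total_times}

# Illustrative values only: a ratio above 1 means doubling still pays off.
print(doubling_speedup({4: 30.0, 8: 18.0, 16: 14.0, 32: 16.0}))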

21.3.5  Cost

We estimated the cost of running VaR MCS on EC2 by reference to the Amazon pricing scheme in July 2009 (Table 21.3), which appeared similar to Sun’s network.
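As a back-of-envelope check, the sketch below reproduces the shape of such an estimate. The $0.10 per instance-hour rate is our assumption for a July 2009 m1.small (Table 21.3 is not reproduced here), and EC2 then billed usage in whole instance-hours.

import math

def ec2_cost(n_instances, run_seconds, hourly_rate=0.10):
    # hourly_rate is an assumed July 2009 m1.small on-demand price.
    hours_each = math.ceil(run_seconds / 3600.0)   # billed per whole hour
    return n_instances * hours_each * hourly_rate

# 32 m1.small instances at ~106 s boot + ~8 s run: one billed hour each.
print(f"${ec2_cost(32, 114):.2f}")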

Fig. 21.5 Total run speed-up, showing gain achieved in doubling the number of instances, and performance degradation
