
B. Li et al.

Table 21.3  Cost of VaR MCS (Dec 2009)

                                   AWS m1.small (moderate    One-off MCS (640,000
                                   I/O) Ubuntu AMI,          simulations) charges with
                                   hourly charges (US$)      434M I/O, m1.small (US$)
EC2 VM per instance-hour
  (or partial hour)                        0.11                      0.11
EC2 I/O in                                 0.10                      0.01
EC2 I/O out                                0.17                      0.01
S3 I/O out (monthly cost)                  0.17                      0.01
S3 others                                  0.30                      0.30
VAT (%)                                    15                        15
Total cost (incl. VAT)                     0.98                      0.51
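The totals in Table 21.3 follow from summing the itemised charges and applying 15% VAT. A short check, using only the figures from the table:

```python
# Reproduce the Table 21.3 totals: sum the itemised charges, apply 15% VAT.
# The line items and figures are copied from the table; nothing else is assumed.
hourly_rates = [0.11, 0.10, 0.17, 0.17, 0.30]  # per-unit charges (US$)
one_off_mcs  = [0.11, 0.01, 0.01, 0.01, 0.30]  # actual charges for one 640,000-simulation run
VAT = 0.15

def total_incl_vat(charges):
    """Total cost including VAT, rounded to cents."""
    return round(sum(charges) * (1 + VAT), 2)

print(total_incl_vat(hourly_rates))  # 0.98, as in the table
print(total_incl_vat(one_off_mcs))   # 0.51
```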

Sun's network.com charges of $1 per CPU-hour integrate costs that are individually priced here. The actual cost of an MCS VaR in EC2 (m1.small), for 640,000 simulations, is about $0.51, and the running cost of 640,000 simulations on 1 instance is similar to that of 10,000 simulations on each of 64 instances. Use of higher-performance units, 64-bit machines and Windows-based machines will result in varying performance and costs, not least since a Windows machine costs more than a Linux machine [17]. Since Sun's network.com ran 64-bit systems, some of Amazon's costs may be higher than those of a system that has since been closed down.
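For context, the application being priced is a Value at Risk Monte Carlo Simulation. A minimal single-asset sketch is given below; the geometric Brownian motion model and all parameter values (price, drift, volatility, horizon, confidence level) are illustrative assumptions, not figures from this chapter:

```python
import math
import random

def var_mcs(n_sims, s0=100.0, mu=0.05, sigma=0.2, horizon=10/252,
            confidence=0.99, seed=42):
    """Estimate VaR of a single asset by Monte Carlo Simulation.

    Assumes a geometric Brownian motion price process; all parameters
    here are illustrative, not taken from the chapter's experiments.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((mu - 0.5 * sigma**2) * horizon
                           + sigma * math.sqrt(horizon) * z)
        losses.append(s0 - st)  # positive value = loss
    losses.sort()
    # VaR at the given confidence level is the corresponding loss quantile
    return losses[int(confidence * n_sims) - 1]

print(f"99% 10-day VaR on US$100: {var_mcs(100_000):.2f}")
```

Each simulation is independent, which is what makes the workload trivially parallel across instances, as in the 640,000 × 1 versus 10,000 × 64 comparison above.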

VaR MCS with 640,000 simulations, in EC2, costs US$0.51 but takes only 90 s. The same application running in Condor and Eucalyptus takes 95 and 228 s, respectively. The EC2 cost equivalents would be: Condor – US$0.54; Eucalyptus – US$1.29. This emphasises the importance of careful choice of provider. However, a system that takes longer should price itself more competitively, and the price for equivalent performance would be: Condor – US$0.48; Eucalyptus – US$0.20. Price differences would reflect system performance with different applications and different configurations of those applications. Significant data capture will be required to address the scope of these differences.
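The two sets of figures are simple runtime scalings of EC2's actual charge (US$0.51 for 90 s): multiplying by the runtime ratio gives the cost equivalent of a slower run, and dividing gives the price a slower system would need to charge for equivalent performance:

```python
# Scale EC2's actual charge by each system's runtime (figures from the text).
ec2_cost, ec2_time = 0.51, 90.0  # US$, seconds
runtimes = {"Condor": 95.0, "Eucalyptus": 228.0}

def cost_equivalent(t):
    """Cost of a run taking t seconds, billed at EC2's effective rate."""
    return round(ec2_cost * t / ec2_time, 2)

def competitive_price(t):
    """Price a slower system must charge to match EC2's cost per unit of work."""
    return round(ec2_cost * ec2_time / t, 2)

for system, t in runtimes.items():
    print(f"{system}: cost equivalent ${cost_equivalent(t):.2f}, "
          f"competitive price ${competitive_price(t):.2f}")
# Condor: $0.54 and $0.48; Eucalyptus: $1.29 and $0.20, matching the text
```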

21.4  Conclusions and Future Work

In this chapter, we have used a Value at Risk (VaR) Monte Carlo Simulation (MCS) to compare run information from a public cloud (Amazon EC2), a private cloud (Eucalyptus) and a grid system (Condor). We considered the impact of the scheduling and booting overhead on an application with a relatively short run time, and used this information to relate system costs. We have previously reported on introducing risk into Service Level Agreements [10–12], and on how price information helps to create guarantee terms of SLAs and contributes to required future work on resource availability prediction. The experiments presented here help us to consider further how to build SLAs such that a price comparison service for computing resources could be feasible. Such price information may be applicable to classes of applications that have similar characteristics, in order to estimate costs without prior knowledge of performance. However, obtaining reliable information will necessitate numerous runs across multiple systems, likely involving parameter sweeps. These efforts will be combined with autonomic use of SLAs, and are geared towards demonstrating the provision of a computational price comparison service.

With reference to [4], it is entirely feasible that a public cloud (EC2) may be faster than a supercomputer for a certain set of applications with known requirements and performance, and given certain availability constraints and scheduling overheads. The experiments presented here also show the potential for using current commercial clouds over grid-type infrastructures.

During our experiments, we encountered several occasions where one or two instances simply failed to start properly, even given almost 9 (chargeable) hours. Such occurrences merely emphasise the need for, and potential value of, application-specific SLAs.

References

1. Baun C, Kunze M (2009) Building a private cloud with Eucalyptus. In: Proceedings of the 5th IEEE international conference on e-Science workshops, Oxford, UK

2. Buyya R, Giddy J, Abramson D (2001) A case for economy grid architecture for service-oriented grid computing. In: 10th IEEE international heterogeneous computing workshop, San Francisco, CA

3. Chetty M, Buyya R (2002) Weaving electrical and computational grids: how analogous are they? Comput Sci Eng 4:61–72. http://buyya.com/papers/gridanalogy.pdf

4. Foster I (2009) What's faster – a supercomputer or EC2? http://ianfoster.typepad.com/blog/2009/08/whats-fastera-supercomputer-or-ec2.html

5. Germano G, Engel M (2006) City@home: Monte Carlo derivative pricing distributed on networked computers. In: Grid technology for financial modelling and simulation, 2006

6. Gray J (2003) Distributed computing economics. Microsoft research technical report MSR-TR-2003-24 (also presented at Microsoft VC Summit 2004, Silicon Valley, April 2004)

7. Greenberg A, Hamilton J, Maltz DA, Patel P (2009) The cost of a cloud: research problems in data center networks. ACM SIGCOMM Comput Commun Rev 39(1). http://ccr.sigcomm.org/drupal/files/p68-v39n1o-greenberg.pdf. Accessed January 2009

8. Kenyon C, Cheliotis G (2003) Grid resource commercialization: economic engineering and delivery scenarios. In: Nabrzyski J, Schopf J, Weglarz J (eds) Grid resource management: state of the art and research issues. Kluwer, Dordrecht, The Netherlands

9. Kerstin V, Karim D, Iain G, James P (2007) AssessGrid, economic issues underlying risk awareness in grids. LNCS, Springer, Berlin/Heidelberg

10. Li B, Gillam L (2009) Towards job-specific service level agreements in the cloud. In: Proceedings of the 5th IEEE international conference on e-Science workshops, Oxford, UK

11. Li B, Gillam L (2009) Grid service level agreements using financial risk analysis techniques. In: Antonopoulos N, Exarchakos G, Li M, Liotta A (eds) Handbook of research on P2P and grid systems for service-oriented computing: models, methodologies and applications. IGI Global, USA

12. Li B, Gillam L (2009) Risk informed computer economics. In: IEEE international symposium on cluster computing and the grid (CCGrid 2009, ServP2P). Shanghai, China

13. Li B, Gillam L (2008) Grids for financial risk analysis and financial risk analysis for grids. In: Proceedings of UK e-Science programme's all hands meeting 2008 (AHM 2008), Edinburgh


14. UC Berkeley Reliable Adaptive Distributed Systems Laboratory (2009) Above the clouds: a Berkeley view of cloud computing, white paper. http://radlab.cs.berkeley.edu/

15. Walker E (2008) Benchmarking Amazon EC2 for high-performance scientific computing. http://www.usenix.org/publications/login/2008-10/openpdfs/walker.pdf

16. Eucalyptus Cloud: http://www.eucalyptus.com/

17. Amazon EC2, S3 pricing (2009); Amazon EC2 developer guide (2006). http://aws.amazon.com
