
… among multiple instances. It is important to note that fully virtualised infrastructure services such as EC2 provide no QoS guarantees regarding the performance impact of hardware sharing.
With this information, we can project a lower-bound cost for running the full-quality experiment in a similar situation, again using EC2 as the overflow resource. We scale the MCMC iterations, and hence the job run-time, by a factor of 100, and similarly scale the deadline to 50 days. We assume that the local resource pool is unchanged. In 50 days, East can deliver 50 × 24 × 120 = 144,000 CPU-hours, the equivalent of 945 jobs. We need EC2 to complete the remaining 486 jobs, which requires at least 54,197 EC2 core-hours at US$0.10 per hour, a potential charge of US$5,420. Clearly, investing that amount in capital expenditure (e.g. to extend the capacity of East) would have made little difference to the overall completion time.
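Since this projection is pure arithmetic, it can be written out in a few lines. The sketch below reproduces the figures from the paragraph above; the variable names are ours, and the per-job hour values are derived from the stated totals rather than quoted from the chapter.

```python
# Back-of-the-envelope projection of the scaled-up experiment's EC2 cost.
# All totals come from the chapter text; per-job figures are derived.

DEADLINE_DAYS = 50
LOCAL_CORES = 120            # cores in the local 'East' pool
LOCAL_JOBS = 945             # jobs East can finish within the deadline
REMAINING_JOBS = 486         # jobs that must overflow to EC2
EC2_CORE_HOURS = 54_197      # minimum EC2 core-hours for the remaining jobs
EC2_PRICE_PER_HOUR = 0.10    # US$ per core-hour (2009-era rate)

local_cpu_hours = DEADLINE_DAYS * 24 * LOCAL_CORES        # 144,000
hours_per_local_job = local_cpu_hours / LOCAL_JOBS        # ~152.4
ec2_hours_per_job = EC2_CORE_HOURS / REMAINING_JOBS       # ~111.5
ec2_cost = EC2_CORE_HOURS * EC2_PRICE_PER_HOUR            # ~US$5,420

print(f"Local capacity: {local_cpu_hours:,} CPU-hours "
      f"(~{hours_per_local_job:.0f} h/job)")
print(f"EC2 overflow:   {EC2_CORE_HOURS:,} core-hours "
      f"(~{ec2_hours_per_job:.0f} h/job) -> US${ec2_cost:,.0f}")
```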
13.4.5 Scientific Results
Using Nimrod/G with the associated computer clusters and clouds enabled us to explore a greater variety of SPMs in a shorter, feasible time frame without compromising the quality of the results. For instance, we tested variants of the original model [26] that use different ways of measuring distances between museum exhibits [30]. This led to insights into the suitability of the different model variants for certain application scenarios.
In the future, we intend to investigate other ways of incorporating exhibit features into SPM. We also plan to extend our model to fit non-Gaussian data.
13.5 Conclusions
This chapter demonstrates the potential of cloud computing for high-throughput science. We showed that Nimrod/G scales to both freely available resources and commercial services. This is significant because it allows users to balance deadlines and budgets in ways that were not previously possible.
We discussed the additions required for Nimrod/G to use Amazon's EC2 as an execution mechanism, and showed that the Nimrod/G architecture is well suited to computational clouds. Nimrod/G's cloud actuator allows higher-level tools to exploit clouds as a computational resource. Hence, Nimrod/G can be classified as providing a 'Platform as a Service' to job producers/schedulers, becoming both a cloud client and a cloud service in its own right.
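The chapter does not reproduce the actuator's code, but its role — turning a scheduler's "give me N execution nodes" into cloud API calls — can be illustrated with a minimal sketch. The class below is hypothetical: it uses the modern boto3 client rather than whatever library Nimrod/G actually employed, and the AMI ID and instance type are placeholders.

```python
# Hypothetical sketch of an EC2 'actuator': the component that acquires
# and releases cloud nodes on behalf of a scheduler. Not Nimrod/G's code.
import boto3

class EC2Actuator:
    def __init__(self, ami_id, instance_type="m1.small", region="us-east-1"):
        self.ec2 = boto3.client("ec2", region_name=region)
        self.ami_id = ami_id              # placeholder AMI carrying the job agent
        self.instance_type = instance_type
        self.instance_ids = []

    def acquire(self, count):
        """Start `count` instances and remember their IDs."""
        resp = self.ec2.run_instances(
            ImageId=self.ami_id,
            InstanceType=self.instance_type,
            MinCount=count,
            MaxCount=count,
        )
        ids = [i["InstanceId"] for i in resp["Instances"]]
        self.instance_ids.extend(ids)
        return ids

    def release_all(self):
        """Terminate every instance this actuator started."""
        if self.instance_ids:
            self.ec2.terminate_instances(InstanceIds=self.instance_ids)
            self.instance_ids = []
```

The essential design point is that the scheduler above this layer never sees EC2 specifics: it asks for nodes and gets back opaque handles, which is what lets the same scheduling logic drive grids and clouds interchangeably.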
The case study showed that computational clouds provide ideal platforms for high-throughput science. Using a mix of grid and cloud resources provided timely results within the budget for the research under discussion.

The economies of scale achieved by commercial cloud computing providers make them an attractive platform for HTC. However, questions of national interest, and policy issues around spending public research funding with commercial providers, remain, especially when those providers are based overseas. Commercial offerings are motivated by profit, so it should be possible to run a non-profit equivalent more cheaply, making better use of government and university funding while prioritising researcher requirements. There is clearly scope to adopt similar operational techniques in order to provide HTC resources to the research community.
Commercial computing infrastructure requirements also deviate somewhat from typical HTC requirements.¹ A commercial cloud provider must have sufficient data-centre capacity to meet fluctuating demand while providing high QoS with regard to reliability and lead time to service. This necessitates reserving capacity at extra expense, which is passed on to the consumer. HTC workloads, on the other hand, are typically not so sensitive: waiting some time for a processor is of little significance when tens of thousands to millions of processor-hours are required. Such considerations may enable higher utilisation and lower capital overhead for dedicated HTC clouds.

¹ The recently released EC2 Spot Instance pricing (http://aws.amazon.com/ec2/spot-instances/) – a supply-and-demand-driven auction of excess EC2 data-centre capacity – is an early example of a scheme to bridge this gap.
Future work will focus on providing accurate cost accounting by implementing a time-slice scheduler and accounting for data-transfer charges. We also plan to investigate the use of EC2 Spot Instance pricing, which could prove ideal for cost-minimisation-biased scheduling, given that the spot price for a particular machine type is typically less than half the standard cost.
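As an illustration of how such a bias might look in a scheduler's resource-choice step, the sketch below encodes the simple rule suggested by that observation. It is a hypothetical fragment with placeholder prices, not part of Nimrod/G.

```python
# Illustrative cost-biased choice between spot and on-demand capacity;
# the prices are placeholders, not real quotes.
def choose_offer(on_demand_price, spot_price, interruption_tolerant):
    """Prefer spot capacity when the job can tolerate interruption and
    the spot quote undercuts the on-demand rate."""
    if interruption_tolerant and spot_price < on_demand_price:
        return ("spot", spot_price)
    return ("on-demand", on_demand_price)

# HTC parameter sweeps are usually restartable, hence interruption-tolerant.
kind, price = choose_offer(on_demand_price=0.10, spot_price=0.04,
                           interruption_tolerant=True)
print(f"Use {kind} instances at US${price:.2f}/core-hour")
```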
Acknowledgements
This work has been supported by the Australian Research Council under the Discovery grant scheme. We thank the Australian Academy of Technological Sciences and Engineering (ATSE) Working Group on Cloud Computing for discussions that were used as input to Section 1. We thank Ian Foster for his helpful discussions about the role of high-throughput science and for his contribution to Section 2.
We acknowledge the work of Benjamin Dobell, Aidan Steele, Ashley Taylor and David Warner, Monash University Faculty of I.T. students who worked on the initial Nimrod EC2 actuator prototype. We also thank Neil Soman for assistance in using the Eucalyptus Public Cloud.
References
1. Abramson D, Giddy J, Kotler L (2000) High performance parametric modeling with Nimrod/G: killer application for the global grid? In: The 14th international parallel and distributed processing symposium (IPDPS 2000), pp 520–528
2. Abramson D, Buyya R, Giddy J (Oct 2002) A computational economy for grid computing and its implementation in the Nimrod-G resource broker. Future Gen Comput Syst 18:1061–1074
3. Litzkow M, Livny M, Mutka M (1988) Condor – a hunter of idle workstations. In: Proceedings of the 8th international conference on distributed computing systems, IEEE Press, June 1988, pp 104–111
4. Foster I, Kesselman C (1997) Globus: a metacomputing infrastructure toolkit. Int J Supercomput Appl 11:115–128
5. Erwin DW (2002) UNICORE – a grid computing environment. Concurr Comput Pract Exp 14:1395–1410
6. Laure E, Fisher SM, Frohner A, Grandi C, Kunszt P, Krenek A, Mulmo O, Pacini F, Prelz F, White J, Barroso M, Buncic P, Hemmer F, Di Meglio A, Edlund A (2006) Programming the Grid with gLite. Comput Method Sci Tech 12:33–46
7. Bethwaite B, Abramson D, Buckle A (2008) Grid interoperability: an experiment in bridging grid islands. In: The IEEE fourth international conference on eScience 2008, pp 590–596
8. Riedel M (2009) Interoperation of world-wide production e-Science infrastructures. Concurr Comput Pract Exp 21:961–990
9. Goscinski W, Abramson D (2008) An infrastructure for the deployment of e-science applications. In: Grandinetti L (ed) High performance computing (HPC) and grids in action. IOS Press, Amsterdam, Netherlands, pp 131–148
10. Schmidberger J, Bethwaite B, Enticott C, Bate M, Androulakis S, Faux N, Reboul C, Phan J, Whisstock J, Goscinski W, Garic S, Abramson D, Buckle A (2009) High-throughput protein structure determination using grid computing. In: The IEEE international symposium on parallel & distributed processing (IPDPS 2009), pp 1–8
11. Sher A, Abramson D, Enticott C, Garic S, Gavaghan D, Noble D, Noble P, Peachey T (2008) Incorporating local Ca2+ dynamics into single cell ventricular models. In: Proceedings of the 8th international conference on computational science, part I, Springer-Verlag, Krakow, Poland, pp 66–75
12. Baldridge KK, Sudholt W, Greenberg JP, Amoreira C, Potier Y, Altintas I, Birnbaum A, Abramson D, Enticott C, Garic S (2006) Cluster and grid infrastructure for computational chemistry and biochemistry. In: Zomaya AY (ed) Parallel computing for bioinformatics and computational biology, Wiley Interscience, New York, pp 531–550
13. Lynch AH, Abramson D, Görgen K, Beringer J, Uotila P (Oct 2007) Influence of savanna fire on Australian monsoon season precipitation and circulation as simulated using a distributed computing environment. Geophys Res Lett 34(20):L20801
14. Abramson D, Lewis A, Peachey T, Fletcher C (2001) An automatic design optimization tool and its application to computational fluid dynamics. In: Proceedings of the 2001 ACM/IEEE conference on supercomputing (CDROM), ACM, Denver, Colorado, p 25
15. Peachey T, Diamond N, Abramson D, Sudholt W, Michailova A, Amirriazi S (Jan 2008) Fractional factorial design for parameter sweep experiments using Nimrod/E. Sci Program 16:217–230
16. Abramson D, Enticott C, Altintas I (2008) Nimrod/K: towards massively parallel dynamic grid workflows. In: Proceedings of the 2008 ACM/IEEE conference on supercomputing, IEEE Press, Austin, Texas, pp 1–11
17. Buyya R, Abramson D, Giddy J, Stockinger H (2002) Economic models for resource management and scheduling in Grid computing. Concurr Comput Pract Exp 14:1507–1542
18. Zhang Y, Mandal A, Koelbel C, Cooper K (2009) Combined fault tolerance and scheduling techniques for workflow applications on computational grids. In: Proceedings of the 2009 9th IEEE/ACM international symposium on cluster computing and the grid, IEEE Computer Society, pp 244–251
19. Nurmi D, Brevik J, Wolski R (2008) QBETS: queue bounds estimation from time series. In: Job scheduling strategies for parallel processing, pp 76–101
20. Buyya R, Giddy J, Abramson D (2000) An evaluation of economy-based resource trading and scheduling on computational power grids for parameter sweep applications. In: Proceedings of the 2nd annual workshop on active middleware services, p 221
21. Nurmi D, Wolski R, Grzegorczyk C, Obertelli G, Soman S, Youseff L, Zagorodnov D (2009) The eucalyptus open-source cloud-computing system. In: The IEEE international symposium on cluster computing and the grid, IEEE Press, Shanghai, China, pp 124–131
22. Sotomayor B, Montero RS, Llorente IM, Foster I (2008) Capacity leasing in cloud systems using the OpenNebula engine. In: Workshop on cloud computing and its applications (CCA08), Chicago, IL
23. Raicu I, Zhao Y, Dumitrescu C, Foster I, Wilde M (2007) Falkon: a fast and light-weight tasK executiON framework. In: Proceedings of the 2007 ACM/IEEE conference on supercomputing, ACM Press, Reno, Nevada, pp 1–12
24. Tröger P, Rajic H, Haas A, Domagalski P (2007) Standardization of an API for distributed resource management systems. In: Proceedings of the seventh IEEE international symposium on cluster computing and the grid, IEEE Computer Society, pp 619–626
25. Resnick P, Varian HR (1997) Recommender systems. Commun ACM 40:56–58
26. Bohnert F, Schmidt DF, Zukerman I (2009) Spatial processes for recommender systems. In: The 21st international joint conference on artificial intelligence (IJCAI-09), Pasadena, CA, pp 2022–2027
27. Bohnert F, Zukerman I, Berkovsky S, Baldwin T, Sonenberg L (2008) Using interest and transition models to predict visitor locations in museums. AI Commun 21:195–202
28. Banerjee S, Carlin BP, Gelfand AE (2004) Hierarchical modeling and analysis for spatial data. CRC Press, Boca Raton, FL
29. Neal RM (2003) Slice sampling. Ann Stat 31:705–767
30. Bohnert F, Zukerman I, Schmidt DF (2009) Using Gaussian spatial processes to model and predict interests in museum exhibits. In: The seventh workshop on intelligent techniques for web personalization and recommender systems (ITWP-09), pp 13–19
Part III
Cloud Breaks