
- •Cloud Computing
- •Foreword
- •Preface
- •Introduction
- •Expected Audience
- •Book Overview
- •Part 1: Cloud Base
- •Part 2: Cloud Seeding
- •Part 3: Cloud Breaks
- •Part 4: Cloud Feedback
- •Contents
- •1.1 Introduction
- •1.1.1 Cloud Services and Enabling Technologies
- •1.2 Virtualization Technology
- •1.2.1 Virtual Machines
- •1.2.2 Virtualization Platforms
- •1.2.3 Virtual Infrastructure Management
- •1.2.4 Cloud Infrastructure Manager
- •1.3 The MapReduce System
- •1.3.1 Hadoop MapReduce Overview
- •1.4 Web Services
- •1.4.1 RPC (Remote Procedure Call)
- •1.4.2 SOA (Service-Oriented Architecture)
- •1.4.3 REST (Representational State Transfer)
- •1.4.4 Mashup
- •1.4.5 Web Services in Practice
- •1.5 Conclusions
- •References
- •2.1 Introduction
- •2.2 Background and Related Work
- •2.3 Taxonomy of Cloud Computing
- •2.3.1 Cloud Architecture
- •2.3.1.1 Services and Modes of Cloud Computing
- •Software-as-a-Service (SaaS)
- •Platform-as-a-Service (PaaS)
- •Hardware-as-a-Service (HaaS)
- •Infrastructure-as-a-Service (IaaS)
- •2.3.2 Virtualization Management
- •2.3.3 Core Services
- •2.3.3.1 Discovery and Replication
- •2.3.3.2 Load Balancing
- •2.3.3.3 Resource Management
- •2.3.4 Data Governance
- •2.3.4.1 Interoperability
- •2.3.4.2 Data Migration
- •2.3.5 Management Services
- •2.3.5.1 Deployment and Configuration
- •2.3.5.2 Monitoring and Reporting
- •2.3.5.3 Service-Level Agreements (SLAs) Management
- •2.3.5.4 Metering and Billing
- •2.3.5.5 Provisioning
- •2.3.6 Security
- •2.3.6.1 Encryption/Decryption
- •2.3.6.2 Privacy and Federated Identity
- •2.3.6.3 Authorization and Authentication
- •2.3.7 Fault Tolerance
- •2.4 Classification and Comparison between Cloud Computing Ecosystems
- •2.5 Findings
- •2.5.2 Cloud Computing PaaS and SaaS Provider
- •2.5.3 Open Source Based Cloud Computing Services
- •2.6 Comments on Issues and Opportunities
- •2.7 Conclusions
- •References
- •3.1 Introduction
- •3.2 Scientific Workflows and e-Science
- •3.2.1 Scientific Workflows
- •3.2.2 Scientific Workflow Management Systems
- •3.2.3 Important Aspects of In Silico Experiments
- •3.3 A Taxonomy for Cloud Computing
- •3.3.1 Business Model
- •3.3.2 Privacy
- •3.3.3 Pricing
- •3.3.4 Architecture
- •3.3.5 Technology Infrastructure
- •3.3.6 Access
- •3.3.7 Standards
- •3.3.8 Orientation
- •3.5 Taxonomies for Cloud Computing
- •3.6 Conclusions and Final Remarks
- •References
- •4.1 Introduction
- •4.2 Cloud and Grid: A Comparison
- •4.2.1 A Retrospective View
- •4.2.2 Comparison from the Viewpoint of System
- •4.2.3 Comparison from the Viewpoint of Users
- •4.2.4 A Summary
- •4.3 Examining Cloud Computing from the CSCW Perspective
- •4.3.1 CSCW Findings
- •4.3.2 The Anatomy of Cloud Computing
- •4.3.2.1 Security and Privacy
- •4.3.2.2 Data and/or Vendor Lock-In
- •4.3.2.3 Service Availability/Reliability
- •4.4 Conclusions
- •References
- •5.1 Overview – Cloud Standards – What and Why?
- •5.2 Deep Dive: Interoperability Standards
- •5.2.1 Purpose, Expectations and Challenges
- •5.2.2 Initiatives – Focus, Sponsors and Status
- •5.2.3 Market Adoption
- •5.2.4 Gaps/Areas of Improvement
- •5.3 Deep Dive: Security Standards
- •5.3.1 Purpose, Expectations and Challenges
- •5.3.2 Initiatives – Focus, Sponsors and Status
- •5.3.3 Market Adoption
- •5.3.4 Gaps/Areas of Improvement
- •5.4 Deep Dive: Portability Standards
- •5.4.1 Purpose, Expectations and Challenges
- •5.4.2 Initiatives – Focus, Sponsors and Status
- •5.4.3 Market Adoption
- •5.4.4 Gaps/Areas of Improvement
- •5.5.1 Purpose, Expectations and Challenges
- •5.5.2 Initiatives – Focus, Sponsors and Status
- •5.5.3 Market Adoption
- •5.5.4 Gaps/Areas of Improvement
- •5.6 Deep Dive: Other Key Standards
- •5.6.1 Initiatives – Focus, Sponsors and Status
- •5.7 Closing Notes
- •References
- •6.1 Introduction and Motivation
- •6.2 Cloud@Home Overview
- •6.2.1 Issues, Challenges, and Open Problems
- •6.2.2 Basic Architecture
- •6.2.2.1 Software Environment
- •6.2.2.2 Software Infrastructure
- •6.2.2.3 Software Kernel
- •6.2.2.4 Firmware/Hardware
- •6.2.3 Application Scenarios
- •6.3 Cloud@Home Core Structure
- •6.3.1 Management Subsystem
- •6.3.2 Resource Subsystem
- •6.4 Conclusions
- •References
- •7.1 Introduction
- •7.2 MapReduce
- •7.3 P2P-MapReduce
- •7.3.1 Architecture
- •7.3.2 Implementation
- •7.3.2.1 Basic Mechanisms
- •Resource Discovery
- •Network Maintenance
- •Job Submission and Failure Recovery
- •7.3.2.2 State Diagram and Software Modules
- •7.3.3 Evaluation
- •7.4 Conclusions
- •References
- •8.1 Introduction
- •8.2 The Cloud Evolution
- •8.3 Improved Network Support for Cloud Computing
- •8.3.1 Why the Internet is Not Enough?
- •8.3.2 Transparent Optical Networks for Cloud Applications: The Dedicated Bandwidth Paradigm
- •8.4 Architecture and Implementation Details
- •8.4.1 Traffic Management and Control Plane Facilities
- •8.4.2 Service Plane and Interfaces
- •8.4.2.1 Providing Network Services to Cloud-Computing Infrastructures
- •8.4.2.2 The Cloud Operating System–Network Interface
- •8.5.1 The Prototype Details
- •8.5.1.1 The Underlying Network Infrastructure
- •8.5.1.2 The Prototype Cloud Network Control Logic and its Services
- •8.5.2 Performance Evaluation and Results Discussion
- •8.6 Related Work
- •8.7 Conclusions
- •References
- •9.1 Introduction
- •9.2 Overview of YML
- •9.3 Design and Implementation of YML-PC
- •9.3.1 Concept Stack of Cloud Platform
- •9.3.2 Design of YML-PC
- •9.3.3 Core Design and Implementation of YML-PC
- •9.4 Primary Experiments on YML-PC
- •9.4.1 YML-PC Can Be Scaled Up Very Easily
- •9.4.2 Data Persistence in YML-PC
- •9.4.3 Schedule Mechanism in YML-PC
- •9.5 Conclusion and Future Work
- •References
- •10.1 Introduction
- •10.2 Related Work
- •10.2.1 General View of Cloud Computing frameworks
- •10.2.2 Cloud Computing Middleware
- •10.3 Deploying Applications in the Cloud
- •10.3.1 Benchmarking the Cloud
- •10.3.2 The ProActive GCM Deployment
- •10.3.3 Technical Solutions for Deployment over Heterogeneous Infrastructures
- •10.3.3.1 Virtual Private Network (VPN)
- •10.3.3.2 Amazon Virtual Private Cloud (VPC)
- •10.3.3.3 Message Forwarding and Tunneling
- •10.3.4 Conclusion and Motivation for Mixing
- •10.4 Moving HPC Applications from Grids to Clouds
- •10.4.1 HPC on Heterogeneous Multi-Domain Platforms
- •10.4.2 The Hierarchical SPMD Concept and Multi-level Partitioning of Numerical Meshes
- •10.4.3 The GCM/ProActive-Based Lightweight Framework
- •10.4.4 Performance Evaluation
- •10.5 Dynamic Mixing of Clusters, Grids, and Clouds
- •10.5.1 The ProActive Resource Manager
- •10.5.2 Cloud Bursting: Managing Spike Demand
- •10.5.3 Cloud Seeding: Dealing with Heterogeneous Hardware and Private Data
- •10.6 Conclusion
- •References
- •11.1 Introduction
- •11.2 Background
- •11.2.1 ASKALON
- •11.2.2 Cloud Computing
- •11.3 Resource Management Architecture
- •11.3.1 Cloud Management
- •11.3.2 Image Catalog
- •11.3.3 Security
- •11.4 Evaluation
- •11.5 Related Work
- •11.6 Conclusions and Future Work
- •References
- •12.1 Introduction
- •12.2 Layered Peer-to-Peer Cloud Provisioning Architecture
- •12.4.1 Distributed Hash Tables
- •12.4.2 Designing Complex Services over DHTs
- •12.5 Cloud Peer Software Fabric: Design and Implementation
- •12.5.1 Overlay Construction
- •12.5.2 Multidimensional Query Indexing
- •12.5.3 Multidimensional Query Routing
- •12.6 Experiments and Evaluation
- •12.6.1 Cloud Peer Details
- •12.6.3 Test Application
- •12.6.4 Deployment of Test Services on Amazon EC2 Platform
- •12.7 Results and Discussions
- •12.8 Conclusions and Path Forward
- •References
- •13.1 Introduction
- •13.2 High-Throughput Science with the Nimrod Tools
- •13.2.1 The Nimrod Tool Family
- •13.2.2 Nimrod and the Grid
- •13.2.3 Scheduling in Nimrod
- •13.3 Extensions to Support Amazon’s Elastic Compute Cloud
- •13.3.1 The Nimrod Architecture
- •13.3.2 The EC2 Actuator
- •13.3.3 Additions to the Schedulers
- •13.4.1 Introduction and Background
- •13.4.2 Computational Requirements
- •13.4.3 The Experiment
- •13.4.4 Computational and Economic Results
- •13.4.5 Scientific Results
- •13.5 Conclusions
- •References
- •14.1 Using the Cloud
- •14.1.1 Overview
- •14.1.2 Background
- •14.1.3 Requirements and Obligations
- •14.1.3.1 Regional Laws
- •14.1.3.2 Industry Regulations
- •14.2 Cloud Compliance
- •14.2.1 Information Security Organization
- •14.2.2 Data Classification
- •14.2.2.1 Classifying Data and Systems
- •14.2.2.2 Specific Type of Data of Concern
- •14.2.2.3 Labeling
- •14.2.3 Access Control and Connectivity
- •14.2.3.1 Authentication and Authorization
- •14.2.3.2 Accounting and Auditing
- •14.2.3.3 Encrypting Data in Motion
- •14.2.3.4 Encrypting Data at Rest
- •14.2.4 Risk Assessments
- •14.2.4.1 Threat and Risk Assessments
- •14.2.4.2 Business Impact Assessments
- •14.2.4.3 Privacy Impact Assessments
- •14.2.5 Due Diligence and Provider Contract Requirements
- •14.2.5.1 ISO Certification
- •14.2.5.2 SAS 70 Type II
- •14.2.5.3 PCI PA DSS or Service Provider
- •14.2.5.4 Portability and Interoperability
- •14.2.5.5 Right to Audit
- •14.2.5.6 Service Level Agreements
- •14.2.6 Other Considerations
- •14.2.6.1 Disaster Recovery/Business Continuity
- •14.2.6.2 Governance Structure
- •14.2.6.3 Incident Response Plan
- •14.3 Conclusion
- •Bibliography
- •15.1.1 Location of Cloud Data and Applicable Laws
- •15.1.2 Data Concerns Within a European Context
- •15.1.3 Government Data
- •15.1.4 Trust
- •15.1.5 Interoperability and Standardization in Cloud Computing
- •15.1.6 Open Grid Forum’s (OGF) Production Grid Interoperability Working Group (PGI-WG) Charter
- •15.1.7.1 What will OCCI Provide?
- •15.1.7.2 Cloud Data Management Interface (CDMI)
- •15.1.7.3 How it Works
- •15.1.8 SDOs and their Involvement with Clouds
- •15.1.10 A Microsoft Cloud Interoperability Scenario
- •15.1.11 Opportunities for Public Authorities
- •15.1.12 Future Market Drivers and Challenges
- •15.1.13 Priorities Moving Forward
- •15.2 Conclusions
- •References
- •16.1 Introduction
- •16.2 Cloud Computing (‘The Cloud’)
- •16.3 Understanding Risks to Cloud Computing
- •16.3.1 Privacy Issues
- •16.3.2 Data Ownership and Content Disclosure Issues
- •16.3.3 Data Confidentiality
- •16.3.4 Data Location
- •16.3.5 Control Issues
- •16.3.6 Regulatory and Legislative Compliance
- •16.3.7 Forensic Evidence Issues
- •16.3.8 Auditing Issues
- •16.3.9 Business Continuity and Disaster Recovery Issues
- •16.3.10 Trust Issues
- •16.3.11 Security Policy Issues
- •16.3.12 Emerging Threats to Cloud Computing
- •16.4 Cloud Security Relationship Framework
- •16.4.1 Security Requirements in the Clouds
- •16.5 Conclusion
- •References
- •17.1 Introduction
- •17.1.1 What Is Security?
- •17.2 ISO 27002 Gap Analyses
- •17.2.1 Asset Management
- •17.2.2 Communications and Operations Management
- •17.2.4 Information Security Incident Management
- •17.2.5 Compliance
- •17.3 Security Recommendations
- •17.4 Case Studies
- •17.4.1 Private Cloud: Fortune 100 Company
- •17.4.2 Public Cloud: Amazon.com
- •17.5 Summary and Conclusion
- •References
- •18.1 Introduction
- •18.2 Decoupling Policy from Applications
- •18.2.1 Overlap of Concerns Between the PEP and PDP
- •18.2.2 Patterns for Binding PEPs to Services
- •18.2.3 Agents
- •18.2.4 Intermediaries
- •18.3 PEP Deployment Patterns in the Cloud
- •18.3.1 Software-as-a-Service Deployment
- •18.3.2 Platform-as-a-Service Deployment
- •18.3.3 Infrastructure-as-a-Service Deployment
- •18.3.4 Alternative Approaches to IaaS Policy Enforcement
- •18.3.5 Basic Web Application Security
- •18.3.6 VPN-Based Solutions
- •18.4 Challenges to Deploying PEPs in the Cloud
- •18.4.1 Performance Challenges in the Cloud
- •18.4.2 Strategies for Fault Tolerance
- •18.4.3 Strategies for Scalability
- •18.4.4 Clustering
- •18.4.5 Acceleration Strategies
- •18.4.5.1 Accelerating Message Processing
- •18.4.5.2 Acceleration of Cryptographic Operations
- •18.4.6 Transport Content Coding
- •18.4.7 Security Challenges in the Cloud
- •18.4.9 Binding PEPs and Applications
- •18.4.9.1 Intermediary Isolation
- •18.4.9.2 The Protected Application Stack
- •18.4.10 Authentication and Authorization
- •18.4.11 Clock Synchronization
- •18.4.12 Management Challenges in the Cloud
- •18.4.13 Audit, Logging, and Metrics
- •18.4.14 Repositories
- •18.4.15 Provisioning and Distribution
- •18.4.16 Policy Synchronization and Views
- •18.5 Conclusion
- •References
- •19.1 Introduction and Background
- •19.2 A Media Service Cloud for Traditional Broadcasting
- •19.2.1 Gridcast the PRISM Cloud 0.12
- •19.3 An On-demand Digital Media Cloud
- •19.4 PRISM Cloud Implementation
- •19.4.1 Cloud Resources
- •19.4.2 Cloud Service Deployment and Management
- •19.5 The PRISM Deployment
- •19.6 Summary
- •19.7 Content Note
- •References
- •20.1 Cloud Computing Reference Model
- •20.2 Cloud Economics
- •20.2.1 Economic Context
- •20.2.2 Economic Benefits
- •20.2.3 Economic Costs
- •20.2.5 The Economics of Green Clouds
- •20.3 Quality of Experience in the Cloud
- •20.4 Monetization Models in the Cloud
- •20.5 Charging in the Cloud
- •20.5.1 Existing Models of Charging
- •20.5.1.1 On-Demand IaaS Instances
- •20.5.1.2 Reserved IaaS Instances
- •20.5.1.3 PaaS Charging
- •20.5.1.4 Cloud Vendor Pricing Model
- •20.5.1.5 Interprovider Charging
- •20.6 Taxation in the Cloud
- •References
- •21.1 Introduction
- •21.2 Background
- •21.3 Experiment
- •21.3.1 Target Application: Value at Risk
- •21.3.2 Target Systems
- •21.3.2.1 Condor
- •21.3.2.2 Amazon EC2
- •21.3.2.3 Eucalyptus
- •21.3.3 Results
- •21.3.4 Job Completion
- •21.3.5 Cost
- •21.4 Conclusions and Future Work
- •References
- •Index

160 | L. Shang et al.

Fig. 9.10 Schedule mechanism in YML-PC (plot of elapsed time, roughly 1000–6000, versus q, 2–7, for three series: no fault, 10% fault, and 20% fault)
faults happen on the computing nodes; in other words, the trust model is completely accurate. '10% faults' means that 10% of the computing nodes in the cloud platform fail during program execution, i.e., the trust model is 90% accurate. '20% faults' means that 20% of the computing nodes fail during program execution, i.e., the trust model is 80% accurate.
Figure 9.10 shows that choosing appropriate computing resources to execute tasks is very important: improper match-making between computing resources and tasks greatly decreases efficiency. Monitoring the computing resources in a cloud platform is therefore essential, and the monitoring data should be mined for the regularities behind node behavior. The trust model of [25] can be applied to a cloud platform, and it can be improved by adopting a better behavioral model to describe the regularities in users' behavior.
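The cost of an inaccurate trust model can be illustrated with a toy bag-of-tasks simulation (this is a sketch with made-up numbers, not the simulator used for Fig. 9.10): each task scheduled on a node that fails must be re-executed, so a higher fault rate among the selected nodes directly inflates the elapsed time.

```python
import random

def simulate_makespan(n_tasks, n_nodes, fault_rate, task_time=10.0, seed=0):
    """Toy bag-of-tasks schedule: tasks go to the least-loaded node;
    every failed attempt costs a full re-execution."""
    rng = random.Random(seed)
    node_time = [0.0] * n_nodes
    for _ in range(n_tasks):
        node = min(range(n_nodes), key=lambda i: node_time[i])
        cost = task_time
        while rng.random() < fault_rate:  # node failed: task must be redone
            cost += task_time
        node_time[node] += cost
    return max(node_time)  # elapsed time = busiest node

base = simulate_makespan(200, 8, 0.0)    # perfect trust model: no faults
faulty = simulate_makespan(200, 8, 0.2)  # 20% of selected nodes fail
```

With a fault rate of 0.2 the makespan grows well beyond the fault-free baseline, mirroring the ordering of the three curves in Fig. 9.10.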
9.5 Conclusion and Future Work
Cloud computing has achieved great success in search engines, social networks, e-mail, and e-commerce. Amazon provides different levels of computing resources to users on a pay-per-use basis. Many research institutes, such as the University of California, Berkeley, and Delft University of Technology, have evaluated the Amazon cloud platform. Meanwhile, Kondo et al. evaluate the cost-benefits of public clouds versus desktop grid platforms and conclude that desktop grids are promising and can serve as the basis of a cloud platform. Based on the research mentioned above and the real situation of small and medium-sized enterprises and research institutes in China, this chapter extended the YML framework and presented YML-PC, a workflow-based framework for building scientific private clouds. The YML-PC project is divided into three steps: (1) build private clouds based on YML by harnessing dedicated and volunteer computing resources and making them work together efficiently; (2) extend YML to support Hadoop and run Hadoop on cluster-based virtual machines; and (3) combining steps 1 and 2, build a hybrid cloud based on YML. This chapter focused on step 1. To improve the efficiency of YML-PC, a "trust model" and a "data persistence mechanism" were introduced. Simulations demonstrate that this approach is appropriate for building YML-PC.
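The data persistence mechanism can be pictured as a per-node cache that keeps input data on the volunteer node so that subsequent tasks reuse it instead of re-transferring it (an illustrative sketch; the class and parameter names are hypothetical, not YML-PC's actual API):

```python
class VolunteerNode:
    """Minimal model of a volunteer node with a local data cache."""

    def __init__(self, name, bandwidth_mb_s=10.0):
        self.name = name
        self.cache = set()              # ids of data blocks persisted locally
        self.bandwidth = bandwidth_mb_s

    def fetch(self, data_id, size_mb):
        """Return the transfer time for a task's input; 0 if already cached."""
        if data_id in self.cache:
            return 0.0                   # data persisted from an earlier task
        self.cache.add(data_id)          # persist for later tasks on this node
        return size_mb / self.bandwidth  # time to pull from the data server

node = VolunteerNode("n1")
first = node.fetch("matrix-block-0", 100.0)   # cold fetch: pays the transfer
second = node.fetch("matrix-block-0", 100.0)  # warm fetch: served from cache
```

Scheduling a dependent task onto the node that already holds its input turns a 10-second transfer into a free local read, which is where the efficiency gain of data persistence comes from.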
Future work will focus on developing the components needed to make YML-PC a reality. More user behavior models will then be studied to improve the accuracy of predicting the available "time slots" of volunteer computing nodes. A fault-tolerance-based scheduling mechanism is another key issue for future work. We also plan to evaluate deploying virtualization tools (e.g., Xen or VMware) on volunteer computing resources to form several virtual machines per volunteer node.
References
1. Ostermann S et al Early cloud computing evaluation. http://www.pds.ewi.tudelft.nl/_iosup/
2. Armbrust M, Fox A, Griffith R, Joseph A, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I, Zaharia M (2009) Above the clouds: a Berkeley view of cloud computing. Technical report, University of California, Berkeley, USA
3. Garfinkel SL (2007) An evaluation of Amazon's grid computing services: EC2, S3 and SQS. Technical Report TR-08-07, Harvard University
4. de Assuncao MD, di Costanzo A, Buyya R (2009) Evaluating the cost-benefit of using cloud computing to extend the capacity of clusters. In: HPDC '09, ACM, pp 141–150
5. Ibrahim S, Jin H, Lu L, Qi L, Wu S, Shi X (2009) Evaluating MapReduce on virtual machines: the Hadoop case. In: CloudCom 2009, pp 519–528
6. Anderson DP, Fedak G (2006) The computational and storage potential of volunteer computing. In: CCGRID 2006, pp 73–80
7. Heien EM, Anderson DP (2009) Computing low latency batches with unreliable workers in volunteer computing environments. J Grid Comput 7(4):501–518
8. Javadi B, Kondo D, Vincent JM, Anderson DP (2009) Mining for statistical models of availability in large scale distributed systems: an empirical study of SETI@home. In: 17th IEEE/ACM MASCOTS 2009, London, UK
9. Ma X, Vazhkudai SS, Zhang Z (2009) Improving data availability for better access performance: a study on caching scientific data on distributed desktop workstations. J Grid Comput 7(4):419–438
10. Kondo D, Javadi B, Malecot P, Cappello F, Anderson DP (2009) Cost-benefit analysis of cloud computing versus desktop grids. In: IPDPS, pp 1–12
11. Andrzejak A, Kondo D, Anderson DP (2010) Exploiting non-dedicated resources for cloud computing. In: 12th IEEE/IFIP NOMS 2010, Osaka, Japan, 19–23 April 2010
12. Domingues P, Araujo F, Silva L (2009) Evaluating the performance and intrusiveness of virtual machines for desktop grid computing. In: IPDPS, pp 1–8
13. Cunsolo VD, Distefano S, Puliafito A, Scarpa M (2009) Cloud@Home: bridging the gap between volunteer and cloud computing. In: ICIC (1), pp 423–432
14. Delannoy O, Emad N, Petiton SG (2006) Workflow global computing with YML. In: 7th IEEE/ACM international conference on grid computing, pp 25–32
15. Delannoy O (2008) YML: a scientific workflow for high performance computing. PhD thesis, Versailles
16. Delannoy O, Petiton S (2004) A peer to peer computing framework: design and performance evaluation of YML. In: 3rd international workshop on HeterPar 2004, IEEE Computer Society Press, pp 362–369
17. Choy L, Delannoy O, Emad N, Petiton SG (2009) Federation and abstraction of heterogeneous global computing platforms with the YML framework. In: international conference on complex, intelligent and software intensive systems (CISIS 2009), pp 451–456
18. Caron E, Desprez F, Loureiro D, Muresan A (2009) Cloud computing resource management through a grid middleware: a case study with DIET and Eucalyptus. In: CLOUD, pp 151–154
19. Sato M, Boku T, Takahashi D (2003) OmniRPC: a Grid RPC system for parallel programming in cluster and grid environment. In: 3rd IEEE international symposium on cluster computing and the grid, pp 206–213
20. Germain C, Néri V, Fedak G, Cappello F (2000) XtremWeb: building an experimental platform for global computing. In: Buyya R, Baker M (eds) GRID, Lecture Notes in Computer Science, vol 1971. Springer, Heidelberg, pp 91–101
21. Wang L, Tao J, Kunze M, Castellanos AC, Kramer D, Karl W (2008) Scientific cloud computing: early definition and experience. In: 10th IEEE international conference on HPCC, pp 825–830
22. Foster I, Zhao Y, Raicu I, Lu S (2008) Cloud computing and grid computing 360-degree compared. In: grid computing environments workshop, pp 1–10
23. Vecchiola C, Pandey S, Buyya R (2009) High-performance cloud computing: a view of scientific applications. In: 10th international symposium on pervasive systems, algorithms and networks (I-SPAN 2009), Kaohsiung, Taiwan, December 2009
24. Jha S, Merzky A, Fox G (2009) Using clouds to provide grids with higher levels of abstraction and explicit support for usage modes. Concurr Comput Pract Exper 21(8):1087–1108
25. Shang L, Wang Z, Zhou X, Huang X, Cheng Y (2007) TM-DG: a trust model based on computer users' daily behavior for desktop grid platform. In: CompFrame '07: proceedings of the 2007 symposium on component and framework technology in high-performance and scientific computing, ACM, New York, pp 59–66
26. Smets P (1990) The transferable belief model and other interpretations of Dempster-Shafer's model. In: proceedings of the sixth annual conference on uncertainty in artificial intelligence, pp 375–384, 27–29 July 1990
27. Shang L, Wang Z, Petiton SG (2008) Solution of large scale matrix inversion on cluster and grid. In: proceedings of the 2008 seventh international conference on grid and cooperative computing (GCC), 24–26 October 2008, pp 33–40
28. Shang L, Petiton S, Hugues M (2009) A new parallel paradigm for block-based Gauss-Jordan algorithm. In: 8th international conference on grid and cooperative computing (GCC), pp 193–200
29. Cappello F et al (2005) Grid'5000: a large scale and highly reconfigurable grid experimental testbed. In: 6th IEEE/ACM international conference on grid computing, pp 99–106

Chapter 10
An Efficient Framework for Running Applications on Clusters, Grids, and Clouds
Brian Amedro, Françoise Baude, Denis Caromel, Christian Delbé, Imen Filali, Fabrice Huet, Elton Mathias, and Oleg Smirnov
Abstract Since the appearance of distributed computing technology, there has been significant effort in designing and building the infrastructure needed to tackle the challenges raised by complex scientific applications that require massive computational resources. This has raised awareness of the power and flexibility of clouds, which have recently emerged as an alternative to data centers or private clusters. In this chapter we describe an efficient high-level grid and cloud framework that allows a smooth transition from clusters and grids to clouds. The main lever is the ability to move infrastructure-specific information out of the application code and manage it in a deployment file. An application can thus easily run on a cluster, a grid, or a cloud, or any mix of them, without modification.
10.1 Introduction
Traditionally, HPC has relied on supercomputers, clusters, or, more recently, computing grids. With the rise of cloud computing and effective technical solutions, questions such as "is cloud computing ready for HPC?" or "does a computing cloud constitute a relevant reservoir of resources for parallel computing?" are being raised. This chapter gives some concrete answers to such questions. Offering a suitable middleware and an associated programming environment to HPC users willing to take advantage of cloud computing is another concern we address in this chapter. One natural solution is to extend a grid computing middleware so that it becomes able to harness cloud computing resources. The result is a middleware that unifies resource acquisition and usage across grid and cloud resources. This middleware was specially designed to cope with HPC computation and communication requirements, but its usage is not restricted to this kind of application.
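The separation the chapter advocates can be pictured schematically: the application asks for an abstract pool of workers, while a deployment file binds that pool to concrete infrastructure. The sketch below uses a made-up JSON layout purely for illustration; it is not the actual GCM/ProActive descriptor format, and all field names are hypothetical.

```python
import json

# Hypothetical deployment file: everything infrastructure-specific lives here,
# so switching from a cluster to a cloud (or mixing both) changes only this file.
DEPLOYMENT = json.loads("""
{
  "pool": "workers",
  "providers": [
    {"type": "cluster", "host": "front.example.org", "nodes": 16},
    {"type": "cloud",   "image": "ami-12345",        "instances": 8}
  ]
}
""")

def acquire_nodes(deployment):
    """Application side: requests the 'workers' pool without knowing
    whether the nodes come from a cluster, a grid, or a cloud."""
    total = 0
    for provider in deployment["providers"]:
        total += provider.get("nodes", 0) + provider.get("instances", 0)
    return total

n = acquire_nodes(DEPLOYMENT)  # 16 cluster nodes + 8 cloud instances
```

Because the code only sees the aggregated pool, running the same application on a different mix of resources is a matter of editing the descriptor, which is the property the chapter's framework provides.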
B. Amedro (*)
OASIS Research Team, INRIA Sophia Antipolis, 2004 route des lucioles – BP 93, 06902 Sophia-Antipolis, France
e-mail: brian.amedro@sophia.inria.fr
163
N. Antonopoulos and L. Gillam (eds.), Cloud Computing: Principles, Systems and Applications, Computer Communications and Networks, DOI 10.1007/978-1-84996-241-4_10, © Springer-Verlag London Limited 2010