
It is the basis of the "@home" philosophy of sharing/donating network-connected resources to support distributed scientific computing.
We believe that the Cloud-computing paradigm is also applicable at lower scales, from the single contributing user who shares his/her desktop, to research groups, public administrations, social communities, and small and medium enterprises, who can make their distributed computing resources available to the Cloud. Both free sharing and pay-per-use models can be easily adopted in such scenarios.
From the utility point of view, the rise of the "techno-utility complex" and the corresponding increase in demand for computing resources (in some cases growing dramatically faster than Moore's Law, as predicted by Sun CTO Greg Papadopoulos in his "red shift" theory of IT [15]) could, in the near future, lead to an oligarchy, a lobby, or a trust of a few big companies controlling the whole computing-resources market.
To avoid such a pessimistic but plausible scenario, we suggest addressing the problem in a different way: instead of building costly private data centers, which Google CEO Eric Schmidt likes to compare to prohibitively expensive cyclotrons [3], we propose a more "democratic" form of Cloud computing, in which the computing resources of the single users accessing the Cloud can be shared with others in order to contribute to solving complex problems.
Since this paradigm is very similar to Volunteer computing, we name it Cloud@Home. The hardware and software compatibility limitations and restrictions of Volunteer computing can be overcome in Cloud-computing environments, making it possible to share both hardware and software resources and/or services.
The Cloud@Home paradigm could also be applied to commercial Clouds, establishing an open computing-utility market where users can both buy and sell their services. Since computing power follows a "long-tailed" distribution, in which a high-amplitude population (Cloud providers and commercial data centers) is followed by a low-amplitude population (small data centers and private users) that gradually "tails off" asymptotically, Cloud@Home can exploit the Long Tail effect [1]: by aggregating small computing resources from many individual contributors, it can provide computing capabilities similar to, or greater than, those of commercial providers' data centers.
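To make the Long Tail argument concrete, the following purely illustrative Python sketch (all numbers are hypothetical, not measurements from the chapter) samples contributor capacities from a Pareto distribution and compares the aggregate capacity of the many small "tail" contributors with that of the few large "head" providers:

```python
import random

random.seed(42)

# Hypothetical population: capacities (in GFLOPS) drawn from a
# long-tailed Pareto distribution, as the text assumes.
N = 100_000
capacities = sorted((random.paretovariate(1.5) for _ in range(N)), reverse=True)

# "Head": the few high-amplitude providers (top 1%).
# "Tail": the many low-amplitude contributors (remaining 99%).
head = capacities[:N // 100]
tail = capacities[N // 100:]

print(f"head (top 1%)    total capacity: {sum(head):12.0f} GFLOPS")
print(f"tail (other 99%) total capacity: {sum(tail):12.0f} GFLOPS")
# With this long-tailed distribution the aggregated tail holds roughly
# 3-4 times the head's capacity, which is the Cloud@Home premise.
```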
In the following, we show how these aims can be achieved through the Cloud@Home paradigm. In Section 2, we describe the functional architecture of the Cloud@Home infrastructure; in Section 3, we characterize the blocks of the Cloud@Home core structure that implement the functions previously identified. Section 4 concludes the chapter by recapitulating our work and discussing challenges and future work.
6.2 Cloud@Home Overview
The idea behind Cloud@Home is to reuse "domestic" computing resources to build Clouds of voluntary contributors that interoperate with one another and, moreover, with other external, including commercial, Cloud infrastructures. With Cloud@Home, anyone can experience the power of Cloud computing, both actively, by providing his/her own resources and services, and passively, by submitting his/her applications and requirements.
6.2.1 Issues, Challenges, and Open Problems
Ian Foster summarizes the computing paradigm of the future as follows [9]: “... we will need to support on-demand provisioning and configuration of integrated ‘virtual systems’ providing the precise capabilities needed by an end user. We will need to define protocols that allow users and service providers to discover and hand off demands to other providers, to monitor and manage their reservations, and arrange payment. We will need tools for managing both the underlying resources and the resulting distributed computations. We will need the centralized scale of today’s Cloud utilities, and the distribution and interoperability of today’s Grid facilities....”
We share all these requirements, but in a slightly different way: we want to actively involve users in such a new form of computing, allowing them to create their own interoperable Clouds. In other words, we believe that it is possible to export, apply, and adapt the "@home" philosophy to the Cloud-computing paradigm. In this way, by merging Volunteer and Cloud computing, a new paradigm can be created: Cloud@Home. This new computing paradigm gives power and control back to users, who can decide how to manage their resources/services in a global, geographically distributed context. They can voluntarily sustain scientific projects by freely placing their resources/services at research centers' disposal, or they can earn money by selling their resources to Cloud-computing providers in a pay-per-use/share context.
Therefore, in Cloud@Home, both the commercial/business and the volunteer/scientific viewpoints coexist: in the former case, the end-user orientation of the Cloud is extended to a collaborative, two-way Cloud in which users can buy and/or sell their resources/services; in the latter case, the Grid philosophy of few but large computing requests is extended and enhanced to open Virtual Organizations. In both cases, QoS requirements can be specified, introducing the concept of quality into the best-effort Grid and Volunteer philosophies.
Cloud@Home can also be considered a generalization and a maturation of the "@home" philosophy: a context in which users voluntarily share their resources without compatibility problems. This knocks down both the hardware (processor bits, endianness, architecture, network) and the software (operating systems, libraries, compilers, applications, middlewares) barriers of Grid and Volunteer computing. Moreover, Cloud@Home allows users to share not only physical resources, as in @home projects or Grid environments, but any kind of service. The flexibility and extensibility of Cloud@Home make it possible to easily arrange, manage, and make available significant computing resources (greater than those in Clouds, Grids, and/or @home environments) to everyone who owns a computer. Another significant improvement of Cloud@Home over Volunteer-computing paradigms is QoS/SLA management: building on the credit management system and similar experiments on QoS, a mechanism will be implemented for monitoring, ensuring, negotiating, accounting, billing, and, in general, managing QoS and SLAs.
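The chapter defers this QoS/SLA mechanism to future work. Purely as an illustrative sketch of the bookkeeping it implies (the SLA and CreditAccount classes and all their fields are our hypothetical names, not part of the Cloud@Home design), contributors might accumulate credit for donated resources while SLA terms are checked against monitored values:

```python
from dataclasses import dataclass

@dataclass
class SLA:
    # Hypothetical SLA terms: minimum availability, maximum latency.
    min_availability: float   # e.g. 0.99
    max_latency_ms: float     # e.g. 200.0

    def is_met(self, availability: float, latency_ms: float) -> bool:
        return (availability >= self.min_availability
                and latency_ms <= self.max_latency_ms)

@dataclass
class CreditAccount:
    # Credits earned by donating resources and spent when consuming
    # them, in the spirit of Volunteer-computing credit systems.
    owner: str
    credits: float = 0.0

    def earn(self, cpu_hours: float, rate: float = 1.0) -> None:
        self.credits += cpu_hours * rate

    def spend(self, cpu_hours: float, rate: float = 1.0) -> None:
        if cpu_hours * rate > self.credits:
            raise ValueError("insufficient credit")
        self.credits -= cpu_hours * rate

# Example: a contributor donates 10 CPU-hours, then consumes 4.
acct = CreditAccount(owner="alice")
acct.earn(10.0)
acct.spend(4.0)
sla = SLA(min_availability=0.99, max_latency_ms=200.0)
print(acct.credits, sla.is_met(availability=0.995, latency_ms=120.0))
```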
On the other hand, Cloud@Home can be considered an enhancement of the Grid-Utility vision of Cloud computing. In this new paradigm, users' hosts are not passive interfaces to Cloud services; they can be actively involved in the computing. At a minimum, single nodes and services can be enrolled by the Cloud@Home middleware to build private Cloud infrastructures that can interact with other Clouds.
The Cloud@Home motto is: heterogeneous hardware for homogeneous Clouds. Thus, the scenario we prefigure is composed of several coexisting and interoperable Clouds, as depicted in Fig. 6.2. Open Clouds (yellow) identify open VOs operating for free Volunteer computing; Commercial Clouds (blue) characterize entities or companies selling their computing resources for business; and Hybrid Clouds (green) can both sell their services and give them away for free. Open and Hybrid Clouds can interoperate with any other Cloud, including Commercial ones, whereas Commercial Clouds can interoperate with one another if and only if they mutually recognize each other. In this way, it is possible to create federations of heterogeneous Clouds that work together on the same project. Such a scenario has to be implemented transparently for users, who do not want to know whether their applications are running in homogeneous or heterogeneous Clouds. The differences between homogeneous and heterogeneous Clouds concern only implementation issues, mainly affecting resource management: in the former case, resources are managed locally to the Cloud; in heterogeneous Clouds, interoperable services have to be implemented to support discovery, connectivity, translation, and negotiation requirements among Clouds.
Fig. 6.2 Coexisting and interoperable Clouds anticipated for the Cloud@Home scenario (diagram: Commercial Clouds such as Sun Network.com, Amazon EC2, IBM Blue Cloud, and Microsoft Azure S.P.; Open Clouds such as an Academic Cloud and a Scientific Cloud; and Hybrid Clouds, all interoperating)
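These interoperability rules can be captured by a simple predicate. The sketch below (the Kind and Cloud types and the can_interoperate function are our illustrative names, not from the chapter) encodes them: Open and Hybrid Clouds interoperate with any Cloud, while two Commercial Clouds interoperate only if they mutually recognize each other.

```python
from enum import Enum

class Kind(Enum):
    OPEN = "open"              # free Volunteer computing
    COMMERCIAL = "commercial"  # resources sold for business
    HYBRID = "hybrid"          # both free and paid services

class Cloud:
    def __init__(self, name: str, kind: Kind):
        self.name = name
        self.kind = kind
        self.recognized: set[str] = set()  # mutually recognized partners

    def recognize(self, other: "Cloud") -> None:
        self.recognized.add(other.name)

def can_interoperate(a: Cloud, b: Cloud) -> bool:
    # Two Commercial Clouds interoperate iff they recognize each other;
    # every other combination is allowed.
    if a.kind is Kind.COMMERCIAL and b.kind is Kind.COMMERCIAL:
        return a.name in b.recognized and b.name in a.recognized
    return True

# Example: an Open Cloud federates with anyone; two Commercial
# Clouds need mutual recognition first.
open_c = Cloud("Scientific", Kind.OPEN)
c1 = Cloud("ProviderA", Kind.COMMERCIAL)
c2 = Cloud("ProviderB", Kind.COMMERCIAL)
assert can_interoperate(open_c, c1)
assert not can_interoperate(c1, c2)
c1.recognize(c2); c2.recognize(c1)
assert can_interoperate(c1, c2)
```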
The overall infrastructure must deal with the high dynamism of its nodes/resources, allowing data, tasks, and jobs to be moved and reallocated. It is therefore necessary to implement a lightweight middleware, specifically designed to optimize migrations. Building such middleware on existing technologies (as done in Nimbus-Stratus, starting from Globus) could be limiting, inefficient, or inadequate from this point of view. This represents another significant enhancement of Cloud@Home over the Grid: a lightweight middleware allows devices with limited resources to be involved in the Cloud, mainly as consumer hosts accessing the Cloud through thin clients but also, in some specific applications, as contributing hosts implementing (light) services according to their availability. Moreover, the Cloud@Home middleware does not constrain code writing as the Grid and Volunteer computing paradigms do.
Another important goal of Cloud@Home is security. Volunteer computing raises security concerns, while the Grid paradigm implements complex security mechanisms. Virtualization in Clouds provides isolation of services, but does not offer any protection from local access. With regard to security, the specific goal of Cloud@Home is to extend the security mechanisms of Clouds to the protection of data from local access. Since Cloud@Home aggregates a pool of resources potentially larger than that of commercial or proprietary Cloud solutions, its reliability is comparable to that of Grid or Volunteer computing and should be greater than that of other Clouds.
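One way to read the goal of protecting data from local access is that data should be encrypted before it leaves the owner's control, so that a volunteer host storing it cannot read it. A minimal sketch using symmetric encryption via the third-party cryptography package (our choice of library for illustration, not the chapter's mechanism) follows:

```python
from cryptography.fernet import Fernet

# The data owner keeps the key; the contributing host only ever
# stores ciphertext, so local access reveals nothing useful.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"sensitive result of a Cloud@Home computation"
ciphertext = cipher.encrypt(plaintext)   # stored on the volunteer host

# Later, only the key holder can recover the data.
assert cipher.decrypt(ciphertext) == plaintext
```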
Lastly, interoperability is one of the most important goals of Cloud@Home: it remains an open problem in Grid, Volunteer, and Cloud computing alike, and one we want to address.
The most important issues that should be taken into account in order to implement such a form of computing can be listed as follows:
• Resources and services management – a mechanism for managing the resources and services offered by Clouds is mandatory. It must be able to enroll, discover, index, assign and reassign, monitor, and coordinate resources and services (a minimal sketch follows this list). A problem to face at this level is the compatibility of resources and services and their portability.
• Frontend – abstraction is needed to provide users with a high-level, service-oriented view of the computing system. The frontend provides a unique, uniform access point to the Cloud. It must allow users to submit functional computing requests by providing only requirements and specifications, without any knowledge of the system's resource deployment. The system evaluates these requirements and specifications and translates them into physical resource demands, deploying the elaboration process. Another frontend concern is the capability of customizing Cloud services and applications.
• Security – effective mechanisms are required to provide authentication, resource and data protection, data confidentiality, and integrity.
• Resource and service accessibility, reliability, and data consistency – redundancy of resources and services, together with host-recovery policies, must be implemented, because users contribute to the computing voluntarily and can therefore log out or disconnect from the Cloud asynchronously, at any time.
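As a concrete, minimal reading of the first requirement in the list above, a resource registry supporting enrollment, discovery, assignment, and release might look as follows (all names and fields are illustrative assumptions, not part of the Cloud@Home design):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Resource:
    resource_id: str
    owner: str
    kind: str                       # e.g. "cpu", "storage", "service"
    capacity: float                 # in owner-declared units
    assigned_to: Optional[str] = None

@dataclass
class ResourceRegistry:
    # Enroll, discover, assign/reassign, and release resources,
    # mirroring the management requirements listed above.
    resources: Dict[str, Resource] = field(default_factory=dict)

    def enroll(self, res: Resource) -> None:
        self.resources[res.resource_id] = res

    def discover(self, kind: str, min_capacity: float = 0.0) -> List[Resource]:
        return [r for r in self.resources.values()
                if r.kind == kind and r.capacity >= min_capacity
                and r.assigned_to is None]

    def assign(self, resource_id: str, job_id: str) -> None:
        self.resources[resource_id].assigned_to = job_id

    def release(self, resource_id: str) -> None:
        self.resources[resource_id].assigned_to = None

# Example: enroll a volunteered CPU, discover it, and assign it to a job.
reg = ResourceRegistry()
reg.enroll(Resource("r1", owner="alice", kind="cpu", capacity=4.0))
candidates = reg.discover("cpu", min_capacity=2.0)
reg.assign(candidates[0].resource_id, job_id="job-42")
```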