
- Cloud Computing
- Foreword
- Preface
- Introduction
- Expected Audience
- Book Overview
- Part 1: Cloud Base
- Part 2: Cloud Seeding
- Part 3: Cloud Breaks
- Part 4: Cloud Feedback
- Contents
- 1.1 Introduction
- 1.1.1 Cloud Services and Enabling Technologies
- 1.2 Virtualization Technology
- 1.2.1 Virtual Machines
- 1.2.2 Virtualization Platforms
- 1.2.3 Virtual Infrastructure Management
- 1.2.4 Cloud Infrastructure Manager
- 1.3 The MapReduce System
- 1.3.1 Hadoop MapReduce Overview
- 1.4 Web Services
- 1.4.1 RPC (Remote Procedure Call)
- 1.4.2 SOA (Service-Oriented Architecture)
- 1.4.3 REST (Representational State Transfer)
- 1.4.4 Mashup
- 1.4.5 Web Services in Practice
- 1.5 Conclusions
- References
- 2.1 Introduction
- 2.2 Background and Related Work
- 2.3 Taxonomy of Cloud Computing
- 2.3.1 Cloud Architecture
- 2.3.1.1 Services and Modes of Cloud Computing
- Software-as-a-Service (SaaS)
- Platform-as-a-Service (PaaS)
- Hardware-as-a-Service (HaaS)
- Infrastructure-as-a-Service (IaaS)
- 2.3.2 Virtualization Management
- 2.3.3 Core Services
- 2.3.3.1 Discovery and Replication
- 2.3.3.2 Load Balancing
- 2.3.3.3 Resource Management
- 2.3.4 Data Governance
- 2.3.4.1 Interoperability
- 2.3.4.2 Data Migration
- 2.3.5 Management Services
- 2.3.5.1 Deployment and Configuration
- 2.3.5.2 Monitoring and Reporting
- 2.3.5.3 Service-Level Agreements (SLAs) Management
- 2.3.5.4 Metering and Billing
- 2.3.5.5 Provisioning
- 2.3.6 Security
- 2.3.6.1 Encryption/Decryption
- 2.3.6.2 Privacy and Federated Identity
- 2.3.6.3 Authorization and Authentication
- 2.3.7 Fault Tolerance
- 2.4 Classification and Comparison between Cloud Computing Ecosystems
- 2.5 Findings
- 2.5.2 Cloud Computing PaaS and SaaS Provider
- 2.5.3 Open Source Based Cloud Computing Services
- 2.6 Comments on Issues and Opportunities
- 2.7 Conclusions
- References
- 3.1 Introduction
- 3.2 Scientific Workflows and e-Science
- 3.2.1 Scientific Workflows
- 3.2.2 Scientific Workflow Management Systems
- 3.2.3 Important Aspects of In Silico Experiments
- 3.3 A Taxonomy for Cloud Computing
- 3.3.1 Business Model
- 3.3.2 Privacy
- 3.3.3 Pricing
- 3.3.4 Architecture
- 3.3.5 Technology Infrastructure
- 3.3.6 Access
- 3.3.7 Standards
- 3.3.8 Orientation
- 3.5 Taxonomies for Cloud Computing
- 3.6 Conclusions and Final Remarks
- References
- 4.1 Introduction
- 4.2 Cloud and Grid: A Comparison
- 4.2.1 A Retrospective View
- 4.2.2 Comparison from the Viewpoint of System
- 4.2.3 Comparison from the Viewpoint of Users
- 4.2.4 A Summary
- 4.3 Examining Cloud Computing from the CSCW Perspective
- 4.3.1 CSCW Findings
- 4.3.2 The Anatomy of Cloud Computing
- 4.3.2.1 Security and Privacy
- 4.3.2.2 Data and/or Vendor Lock-In
- 4.3.2.3 Service Availability/Reliability
- 4.4 Conclusions
- References
- 5.1 Overview – Cloud Standards – What and Why?
- 5.2 Deep Dive: Interoperability Standards
- 5.2.1 Purpose, Expectations and Challenges
- 5.2.2 Initiatives – Focus, Sponsors and Status
- 5.2.3 Market Adoption
- 5.2.4 Gaps/Areas of Improvement
- 5.3 Deep Dive: Security Standards
- 5.3.1 Purpose, Expectations and Challenges
- 5.3.2 Initiatives – Focus, Sponsors and Status
- 5.3.3 Market Adoption
- 5.3.4 Gaps/Areas of Improvement
- 5.4 Deep Dive: Portability Standards
- 5.4.1 Purpose, Expectations and Challenges
- 5.4.2 Initiatives – Focus, Sponsors and Status
- 5.4.3 Market Adoption
- 5.4.4 Gaps/Areas of Improvement
- 5.5.1 Purpose, Expectations and Challenges
- 5.5.2 Initiatives – Focus, Sponsors and Status
- 5.5.3 Market Adoption
- 5.5.4 Gaps/Areas of Improvement
- 5.6 Deep Dive: Other Key Standards
- 5.6.1 Initiatives – Focus, Sponsors and Status
- 5.7 Closing Notes
- References
- 6.1 Introduction and Motivation
- 6.2 Cloud@Home Overview
- 6.2.1 Issues, Challenges, and Open Problems
- 6.2.2 Basic Architecture
- 6.2.2.1 Software Environment
- 6.2.2.2 Software Infrastructure
- 6.2.2.3 Software Kernel
- 6.2.2.4 Firmware/Hardware
- 6.2.3 Application Scenarios
- 6.3 Cloud@Home Core Structure
- 6.3.1 Management Subsystem
- 6.3.2 Resource Subsystem
- 6.4 Conclusions
- References
- 7.1 Introduction
- 7.2 MapReduce
- 7.3 P2P-MapReduce
- 7.3.1 Architecture
- 7.3.2 Implementation
- 7.3.2.1 Basic Mechanisms
- Resource Discovery
- Network Maintenance
- Job Submission and Failure Recovery
- 7.3.2.2 State Diagram and Software Modules
- 7.3.3 Evaluation
- 7.4 Conclusions
- References
- 8.1 Introduction
- 8.2 The Cloud Evolution
- 8.3 Improved Network Support for Cloud Computing
- 8.3.1 Why the Internet is Not Enough?
- 8.3.2 Transparent Optical Networks for Cloud Applications: The Dedicated Bandwidth Paradigm
- 8.4 Architecture and Implementation Details
- 8.4.1 Traffic Management and Control Plane Facilities
- 8.4.2 Service Plane and Interfaces
- 8.4.2.1 Providing Network Services to Cloud-Computing Infrastructures
- 8.4.2.2 The Cloud Operating System–Network Interface
- 8.5.1 The Prototype Details
- 8.5.1.1 The Underlying Network Infrastructure
- 8.5.1.2 The Prototype Cloud Network Control Logic and its Services
- 8.5.2 Performance Evaluation and Results Discussion
- 8.6 Related Work
- 8.7 Conclusions
- References
- 9.1 Introduction
- 9.2 Overview of YML
- 9.3 Design and Implementation of YML-PC
- 9.3.1 Concept Stack of Cloud Platform
- 9.3.2 Design of YML-PC
- 9.3.3 Core Design and Implementation of YML-PC
- 9.4 Primary Experiments on YML-PC
- 9.4.1 YML-PC Can Be Scaled Up Very Easily
- 9.4.2 Data Persistence in YML-PC
- 9.4.3 Schedule Mechanism in YML-PC
- 9.5 Conclusion and Future Work
- References
- 10.1 Introduction
- 10.2 Related Work
- 10.2.1 General View of Cloud Computing Frameworks
- 10.2.2 Cloud Computing Middleware
- 10.3 Deploying Applications in the Cloud
- 10.3.1 Benchmarking the Cloud
- 10.3.2 The ProActive GCM Deployment
- 10.3.3 Technical Solutions for Deployment over Heterogeneous Infrastructures
- 10.3.3.1 Virtual Private Network (VPN)
- 10.3.3.2 Amazon Virtual Private Cloud (VPC)
- 10.3.3.3 Message Forwarding and Tunneling
- 10.3.4 Conclusion and Motivation for Mixing
- 10.4 Moving HPC Applications from Grids to Clouds
- 10.4.1 HPC on Heterogeneous Multi-Domain Platforms
- 10.4.2 The Hierarchical SPMD Concept and Multi-level Partitioning of Numerical Meshes
- 10.4.3 The GCM/ProActive-Based Lightweight Framework
- 10.4.4 Performance Evaluation
- 10.5 Dynamic Mixing of Clusters, Grids, and Clouds
- 10.5.1 The ProActive Resource Manager
- 10.5.2 Cloud Bursting: Managing Spike Demand
- 10.5.3 Cloud Seeding: Dealing with Heterogeneous Hardware and Private Data
- 10.6 Conclusion
- References
- 11.1 Introduction
- 11.2 Background
- 11.2.1 ASKALON
- 11.2.2 Cloud Computing
- 11.3 Resource Management Architecture
- 11.3.1 Cloud Management
- 11.3.2 Image Catalog
- 11.3.3 Security
- 11.4 Evaluation
- 11.5 Related Work
- 11.6 Conclusions and Future Work
- References
- 12.1 Introduction
- 12.2 Layered Peer-to-Peer Cloud Provisioning Architecture
- 12.4.1 Distributed Hash Tables
- 12.4.2 Designing Complex Services over DHTs
- 12.5 Cloud Peer Software Fabric: Design and Implementation
- 12.5.1 Overlay Construction
- 12.5.2 Multidimensional Query Indexing
- 12.5.3 Multidimensional Query Routing
- 12.6 Experiments and Evaluation
- 12.6.1 Cloud Peer Details
- 12.6.3 Test Application
- 12.6.4 Deployment of Test Services on Amazon EC2 Platform
- 12.7 Results and Discussions
- 12.8 Conclusions and Path Forward
- References
- 13.1 Introduction
- 13.2 High-Throughput Science with the Nimrod Tools
- 13.2.1 The Nimrod Tool Family
- 13.2.2 Nimrod and the Grid
- 13.2.3 Scheduling in Nimrod
- 13.3 Extensions to Support Amazon’s Elastic Compute Cloud
- 13.3.1 The Nimrod Architecture
- 13.3.2 The EC2 Actuator
- 13.3.3 Additions to the Schedulers
- 13.4.1 Introduction and Background
- 13.4.2 Computational Requirements
- 13.4.3 The Experiment
- 13.4.4 Computational and Economic Results
- 13.4.5 Scientific Results
- 13.5 Conclusions
- References
- 14.1 Using the Cloud
- 14.1.1 Overview
- 14.1.2 Background
- 14.1.3 Requirements and Obligations
- 14.1.3.1 Regional Laws
- 14.1.3.2 Industry Regulations
- 14.2 Cloud Compliance
- 14.2.1 Information Security Organization
- 14.2.2 Data Classification
- 14.2.2.1 Classifying Data and Systems
- 14.2.2.2 Specific Type of Data of Concern
- 14.2.2.3 Labeling
- 14.2.3 Access Control and Connectivity
- 14.2.3.1 Authentication and Authorization
- 14.2.3.2 Accounting and Auditing
- 14.2.3.3 Encrypting Data in Motion
- 14.2.3.4 Encrypting Data at Rest
- 14.2.4 Risk Assessments
- 14.2.4.1 Threat and Risk Assessments
- 14.2.4.2 Business Impact Assessments
- 14.2.4.3 Privacy Impact Assessments
- 14.2.5 Due Diligence and Provider Contract Requirements
- 14.2.5.1 ISO Certification
- 14.2.5.2 SAS 70 Type II
- 14.2.5.3 PCI PA DSS or Service Provider
- 14.2.5.4 Portability and Interoperability
- 14.2.5.5 Right to Audit
- 14.2.5.6 Service Level Agreements
- 14.2.6 Other Considerations
- 14.2.6.1 Disaster Recovery/Business Continuity
- 14.2.6.2 Governance Structure
- 14.2.6.3 Incident Response Plan
- 14.3 Conclusion
- Bibliography
- 15.1.1 Location of Cloud Data and Applicable Laws
- 15.1.2 Data Concerns Within a European Context
- 15.1.3 Government Data
- 15.1.4 Trust
- 15.1.5 Interoperability and Standardization in Cloud Computing
- 15.1.6 Open Grid Forum’s (OGF) Production Grid Interoperability Working Group (PGI-WG) Charter
- 15.1.7.1 What will OCCI Provide?
- 15.1.7.2 Cloud Data Management Interface (CDMI)
- 15.1.7.3 How it Works
- 15.1.8 SDOs and their Involvement with Clouds
- 15.1.10 A Microsoft Cloud Interoperability Scenario
- 15.1.11 Opportunities for Public Authorities
- 15.1.12 Future Market Drivers and Challenges
- 15.1.13 Priorities Moving Forward
- 15.2 Conclusions
- References
- 16.1 Introduction
- 16.2 Cloud Computing (‘The Cloud’)
- 16.3 Understanding Risks to Cloud Computing
- 16.3.1 Privacy Issues
- 16.3.2 Data Ownership and Content Disclosure Issues
- 16.3.3 Data Confidentiality
- 16.3.4 Data Location
- 16.3.5 Control Issues
- 16.3.6 Regulatory and Legislative Compliance
- 16.3.7 Forensic Evidence Issues
- 16.3.8 Auditing Issues
- 16.3.9 Business Continuity and Disaster Recovery Issues
- 16.3.10 Trust Issues
- 16.3.11 Security Policy Issues
- 16.3.12 Emerging Threats to Cloud Computing
- 16.4 Cloud Security Relationship Framework
- 16.4.1 Security Requirements in the Clouds
- 16.5 Conclusion
- References
- 17.1 Introduction
- 17.1.1 What Is Security?
- 17.2 ISO 27002 Gap Analyses
- 17.2.1 Asset Management
- 17.2.2 Communications and Operations Management
- 17.2.4 Information Security Incident Management
- 17.2.5 Compliance
- 17.3 Security Recommendations
- 17.4 Case Studies
- 17.4.1 Private Cloud: Fortune 100 Company
- 17.4.2 Public Cloud: Amazon.com
- 17.5 Summary and Conclusion
- References
- 18.1 Introduction
- 18.2 Decoupling Policy from Applications
- 18.2.1 Overlap of Concerns Between the PEP and PDP
- 18.2.2 Patterns for Binding PEPs to Services
- 18.2.3 Agents
- 18.2.4 Intermediaries
- 18.3 PEP Deployment Patterns in the Cloud
- 18.3.1 Software-as-a-Service Deployment
- 18.3.2 Platform-as-a-Service Deployment
- 18.3.3 Infrastructure-as-a-Service Deployment
- 18.3.4 Alternative Approaches to IaaS Policy Enforcement
- 18.3.5 Basic Web Application Security
- 18.3.6 VPN-Based Solutions
- 18.4 Challenges to Deploying PEPs in the Cloud
- 18.4.1 Performance Challenges in the Cloud
- 18.4.2 Strategies for Fault Tolerance
- 18.4.3 Strategies for Scalability
- 18.4.4 Clustering
- 18.4.5 Acceleration Strategies
- 18.4.5.1 Accelerating Message Processing
- 18.4.5.2 Acceleration of Cryptographic Operations
- 18.4.6 Transport Content Coding
- 18.4.7 Security Challenges in the Cloud
- 18.4.9 Binding PEPs and Applications
- 18.4.9.1 Intermediary Isolation
- 18.4.9.2 The Protected Application Stack
- 18.4.10 Authentication and Authorization
- 18.4.11 Clock Synchronization
- 18.4.12 Management Challenges in the Cloud
- 18.4.13 Audit, Logging, and Metrics
- 18.4.14 Repositories
- 18.4.15 Provisioning and Distribution
- 18.4.16 Policy Synchronization and Views
- 18.5 Conclusion
- References
- 19.1 Introduction and Background
- 19.2 A Media Service Cloud for Traditional Broadcasting
- 19.2.1 Gridcast the PRISM Cloud 0.12
- 19.3 An On-demand Digital Media Cloud
- 19.4 PRISM Cloud Implementation
- 19.4.1 Cloud Resources
- 19.4.2 Cloud Service Deployment and Management
- 19.5 The PRISM Deployment
- 19.6 Summary
- 19.7 Content Note
- References
- 20.1 Cloud Computing Reference Model
- 20.2 Cloud Economics
- 20.2.1 Economic Context
- 20.2.2 Economic Benefits
- 20.2.3 Economic Costs
- 20.2.5 The Economics of Green Clouds
- 20.3 Quality of Experience in the Cloud
- 20.4 Monetization Models in the Cloud
- 20.5 Charging in the Cloud
- 20.5.1 Existing Models of Charging
- 20.5.1.1 On-Demand IaaS Instances
- 20.5.1.2 Reserved IaaS Instances
- 20.5.1.3 PaaS Charging
- 20.5.1.4 Cloud Vendor Pricing Model
- 20.5.1.5 Interprovider Charging
- 20.6 Taxation in the Cloud
- References
- 21.1 Introduction
- 21.2 Background
- 21.3 Experiment
- 21.3.1 Target Application: Value at Risk
- 21.3.2 Target Systems
- 21.3.2.1 Condor
- 21.3.2.2 Amazon EC2
- 21.3.2.3 Eucalyptus
- 21.3.3 Results
- 21.3.4 Job Completion
- 21.3.5 Cost
- 21.4 Conclusions and Future Work
- References
- Index
3.5 Taxonomies for Cloud Computing
There are several proposals in the literature for cloud computing taxonomies. The existing taxonomies focus mostly on commercial aspects (e.g., the business model) and fall short of describing the domain along dimensions that matter for e-Science, such as standards and privacy levels. In addition, cloud computing providers often adopt their own specialized taxonomy to explain their approach, especially when they need to distinguish themselves from competitors. This section reviews four taxonomies that have already been developed for the cloud computing domain.
Youseff et al. [34] propose a unified ontology for cloud computing. It summarizes cloud computing components, classifies them, and describes their relationships. Even though this work is a step forward and highlights many technical challenges involved in building cloud components, it is not a full ontology but rather a taxonomy that partially covers the cloud computing domain. In fact, it merely classifies cloud computing components into five main layers. In addition, it takes only the business model into account (classifying cloud computing as software as a service, hardware as a service, and so on). Many other aspects are needed to classify cloud computing environments, particularly for e-Science, such as pricing and access methods.
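For the reader's convenience, the five layers mentioned above can be listed as a simple enumeration. This is only an illustrative sketch: the layer names are recalled from [34] rather than taken from this chapter, so they should be checked against that paper, and the identifiers are ours.

```python
from enum import Enum

class CloudLayer(Enum):
    """Five-layer view of cloud components, paraphrased from Youseff et al. [34]."""
    CLOUD_APPLICATION = 1               # SaaS-level applications
    CLOUD_SOFTWARE_ENVIRONMENT = 2      # PaaS-level programming environments
    CLOUD_SOFTWARE_INFRASTRUCTURE = 3   # computational, storage, and communication resources
    SOFTWARE_KERNEL = 4                 # OS-level and middleware kernel
    FIRMWARE_HARDWARE = 5               # HaaS-level physical resources
```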
Leavitt [13] presents the overall cloud scenario with its advantages and disadvantages, explains the adoption of clouds by companies around the world, and classifies cloud computing environments into four types that are equivalent to the business models presented in this chapter. However, it proposes a type called "general services" that covers databases and storage provided as a service, unlike our taxonomy, which introduces a dedicated type named DaaS for this business model. Leavitt's classification may be too generic, since it groups into a single class (general services) many types that are important for e-Science. Services with different purposes end up classified in the same way, which may not be desirable.
Laird [12] classifies cloud environments into a taxonomy composed of four main classes: Infrastructure, Platform, Service, and Applications. For each class, it details some aspects and lists cloud environments that fit the classification. Many of the classes used in that work are present in our taxonomy. However, it does not focus on e-Science, and many important classes are missing. Because Laird [12] targets commercial environments, some classifications are absent, such as HPC support; since HPC support is not a fundamental requirement for the commercial applications that run in clouds, it was simply not considered there.
The United States National Institute of Standards and Technology (NIST) recently provided definitions for cloud computing through an implicit taxonomy [18]. However, unlike the taxonomy presented in this chapter, the NIST taxonomy focuses on the business model and does not describe the domain along other dimensions such as standards and privacy levels.
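To make the contrast concrete, the sketch below shows one way the eight dimensions of the proposed taxonomy (Sects. 3.3.1–3.3.8: business model, privacy, pricing, architecture, technology infrastructure, access, standards, and orientation) could be encoded so that an environment can be matched against the requirements of an experiment. It is only an illustrative sketch in Python: the class and function names and all concrete values are ours, not part of the chapter's controlled vocabulary.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class CloudEnvironmentProfile:
    """One record per cloud environment, with a field per taxonomy dimension."""
    name: str
    business_model: str             # Sect. 3.3.1, e.g. SaaS, PaaS, IaaS, DaaS
    privacy: str                    # Sect. 3.3.2
    pricing: str                    # Sect. 3.3.3
    architecture: str               # Sect. 3.3.4
    technology_infrastructure: str  # Sect. 3.3.5
    access: str                     # Sect. 3.3.6
    standards: str                  # Sect. 3.3.7
    orientation: str                # Sect. 3.3.8

def meets_requirements(profile: CloudEnvironmentProfile, required: Dict[str, str]) -> bool:
    """True if the environment matches every dimension value the experiment requires."""
    return all(getattr(profile, dimension) == value for dimension, value in required.items())

# Hypothetical use: keep only environments offering an IaaS business model and a private deployment.
candidates = [
    CloudEnvironmentProfile("provider-A", "IaaS", "private", "pay-per-use",
                            "virtualized", "cluster", "API", "OVF", "e-Science"),
    CloudEnvironmentProfile("provider-B", "SaaS", "public", "subscription",
                            "multi-tenant", "data center", "portal", "proprietary", "business"),
]
suitable = [p for p in candidates
            if meets_requirements(p, {"business_model": "IaaS", "privacy": "private"})]
```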
3.6 Conclusions and Final Remarks
In this chapter, we have introduced a taxonomy for cloud computing from an e-Science perspective. We believe that it will be useful to the scientific community for evaluating and comparing different cloud environments. By classifying environments with the proposed taxonomy, scientists can assess which environments meet their needs for executing scientific experiments in clouds. Unlike the existing taxonomies, this one takes a broad view of cloud computing, organized around the aspects that matter for scientific experiments, and aims to capture the major properties of the domain.
This chapter also highlights that, despite the great interest in the topic, cloud computing is still a wide-open field. New solutions are available and many others are being announced, which makes the field very fertile but hard to understand and classify. It is essential that scientists be able to choose the best cloud environment for their experiments. The taxonomy and its common vocabulary may help scientists identify the common characteristics of existing environments and choose the most suitable one.
Acknowledgments The authors thank CNPq and CAPES for funding this research.
References
1. Altintas I, Berkley C, Jaeger E, Jones M, Ludascher B, Mock S (2004) Kepler: an extensible system for design and execution of scientific workflows. In: 16th SSDBM, Santorini, Greece, pp 423–424
2. Asosheh A, Danesh MH (2008) Comparison of OS level and hypervisor server virtualization. In: Proceedings of the 8th conference on systems theory and scientific computation, Rhodes, Greece, pp 241–246
3. Atom (2010) AtomEnabled.org – Atom Publishing Protocol. http://www.atomenabled.org/. Accessed 11 Jan 2010
4. Bruno D, Richmond H (2003) The truth about taxonomies. Inform Manage J 37:44–46
5. Callahan SP, Freire J, Santos E, Scheidegger CE, Silva CT, Vo HT (2006) VisTrails: visualization meets data management. In: Proceedings of the 2006 ACM SIGMOD, Chicago, IL, pp 745–747
6. Foster I, Kesselman C (2004) The grid: blueprint for a new computing infrastructure. Morgan Kaufmann, Los Altos, CA
7. Foster I, Zhao Y, Raicu I, Lu S (2008) Cloud computing and grid computing 360-degree compared. In: Grid computing environments workshop (GCE ’08), Austin, TX, pp 1–10
8. Freire J, Koop D, Santos E, Silva CT (2008) Provenance for computational tasks: a survey. Comput Sci Eng 10(3):11–21
9. Hoffa C, Mehta G, Freeman T, Deelman E, Keahey K, Berriman B, Good J (2008) On the use of cloud computing for scientific workflows. In: SWBES 2008, Indianapolis, IN
10. Jensen M, Schwenk J, Gruschka N, Iacono LL (2009) On technical security issues in cloud computing. In: Proceedings of the 2009 IEEE international conference on cloud computing, Bangalore, India, pp 109–116
11. JSON (2010) JSON interchange format. http://json.org/. Accessed 11 Jan 2010
12. Laird P (2009) Cloud computing taxonomy. http://peterlaird.blogspot.com/2009/05/cloud-computing-taxonomy-at-interop-las.html. Accessed 11 Jan 2010
13. Leavitt N (2009) Is cloud computing really ready for prime time? Computer 42(1):15–20
14. Wang L, Tao J, Kunze M, Castellanos A, Kramer D, Karl W (2008) Scientific cloud computing: early definition and experience. In: Proceedings of HPCC ’08, IEEE, pp 825–830
15. Matsunaga A, Tsugawa M, Fortes J (2008) CloudBLAST: combining MapReduce and virtualization on distributed resources for bioinformatics applications. In: Proceedings of the fourth IEEE international conference on eScience (e-Science ’08), IEEE Computer Society, Washington, DC, pp 222–229
16. Mattoso M, Werner C, Travassos GH, Braganholo V, Murta L, Ogasawara E, Oliveira D, Cruz S, Martinho W (2010) Towards supporting large scale in silico experiments life cycle. Int J Bus Process Integr Manage (IJBPIM), 5(1):79–92
17. Mishra P, Chopra D, Moreh J, Philpott R (2003) Differences between OASIS Security Assertion Markup Language (SAML) V1.1 and V1.0. OASIS Draft, Technical Report sstc-saml-diff-1.1-draft-01
18. NIST (2009) NIST.gov – computer security division – computer security resource center, NIST cloud computing. http://csrc.nist.gov/groups/SNS/cloud-computing/index.html. Accessed 11 Jan 2010
19. OAuth (2010) OAuth – an open protocol to allow secure API authorization in a simple and standard method from desktop and web applications. http://oauth.net/. Accessed 11 Jan 2010
20. Oliveira D, Cunha L, Tomaz L, Pereira V, Mattoso M (2009) Using ontologies to support deep water oil exploration scientific workflows. In: IEEE international workshop on scientific workflows, Los Angeles, CA
21. OpenID (2010) OpenID Foundation website. http://openid.net/. Accessed 11 Jan 2010
22. OVF (2010) Open virtualization format (OVF) – virtual machines – virtualization. http://www.vmware.com/appliances/getting-started/learn/ovf.html. Accessed 11 Jan 2010
23. RSS (2010) RSS 2.0 specification (version 2.0.11). http://www.rssboard.org/rss-specification. Accessed 11 Jan 2010
24. Simmhan Y, Barga R, van Ingen C, Lazowska E, Szalay A (2008) On building scientific workflow systems for data management in the cloud. In: Proceedings of the fourth IEEE international conference on eScience (eScience ’08), IEEE Computer Society, Washington, DC, pp 434–435
25. Sotomayor B, Montero RS, Llorente IM, Foster I (2009) Virtual infrastructure management in private and hybrid clouds. IEEE Internet Comput 13(5):14–22
26. Taylor IJ, Deelman E, Gannon DB, Shields M (eds) (2007) Workflows for e-Science: scientific workflows for grids, 1st ed. Springer, London
27. Travassos GH, Barros MO (2003) Contributions of in virtuo and in silico experiments for the future of empirical studies in software engineering. In: Proceedings of the 2nd workshop on empirical software engineering: the future of empirical studies in software engineering, Fraunhofer IRB Verlag, Roman Castles, Italy
28. Vaquero LM, Rodero-Merino L, Caceres J, Lindner M (2009) A break in the clouds: towards a cloud definition. SIGCOMM Comput Commun Rev 39(1):50–55
29. W3C (2010) World Wide Web Consortium (W3C). http://www.w3.org/. Accessed 11 Jan 2010
30. Weske M, Vossen G, Medeiros CB (1996) Scientific workflow management: WASA architecture and applications, Universität Münster, Germany
31. WfMC I (2009) Binding, WfMC Standards, WFMC-TC-1023. http://www.wfmc.org/. Accessed 11 Jan 2010
32. XACML (2010) OASIS eXtensible access control markup language (XACML). http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml. Accessed 11 Jan 2010
33. XMPP (2010) XMPP standards foundation – XMPP standards. http://xmpp.org/. Accessed 11 Jan 2010
34. Youseff L, Butrico M, Da Silva D (2008) Toward a unified ontology of cloud computing. In: Grid computing environments workshop (GCE ’08), pp 1–10
35. Zhang H, Jiang G, Yoshihira K, Chen H, Saxena A (2009) Intelligent workload factoring for a hybrid cloud computing model. In: Proceedings of the 2009 Congress on services – I, IEEE Computer Society, Washington, DC, pp 701–708