Exp 5 ANEKA Cloud Platform Study
Part A
Aim: Study the working of ANEKA cloud platform.
Requirements: ECMA Runtime Environment, Database.
Outcome: Understanding of cloud computing with applied knowledge of information storage
management, operating systems, computer system architecture, and computer networks.
Topics:
Aneka is a Cloud Application Development Platform (CAP) for developing and running
compute- and data-intensive applications. As a platform, it provides users with both a runtime
environment for executing applications developed using any of the three supported
programming models, and a set of APIs and tools that allow you to build new applications
or run existing legacy code. The purpose of this document is to help you through the
process of installing and setting up an Aneka Cloud environment. It covers understanding
your existing infrastructure, the different deployment options, installing the Management
Studio, configuring Aneka Daemons and Containers, and finally running some of the samples
to test your environment.
A key component of the Aneka platform is the Aneka Management Studio, a portal for
managing your infrastructure and clouds. Administrators use the Aneka Management Studio to
define their infrastructure, deploy Aneka Daemons, and install and configure Aneka
Containers. The figure below shows a high-level representation of an Aneka Cloud,
composed of a Master Container that is responsible for scheduling jobs to Workers, and a group of
Worker Containers that execute the jobs. Each machine is typically configured with a single
instance of the Aneka Daemon and a single instance of the Aneka Container.
Instructions:
Create a case study on Aneka. Points to be included:
1. Introduction
2. Features
3. Architecture
4. Working
5. Benefits
Case Study:
Aneka cloud platform
Aneka is a product of Manjrasoft. It is used for developing, deploying, and managing cloud
applications, and it can be integrated with existing cloud technologies. It includes an extensible
set of APIs associated with programming models such as MapReduce. These APIs support
different types of cloud models: private, public, and hybrid clouds.
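Aneka's actual programming-model APIs are .NET-based; as a language-neutral illustration of the MapReduce model mentioned above, the sketch below shows the map, shuffle (group-by-key), and reduce phases for a word count. All function names here are hypothetical and are not part of the Aneka API.

```python
from collections import defaultdict

# Illustrative MapReduce sketch (not Aneka's real API): the platform runs
# map over input records, groups intermediate pairs by key, then runs
# reduce once per key.

def map_fn(line):
    # Emit a (word, 1) pair for each word in a line of text.
    return [(word, 1) for word in line.split()]

def reduce_fn(key, values):
    # Sum the counts emitted for a single word.
    return (key, sum(values))

def run_mapreduce(lines):
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):   # map phase
            groups[key].append(value)     # shuffle: group by key
    return dict(reduce_fn(k, v) for k, v in groups.items())  # reduce phase

print(run_mapreduce(["to be or not to be"]))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a real deployment, the map and reduce calls run on distributed Worker Containers rather than in a single loop; only the user-supplied functions stay the same.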
Aneka framework:
Aneka can be deployed on a network of computers, a multicore server, datacenters, virtual cloud
infrastructures, or a mixture of these. Its services fall into three groups:
1. Fabric services:
Fabric Services are the fundamental services of the Aneka Cloud and define the basic
infrastructure management features of the system.
2. Foundation services:
Foundation Services are related to the logical management of the distributed system built on
top of the infrastructure and provide supporting services for the execution of distributed
applications.
3. Application services:
Application Services manage the execution of applications and constitute a layer that
differentiates according to the specific programming model used for developing distributed
applications on top of Aneka.
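The layered service organization described above can be sketched as a container that hosts services grouped by layer. The class and service names below are purely illustrative and are not Aneka's actual types.

```python
# Illustrative sketch (not Aneka's real types): a container hosting
# services grouped into the three layers described above.

class Container:
    def __init__(self):
        self.services = {"fabric": [], "foundation": [], "application": []}

    def register(self, layer, service_name):
        # Fabric: infrastructure management; Foundation: logical management
        # of the distributed system; Application: execution of a specific
        # programming model.
        self.services[layer].append(service_name)

    def describe(self):
        return {layer: list(names) for layer, names in self.services.items()}

node = Container()
node.register("fabric", "HardwareProfiling")
node.register("foundation", "Membership")
node.register("application", "TaskScheduler")
print(node.describe())
```

The point of the split is that only the application layer changes with the programming model; fabric and foundation services stay the same on every node.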
Features of Aneka
The Aneka based computing cloud is a collection of physical and virtualized resources
connected through a network, which could be the Internet or a private intranet. Each of
these resources hosts an instance of the Aneka Container representing the runtime
environment in which the distributed applications are executed.
The container provides the basic management features of the single node and delegates all
other operations to the services that it hosts. In particular, we can identify fabric,
foundation, and execution services. Fabric services directly interact with the node through
the Platform Abstraction Layer (PAL) and perform hardware profiling and dynamic resource
provisioning. Foundation services identify the core system of the Aneka middleware; they
provide a set of basic features on top of which each of the Aneka containers can be
specialized to perform a specific set of tasks. Execution services directly deal with the
scheduling and execution of applications in the Cloud.
One of the key features of Aneka is its ability to provide different ways of expressing
distributed applications by offering different programming models; execution services are
mostly concerned with providing the middleware with an implementation for these models.
Additional services such as persistence and security are transversal to the entire stack of
services hosted by the Container. At the application level, a set of components and tools is
provided to: 1) simplify the development of applications (SDK); 2) port existing applications
to the Cloud; and 3) monitor and manage the Aneka Cloud.
Aneka Architecture
Aneka is a platform and a framework for developing distributed applications on the Cloud. It
harnesses the spare CPU cycles of a heterogeneous network of desktop PCs and servers, or data
centers on demand. Aneka provides developers with a rich set of APIs for transparently exploiting
such resources and expressing the business logic of applications using their preferred
programming abstractions. System administrators can leverage a collection of tools to monitor
and control the deployed infrastructure, which can be a public cloud available to anyone through
the Internet or a private cloud constituted by a set of nodes with restricted access.
A common deployment of Aneka is shown in the accompanying figure. An Aneka based Cloud is
constituted by a set of interconnected resources that are dynamically modified according to user
needs, either through resource virtualization or by harnessing the spare CPU cycles of desktop
machines. If the deployment is a private Cloud, all the resources are in-house, for example
within the enterprise. This deployment can be extended by adding publicly available resources
on demand or by interacting with other Aneka public clouds that provide computing resources
over the Internet.
Working
The range of tools and services offered by cloud providers plays an important role in integrating
workflow management systems (WfMSs) with clouds (see figure). Such services can facilitate the
deployment, scaling, execution, and monitoring of workflow systems. This section discusses some
of the tools and services offered by various service providers that can complement and support
WfMSs. A WfMS manages dynamic provisioning of compute and storage resources in the cloud
with the help of tools and APIs provided by service providers. Provisioning is required to
dynamically scale up or down according to application requirements. For instance, data-intensive
workflow applications may require large amounts of disk space for storage. A WfMS could
provision dynamic volumes of large capacity that could be shared across all VM instances
(similar to the snapshots and volumes provided by Amazon). Similarly, for compute-intensive
tasks in a workflow, a WfMS could provision specific instances that would help accelerate their
execution.
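The scale-up decision described above can be sketched as a simple capacity check: compare the workflow's declared needs against what is currently available and request the difference. All thresholds, sizes, and names below are hypothetical, not taken from any provider's API.

```python
# Hypothetical sketch of a WfMS provisioning decision: compare the
# workflow's resource needs against current capacity and request
# additional storage volumes or compute instances to cover the gap.

def provisioning_plan(required_gb, available_gb, pending_tasks, idle_instances,
                      volume_size_gb=100, tasks_per_instance=10):
    plan = {"volumes": 0, "instances": 0}
    if required_gb > available_gb:
        deficit = required_gb - available_gb
        # Round up to whole volumes (similar in spirit to EBS-style volumes).
        plan["volumes"] = -(-deficit // volume_size_gb)
    needed = -(-pending_tasks // tasks_per_instance)
    if needed > idle_instances:
        plan["instances"] = needed - idle_instances
    return plan

print(provisioning_plan(required_gb=350, available_gb=100,
                        pending_tasks=45, idle_instances=2))
# {'volumes': 3, 'instances': 3}
```

A real WfMS would feed such a plan into the provider's provisioning API and would also handle the scale-down path when resources become idle.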
A WfMS implements scheduling policies to assign tasks to resources based on applications’
objectives. This task-resource mapping is dependent on several factors: compute resource
capacity, application requirements, user’s QoS, and so forth. Based on these objectives, a WfMS
could also direct a VM provisioning system to consolidate data center loads by migrating VMs so
that it could make scheduling decisions based on locality of data and compute resources.
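A minimal sketch of such a task-resource mapping, assuming a simple model where each resource has a known speed: each task is greedily assigned to the resource that would finish it earliest. This is one illustrative policy, not the scheduling algorithm of any particular WfMS.

```python
# Hypothetical sketch of a scheduling policy: greedily assign each task
# (largest first) to the resource with the earliest projected finish time.

def schedule(tasks, resources):
    # tasks: {name: work units}; resources: {name: speed in units/sec}
    finish = {r: 0.0 for r in resources}   # projected finish time per resource
    assignment = {}
    for task, work in sorted(tasks.items(), key=lambda t: -t[1]):
        # Pick the resource that would complete this task soonest.
        best = min(resources, key=lambda r: finish[r] + work / resources[r])
        finish[best] += work / resources[best]
        assignment[task] = best
    return assignment

print(schedule({"t1": 40, "t2": 10, "t3": 30},
               {"fast": 2.0, "slow": 1.0}))
```

Real policies would also weigh data locality, QoS constraints, and cost, as the text notes, but the structure (objective function plus per-task assignment) stays the same.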
A persistence mechanism is often important in workflow management systems for managing
metadata such as available resources, job queues, job status, and user data, including large input
and output files. Technologies such as Amazon S3, Google's BigTable, and the Windows Azure
Storage Services can support most storage requirements for workflow systems, while also being
scalable, reliable, and secure. If large quantities of user data are being dealt with, such as the
large numbers of brain images used in functional magnetic resonance imaging (fMRI) studies [12],
transferring them online can be both expensive and time-consuming. In such cases, traditional
post can prove to be cheaper and faster. Amazon's AWS Import/Export is one such service that
aims to speed up data movement by transferring large amounts of data in portable storage
devices. The data are shipped to/from Amazon and offloaded into/from S3 buckets using
Amazon's high-speed internal network. The cost savings can be significant when transferring data
on the order of terabytes.
Most cloud providers also offer services and APIs for tracking resource usage and the costs
incurred. This can complement workflow systems that support budget-based scheduling by
utilizing real-time data on the resources used, the duration, and the expenditure. This information
can be used both for making scheduling decisions on subsequent jobs and for billing the user at
the completion of the workflow application.
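As a hedged sketch of budget-based scheduling using such usage data: before dispatching the next job, project its cost from the current spend and the provider's billing rate, and dispatch only if the budget still covers it. The function and parameter names are illustrative, not any provider's API.

```python
# Hypothetical sketch: use real-time usage data to decide whether the
# remaining budget still covers the next job before dispatching it.

def can_dispatch(spent, budget, rate_per_hour, est_hours):
    # spent/budget in currency units; rate_per_hour would come from the
    # provider's billing API in a real system (illustrative here).
    projected = spent + rate_per_hour * est_hours
    return projected <= budget

print(can_dispatch(spent=80.0, budget=100.0, rate_per_hour=4.0, est_hours=4))  # True
print(can_dispatch(spent=80.0, budget=100.0, rate_per_hour=4.0, est_hours=6))  # False
```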
Cloud services such as Google App Engine and Windows Azure provide platforms for building
scalable interactive Web applications. This makes it relatively easy to port the graphical
components of a workflow management system to such platforms while benefiting from their
inherent scalability and reduced administration. For instance, such components deployed on
Google App Engine can utilize the same scalable systems that drive Google applications, including
technologies such as BigTable and GFS.
Benefits
Reduced Costs
Any successful enterprise knows the significance of appraising, managing, and
optimizing its capital and operational expenditure to achieve cost economies. Aneka
adeptly leverages your current infrastructure assets and Cloud management tools with a
low-cost guarantee.
Improved Reliability
As one of its kind, Manjrasoft’s Aneka is the most comprehensive and mature path for
Cloud adoption based on the .NET technology. Aneka’s ability to design and create a solid
fault tolerant system infrastructure without having to build topology level knowledge
base into applications simplifies the application development and automatically manages
application load over Clouds, Grids, clusters, or desktops. This gives a newfound level of
resiliency with a guaranteed quality of service and effective metering and monitoring for
all the service.
Simplicity
Moving to a Cloud based model requires the software development team to assume
responsibility for delivering their application in a utility fashion, requiring them to integrate
tightly with salient software development approaches and focus on IT efficiency. Aneka
offers a flexible and robust API framework that cleanly handles .NET based
enterprise application management and development, with lightweight technology and
flexible application integration approaches. This enables software development teams to
be more productive by letting developers focus on business logic instead of being
stifled by technology barriers.
Seamless Scalability
Aneka helps enterprise customers to enrich their applications and services with support
for distributed and scalable runtime environments for multicore desktops, servers and a
network of computing systems that are presented as Clusters, Grid, and Clouds. Aneka
empowers the enterprise application stack to achieve end-to-end performance,
scalability and high availability thus meeting the service levels agreement and providing
the desired quality of service. This process is completely transparent to applications and
relies on dynamic provisioning multiple virtual and/or physical machines for accelerating
applications in a scalable manner from a single multi-core desktop computer to a large-
scale elastic Cloud computing infrastructure such as Amazon EC2.
All these features make Aneka a winning solution for enterprise customers in the Platform-as-a-
Service scenario. Other solutions exist in the PaaS market, most notably Google AppEngine and
Microsoft Windows Azure. While AppEngine is mostly concerned with providing a scalable
runtime environment for Java and Python web applications, Aneka aims to be more general and
to empower any kind of application that suffers from performance degradation and lack of
responsiveness under heavy load. Microsoft Azure is a giant in the market of Cloud services
development and provides a wide range of services for developing and deploying services on the
Cloud; it leverages the infrastructure provided by Microsoft to host these services and scale
them. Aneka provides a more flexible model for developing distributed applications and
integrates with external Clouds such as Amazon EC2 and GoGrid. Moreover, Aneka is middleware
that can be deployed on private infrastructure, thus maximizing the use of the existing local
infrastructure and allowing enterprises to comfortably scale to the Cloud when needed.
Observation & Learning: We studied the working of the ANEKA cloud platform.
Conclusion: We understood cloud computing with applied knowledge of information storage
management, operating systems, computer system architecture, and computer networks.
Questions:
1. What is the use of creating remote repository in Aneka?
Answer:
Remote Installation and Management: This service is created to install and manage Aneka
containers and user applications after provisioning virtual machines. The duties of this service
are downloading the executables and configuration files of Aneka and of user applications from
a remote repository, installing the container services and user applications on the remote
machines, stopping and starting services, and uninstalling container services and user
applications.
For installation, Aneka offers two approaches. The first consists of automatically
downloading the installer and configuration files from a repository to perform a complete
installation on each newly provisioned machine. Using this approach, the installation is totally
independent of the cloud provider. However, a more efficient deployment is achieved when
preconfigured images are used. The second option, therefore, is to use the Aneka installation
service to generate a base image for each cloud provider. Aneka will then use these images
when machines are provisioned, and the installation service is used only to configure
dynamic aspects such as IP addresses.
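The two installation approaches can be sketched as alternative step sequences; the step names below are illustrative descriptions, not actual Aneka commands.

```python
# Hypothetical sketch of the two installation approaches described above:
# full install from a repository (provider-independent) versus a
# preconfigured image plus per-machine dynamic configuration.

def install_plan(has_base_image, ip_address):
    steps = []
    if has_base_image:
        # Faster path: boot the prebuilt image, then configure only
        # dynamic aspects such as the machine's IP address.
        steps.append("boot preconfigured image")
        steps.append(f"configure dynamic settings (ip={ip_address})")
    else:
        # Provider-independent path: fetch everything from a repository.
        steps.append("download installers and configuration from repository")
        steps.append("install container services and user applications")
        steps.append(f"configure dynamic settings (ip={ip_address})")
    steps.append("start container services")
    return steps

print(install_plan(True, "10.0.0.5"))
```

The trade-off mirrors the text: the repository path works on any provider, while the image path is faster but requires maintaining one base image per provider.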