UNIT I
Lower IT costs: Cloud lets you offload some or most of the costs
and effort of purchasing, installing, configuring, and managing your
own on-premises infrastructure.
Improve agility and time-to-value: With cloud, your organization
can start using enterprise applications in minutes, instead of waiting
weeks or months for IT to respond to a request, purchase and
configure supporting hardware, and install software. Cloud also lets
you empower certain users—specifically developers and data
scientists—to help themselves to software and support
infrastructure.
Scale more easily and cost-effectively: Cloud provides elasticity—
instead of purchasing excess capacity that sits unused during slow
periods, you can scale capacity up and down in response to spikes
and dips in traffic. You can also take advantage of your cloud
provider’s global network to deploy your applications closer to users
around the world.
TYPES OF CLOUD COMPUTING
Public Cloud
Some public cloud examples include those offered by Amazon, Microsoft, or
Google. These companies provide both services and infrastructure, which
are shared by all customers. Public clouds typically have massive amounts
of available space, which translates into easy scalability. A public cloud is
often recommended for software development and collaborative projects.
Companies can design their applications to be portable, so that a project
that’s tested in the public cloud can be moved to the private cloud for
production. Most cloud providers package their computing resources as
part of a service. Public cloud examples range from access to a
completely virtualized infrastructure that provides little more than raw
processing power and storage (Infrastructure as a Service, or IaaS) to
specialized software programs that are easy to implement and use
(Software as a Service, or SaaS).
The great advantage of a public cloud is its versatility and “pay as you go”
structure that allows customers to provision more capacity on demand. Not
only does this make your system scalable, but it does so in a way that
doesn’t require a large capital expenditure. By contrast, in-house IT
growth carries significant costs, from the hardware and floor space
required to maintenance and staffing.
Private Cloud
Private clouds usually reside behind a firewall and are utilized by a single
organization. A completely on-premises cloud may be the preferred solution
for businesses with very tight regulatory requirements, though private
clouds implemented through a colocation provider are gaining in popularity.
Authorized users can access, utilize, and store data in the private cloud
from anywhere, just like they could with a public cloud. The difference is
that no one else can access or utilize those computing resources.
However, the benefits associated with a private cloud come at a cost. The
company that owns the cloud is responsible for both software and
infrastructure, making this a less economical model than the public cloud.
Further, private clouds lack the versatility of public clouds. They can only
be expanded by adding more hardware and storage capacity, making it
difficult to scale operations quickly, or frugally, should the business need
arise.
Hybrid Cloud
Hybrid clouds combine public clouds with private clouds. They are designed
to allow the two platforms to interact seamlessly, with data and applications
moving smoothly from one to the other. It is an ideal solution for a
business or organization that needs a little of both, usually depending
on its industry and size.
The primary advantage of a hybrid cloud model is its ability to provide the
scalable computing power of a public cloud with the security and control of
a private cloud. Data can be stored safely behind the firewalls and
encryption protocols of the private cloud, then moved securely into a public
cloud environment when needed. This is especially helpful in the age of big
data analytics, when industries like healthcare must adhere to strict data
privacy regulations while also using sophisticated algorithms powered by
artificial intelligence (AI) to derive actionable insights from huge masses of
unstructured data.
Because it combines the two cloud models, a hybrid cloud can be cost
effective, though the initial expenditure for the private cloud should be
considered. These costs can, in some ways, be recouped later, since
scalability and growth can be handled on the public cloud when necessary.
Community Cloud
Community clouds are a collaborative, multi-tenant platform used by
several distinct organizations to share the same applications. The users are
typically operating within the same industry or field and share common
concerns in terms of security, compliance, and performance.
As with the other models, scalability is a benefit, and at a cost that can be
shared across organizations. Further, because of common security needs,
organizations can be confident that they are compliant with industry
regulations and that it is in the best interest of their “digital”
neighbors to monitor this as well. Similarly, decision making about
changes to the systems is collaborative, helping ensure that decisions
are made in the best interests of the group.
Even with the shared space, the system remains highly flexible:
individual organizations can set their own access controls, and the
system can shift resources to meet an organization’s changing demands.
While all of these are strengths, unfortunately, they come with a downside
as well. The shared storage and bandwidth can create issues with
prioritization and performance as servers adjust to demands. And, because
the storage space is shared, data security can be a concern. For these
reasons, a community cloud is not practical for most businesses.
Some well-known cloud service providers are Amazon Web Services,
Microsoft, IBM, and Salesforce.com.
4. Rapid elasticity
Cloud capabilities can be provisioned and released on the user’s demand.
To the consumer, capabilities often appear unlimited and can be used in
any quantity at any time.
5. Measured service
Resource usage can be monitored, controlled, and reported. These reports
are available to both the cloud provider and the consumer.
On the basis of these measurements, cloud systems automatically control
and optimize resources according to the type of service: storage,
processing, bandwidth, etc.
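The metering idea can be sketched in a few lines of Python. This is a minimal illustration, not any real provider’s billing API; the class and resource names are invented: usage of each resource type is recorded, reported to both parties, and billed per unit consumed.

```python
class UsageMeter:
    """Records consumption per resource type (storage, processing, ...)."""

    def __init__(self, unit_prices):
        self.unit_prices = unit_prices         # e.g. {"storage_gb": 0.02}
        self.usage = {name: 0.0 for name in unit_prices}

    def record(self, resource, amount):
        # Called by the platform every time a resource is consumed.
        self.usage[resource] += amount

    def report(self):
        # The measured report, visible to both provider and consumer.
        return dict(self.usage)

    def bill(self):
        # Pay-per-use: cost = usage * unit price, summed over resources.
        return sum(self.usage[r] * p for r, p in self.unit_prices.items())

meter = UsageMeter({"storage_gb": 0.02, "cpu_hours": 0.10})
meter.record("storage_gb", 50)     # 50 GB stored
meter.record("cpu_hours", 12)      # 12 CPU-hours consumed
```

The same report drives both the consumer’s invoice and the provider’s automatic optimization decisions.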
Cloud computing is essentially the renting of computing services, an idea
that dates back to the 1950s. Five technologies played a vital role in
making cloud computing what it is today: distributed systems and their
peripherals, virtualization, Web 2.0, service orientation, and utility
computing.
Distributed Systems
A distributed system is a composition of multiple independent systems
that appear to users as a single entity. Its purpose is to share
resources and to use them effectively and efficiently. Distributed
systems possess characteristics such as scalability, concurrency,
continuous availability, heterogeneity, and independence of failures.
The main limitation of early systems was that all the machines had to be
present at the same geographical location. To overcome this, distributed
computing evolved into three further forms: mainframe computing, cluster
computing, and grid computing.
Virtualization
It was introduced nearly 40 years ago. Virtualization is the process of
creating a virtual layer over the hardware that allows a user to run
multiple instances simultaneously on the same hardware. It is a key
technology in cloud computing and the base on which major cloud services
such as Amazon EC2 and VMware vCloud are built. Hardware virtualization
is still one of the most common types of virtualization.
Web 2.0
It is the interface through which the cloud computing services interact
with clients. It is because of Web 2.0 that we have interactive and
dynamic web pages, and it also increases flexibility among web pages.
Popular examples include Google Maps, Facebook, and Twitter. Needless to
say, social media is made possible by this technology. It gained major
popularity in 2004.
Service orientation
It acts as a reference model for cloud computing. It supports low-cost,
flexible, and evolvable applications. Two important concepts were
introduced in this model: Quality of Service (QoS), which includes the
Service Level Agreement (SLA), and Software as a Service (SaaS).
Utility computing
It is a computing model in which services such as compute, storage, and
infrastructure are provisioned on a pay-per-use basis.
Parallel Computing
It is the simultaneous use of multiple processing elements to solve a
problem. The problem is broken down into instructions that are solved
concurrently, since every processing element assigned to the work runs
at the same time.
Advantages
1. It saves time and money, since many resources working together reduce
run time and cut potential costs.
2. Larger problems that are impractical to solve with serial computing
become tractable.
3. It can take advantage of non-local resources when local resources are
finite.
4. Serial computing ‘wastes’ potential computing power; parallel
computing makes better use of the hardware.
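The decomposition described above can be sketched in Python. This is a minimal illustration, not a production pattern: the problem (summing a large list) is split into chunks, the chunks are summed concurrently by a pool of workers, and the partial results are combined. Threads are used here for simplicity; for CPU-bound work in Python, processes would normally be used instead.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # Split the problem into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)   # chunks are summed concurrently
    return sum(partials)                   # combine the partial results
```

The result is identical to a serial `sum(data)`; only the work distribution changes.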
Types of Parallelism
1. Bit-level parallelism
This form of parallel computing is based on increasing the processor’s
word size. It reduces the number of instructions the system must execute
in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute
the sum of two 16-bit integers. It must first sum up the 8 lower-order
bits, then add the 8 higher-order bits, thus requiring two instructions to
perform the operation. A 16-bit processor can perform the operation
with just one instruction.
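The two-instruction example can be mirrored in Python. This is a sketch that simulates 8-bit arithmetic with masks; no real processor instructions are involved, and the function names are invented:

```python
def add16_on_8bit(a, b):
    """Add two 16-bit integers using only 8-bit operations (two steps)."""
    lo = (a & 0xFF) + (b & 0xFF)          # step 1: sum the 8 low-order bits
    carry = lo >> 8                       # carry out of the low byte
    hi = (a >> 8) + (b >> 8) + carry      # step 2: high bits plus carry
    return ((hi << 8) | (lo & 0xFF)) & 0xFFFF

def add16_native(a, b):
    """What a 16-bit processor does in a single instruction."""
    return (a + b) & 0xFFFF
```

Both functions produce the same 16-bit result; the first simply needs two dependent steps where the second needs one.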
2. Instruction-level parallelism
A simple processor can issue at most one instruction per clock cycle.
Instructions can, however, be re-ordered and grouped so that they execute
concurrently without affecting the result of the program. This is called
instruction-level parallelism.
3. Task Parallelism
Task parallelism decomposes a task into subtasks and allocates each
subtask to a processor for execution; the processors then execute the
subtasks concurrently.
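As a sketch of this decomposition (the function names are made up for illustration), two different subtasks of one job can be submitted to separate threads and joined at the end:

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(text):                    # subtask 1
    return len(text.split())

def count_lines(text):                    # subtask 2
    return text.count("\n") + 1

def analyse(text):
    # Decompose the job into two subtasks that run concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        words = pool.submit(count_words, text)
        lines = pool.submit(count_lines, text)
        return words.result(), lines.result()   # join the subtasks
```

Unlike data parallelism, the concurrent units here run different code on the same input.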
4. Data-level parallelism
Instructions from a single stream operate concurrently on several data
elements. It is limited by irregular data-manipulation patterns and by
memory bandwidth.
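The “one instruction stream, many data elements” shape can be sketched as follows, with pure Python standing in for SIMD hardware and a hypothetical lane width of 4 assumed:

```python
def simd_add(a, b, lane_width=4):
    """Element-wise add, processed lane_width elements 'at a time'."""
    out = []
    for i in range(0, len(a), lane_width):
        # One conceptual vector instruction handles a whole lane of data.
        out.extend(x + y for x, y in zip(a[i:i + lane_width],
                                         b[i:i + lane_width]))
    return out
```

Real SIMD units apply the lane’s operation in a single instruction, which is why irregular access patterns and memory bandwidth become the limiting factors.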
Distributed Computing
In distributed computing, multiple autonomous computers appear to the
user as a single system. There is no shared memory; the computers
communicate with each other through message passing, and a single task is
divided among the different computers.
When input arrives from a client, the master node divides the task into
simple jobs and sends them to the slave nodes. Once the slave nodes
finish their jobs, they send the results back to the master node, which
returns the combined result to the client.
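A minimal single-process simulation of this master/slave flow, with queues standing in for network message passing (the function names and chunking scheme are illustrative, not any real framework):

```python
from queue import Queue
from threading import Thread

def worker(jobs, results):
    """A slave node: receives jobs, computes, sends results back."""
    while True:
        job = jobs.get()
        if job is None:                      # shutdown message from master
            break
        idx, numbers = job
        results.put((idx, sum(numbers)))     # message the result back

def master(data, n_workers=3):
    """The master node: divides the task, collects and combines results."""
    jobs, results = Queue(), Queue()
    threads = [Thread(target=worker, args=(jobs, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    for idx, chunk in enumerate(chunks):
        jobs.put((idx, chunk))               # divide the task into jobs
    for _ in threads:
        jobs.put(None)                       # tell each worker to stop
    for t in threads:
        t.join()
    partials = dict(results.get() for _ in range(len(chunks)))
    return sum(partials.values())            # combine the partial answers
```

In a real distributed system the queues would be network sockets or a message broker, and each worker would be a separate machine.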
Grid computing: In grid computing, distributed systems are set up as a
network of computer systems; each system can belong to a different
administrative domain and can differ greatly in terms of hardware,
software, and network technology.
Different departments may run computers with different operating
systems; a control node is present to help these machines communicate
with each other and exchange the messages needed to get work done.
Elasticity in Cloud
Example:
Consider an online shopping site whose transaction workload increases
during a festive season such as Christmas. For this specific period, the
resources need to be scaled up. To handle this kind of situation, we can
use a cloud elasticity service rather than cloud scalability. As soon as
the season ends, the deployed resources can be released.
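A toy elasticity policy can make this concrete. No real cloud API is used here, and the capacity numbers and limits are invented for illustration: the desired instance count grows with demand and shrinks again when demand drops.

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=20):
    """Scale out when demand spikes, scale back in when it drops."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    # Clamp between a floor (always-on capacity) and a cost ceiling.
    return max(min_instances, min(max_instances, needed))

peak = desired_instances(1500)    # festive-season spike: more instances
quiet = desired_instances(120)    # after the season: back near the floor
```

An autoscaler would evaluate such a policy periodically and request or release instances so the running count tracks the desired count.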
Cloud Scalability:
Cloud scalability is used to handle a growing workload while maintaining
good performance and efficiency for software or applications. Scalability
is commonly used where a persistent deployment of resources is required
to handle the workload statically.
4. Web 2.0 is the term used to describe a variety of web sites and
applications that allow anyone to create and share online information or
material they have created. A key element of the technology is that it
allows people to create, share, collaborate, and communicate.