CC Unit1 Notes Compressed

The document outlines various computing paradigms including high-performance computing (HPC), parallel computing, and distributed computing, detailing their definitions, functionalities, and applications. It emphasizes the importance of HPC for processing large data sets and performing complex calculations at high speeds, while also discussing the benefits of parallel computing in terms of efficiency and cost-effectiveness. Additionally, it describes distributed computing as a method for multiple computers to collaborate on solving common problems, enhancing performance and scalability.


Unit 1

Syllabus
Computing Paradigms: High-performance computing, parallel computing, distributed computing,
cluster computing, grid computing, cloud computing, bio-computing, mobile computing, quantum
computing, optical computing, nano-computing

Introduction
The term paradigm conveys that there is a set of practices to be followed to accomplish a task. In the
domain of computing, many different standard practices are followed, shaped by inventions and
technological advancements. In this chapter, we look into the various computing paradigms, namely
high-performance computing, parallel computing, distributed computing, cluster computing, grid
computing, cloud computing, bio-computing, mobile computing, quantum computing, optical computing,
and nano-computing. As computing systems become faster and more capable, it is worth noting the
features of modern computing paradigms so that we can relate them to one another.

High performance computing

In high-performance computing systems, a pool of processors (processor machines or central processing
units [CPUs]) is connected (networked) with other resources like memory, storage, and input and output
devices, and the deployed software is able to run across the entire system of connected components.

High performance computing (HPC) is the ability to process data and perform complex calculations at
high speeds. To put it into perspective, a laptop or desktop with a 3 GHz processor can perform around
3 billion calculations per second. While that is much faster than any human can achieve, it pales in
comparison to HPC solutions that can perform quadrillions of calculations per second.

The need for high-performance computing (HPC)


In the modern world, groundbreaking discoveries and inventions can only happen with technology, data,
and advanced computing. As cutting-edge technologies like artificial intelligence (AI), machine
learning (ML) and the Internet of Things (IoT) evolve, they require huge amounts of data. They also need
high-performance computing, because HPC systems can perform quadrillions of calculations per second,
compared to regular laptops or desktops that can perform at most around 3 billion calculations per
second (with a 3 GHz processor).

HPC is specifically needed for these reasons:

 It paves the way for new innovations in science, technology, business and academia.
 It improves processing speeds, which can be critical for many kinds of computing
operations, applications and workloads.
 It helps lay the foundation for a reliable, fast IT infrastructure that can store, process and
analyze massive amounts of data for various applications.

For example:

 High-performance computing reduces the need for testing devices such as driverless cars in the real world.

How does HPC work?


An HPC workflow consists of three components:
 Compute
 Network
 Storage

Here are the major points:

 To build an efficient, high-performance computing architecture, the computational servers are
interconnected so that they can work on large or complex operations together in a cluster environment.
 In the cluster, software programs and algorithms run simultaneously across the servers to produce
the desired outcome.
 The cluster is then networked to the data storage to capture the output.
 To operate at the maximum level, each component in the cluster must keep pace with the others (in
the cluster, each component is referred to as a node).
 The storage component should be able to feed data to, and ingest data from, the computational
servers and various network sources.
 Each networking component should be able to deliver high-speed data transport between the
computational servers and the data storage devices.

HPC systems can run different types of workloads. Two popular types are parallel and tightly
coupled workloads.
 In parallel workloads, computational problems are divided into small, independent tasks that can
run in parallel at very high speeds. Often, these workloads don't communicate with each other.
Examples of such workloads include risk simulations, logistics simulations, contextual search
and molecular modeling.

 When workloads are divided into smaller tasks and communicate continuously with each other
as they perform their processing, they are said to be tightly coupled. This usually happens with
workloads across different nodes in a cluster. Some common examples of tightly coupled
workloads are automobile collision emulations, geospatial simulations, weather forecast
modeling and traffic management.
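Below is a minimal Python sketch of a parallel (independent-task) workload, using the standard
concurrent.futures module; the risk_simulation function is a hypothetical stand-in for a real job, and a
production HPC cluster would submit such tasks through a scheduler rather than a local process pool.
Because the tasks never communicate, they can spread across as many cores or nodes as are available;
a tightly coupled workload would instead exchange messages between tasks while they run.

from concurrent.futures import ProcessPoolExecutor
import random

def risk_simulation(seed: int) -> float:
    """One independent task: a toy Monte Carlo estimate; it never talks to other tasks."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) for _ in range(100_000)) / 100_000

if __name__ == "__main__":
    # Independent tasks are simply mapped over a pool of workers.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(risk_simulation, range(8)))
    print(results)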

HPC use cases

Deployed on premises, at the edge, or in the cloud, HPC solutions are used for a variety of purposes
across multiple industries. Examples include:

 Research labs. HPC is used to help scientists find sources of renewable energy, understand
the evolution of our universe, predict and track storms, and create new materials.
 Media and entertainment. HPC is used to edit feature films, render mind-blowing special
effects, and stream live events around the world.
 Oil and gas. HPC is used to more accurately identify where to drill for new wells and to
help boost production from existing wells.
 Artificial intelligence and machine learning. HPC is used to detect credit card fraud,
provide self-guided technical support, teach self-driving vehicles, and improve cancer
screening techniques.
 Financial services. HPC is used to track real-time stock trends and automate trading.
 Manufacturing. To design, manufacture and test new products using simulations.
 Healthcare. To research and develop new vaccines, drugs and treatments for diseases;
improve screening techniques; and make more accurate patient diagnoses.
 Aerospace. For personnel training and to create critical simulations for airplane testing.
 Meteorology. To predict and track storms and other unusual weather patterns.
A supercomputer is one of the best-known examples of HPC, in which one large computer is made up of
many computers and processors that work together to achieve parallel processing and high performance.

Benefits of HPC
HPC helps overcome numerous computational barriers that conventional PCs and processors typically
face. The benefits of HPC are many and include the following.
High speeds
HPC is mainly about lightning-fast processing, which means HPC systems can perform massive
amounts of calculations very quickly. In comparison, regular processors and computing systems would
take longer -- days, weeks or even months -- to perform these same calculations.

HPC systems typically use the latest CPUs and GPUs, as well as low-latency networking
fabrics and block storage devices, to improve processing speeds and computing performance.

Lower cost
Because an HPC system can process faster, applications can run faster and yield answers quickly, saving
time or money. Moreover, many such systems are available in "pay as you go" modes and can scale up
or down as needed, further improving their cost-effectiveness.

Reduced need for physical testing


Many modern-day applications require a lot of physical testing before they can be released for public or
commercial use. Self-driving vehicles are one example. Application researchers, developers and testers
can create powerful simulations using HPC systems, thus minimizing or even eliminating the need for
expensive or repeated physical tests.

Parallel Computing

What is Serial Computing?

 Traditionally, software has been written for serial computation, to be run on a single computer
having a single Central Processing Unit (CPU).
 A problem is broken into a discrete series of instructions.
 Instructions are executed one after another.
 Only one instruction may execute at any moment in time.

What is Parallel Computing?

 Parallel computing is a form of computation in which many calculations are carried out
simultaneously.
 In the simplest sense, it uses multiple compute resources (multiple CPUs) to solve a computational
problem.
 A problem is broken into discrete parts that can be solved concurrently.
 Each part is further broken down into a series of instructions.
 Instructions from each part execute simultaneously on different CPUs.

Why use Parallel Computing

Main reasons:

 The compute resources can include a single computer with multiple processors.
 Save time and money.
 Solve larger problems, e.g., web search engines/databases processing millions of transactions per
second.
 Use of non-local resources.

Limitations of Serial Computing

 Limits to serial computing - both physical and practical reasons pose significant constraints to
simply building ever faster serial computers.
 Transmission speeds - the speed of a serial computer is directly dependent upon how fast data
can move through hardware. Absolute limits are
the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9
cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
 Limits to miniaturization - processor technology is allowing an increasing number of transistors
to be placed on a chip. However, even with molecular or atomic-level components, a limit will
be reached on how small components can be.
 Economic limitations - it is increasingly expensive to make a single processor faster. Using a
larger number of moderately fast commodity processors to achieve the same (or better)
performance is less expensive.

Flynn's Classical Taxonomy


 There are different ways to classify parallel computers. One of the more widely used
classifications, in use since 1966, is called Flynn's Taxonomy.
 Flynn's taxonomy distinguishes multi-processor computer architectures according to how they
can be classified along the two independent dimensions of Instruction and Data. Each of these
dimensions can have only one of two possible states: Single or Multiple.

Single Instruction, Single Data (SISD)

 A serial (non-parallel) computer


 Single instruction: only one instruction stream is being acted on by the CPU during any one clock
cycle.
 Single data: only one data stream is being used as input during any one clock cycle.
 Deterministic execution.
 This is the oldest and, until recently, the most prevalent form of computer.
 Examples: most PCs, single-CPU workstations and mainframes.

Single Instruction, Multiple Data (SIMD)

 A type of parallel computer


 Single instruction: all processing units execute the same instruction at any given clock cycle.
 Multiple data: each processing unit can operate on a different data element.
 This type of machine typically has an instruction dispatcher, a very high-bandwidth internal
network, and a very large array of very small-capacity instruction units.
 Best suited for specialized problems characterized by a high degree of regularity, such as image
processing.
 Synchronous (lockstep) and deterministic execution.
 Two varieties: Processor Arrays and Vector Pipelines.
 Examples:
 Processor Arrays: Connection Machine CM-2, MasPar MP-1, MP-2. Vector Pipelines: IBM 9000,
Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820.
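To make the SIMD idea concrete, here is a minimal Python sketch (assuming NumPy is installed) in
which a single arithmetic operation is applied to many data elements at once, much as a processor array
or vector pipeline applies one instruction to a whole vector of operands; the brightness values are
illustrative, not taken from any real image-processing pipeline.

import numpy as np

pixels = np.random.rand(1920 * 1080)   # one data element per pixel
brightened = pixels * 1.2 + 0.05       # one operation, many data elements

# The SISD (serial) equivalent would loop over elements one at a time:
# for i in range(pixels.size):
#     brightened[i] = pixels[i] * 1.2 + 0.05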

Multiple Instruction, Single Data (MISD)

 A single data stream is fed into multiple processing units.


 Each processing unit operates on the data independently via independent instruction
streams.
 Few actual examples of this class of parallel computer have ever existed. One is the
experimental Carnegie-Mellon C.mmp computer (1971).
 Some conceivable uses might be multiple frequency filters operating on a single
signal stream, or multiple cryptography algorithms attempting to crack a single coded message.

Multiple Instruction, Multiple Data (MIMD)

 Currently, the most common type of parallel computer. Most modern computers fall into this
category.
 Multiple instruction: every processor may be executing a different instruction stream.
 Multiple data: every processor may be working with a different data stream.
 Execution can be synchronous or asynchronous, deterministic or non-deterministic.
 Examples: most current supercomputers, networked parallel computer "grids" and multi-
processor SMP computers, including some types of PCs.

Parallel Computer Memory Architectures

 Shared Memory
 Distributed Memory
 Hybrid Distributed-Shared Memory

Shared Memory

 Shared memory parallel computers vary widely, but generally have in common the ability for all
processors to access all memory as global address space.
 Multiple processors can operate independently but share the same memory resources.
 Changes in a memory location effected by one processor are visible to all other processors.
 Shared memory machines can be divided into two main classes based upon memory access times
UMA and NUMA.

(a) Uniform Memory Access (UMA)

 Most commonly represented by Symmetric Multiprocessor (SMP) machines: identical processors
with equal access and equal access times to memory. Sometimes called CC-UMA (Cache Coherent
UMA). Cache coherent means that if one processor updates a location in shared memory, all the other
processors know about the update. Cache coherency is accomplished at the hardware level.

(b)Non-Uniform Memory Access (NUMA)

 Often made by physically linking two or more SMPs. One SMP can directly access the memory of
another SMP. Not all processors have equal access time to all memories; memory access across the
link is slower. If cache coherency is maintained, it may also be called CC-NUMA (Cache Coherent
NUMA).

Distributed Memory

 Like shared memory systems, distributed memory systems vary widely but share a common
characteristic. Distributed memory systems require a communication network to connect
inter-processor memory.
 Processors have their own local memory. Memory addresses in one processor do not map to
another processor, so there is no concept of global address space across all processors.
 Because each processor has its own local memory, it operates independently. Changes it makes
to its local memory have no effect on the memory of other processors. Hence, the concept of
cache coherency does not apply.
 When a processor needs access to data in another processor, it is usually the task of the
programmer to explicitly define how and when data is communicated. Synchronization between
tasks is likewise the programmer's responsibility.
 The network "fabric" used for data transfer varies widely, though it can be as simple as
Ethernet.
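The sketch below illustrates the distributed-memory model using Python's standard multiprocessing
module as a stand-in for a real message-passing library such as MPI; the chunk sizes and worker count
are arbitrary assumptions. Each worker owns its local data, so the partial sums must be sent back
explicitly, exactly as in the programmer-defined communication described above.

from multiprocessing import Process, Queue

def worker(rank: int, chunk: list, out: Queue) -> None:
    local_sum = sum(chunk)       # operates only on its own local memory
    out.put((rank, local_sum))   # explicit, programmer-defined communication

if __name__ == "__main__":
    data = list(range(1_000))
    chunks = [data[i::4] for i in range(4)]          # four separate "local memories"
    out = Queue()
    procs = [Process(target=worker, args=(r, c, out)) for r, c in enumerate(chunks)]
    for p in procs:
        p.start()
    total = sum(out.get()[1] for _ in procs)         # gather the partial results
    for p in procs:
        p.join()
    print(total)                                     # 499500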

Hybrid Distributed-Shared Memory

 The largest and fastest computers in the world today employ both shared and distributed memory
architectures.
 The shared memory component is usually a cache coherent SMP machine. Processors on a given
SMP can address that machine's memory as global.
 The distributed memory component is the networking of multiple SMPs. SMPs know only about
their own memory - not the memory on another SMP. Therefore, network communications are
required to move data from one SMP to another.
 Current trends seem to indicate that this type of memory architecture will continue to prevail and
increase at the high end of computing for the foreseeable future.

Parallel computing use cases

Today, commercial applications are providing an equal or greater driving force in the development of
faster computers. These applications require the processing of large amounts of data in sophisticated
ways. Example applications include

 parallel databases, data mining


 oil exploration
 web search engines, web based business services
 computer-aided diagnosis in medicine
 management of national and multi-national corporations
 advanced graphics and virtual reality,particularly in the entertainment industry
 weather and climate
 chemical and nuclear reactions
 biological, human genome
 geological, seismic activity
 mechanical devices - from prosthetics to spacecraft
 electronic circuits
 manufacturing processes

Benefits of Parallel computing

Efficiency:

A computer that uses parallel programming can make better use of its resources to process and solve
problems. Most modern computers have hardware that includes multiple cores, threads or processors
that allow them to run many processes at once and maximize their computing potential. When
computers use all their resources to solve a problem or process information, they are more efficient at
performing tasks.

Speed
Another benefit of parallel computing is its ability to solve complex problems quickly. Parallel programs
break complex problems down into smaller tasks and process these individual tasks simultaneously. By
separating larger computational problems into smaller tasks and processing them at the same time,
parallel processing allows computers to run faster.

Cost Effectiveness

Additionally, the hardware architecture that allows for parallel programming is more cost-effective than
systems that only allow for serial processing. Although a parallel programming hardware system may
require more parts than a serial processing system, it is more efficient at performing tasks. This
means that it produces more results in less time than a serial system and holds more financial value
over time.

Distributed Computing
Early computing was performed on a single processor; this is called centralized computing.

Distributed computing is the method of making multiple computers work together to solve a common
problem. It makes a computer network appear as a powerful single computer that provides large-scale
resources to deal with complex challenges.

Even though the software components may be spread out across multiple computers in multiple
locations, they're run as one system. This is done to improve efficiency and performance. The systems
on different networked computers communicate and coordinate by sending messages back and forth to
achieve a defined task. Distributed computing can increase performance, resilience and scalability,
making it a common computing model in database and application design.

For example, distributed computing can encrypt large volumes of data; solve physics and chemical
equations with many variables; and render high-quality, three-dimensional video animation. Distributed
systems, distributed programming, and distributed algorithms are some other terms that all refer to
distributed computing.

Examples of Distributed systems

 The Internet
 An intranet: a network of computers and workstations within an organization, segregated from
the Internet via a protective device (a firewall).

Example of a large-scale distributed system

How distributed computing works

In distributed computing, you design applications that can run on several computers instead of on just
one computer. You achieve this by designing the software so that different computers perform different
functions and communicate to develop the final solution. There are four main types of distributed
architecture.

(1) Client-server architecture


Client-server is the most common method of software organization on a distributed system. The
functions are separated into two categories: clients and servers.
Clients
Clients have limited information and processing ability. Instead, they make requests to the servers,
which manage most of the data and other resources. You can make requests to the client, and it
communicates with the server on your behalf.
Servers
Server computers synchronize and manage access to resources. They respond to client requests with
data or status information. Typically, one server can handle requests from several machines.
Benefits and limitations
Client-server architecture gives the benefits of security and ease of ongoing management. You have
only to focus on securing the server computers. Similarly, any changes to the database systems require
changes to the server only.
The limitation of client-server architecture is that servers can cause communication bottlenecks,
especially when several machines make requests simultaneously.
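As a minimal illustration of the client-server pattern, here is a sketch using Python's standard socket and
threading modules; the port number, the echo-style response, and the "fetch record 42" request are all
made-up placeholders rather than part of any real system.

import socket
import threading
import time

def server(port: int = 5050) -> None:
    # The server manages the resources and responds to client requests.
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"status: OK, handled " + request)

def client(port: int = 5050) -> None:
    # The client has limited processing ability and delegates work to the server.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(b"fetch record 42")
        print(sock.recv(1024).decode())

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)   # crude wait so the server is listening before the client connects
    client()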
(2) Three-tier architecture
In three-tier distributed systems, client machines remain as the first tier you access. Server machines, on
the other hand, are further divided into two categories:

Application servers
Application servers act as the middle tier for communication. They contain the application logic or the
core functions that you design the distributed system for.
Database servers
Database servers act as the third tier to store and manage the data. They are responsible for data retrieval
and data consistency.
By dividing server responsibility, three-tier distributed systems reduce communication bottlenecks and
improve distributed computing performance.

(3) N-tier architecture


N-tier models include several different client-server systems communicating with each other to solve the
same problem. Most modern distributed systems use an n-tier architecture with different enterprise
applications working together as one system behind the scenes.

(4) Peer-to-peer architecture


Peer-to-peer distributed systems assign equal responsibilities to all networked computers. There is no
separation between client and server computers, and any computer can perform all responsibilities. Peer-
to-peer architecture has become popular for content sharing, file streaming, and blockchain networks.

How are the components of distributed computing connected?

Distributed computing works by computers passing messages to each other within the distributed
systems architecture. Communication protocols or rules create a dependency between the components of
the distributed system. This interdependence is called coupling, and there are two main types of
coupling.
Loose coupling
In loose coupling, components are weakly connected so that changes to one component do not affect the
other. For example, client and server computers can be loosely coupled by time. Messages from the
client are added to a server queue, and the client can continue to perform other functions until the server
responds to its message.
Tight coupling
High-performing distributed systems often use tight coupling. Fast local area networks typically connect
several computers, which creates a cluster. In cluster computing, each computer is set to perform the
same task. Central control systems, called clustering middleware, control and schedule the tasks and
coordinate communication between the different computers.
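A minimal sketch of loose coupling, using Python's standard queue and threading modules: the client
only appends messages to a queue and carries on with other work, while the server drains the queue
whenever it is ready. The message contents and the 0.1-second processing delay are illustrative
assumptions.

import queue
import threading
import time

message_queue: "queue.Queue[str]" = queue.Queue()

def client() -> None:
    for i in range(3):
        message_queue.put(f"request-{i}")   # fire and forget; the client is not blocked
        # ...the client is free to perform other functions here...

def server() -> None:
    while True:
        msg = message_queue.get()
        time.sleep(0.1)                     # simulate slow processing on the server
        print("server handled", msg)
        message_queue.task_done()

threading.Thread(target=server, daemon=True).start()
client()
message_queue.join()                        # the client only waits at the very end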

Benefits

Scalability
Distributed systems can grow with your workload and requirements. You can add new nodes, that is,
more computing devices, to the distributed computing network when they are needed.
Availability
Your distributed computing system will not crash if one of the computers goes down. The design shows
fault tolerance because it can continue to operate even if individual computers fail.
Consistency
Computers in a distributed system share information and duplicate data between them, but the system
automatically manages data consistency across all the different computers. Thus, you get the benefit of
fault tolerance without compromising data consistency.
Transparency
Distributed computing systems provide logical separation between the user and the physical devices.
You can interact with the system as if it is a single computer without worrying about the setup and
configuration of individual machines. You can have different hardware, middleware, software, and
operating systems that work together to make your system function smoothly.
Efficiency
Distributed systems offer faster performance with optimum resource use of the underlying hardware. As
a result, you can manage any workload without worrying about system failure due to volume spikes or
underuse of expensive hardware.

Distributed computing use cases

Distributed computing is everywhere today. Mobile and web applications are examples of distributed
computing because several machines work together in the backend for the application to give you the
correct information. However, when distributed systems are scaled up, they can solve more complex
challenges. Let’s explore some ways in which different industries use high-performing distributed
applications.
Healthcare and life sciences
Healthcare and life sciences use distributed computing to model and simulate complex life science data.
Image analysis, medical drug research, and gene structure analysis all become faster with distributed
systems. These are some examples:
 Accelerate structure-based drug design by visualizing molecular models in three dimensions.
 Reduce genomic data processing times to get early insights into cancer, cystic fibrosis, and
Alzheimer’s.
 Develop intelligent systems that help doctors diagnose patients by processing a large volume of
complex images like MRIs, X-rays, and CT scans.

Engineering research
Engineers can simulate complex physics and mechanics concepts on distributed systems. They use this
research to improve product design, build complex structures, and design faster vehicles. Here are some
examples:
 Computational fluid dynamics research studies the behavior of liquids and implements those
concepts in aircraft design and car racing.
 Computer-aided engineering requires compute-intensive simulation tools to test new plant
engineering, electronics, and consumer goods.

Financial services
Financial services firms use distributed systems to perform high-speed economic simulations that assess
portfolio risks, predict market movements, and support financial decision-making. They can create web
applications that use the power of distributed systems to do the following:
 Deliver low-cost, personalized premiums
 Use distributed databases to securely support a very high volume of financial transactions.
 Authenticate users and protect customers from fraud

Energy and environment


Energy companies need to analyze large volumes of data to improve operations and transition to
sustainable and climate-friendly solutions. They use distributed systems to analyze high-volume data
streams from a vast network of sensors and other intelligent devices. These are some tasks they might
do:
 Streaming and consolidating seismic data for the structural design of power plants
 Real-time oil well monitoring for proactive risk management

Cluster Computing

Cluster computing is a collection of tightly or loosely connected computers that work together so that
they act as a single entity. The connected computers execute operations together, creating the impression
of a single system. Clusters are generally connected through fast local area networks (LANs).

Why is Cluster Computing important?


1. Cluster computing provides a relatively inexpensive, unconventional alternative to large server
or mainframe computer solutions.
2. It meets the demand for critical content and processing services more quickly.
3. Many organizations and IT companies are implementing cluster computing to augment
their scalability, availability, processing speed and resource management at economic
prices.
4. It ensures that computational power is always available.
5. It provides a single general strategy for the implementation and application of parallel
high-performance systems independent of certain hardware vendors and their product
decisions.
A Simple Cluster Computing Layout
Types of Cluster computing :

1. High performance (HP) clusters :


HP clusters use computer clusters and supercomputers to solve advanced computational problems.
They are used to perform functions that need nodes to communicate as they perform their jobs.
They are designed to take advantage of the parallel processing power of several nodes.

2. Load-balancing clusters :
Incoming requests are distributed for resources among several nodes running similar programs or
having similar content. This prevents any single node from receiving a disproportionate amount of
work. This type of distribution is generally used in a web-hosting environment (a minimal round-robin
sketch appears after this list of cluster types).
3. High Availability (HA) Clusters :
HA clusters are designed to maintain redundant nodes that can act as backup systems in case any
failure occurs. They provide consistent computing services for business activities, complicated databases,
customer services such as e-commerce websites, and network file distribution. They are designed to give
uninterrupted data availability to the customers.
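Here is the minimal round-robin sketch referred to under load-balancing clusters, written in plain Python;
the node names are hypothetical, and a real cluster would use a dedicated dispatcher or clustering
middleware rather than a simple loop.

import itertools

nodes = ["node-1", "node-2", "node-3"]      # identical nodes running similar programs
next_node = itertools.cycle(nodes)          # rotate so no single node gets all the traffic

def dispatch(request_id: int) -> str:
    node = next(next_node)
    return f"request {request_id} -> {node}"

for r in range(7):
    print(dispatch(r))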
Cluster Computing Architecture :
 It is designed with an array of interconnected individual computers and the computer
systems operating collectively as a single standalone system.
 It is a group of workstations or computers working together as a single, integrated
computing resource connected via high speed interconnects.
 A node: either a single-processor or multiprocessor system having memory, input and output
functions and an operating system.
 Two or more nodes are connected on a single line or every node might be connected
individually through a LAN connection.

Cluster Computing Architecture


Advantages of Cluster Computing :

1. High Performance :
The systems offer better and enhanced performance than that of mainframe computer networks.
2. Easy to manage :
Cluster Computing is manageable and easy to implement.
3. Scalable :
Resources can be added to the clusters accordingly.
4. Expandability :
Computer clusters can be expanded easily by adding additional computers to the network. Cluster
computing is capable of combining several additional resources or the networks to the existing
computer system.
5. Availability :
If one node fails, the other nodes remain active and function as a proxy for the failed node. This
ensures enhanced availability.
6. Flexibility :
Clusters can be upgraded to a superior specification, or additional nodes can be added.
Applications of Cluster Computing :
 Various complex computational problems can be solved.
 It can be used in the applications of aerodynamics, astrophysics and in data mining.
 Weather forecasting.
 Image Rendering.
 Various e-commerce applications.
 Earthquake Simulation.
 Petroleum reservoir simulation.

Grid Computing

• A grid computing network might consist of several distributed computing systems.


• Grid computing is a computing infrastructure that combines computer resources spread over
different geographical locations to accomplish a joint task.
• These tasks are compute-intensive and difficult for a single machine to handle.
• So, several machines on a network collaborate under a common protocol and work as a single
virtual supercomputer to get complex tasks done.
• Grid computing is mainly used for technical or scientific applications that require a great number
of computer processing cycles or access to large amounts of data.

For example, meteorologists use grid computing for weather modeling. Weather modeling is a
computation-intensive problem that requires complex data management and analysis. Processing
massive amounts of weather data on a single computer is slow and time consuming. That’s why
meteorologists run the analysis over geographically dispersed grid computing infrastructure and
combine the results.
Grid computing is more popular due to the following reasons:
• Its ability to make use of unused computing power, and thus, it is a cost-effective solution (reducing
investments, only recurring costs)
• As a way to solve problems in line with any HPC-based application
• Enables heterogeneous resources of computers to work cooperatively and collaboratively to solve a
scientific problem.
- Researchers associate the term grid with the way electricity is distributed in municipal areas for the
common man: users draw computing power from the grid just as they draw electrical power, without
worrying about where it comes from.

How does grid computing work?

Grid nodes and middleware work together to perform the grid computing task. In grid operations, the
three main types of grid nodes perform three different roles.

(1) User node

A user node is a computer that requests resources shared by other computers in grid computing. When
the user node requires additional resources, the request goes through the middleware and is delivered to
other nodes on the grid computing system.

(2) Control node/Server

A control node administers the network and manages the allocation of the grid computing resources.
The middleware runs on the control node. When the user node requests a resource, the middleware
checks for available resources and assigns the task to a specific provider node.

(3) Provider node/Grid node

A provider node is a computer that shares its resources for grid computing. When provider machines
receive resource requests, they perform subtasks for the user nodes, such as forecasting stock prices for
different markets. At the end of the process, the middleware collects and compiles all the results to
obtain a global forecast.
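The sketch below maps the three grid roles onto plain Python (using the standard concurrent.futures
module) purely for illustration; the "forecast" computation and the market names are hypothetical
placeholders, and real grids rely on dedicated middleware rather than a local thread pool.

from concurrent.futures import ThreadPoolExecutor

def provider_node(market: str) -> tuple:
    # A provider node shares its resources and performs one subtask,
    # e.g. a toy "forecast" for a single market (placeholder arithmetic).
    return market, len(market) * 1.5

def control_node(markets: list) -> dict:
    # The middleware on the control node checks for available providers,
    # assigns one subtask to each, then compiles the results.
    with ThreadPoolExecutor(max_workers=4) as providers:
        results = providers.map(provider_node, markets)
    return dict(results)   # the compiled "global forecast"

if __name__ == "__main__":
    # The user node requests resources through the middleware on the control node.
    print(control_node(["NYSE", "LSE", "NSE", "TSE"]))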
What are the types of grid computing?

Grid computing is generally classified as follows.

Computational grid
A computational grid consists of high-performance computers. It allows researchers to use the combined
computing power of the computers. Researchers use computational grid computing to perform resource-
intensive tasks, such as mathematical simulations.
Scavenging grid
While similar to computational grids, CPU scavenging grids have many regular computers. The
term scavenging describes the process of searching for available computing resources in a network of
regular computers. While other network users access the computers for non-grid–related tasks, the grid
software uses these nodes when they are free. The scavenging grid is also known as CPU scavenging or
cycle scavenging.
Data grid
A data grid is a grid computing network that connects to multiple computers to provide large data
storage capacity. You can access the stored data as if on your local machine without having to worry
about the physical location of your data on the grid.

Grid computing Benefits


Organizations use grid computing for several reasons.
Efficiency
With grid computing, you can break down an enormous, complex task into multiple subtasks. Multiple
computers can work on the subtasks concurrently, making grid computing an efficient computational
solution.
Cost
Grid computing works with existing hardware, which means you can reuse existing computers. You can
save costs while accessing your excess computational resources. You can also cost-effectively access
resources from the cloud.
Flexibility
Grid computing is not constrained to a specific building or location. You can set up a grid computing
network that spans several regions. This allows researchers in different countries to work collaboratively
with the same supercomputing power.

Use cases of grid computing


Gaming
The gaming industry uses grid computing to provide additional computational resources for game
developers. The grid computing system splits large tasks, such as creating in-game designs, and
allocates them to multiple machines. This results in a faster turnaround for the game developers.
Entertainment
Some movies have complex special effects that require a powerful computer to create. The special
effects designers use grid computing to speed up the production timeline. They have grid-supported
software that shares computational resources to render the special-effect graphics.
Engineering
Engineers use grid computing to perform simulations, create models, and analyze designs. They run
specialized applications concurrently on multiple machines to process massive amounts of data. For
example, engineers use grid computing to reduce the duration of a Monte Carlo simulation, a software
process that uses past data to make future predictions.

Cloud Computing

Cloud computing is a virtualization-based technology that allows us to create, configure, and customize
applications via an internet connection. The cloud technology includes a development platform, hard
disk, software application, and database. The term cloud refers to a network or the internet. It is a
technology that uses remote servers on the internet to store, manage, and access data online rather than
on local drives. The data can be anything, such as files, images, documents, audio, video, and more.

The following are some operations that we can perform using cloud computing:

o Developing new applications and services


o Storage, back up, and recovery of data
o Hosting blogs and websites
o Delivery of software on demand
o Analysis of data
o Streaming videos and audios

Definition of Cloud Computing:

The term “Cloud Computing” refers to the delivery of computing services such as servers, storage,
databases, networking, software, analytics, intelligence, and more over the cloud (the Internet).
Cloud computing applies a virtualized platform with elastic resources on demand by provisioning
hardware, software, and data sets dynamically.

Cloud Computing provides an alternative to the on-premises data center. With an on-premises data
center, we must manage everything, such as purchasing and installing hardware, virtualization,
installing the operating system and any other required applications, setting up the network, configuring
the firewall, and setting up storage for data. After doing all the set-up, we become responsible for
maintaining it through its entire lifecycle.

However, if we choose Cloud Computing, a cloud vendor is responsible for the hardware purchase and
maintenance. They also provide a wide variety of software and platform as a service. We can take any
required services on rent. The cloud computing services are charged based on usage.

The cloud environment provides an easily accessible online portal that makes it handy for the user to
manage the compute, storage, network, and application resources. Some of the cloud service providers
are shown in the following figure.

Cloud Computing Services


Cloud computing is not a single piece of technology like a microchip or a cellphone. It's a system
primarily comprised of three services:

(1) infrastructure-as-a-service (IaaS),
(2) platform-as-a-service (PaaS), and
(3) software-as-a-service (SaaS).

(1)Infrastructure as a Service

• IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed


over the internet. The main advantage of using IaaS is that it helps users to avoid the cost and
complexity of purchasing and managing the physical servers.

Characteristics of IaaS

The following are characteristics of IaaS:

• Resources are available as a service


• Services are highly scalable
• Dynamic and flexible
• GUI and API-based access
• Automated administrative tasks

Companies providing IAAS are DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft
Azure, Google Compute Engine (GCE)

Example: AWS provides full control of virtualized hardware, memory, and storage. Servers, firewalls,
and routers are provided, and a network topology can be configured by the tenant.

(2)Platform as a service

• The PaaS cloud computing platform is created for programmers to develop, test, run, and
manage applications.

Characteristics of PaaS

The following are characteristics of PaaS:

• Accessible to various users via the same development application.


• Integrates with web services and databases.
• Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization's need.
• Support multiple languages and frameworks.
• Provides an ability to "Auto-scale".

Companies offering PaaS are AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google
App Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.

Example of PaaS: The World Wide Web (WWW) can be considered the operating system
for all our Internet-based applications. However, one has to understand that we will always need a local
operating system on our computer to access web-based applications.
The basic meaning of the term platform is that it is the support on which applications run or give results
to the users. For example, Microsoft Windows is a platform. But a platform does not have to be an
operating system; Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends (applications) such as Office
2.0, more and more applications that were originally available on desktop computers are now being
converted into web-cloud applications. Word processors like Buzzword and office suites
like Google Docs are now available in the cloud, just like their desktop counterparts. All these kinds of
trends in providing applications via the cloud are turning cloud computing into a platform, or making it
act as a platform.

(3)Software as a Service:

• SaaS is also known as "on-demand software". It is software in which the applications are
hosted by a cloud service provider. Users can access these applications with the help of an internet
connection and a web browser.

Characteristics of SaaS

The following are characteristics of SaaS:


• Managed from a central location
• Hosted on a remote server
• Accessible over the internet
• Users are not responsible for hardware and software updates. Updates are applied automatically
• The services are purchased on the pay-as-per-use basis

Companies providing SaaS are BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco
WebEx, Slack, and GoToMeeting.

Example of SaaS: The simplest thing that any computer does is allow us to store and retrieve
information. We can store our family photographs, our favorite songs, or even movies on it, and this
is also the most basic service offered by cloud computing. Let us look at the example of a popular
application called Flickr to illustrate the meaning of this section.

While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to
store those images. In many ways, it is superior to storing the images on your computer:
1. First, Flickr allows us to easily access our images no matter where we are or what type of device we
are using. While we might upload the photos of our vacation from our home computer, later, we can
easily access them from our laptop at the office.
2. Second, Flickr lets us share the images. There is no need to burn them to a CD or save them on a flash
drive. We can just send someone our Flickr address to share these photos or images.
3. Third, Flickr provides data security. By uploading the images to Flickr, we are providing ourselves
with data security by creating a backup on the web. And, while it is always best to keep a local copy,
either on a computer, a CD, or a flash drive, the truth is that we are far more likely to lose the images
that we store locally than Flickr is to lose our images.
Types of Cloud
1. Public cloud
2. Private cloud
3. Community cloud
4. Hybrid cloud
(1)Public Cloud
 Public clouds are managed by third parties that provide cloud services over the internet to
the public; these services are available under pay-as-you-go billing models.
 They offer solutions for minimizing IT infrastructure costs and are a good option for
handling peak loads on the local infrastructure.
 Public clouds are the go-to option for small enterprises, which can start their businesses
without large upfront investments by completely relying on public infrastructure for their IT
needs.
 A fundamental characteristic of public clouds is multitenancy. A public cloud is meant to
serve multiple users, not a single customer. Each user requires a virtual computing environment
that is separated, and most likely isolated, from other users.

Public cloud
Advantages of using a Public cloud are:

1. High Scalability
2. Cost Reduction
3. Reliability and flexibility
4. Disaster Recovery
Disadvantages of using a Public cloud are:
1. Loss of control over data
2. Data security and privacy
3. Limited Visibility
4. Unpredictable cost

(2)Private cloud
Private clouds are distributed systems that work on private infrastructure and provide the users with
dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may
use other schemes that manage the usage of the cloud and proportionally bill the different departments
or sections of an enterprise. Private cloud providers include HP Data Centers, Ubuntu, Elastic-Private
cloud, Microsoft, etc.

Private Cloud

The advantages of using a private cloud are as follows:

1. Customer information protection: In the private cloud security concerns are less since
customer data and other sensitive information do not flow out of private infrastructure.
2. Infrastructure ensuring SLAs: Private cloud provides specific operations such as
appropriate clustering, data replication, system monitoring, and maintenance, disaster
recovery, and other uptime services.
3. Compliance with standard procedures and operations: Specific procedures have to be
put in place when deploying and executing applications according to third-party
compliance standards. This is not possible in the case of the public cloud.
Disadvantages of using a private cloud are:
1. The restricted area of operations: Private cloud is accessible within a particular area. So
the area of accessibility is restricted.
2. Expertise required: Private clouds are managed and operated in-house, hence skilled
people are required to manage and operate the cloud services.
(3)Community cloud
Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. But sharing
responsibilities among the organizations is difficult.
In the community cloud, the infrastructure is shared between organizations that have shared concerns
or tasks. An organization or a third party may manage the cloud.

Community Cloud

Advantages of using Community cloud are:

1. Because the entire cloud is shared by numerous enterprises or a community, community


clouds are cost-effective.
2. Because it works with every user, the community cloud is adaptable and scalable. Users
can alter the documents according to their needs and requirements.
3. The community cloud is more secure than the public cloud, though generally less secure
than a private cloud.
4. Thanks to community clouds, we may share cloud resources, infrastructure, and other
capabilities between different enterprises.

Disadvantages of using Community cloud are:


1. Not all businesses should choose community cloud.
2. Gradual adoption of data.
3. It is challenging for corporations to share duties.
Sectors that use community clouds are:

1. Media industry: Media companies are looking for quick, simple, low-cost ways for increasing the
efficiency of content generation. Most media productions involve an extended ecosystem of partners.
In particular, the creation of digital content is the outcome of a collaborative process that includes the
movement of large data, massive compute-intensive rendering tasks, and complex workflow
executions.

2. Healthcare industry: In the healthcare industry community clouds are used to share information
and knowledge on the global level with sensitive data in the private infrastructure.

3. Energy and core industry: In these sectors, the community cloud is used to cluster a set of
solutions which collectively address the management, deployment, and orchestration of services and
operations.
4. Scientific research: In this sector, organizations with common interests in science share a large
distributed infrastructure for scientific computing.

(4) Hybrid cloud:


A hybrid cloud is a heterogeneous distributed system formed by combining facilities of the public
cloud and private cloud. For this reason, they are also called heterogeneous clouds.
A major drawback of private deployments is the inability to scale on-demand and efficiently address
peak loads. Here public clouds are needed. Hence, a hybrid cloud takes advantage of both public and
private clouds.

Advantages of using a Hybrid cloud are:

1) Cost: Available at a lower cost than other clouds because it is formed from a distributed system.
2) Speed: It is efficient and fast at a lower cost, and it reduces the latency of the data transfer process.
3) Security: Sensitive workloads can be kept on the private infrastructure while the public portion
handles the rest, which keeps a hybrid cloud relatively safe and secure.

Disadvantages of using a Hybrid cloud are:


1. It’s possible that businesses lack the internal knowledge necessary to create such a hybrid
environment. Managing security may also be more challenging. Different access levels and
security considerations may apply in each environment.
2. Managing a hybrid cloud may be more difficult. With all of the alternatives and choices
available today, not to mention the new PaaS components and technologies that will be
released every day going forward, public cloud and migration to public cloud are already
complicated enough. It could just feel like a step too far to include hybrid.

Advantages of cloud computing.

1. Cost Savings
2. Security
3. Flexibility
4. Mobility
5. Insight
6. Increased Collaboration
7. Quality Control
8. Disaster Recovery
9. Loss Prevention
10. Automatic Software Updates
11. Competitive Edge
12. Sustainability

Disadvantages of Cloud Storage

1. Internet Connection
Cloud based storage is dependent on having an internet connection. If you are on a slow network
you may have issues accessing your storage. In the event you find yourself somewhere without
internet, you won't be able to access your files.
2. Costs
There are additional costs for uploading and downloading files from the cloud. These can
quickly add up if you are trying to access lots of files often.
3. Hard Drives
Cloud storage is supposed to eliminate our dependency on hard drives right? Well some business
cloud storage providers require physical hard drives as well.
4. Support
Support for cloud storage isn't the best, especially if you are using a free version of a cloud
provider. Many providers refer you to a knowledge base or FAQs.
5. Privacy
When you use a cloud provider, your data is no longer on your physical storage. So who is
responsible for making sure that data is secure? That's a gray area that is still being figured out.

Bio Computing
The growing needs of mankind demand rapid development. Rapid advancement in computer technology
will lose its momentum when the silicon chip reaches its limits of capacity and miniaturization, and there
remain complex problems which today's supercomputers are unable to solve in a stipulated period of
time. What could be a remedy to this concern?

Biological computers are computers which use synthesized biological components to store and
manipulate data, analogous to processes in the human body. The result is a small yet fast computer that
operates with great accuracy. The main biological component used in a biological computer is DNA.
DNA stands for deoxyribonucleic acid, a hereditary material found in almost all living organisms. It is
located inside the nucleus of a cell and helps in the long-term storage of information. Information in
DNA is stored as a code made of four chemical bases (A, T, G and C). The order and sequence of these
bases determine the kind of information stored.

Graphical Representation of Inherent Bonding Properties of DNA

In the above diagram, the molecules A, T, G and C play the role that bits play in classical computers.
Here they perform multiple calculations in parallel.
For example, if we have to compute A = B + C + D + E and A = B - C - D - E, then as per the above
diagram the first set of strands performs the addition while the second set of strands performs the
subtraction.

DNA computers are small, fast and highly efficient computers with the following properties: dense data
storage, massively parallel computation, and extraordinary energy efficiency.

How Dense is the Data Storage


 With bases spaced at 0.35 nm along DNA, data density is over a million Gbits/inch, compared to
about 7 Gbits/inch in a typical high-performance HDD.

How Enormous is the Parallelism


 A test tube of DNA can contain trillions of strands. Each operation on a test tube of DNA is
carried out on all strands in the tube in parallel !

How Extraordinary is the Energy Efficiency


 Modern supercomputers operate at only about 10^9 operations per joule. Adleman figured his DNA
computer was running 2 × 10^19 operations per joule.

Is DNA computing possible?


 Yes, DNA computing is possible. In 2019, researchers from Microsoft and the University of
Washington demonstrated the first fully automated system to store and retrieve data in
manufactured DNA. They wrote… (drumroll, please), “Hello.”
 The end goal is to reduce warehouse-sized data centers into something much smaller. And
speaking of storage, so far, the researchers at the University of Washington have managed to
store one gigabyte in DNA. (Yes, they did store funny cat photos.)
What is DNA computing used for?
 The data we produce every second, of every hour, of every day has to go somewhere. But
where? Currently a lot of it ends up in the cloud. Cloud-hosted storage is super cheap, and it's
estimated that by 2025, half of the 175 zettabytes of data produced will be stored in the cloud.
 What is a zettabyte? It’s a billion terabytes!
 Microsoft is also concerned about having to store all of these cat GIFs and meme generators, so
they’re investing in DNA computing technology.

Security and DNA computing in cryptography


 One area that will benefit hugely from DNA computing is data security. DNA-based security
sounds like a terrible plot in a cheap sci-fi movie, but it’s real . . . or rather, it will be.
 DNA-based cryptography works largely like classic cryptography, using a private and public
key. But, because DNA cryptography is incredibly fast, the keys can be massive.

Applications:
 Pattern recognition
 Cryptography
 Gene evaluation
 Medical applications: developing treatments for diseases such as cancer

Advantages of Biological Computers


 A DNA computer can solve hardest of problems in a matter of weeks.
 Perform millions of operations simultaneously.
 Efficiently handle massive amounts of working memory.
 Cheap, clean, readily available materials.
 Amazing ability to store information.

Why DNA computing is part of the future of tech


 DNA computing carries the promise of cheap, huge, accessible data storage and an exponential
increase in computing power and speed.
 There are still huge challenges, though, not least of which is the cost of creating the DNA.
However, we are already past the proof-of-concept phase and real money is being spent to create
real commercial solutions for DNA cryptography in cloud computing and network security.

Mobile Computing
Mobile Computing is a technology that provides an environment that enables users to transmit data from
one device to another device without the use of any physical link or cables.
In other words, you can say that mobile computing allows transmission of data, voice and video via a
computer or any other wireless-enabled device without being connected to a fixed physical link. In this
technology, data transmission is done wirelessly with the help of wireless devices such as mobiles,
laptops etc.
It is only because of mobile computing technology that you can access and transmit data from any
remote location without being physically present there. Mobile computing technology provides a vast
coverage diameter for communication. It is one of the fastest and most reliable sectors of the computing
technology field.
The concept of Mobile Computing can be divided into three parts:
o Mobile Hardware
o Mobile Software
o Mobile Communication

(1)Mobile Hardware
Mobile hardware consists of mobile devices or device components that can be used to receive or access
the service of mobility. Examples of mobile hardware can be smartphones, laptops, portable PCs, tablet
PCs, Personal Digital Assistants, etc.

These devices have a built-in receptor medium that can send and receive signals. These devices are
capable of operating in full-duplex. It means they can send and receive signals at the same time. They
don't have to wait until one device has finished communicating for the other device to initiate
communications.

(2)Mobile Software
Mobile software is the program that runs on mobile hardware. It is designed to deal capably with the
characteristics and requirements of mobile applications and serves as the operating system of the
mobile device. In other words, it is the heart of the mobile system and the essential component that
operates the mobile device.

This provides portability to mobile devices, which ensures wireless communication.

(3) MOBILE COMMUNICATION


Mobile Communication is the use of technology that allows us to communicate with others in
different locations without the use of any physical connection (wires or cables). It uses full-duplex two-
way radio telecommunication over a cellular network of base stations and uses multiplexing to send
information. Multiplexing is a method of combining multiple digital or analog signals into one signal over
the data channel. These are the types of multiplexing options available to communication channels (a small illustrative sketch of the TDM idea follows this list):

• FDM (Frequency Division Multiplexing) − Here each user is assigned a different frequency
from the complete spectrum. All the frequencies can then simultaneously travel on the data
channel.
• TDM (Time Division Multiplexing) − A single radio frequency is divided into multiple slots
and each slot is assigned to a different user. So multiple users can be supported simultaneously.
• CDMA (Code Division Multiple Access) − Here several users share the same frequency spectrum
simultaneously. They are differentiated by assigning unique codes to them. The receiver has the
unique key to identify the individual calls.
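As referenced above, the following minimal sketch illustrates the TDM idea: a single radio frequency is divided into repeating time slots, and each slot is owned by a different user. The slot count, user names, and function are hypothetical and only demonstrate the scheduling pattern.

```python
# Minimal sketch of time-division multiplexing: one channel, many users, each
# user owning one slot in a repeating frame. Values are illustrative only.
SLOTS_PER_FRAME = 8                               # e.g. eight slots per frame
users = ["A", "B", "C", "D", "E", "F", "G", "H"]  # one user per slot

def slot_owner(slot_number: int) -> str:
    """Return which user may transmit in a given absolute slot number."""
    return users[slot_number % SLOTS_PER_FRAME]

for slot in range(10):
    print(f"slot {slot}: user {slot_owner(slot)}")  # users take turns, frame after frame
```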
Mobile communication refers to the infrastructure that ensures seamless and reliable communication
among wireless devices. The framework consists of the protocols, services, bandwidth, and portals
necessary to facilitate and support the stated services, and these elements together are responsible for
delivering a smooth communication process.

Mobile communication framework can be divided in the following four types:

1. Fixed and Wired


2. Fixed and Wireless
3. Mobile and Wired
4. Mobile and Wireless

Fixed and Wired: In Fixed and Wired configuration, the devices are fixed at a position, and they are
connected through a physical link to communicate with other devices.
For Example, Desktop Computer.
Fixed and Wireless: In Fixed and Wireless configuration, the devices are fixed at a position, and they
are connected through a wireless link to make communication with other devices.
For Example, Communication Towers, WiFi router
Mobile and Wired: In Mobile and Wired configuration, some devices are wired, and some are mobile.
Together, they communicate with other devices.
For Example, Laptops.
Mobile and Wireless: In Mobile and Wireless configuration, the devices can communicate with each
other irrespective of their position. They can also connect to any network without the use of any wired
device.
For Example, WiFi Dongle.

Mobile Communication Protocols:

 Some of the Protocols are GSM,CDMA,WLL,GPRS


GSM:
• GSM stands for Global System for Mobile communications. GSM is one of the most widely
used digital wireless telephony systems. It was developed in Europe in the 1980s and is now an
international standard in Europe, Australia, Asia, and Africa.
• Any GSM handset with a SIM (Subscriber Identity Module) card can be used in any country that
uses this standard. Every SIM card has a unique identification number. It has memory to store
applications and data such as phone numbers, a processor to carry out its functions, and software to
send and receive messages.
• GSM technology uses TDMA (Time Division Multiple Access) to support up to eight calls
simultaneously. It also uses encryption to make the data more secure.
• The frequencies used by the international standard are 900 MHz and 1800 MHz. However, GSM
phones used in the US use the 1900 MHz frequency and hence are not compatible with the
international system.
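Based on the TDMA figure in the list above (up to eight simultaneous calls per carrier frequency), a rough cell-capacity estimate can be done with simple arithmetic. The number of carriers per cell below is an assumed example value, not a GSM specification.

```python
# Back-of-the-envelope GSM cell capacity, using the "8 calls per carrier" figure above.
calls_per_carrier = 8      # from GSM's TDMA frame structure
carriers_per_cell = 4      # hypothetical number of carriers assigned to one cell

print(calls_per_carrier * carriers_per_cell)   # 32 simultaneous calls in this example
```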

CDMA
• CDMA stands for Code Division Multiple Access. It was first used by the British military during
World War II. After the war its use spread to civilian areas due to high service quality.
• As each user gets the entire spectrum all the time, voice quality is very high. Also, it is
automatically encrypted and hence provides high security against signal interception and
eavesdropping.

WLL
• WLL stands for Wireless in Local Loop. It is a wireless local telephone service that can be
provided in homes or offices. Subscribers connect wirelessly to their local exchange instead of to the
central exchange. Using a wireless link eliminates last-mile or first-mile construction of the
network connection, thereby reducing cost and setup time. As data is transferred over a very short
range, it is more secure than wired networks.
• WLL system consists of user handsets and a base station. The base station is connected to the
central exchange as well as an antenna. The antenna transmits to and receives calls from users
through terrestrial microwave links. Each base station can support multiple handsets depending
on its capacity.

GPRS
• GPRS stands for General Packet Radio Services. It is a packet based wireless communication
technology that charges users based on the volume of data they send rather than the time
duration for which they are using the service. This is possible because GPRS sends data over the
network in packets and its throughput depends on network traffic. As traffic increases, service
quality may go down due to congestion, hence it is logical to charge the users as per data volume
transmitted.
• GPRS is the mobile communication protocol used by the second (2G) and third (3G) generations of
mobile telephony. It promises speeds of 56 kbps to 114 kbps; however, the actual speed may vary
depending on network load.

Applications of Mobile Computing

Following is a list of some significant fields in which mobile computing is generally applied:
o Web or Internet access.
o Global Positioning System (GPS).
o Emergency services.
o Entertainment services.
o Educational services.

Advantages of Mobile Computing


 Seamless and reliable communication
 Increased Productivity- Mobile devices can be used in the field by a variety of businesses, saving
time and money for both clients and employees.
 Entertainment- Mobile devices may be used for entertainment, personal use, and even
presentations to clients and colleagues.
 Portability- one of the key benefits of mobile computing is that you are not limited to a single
location in order to complete tasks or access email when on the go.
 Cloud Computing- This allows you to save data on an online server and access it from any
computer with an internet connection. You can also access these files on several
mobile devices.

Disadvantages of Mobile Computing

In Mobile Computing, there are many drawbacks and challenges.


 Limited battery life.
 Limited and inefficient transmission bandwidth.
 Link losses across the network.
 Fluctuations in network stability.
 Small screen sizes.
 Interoperability issues.

Quantum Computing
What Is Quantum Computing?
Quantum computing is an area of computer science that uses the principles of quantum theory.
Quantum theory explains the behaviour of energy and material on the atomic and subatomic levels.
Quantum computing uses subatomic particles, such as electrons or photons. Quantum bits, or qubits,
allow these particles to exist in more than one state (i.e., 1 and 0) at the same time.
Theoretically, linked qubits can "exploit the interference between their wave-like quantum states to
perform calculations that might otherwise take millions of years."
Classical computers today employ a stream of electrical impulses (1 and 0) in a binary manner to
encode information in bits. This restricts their processing ability, compared to quantum computing.
Understanding Quantum Computing
The field of quantum computing emerged in the 1980s. It was discovered that certain computational
problems could be tackled more efficiently with quantum algorithms than with their classical
counterparts.
Quantum computing has the capability to sift through huge numbers of possibilities and extract
potential solutions to complex problems and challenges. Where classical computers store information
as bits with either 0s or 1s, quantum computers use qubits. Qubits carry information in a quantum state
that engages 0 and 1 in a multidimensional way.
Such massive computing potential and the projected market size for its use have attracted the attention
of some of the most prominent companies. These include IBM, Microsoft, Google, D-Wave Systems,
Alibaba, Nokia, Intel, Airbus, HP, Toshiba, Mitsubishi, SK Telecom, NEC, Raytheon, Lockheed
Martin, Rigetti, Biogen, Volkswagen, and Amgen.

Uses and Benefits of Quantum Computing


Quantum computing could contribute greatly to the fields of security, finance, military affairs and
intelligence, drug design and discovery, aerospace designing, utilities (nuclear fusion), polymer
design, machine learning, artificial intelligence (AI), Big Data search, and digital manufacturing.
Quantum computers could be used to improve the secure sharing of information. Or to improve radars
and their ability to detect missiles and aircraft. Another area where quantum computing is expected to
help is the environment and keeping water clean with chemical sensors.
Here are some potential benefits of quantum computing:
 Financial institutions may be able to use quantum computing to design more effective and
efficient investment portfolios for retail and institutional clients. They could focus on creating
better trading simulators and improve fraud detection.
 The healthcare industry could use quantum computing to develop new drugs and genetically-
targeted medical care. It could also power more advanced DNA research.
 For stronger online security, quantum computing can help design better data encryption and
ways to use light signals to detect intruders in the system.
 Quantum computing can be used to design more efficient, safer aircraft and traffic planning
systems.

Features of Quantum Computing


Superposition and entanglement are two features of quantum physics on which quantum computing is
based. They empower quantum computers to handle operations at speeds exponentially higher than
conventional computers and with much less energy consumption.
Superposition
According to IBM, it's what a qubit can do rather than what it is that's remarkable. A qubit places the
quantum information that it contains into a state of superposition. This refers to a combination of all
possible configurations of the qubit. "Groups of qubits in superposition can create complex,
multidimensional computational spaces. Complex problems can be represented in new ways in these
spaces."6
Entanglement
Entanglement is integral to quantum computing power. Pairs of qubits can be made to become
entangled. This means that the two qubits then exist in a single state. In such a state, changing one
qubit directly affects the other in a manner that's predictable.
Quantum algorithms are designed to take advantage of this relationship to solve complex problems.
While doubling the number of bits in a classical computer doubles its processing power, adding qubits
results in an exponential upswing in computing power and ability.
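The following NumPy sketch is not a quantum computer, but it illustrates both ideas just described: a Bell state, in which two qubits are in superposition and entangled (measurements of the two qubits always agree), and the exponential growth of the state space, since describing an n-qubit state classically requires 2^n amplitudes. The variable names and printout are illustrative choices, not part of any standard library API.

```python
# Classical simulation of two entangled qubits and of state-space growth.
import numpy as np

# Bell state (|00> + |11>) / sqrt(2): two qubits in superposition and entangled.
bell = np.zeros(4, dtype=complex)
bell[0b00] = 1 / np.sqrt(2)
bell[0b11] = 1 / np.sqrt(2)

# Measurement probabilities for the four basis states |00>, |01>, |10>, |11>.
probabilities = (np.abs(bell) ** 2).round(2).tolist()
print(dict(zip(["00", "01", "10", "11"], probabilities)))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5} -> the two qubits always agree

# Exponential growth of the classical description: n qubits need 2**n amplitudes.
for n in (1, 2, 10, 50):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")
```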

Quantum Computers In Development


Google
Google is spending billions of dollars to build its quantum computer by 2029. The company opened a
campus in California called Google AI to help it meet this goal. Once developed, Google could launch
a quantum computing service via the cloud.
IBM
IBM plans to have a 1,000-qubit quantum computer in place by 2023. For now, IBM allows access to
its machines for those research organizations, universities, and laboratories that are part of its Quantum
Network.
Microsoft
Microsoft offers companies access to quantum technology via the Azure Quantum platform.
Others
There’s interest in quantum computing and its technology from financial services firms such as
JPMorgan Chase and Visa.

What Is Quantum Computing in Simplest Terms?


Quantum computing relates to computing made by a quantum computer. Compared to traditional
computing done by a classical computer, a quantum computer should be able to store much more
information and operate with more efficient algorithms. This translates to solving extremely complex
tasks faster.

How Hard Is It to Build a Quantum Computer?


Building a quantum computer takes a long time and is vastly expensive. Google has been working on
building a quantum computer for years and has spent billions of dollars. It expects to have its quantum
computer ready by 2029. IBM hopes to have a 1,000-qubit quantum computer in place by 2023.

How Much Does a Quantum Computer Cost?


A quantum computer costs billions of dollars to build. However, China-based Shenzhen SpinQ Technology plans
to sell a $5,000 desktop quantum computer to consumers for schools and colleges. Last year, it started
selling a quantum computer for $50,000.

How Fast Is a Quantum Computer?


A quantum computer is many times faster than a classical computer or a supercomputer. Google’s
quantum computer in development, Sycamore, is said to have performed a calculation in 200 seconds,
compared to the 10,000 years that one of the world’s fastest computers, IBM's Summit, would take to
solve it. IBM disputed Google's claim, saying its supercomputer could solve the calculation in 2.5 days.
Even so, that's 1,000 times slower than Google's quantum machine.
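The "1,000 times slower" remark can be checked with simple arithmetic on the figures quoted above; these are the cited claims, not independent measurements.

```python
# Simple arithmetic on the cited Sycamore vs. Summit figures.
sycamore_seconds = 200                     # Google's reported Sycamore runtime
summit_days = 2.5                          # IBM's (disputed) estimate for Summit
summit_seconds = summit_days * 24 * 60 * 60

print(summit_seconds / sycamore_seconds)   # 1080.0 -> roughly 1,000 times slower
```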

Applications of quantum computing


 Healthcare
 Finance
 Climate forecasting
 Travel and transportation
 Privacy/security

Advantages of Quantum computing:


1. Quantum computers can also perform classical algorithm calculations, and they handle them as
easily as a classical computer does.
2. Adding qubits to a quantum register increases its storage capacity exponentially.
3. Because qubits exist in superposition states, quantum computers can achieve exponential
speedups in the number of calculations they can handle.
4. Quantum computing requires less power.
5. Quantum computers can execute certain tasks much faster and more accurately than classical
computers; state changes at the quantum level happen far faster than the switching in
traditional hardware.

Disadvantages of Quantum computing:


1. Research into these problems is still ongoing; so far, efforts to find solutions have made little
positive progress.
2. Qubits are not digital bits of data, so they cannot use conventional error correction.
3. The main disadvantage of quantum computing is that the technology required to implement a
quantum computer is not yet available.
4. The minimum energy requirement for quantum logical operations is five times that of
classical computers.
5. A quantum CPU will have efficiency and heating problems of its own.
6. When a measurement of any kind is made on a quantum system, coherence breaks down
completely and the wave function collapses into a single state.

Limitations of Quantum Computing


Quantum computing offers enormous potential for developments and problem-solving in many
industries. However, currently, it has its limitations.
 Decoherence, or decay, can be caused by the slightest disturbance in the qubit environment.
This results in the collapse of computations or errors to them. As noted above, a quantum
computer must be protected from all external interference during the computing stage.
 Error correction during the computing stage hasn't been perfected. That makes computations
potentially unreliable. Since qubits aren't digital bits of data, they can't benefit from
conventional error correction solutions used by classical computers.
 Retrieving computational results can corrupt the data. Developments such as a particular
database search algorithm that ensures that the act of measurement will cause the quantum state
to decohere into the correct answer hold promise.
 Security and quantum cryptography is not yet fully developed.
 A lack of qubits prevents quantum computers from living up to their potential for impactful use.
Researchers have yet to produce more than 128 qubits.
According to global energy leader Iberdrola, "quantum computers must have almost no atmospheric
pressure, an ambient temperature close to absolute zero (-273°C) and insulation from the earth's
magnetic field to prevent the atoms from moving, colliding with each other, or interacting with the
environment."
"In addition, these systems only operate for very short intervals of time, so that the information
becomes damaged and cannot be stored, making it even more difficult to recover the data."
Optical Computing

Computers have revolutionized the world to a point where we cannot imagine our lives without them.
We have entered a digital era where most priority is given to enhancing the currently available
technologies to make them more advanced as well as accessible. There are a lot of questions going
around about which technology will take the throne to be the next big thing. Which technology has the
potential to catapult us into the next generation?

Many would agree that the next generation of computing will be optical
computing, which operates in parallel rather than in series. This sets it apart from electrical systems,
where classical computers face significant challenges, including signal reflection and limited bandwidth,
because they often work in series.

It is widely considered that the future of information processors and problem-solving lies in the
hands of optical computing. We need to be informed about the various technologies that have the
potential to make a huge impact in the coming years.

Why Optical Computing?

The central processing unit's (CPU) ability to retrieve and execute instructions depends on the processor
clock speed of the computer. The processor clock speed of a computer is also called the computation
speed of the computer. The computation speed is a function of the rate of transmission, processing, and
calculation of the input data.
Where quantum computing is a fast-developing technology that applies the principles of quantum physics
to problems too complicated for conventional computers, optical computing is widely referred to as
the future of information processors and problem-solving.
A computer information processor manipulates data to produce comprehensible outputs. The gathering,
recording, assembling, retrieval, and transmission of information are all examples of processing. Optical
computing can also solve very complex network optimization problems that would take centuries for
classical computers.
Optical computing makes use of photons to utilize the wave interference pattern as well as wave
propagation to identify outputs. By this method, optical computing enables immediate, error-free
calculating. As the data is transferred, it gets processed. Processing does not require stopping movement
or information flow, revolutionizing the computer industry.
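The wave-interference idea mentioned above can be illustrated with a few lines of NumPy: two waves of equal amplitude reinforce each other when they are in phase and cancel when they are out of phase. This is only a toy model of the physical effect, not how any particular optical processor encodes data; the frequency and sample counts are arbitrary.

```python
# Toy model of constructive and destructive interference of two waves.
import numpy as np

t = np.linspace(0, 1, 1000)
wave_a = np.sin(2 * np.pi * 5 * t)                    # reference beam (5 Hz, arbitrary)
wave_b_in_phase = np.sin(2 * np.pi * 5 * t)           # second beam, phase difference 0
wave_b_opposite = np.sin(2 * np.pi * 5 * t + np.pi)   # second beam, phase difference pi

print(np.max(np.abs(wave_a + wave_b_in_phase)))   # ~2.0 -> constructive interference
print(np.max(np.abs(wave_a + wave_b_opposite)))   # ~0.0 -> destructive interference
```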

Okay, But What Is Optical Computing?

The term "optical computing" refers to a computing paradigm that uses photons produced by lasers and
diodes for digital computation. It is also commonly referred to as "optoelectronic computing" and
"photonic computing." As we have discussed earlier, optical computing utilizes photons to perform
various digital calculations to compute various tasks.
Photons can be thought of as tiny packets of light. It has been demonstrated that photons provide us with
a larger bandwidth than the electrons used in conventional computing devices. Therefore it is far more
favorable if we are able to harness the power of photons. The main purpose of developing the concept of
optical computing is to advance the future of information processors and the concepts of problem-
solving.
Because optical computing analyses data while it is in motion, very little information is accessible at
rest. This automatically provides higher security for the computer system, as it is much harder to
intercept data for malicious purposes this way.

How Optical Systems Operate

The operating principle of an optical computer is identical to that of a conventional computer, except for
some sections that conduct functional activities in optical mode. Photons are produced by LEDs, lasers,
and a range of other technologies. They can be used to encode data in the same way that electrons can.
With the ultimate goal of developing an optical computer, optical transistor design and implementation
are now in development. Researchers are experimenting with optical transistors of multiple designs.
Dielectric substances with the capability of serving as polarizers are also used to create optical
transistors. While theoretically feasible, optical logic gates present several technical challenges. They
would consist of a single control beam and numerous input beams that together produce an accurate logical output.

What Are The Major Components Used In Optical Computers?


The following are the primary optical components necessary for computation in an optical computer:

1. VCSEL (Vertical Cavity Surface Emitting Micro Laser)


The VCSEL is a surface-emitting semiconductor Micro Laser Diode that produces light in a vertical
direction. It essentially transforms an electrical signal into an optical signal. It is the ideal illustration of
a one-dimensional photonic crystal.

2. Spatial Light Modulators


Spatial light modulators can modulate both the intensity and the phase of an optical beam. Because
they convert data into a laser beam, they are employed in holographic data storage systems.

3. Optical Logical Gates


In essence, an optical logic gate is an optical switch that manages light beams. When the device
transmits light, it is said to be "ON," and when it blocks light, it is said to be "OFF."
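As a purely conceptual sketch of this "ON when light passes, OFF when light is blocked" description, the snippet below models an optical AND gate as a threshold on combined beam intensity. The threshold value, the intensities, and the function name are invented for illustration; real optical gates depend on nonlinear optical materials rather than software.

```python
# Conceptual model: an optical AND gate as a threshold on combined beam intensity.
THRESHOLD = 1.5   # arbitrary intensity threshold separating "ON" from "OFF"

def optical_and(intensity_a: float, intensity_b: float) -> int:
    """Return 1 if the combined beam intensity passes the threshold, else 0."""
    return 1 if intensity_a + intensity_b > THRESHOLD else 0

# Treat intensity 1.0 as a logical 1 (beam present) and 0.0 as a logical 0.
for a, b in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    print(int(a), int(b), "->", optical_and(a, b))
# Output: 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1
```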

4. Smart Pixels
Smart pixels assist optical systems with high-level electronic signal processing.

The Benefits Of Optical Computing

Light has the potential to address various problems with conventional computer systems:
 Heat is generated as a result of the wires' resistance. Without a heat sensor in the microprocessor to
turn it off if it overheats and a fan to cool it down, this heat would become destructive within
milliseconds due to its extreme intensity. As a result, a processor's maximum clock speed is constrained.
However, difficulties like these can be avoided by using light.
 A propagation delay is introduced by the unique inductance and capacitance that each wire and
transistor have. The delays mount up as billions of transistors are stacked, and they present the
chip designers with enormous difficulties. At the clock frequency at which modern
microprocessors operate, the inductive effect is extremely potent. It's challenging to accelerate
the clock speed any further.
 As it dissipates into heat and electromagnetic radiation, electricity is an inefficient use of energy.
Without a repeater, an optical signal can be sent over long distances. The size, weight, and power
requirements of an equivalent electric wire would be substantially greater. The speed and
efficiency of an entirely photonic computer might likely surpass those of an electronic one.

The Drawbacks Of Optical Computing

 The price of parts for traditional computers is inexpensive primarily because they are produced
in factories whose sole purpose is to produce these parts. The cost of optical components is high
because there aren't any producers who focus exclusively on the production of these parts.
 The size of optical components is greater than that of traditional computer components. Optical
parts that are small enough to fit together to make a motherboard have not yet been made by
researchers.
 The computer's miniature components must be produced precisely for the device to function
properly. As previously stated, this has yet to be accomplished. Even the smallest changes can
cause the laser light beams to deviate, which can cause enormous issues. As a result, it is
reasonable to conclude that the manufacturing process is highly pricey.
 The Von Neumann architecture is used to assemble conventional computers. This architecture
serves as the foundation for operating systems and application software. The parallelism of the
optical computing system, however, necessitates a new architecture for its construction.
Operating systems like Microsoft Windows could consequently perform poorly or perhaps cease
to function altogether.

Nano Computing

Nanotechnology is the use of science, engineering, and technology at the nanoscale, or between 1 and
100 nanometers. Richard Feynman, a physicist, is credited with founding nanotechnology.

Nanoscience and nanotechnology, which study and use very small objects, are applicable to all other
scientific disciplines, including chemistry, biology, physics, materials science, and engineering.
Nanotechnology, or nanotech as it is sometimes abbreviated, is the utilization of matter at the atomic,
molecular, and supramolecular scales for commercial applications.

The first and most popular definition of nanotechnology referred to the specific technological objective of
finely controlling atoms and molecules in order to produce macroscale goods. The National
Nanotechnology Initiative went on to describe nanotechnology as the manipulation of matter with at
least one dimension scaled from 1 to 100 nanometers, which is a broader definition of the discipline.

Nanotechnology, as defined by size, encompasses a variety of scientific disciplines, including surface
science, organic chemistry, molecular biology, semiconductor physics, energy storage, engineering, and
molecular engineering.

History of Nanotechnology:

In 1959, renowned physicist Richard Feynman first outlined the ideas that would eventually give rise to
nanotechnology in his lecture There's Plenty of Room at the Bottom.

In this lecture, he described the prospect of synthesis by the direct manipulation of atoms. Norio
Taniguchi coined the phrase "nano-technology" for the first time in 1974, although it was not widely used
at the time.

In his 1986 book Engines of Creation: The Coming Era of Nanotechnology, K. Eric Drexler popularized the
word "nanotechnology" and advocated the idea of a nanoscale "assembler" that would be able to
construct a replica of itself and of other items of arbitrary complexity with atomic control.

What are the Types of Nanotechnology?

Top-down or bottom-up processes and dry or wet working environments are used to categorize the
many forms of nanotechnology:

 Descending (top-down): Mechanisms and structures are miniaturized to the nanometer scale (from
one to 100 nanometers). This is the most common approach, especially in the electronics field.
 Ascending (bottom-up): A nanometric structure is mounted or self-assembled to form a
mechanism larger than the one you started with.
 Dry Nanotechnology: It does not require humidity and is used to manufacture structures made
from coal, silicon, inorganic materials, metals, and semiconductors.
 Wet Nanotechnology: This technology utilizes biological systems that exist in aqueous
environments, including genetic material, membranes, enzymes, and other aspects of the cell.

Nanotechnology Applications:

Various industrial sectors can benefit from nanotechnology and nanomaterials. Nanomaterials and
nanotechnologies are usually found in the following areas:

 Nanotechnology in Electronics: There is a strong likelihood that carbon nanotubes will replace
silicon in microchips, devices, and quantum nanowires, as a smaller, faster, and more efficient
material. Flexible touchscreens can be made with graphene due to its unique properties.
 Nanotechnology in Energy: Solar panels can now be manufactured with a semiconductor
developed by Kyoto University that doubles the amount of electricity they produce from
sunlight. As a result of nanotechnology, wind turbines are stronger, lighter, and more energy-
efficient, and some nano components are thermally insulated, which can lead to increased
efficiency and a decrease in costs.
 Nanotechnology in Biomedicine: Nanomaterials have a number of properties that make them
ideal for diagnosing and treating neurodegenerative diseases and cancer at an early stage. These
agents are able to selectively attack cancer cells without harming healthy ones. Pharmaceutical
products such as sunscreen can also be enhanced with nanoparticles.
 Nanotechnology in Environment: Nanofiltration systems for heavy metals, water purification
through nanobubbles, and air purification with ions are some of the environmentally friendly
applications of nanotechnology. Chemical reactions can also be made more efficient and less
polluting by using nanocatalysts.
 Nanotechnology in Food: Nano biosensors and nanocomposites might be used to detect
pathogens in food or to increase the thermal and mechanical resistance and decrease oxygen
transfer in packaged foods, respectively.
 Nanotechnology in Textile: As a result of nanotechnology, smart fabrics that won't stain or
wrinkle can be produced, as well as lighter, stronger, and more durable materials for sports
equipment or motorcycle helmets.

The following other areas of science also employ nanotechnology:

Surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication,
molecular engineering, etc.

Advantages

 The use of nanotechnology can create materials that are unique as well as stronger, cheaper, and
more durable.
 The technology is used in cleaning up pollution.
 The manufacturing cost of using nanotechnology will be very low.
 Body sculpting is a great advantage of nanotechnology.
 Nanotechnology leads to the mass production of food and consumables.

Disadvantages

The usage of this technology can cause issues related to health and safety, and it can cause severe damage
to human bodies. Mass production using nanotechnology could lead to the loss of employment in the
farming and manufacturing sectors. Access to atomic weapons could also become easier, which would be
highly destructive.

Nano Mission (Mission on Nano Science and Technology):

 The government of India initiated the Nano mission in 2007. This program is an "umbrella
capacity-building program".
 All institutions, businesses, and scientists in the nation will be the focus of the Mission's
programs.
 It will also increase efforts in nanoscience and technology by encouraging fundamental research,
the growth of human resources, the development of research infrastructure, and international
partnerships, among other things.
 The Department of Science and Technology will serve as its focal point, and a renowned
scientist will serve as the council's chair.

Results of the Nano Mission and its Importance:

 India is currently among the top five nations in the world for scientific publications in
nanoscience and technology because of the efforts made possible by the Nano Mission (moving
from 4th to the 3rd position).
 About 5000 research articles and 900 PhDs have been produced as a result of the Nano Mission,
along with some practical goods like eye drops made of nano hydrogel, water filters that remove
fluoride and arsenic, and water filters that remove pesticides, etc.
 Thus, the Nano Mission has aided in the development of a favorable environment in the nation
for the pursuit of cutting-edge basic research as well as the seeding and nurturing of R&D that is
application-oriented and centered on practical technologies and goods.
