CC Unit1 Notes Compressed
Syllabus
Computing Paradigms: High performance computing, parallel computing,
Distributed computing, cluster computing, Grid computing, cloud computing,
Bio computing, Mobile computing, Quantum computing, optical computing,
Nano computing
Introduction
The term paradigm conveys that there is a set of practices to be followed to accomplish a task. In the
domain of computing, there are many different standard practices being followed, based on inventions
and technological advancements. In this chapter, we look into the various computing paradigms,
namely high performance computing, cluster computing, grid computing, cloud computing, bio-
computing, mobile computing, quantum computing, optical computing, nano-computing, and network
computing. As computing systems become faster and more capable, it is worth noting the features of
modern computing so that we can relate them to one another.
High Performance Computing
High performance computing (HPC) is the ability to process data and perform complex calculations at
high speeds. To put it into perspective, a laptop or desktop with a 3 GHz processor can perform around
3 billion calculations per second. While that is much faster than any human can achieve, it pales in
comparison to HPC solutions that can perform quadrillions of calculations per second.
It paves the way for new innovations in science, technology, business and academia.
It improves processing speeds, which can be critical for many kinds of computing
operations, applications and workloads.
It helps lay the foundation for a reliable, fast IT infrastructure that can store, process and
analyze massive amounts of data for various applications.
For example: high performance computing reduces the need to test devices such as driverless cars in the real world.
HPC systems can run different types of workloads. Two popular types are parallel and tightly
coupled workloads.
In parallel workloads, computational problems are divided into small, independent tasks that can
run in parallel at very high speeds. Often, these workloads don't communicate with each other.
Examples of such workloads include risk simulations, logistics simulations, contextual search
and molecular modeling.
When workloads are divided into smaller tasks and communicate continuously with each other
as they perform their processing, they are said to be tightly coupled. This usually happens with
workloads across different nodes in a cluster. Some common examples of tightly coupled
workloads are automobile collision emulations, geospatial simulations, weather forecast
modeling and traffic management.
Deployed on premises, at the edge, or in the cloud, HPC solutions are used for a variety of purposes
across multiple industries. Examples include:
Research labs. HPC is used to help scientists find sources of renewable energy, understand
the evolution of our universe, predict and track storms, and create new materials.
Media and entertainment. HPC is used to edit feature films, render mind-blowing special
effects, and stream live events around the world.
Oil and gas. HPC is used to more accurately identify where to drill for new wells and to
help boost production from existing wells.
Artificial intelligence and machine learning. HPC is used to detect credit card fraud,
provide self-guided technical support, teach self-driving vehicles, and improve cancer
screening techniques.
Financial services. HPC is used to track real-time stock trends and automate trading.
Manufacturing. To design, manufacture and test new products using simulations.
Healthcare. To research and develop new vaccines, drugs and treatments for diseases;
improve screening techniques; and to make more accurate patient diagnoses.
Aerospace. For personnel training and to create critical simulations for airplane testing.
Meteorology. To predict and track storms and other unusual weather patterns.
A supercomputer is one of the best-known examples of HPC, in which one large computer is made up of
many computers and processors that work together to achieve parallel processing and high performance.
Benefits of HPC
HPC helps overcome numerous computational barriers that conventional PCs and processors typically
face. The benefits of HPC are many and include the following.
High speeds
HPC is mainly about lightning-fast processing, which means HPC systems can perform massive
amounts of calculations very quickly. In comparison, regular processors and computing systems would
take longer -- days, weeks or even months -- to perform these same calculations.
HPC systems typically use the latest CPUs and GPUs, as well as low-latency networking
fabrics and block storage devices, to improve processing speeds and computing performance.
Lower cost
Because an HPC system can process faster, applications can run faster and yield answers quickly, saving
time or money. Moreover, many such systems are available in "pay as you go" modes and can scale up
or down as needed, further improving their cost-effectiveness.
Parallel Computing
Traditionally, software has been written for serial computation, to be run on a single computer
having a single Central Processing Unit (CPU):
A problem is broken into a discrete series of instructions.
Instructions are executed one after another.
Only one instruction may execute at any moment in time.
Parallel computing is a form of computation in which many calculations are carried out
simultaneously.
In the simplest sense, it uses multiple compute resources to solve a computational problem
using multiple CPUs:
A problem is broken into discrete parts that can be solved concurrently.
Each part is further broken down into a series of instructions.
Instructions from each part execute simultaneously on different CPUs (see the sketch below).
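As a small illustration of this idea, here is a hedged Python sketch that splits a summation problem into independent parts and runs them on multiple CPUs with the standard multiprocessing module; the function, data size, and chunk size are chosen only for demonstration.

    # Parallel computing sketch: break one problem into discrete parts,
    # solve the parts concurrently on multiple CPUs, then combine results.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker handles one independent part of the problem.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Break the problem into discrete parts.
        chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]
        with Pool(processes=4) as pool:
            # Instructions from each part execute simultaneously on different CPUs.
            results = pool.map(partial_sum, chunks)
        print("total =", sum(results))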
Main reasons:
The compute resources can include a single computer with multiple processors.
Save time and money.
Solve larger problems, e.g., web search engines/databases processing millions of transactions per
second.
Use of non-local resources.
Limits to serial computing - both physical and practical reasons pose significant constraints to
simply building ever faster serial computers.
Transmission speeds - the speed of a serial computer is directly dependent upon how fast data
can move through hardware. Absolute limits are
the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9
cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
Limits to miniaturization - processor technology is allowing an increasing number of transistors
to be placed on a chip. However, even with molecular or atomic-level components, a limit will
be reached on how small components can be.
Economic limitations - it is increasingly expensive to make a single processor faster. Using a
larger number of moderately fast commodity processors to achieve the same (or better)
performance is less expensive.
MIMD (Multiple Instruction, Multiple Data) is currently the most common type of parallel computer. Most modern computers fall into this
category.
Multiple Instruction: every processor may be executing a different instruction stream.
Multiple Data: every processor may be working with a different data stream.
Execution can be synchronous or asynchronous, deterministic or non-deterministic.
Examples: most current supercomputers, networked parallel computer "grids" and multi-
processor SMP computers, including some types of PCs.
Parallel Computer Memory Architectures:
Shared Memory
Distributed Memory
Hybrid Distributed-Shared Memory
Shared Memory
Shared memory parallel computers vary widely, but generally have in common the ability for all
processors to access all memory as global address space.
Multiple processors can operate independently but share the same memory resources.
Changes in a memory location effected by one processor are visible to all other processors.
Shared memory machines can be divided into two main classes based upon memory access times:
UMA and NUMA.
Uniform Memory Access (UMA): Most commonly represented by Symmetric Multiprocessor (SMP) machines, with identical processors
having equal access and access times to memory. Sometimes called CC-UMA (Cache Coherent UMA). Cache coherent means that if one
processor updates a location in shared memory, all the other processors know about the update.
Cache coherency is accomplished at the hardware level.
Non-Uniform Memory Access (NUMA): Often made by physically linking two or more SMPs. One SMP can directly access the memory of
another SMP. Not all processors have equal access time to all memories; memory access across the
link is slower. If cache coherency is maintained, it may also be called CC-NUMA (Cache
Coherent NUMA).
Distributed Memory
Like shared memory systems, distributed memory systems vary widely but share a common
characteristic. Distributed memory systems require a communication network to connect
inter-processor memory.
Processors have their own local memory. Memory addresses in one processor do not map to
another processor, so there is no concept of global address space across all processors.
Because each processor has its own local memory, it operates independently. Changes it makes
to its local memory have no effect on the memory of other processors. Hence, the concept of
cache coherency does not apply.
When a processor needs access to data in another processor, it is usually the task of the
programmer to explicitly define how and when data is communicated. Synchronization between
tasks is likewise the programmer's responsibility.
The network "fabric" used for data transfer varies widely, though it can can be as simple as
Ethernet.
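Since distributed-memory processors cannot see each other's memory, data must be moved explicitly. The hedged sketch below imitates this with two Python processes exchanging a message over a Pipe; in real HPC systems this role is played by a message-passing library such as MPI, and the data here is made up for illustration.

    # Distributed-memory sketch: each process has private memory, so data
    # is exchanged only through explicit messages (here, a Pipe).
    from multiprocessing import Process, Pipe

    def worker(conn):
        local_data = [1, 2, 3, 4]           # private to this process
        conn.send(sum(local_data))          # explicit communication step
        conn.close()

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end,))
        p.start()
        print("received from worker:", parent_end.recv())
        p.join()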
Hybrid Distributed-Shared Memory
The largest and fastest computers in the world today employ both shared and distributed memory
architectures.
The shared memory component is usually a cache coherent SMP machine. Processors on a given
SMP can address that machine's memory as global.
The distributed memory component is the networking of multiple SMPs. SMPs know only about
their own memory - not the memory on another SMP. Therefore, network communications are
required to move data from one SMP to another.
Current trends seem to indicate that this type of memory architecture will continue to prevail and
increase at the high end of computing for the foreseeable future.
Today, commercial applications are providing an equal or greater driving force in the development of
faster computers. These applications require the processing of large amounts of data in sophisticated
ways. The main benefits of parallel computing are described below.
Efficiency:
A computer that uses parallel programming can make better use of its resources to process and solve
problems. Most modern computers have hardware that includes multiple cores, threads or processors
that allow them to run many processes at once and maximize their computing potential. When
computers use all their resources to solve a problem or process information, they are more efficient at
performing tasks.
Speed
Another benefit of parallel computing is its ability to solve complex problems. Parallel programs can
divide complex problems down into smaller tasks and process these individual tasks simultaneously. By
separating larger computational problems into smaller tasks and processing them at the same time,
parallel processing allows computers to run faster.
Cost Effectiveness
Additionally, the hardware architecture that allows for parallel programming is more cost-effective than
systems that only allow for serial processing. Although a parallel programming hardware system may
require more parts than a serial processing system, it is more efficient at performing tasks. This
means that it produces more results in less time than serial programs and holds more financial value
over time.
Distributed Computing
Early computing was performed on a single processor; this is called centralized computing.
Distributed computing is the method of making multiple computers work together to solve a common
problem. It makes a computer network appear as a powerful single computer that provides large-scale
resources to deal with complex challenges.
Even though the software components may be spread out across multiple computers in multiple
locations, they're run as one system. This is done to improve efficiency and performance. The systems
on different networked computers communicate and coordinate by sending messages back and forth to
achieve a defined task. Distributed computing can increase performance, resilience and scalability,
making it a common computing model in database and application design.
For example, distributed computing can encrypt large volumes of data; solve physics and chemical
equations with many variables; and render high-quality, three-dimensional video animation. Distributed
systems, distributed programming, and distributed algorithms are some other terms that all refer to
distributed computing.
Examples of distributed systems include:
The Internet
An intranet: a network of computers and workstations within an organization, segregated from
the Internet via a protective device (a firewall).
In distributed computing, you design applications that can run on several computers instead of on just
one computer. You achieve this by designing the software so that different computers perform different
functions and communicate to develop the final solution. There are four main types of distributed
architecture.
Application servers
Application servers act as the middle tier for communication. They contain the application logic or the
core functions that you design the distributed system for.
Database servers
Database servers act as the third tier to store and manage the data. They are responsible for data retrieval
and data consistency.
By dividing server responsibility, three-tier distributed systems reduce communication bottlenecks and
improve distributed computing performance.
Distributed computing works by computers passing messages to each other within the distributed
systems architecture. Communication protocols or rules create a dependency between the components of
the distributed system. This interdependence is called coupling, and there are two main types of
coupling.
Loose coupling
In loose coupling, components are weakly connected so that changes to one component do not affect the
other. For example, client and server computers can be loosely coupled by time. Messages from the
client are added to a server queue, and the client can continue to perform other functions until the server
responds to its message.
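A minimal sketch of this queue-based loose coupling is given below, assuming a single in-process queue stands in for the server's message queue; the client enqueues requests and carries on, while the server drains the queue at its own pace. The names and delays are illustrative only.

    # Loose coupling sketch: the client drops messages on a queue and
    # continues working; the server consumes them when it is ready.
    import queue
    import threading
    import time

    request_queue = queue.Queue()

    def server():
        while True:
            msg = request_queue.get()       # server picks up work when free
            if msg is None:
                break
            time.sleep(0.1)                 # simulate processing time
            print("server handled:", msg)

    t = threading.Thread(target=server)
    t.start()
    for i in range(3):
        request_queue.put(f"request-{i}")   # client is not blocked here
        print("client queued request", i, "and keeps working")
    request_queue.put(None)                 # signal the server to stop
    t.join()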
Tight coupling
High-performing distributed systems often use tight coupling. Fast local area networks typically connect
several computers, which creates a cluster. In cluster computing, each computer is set to perform the
same task. Central control systems, called clustering middleware, control and schedule the tasks and
coordinate communication between the different computers.
Benefits
Scalability
Distributed systems can grow with your workload and requirements. You can add new nodes, that is,
more computing devices, to the distributed computing network when they are needed.
Availability
Your distributed computing system will not crash if one of the computers goes down. The design shows
fault tolerance because it can continue to operate even if individual computers fail.
Consistency
Computers in a distributed system share information and duplicate data between them, but the system
automatically manages data consistency across all the different computers. Thus, you get the benefit of
fault tolerance without compromising data consistency.
Transparency
Distributed computing systems provide logical separation between the user and the physical devices.
You can interact with the system as if it is a single computer without worrying about the setup and
configuration of individual machines. You can have different hardware, middleware, software, and
operating systems that work together to make your system function smoothly.
Efficiency
Distributed systems offer faster performance with optimum resource use of the underlying hardware. As
a result, you can manage any workload without worrying about system failure due to volume spikes or
underuse of expensive hardware.
Distributed computing is everywhere today. Mobile and web applications are examples of distributed
computing because several machines work together in the backend for the application to give you the
correct information. However, when distributed systems are scaled up, they can solve more complex
challenges. Let’s explore some ways in which different industries use high-performing distributed
applications.
Healthcare and life sciences
Healthcare and life sciences use distributed computing to model and simulate complex life science data.
Image analysis, medical drug research, and gene structure analysis all become faster with distributed
systems. These are some examples:
Accelerate structure-based drug design by visualizing molecular models in three dimensions.
Reduce genomic data processing times to get early insights into cancer, cystic fibrosis, and
Alzheimer’s.
Develop intelligent systems that help doctors diagnose patients by processing a large volume of
complex images like MRIs, X-rays, and CT scans.
Engineering research
Engineers can simulate complex physics and mechanics concepts on distributed systems. They use this
research to improve product design, build complex structures, and design faster vehicles. Here are some
examples:
Computational fluid dynamics research studies the behavior of liquids and implements those
concepts in aircraft design and car racing.
Computer-aided engineering requires compute-intensive simulation tools to test new plant
engineering, electronics, and consumer goods.
Financial services
Financial services firms use distributed systems to perform high-speed economic simulations that assess
portfolio risks, predict market movements, and support financial decision-making. They can create web
applications that use the power of distributed systems to do the following:
Deliver low-cost, personalized premiums
Use distributed databases to securely support a very high volume of financial transactions.
Authenticate users and protect customers from fraud
Cluster Computing
Cluster computing is a collection of tightly or loosely connected computers that work together so that
they act as a single entity. The connected computers execute operations all together thus creating the
idea of a single system. The clusters are generally connected through fast local area networks (LANs)
Types of Cluster Computing :
1. High performance (HP) clusters :
These clusters use groups of computers, often including supercomputer-class nodes, to solve advanced computational problems.
2. Load-balancing clusters :
Incoming requests are distributed for resources among several nodes running similar programs or
having similar content. This prevents any single node from receiving a disproportionate amount of
work. This type of distribution is generally used in a web-hosting environment (a round-robin sketch follows this list).
3. High Availability (HA) Clusters :
HA clusters are designed to maintain redundant nodes that can act as backup systems in case any
failure occurs. Consistent computing services like business activities, complicated databases, customer
services like e-websites and network file distribution are provided. They are designed to give
uninterrupted data availability to the customers.
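The hedged sketch below shows the basic idea of round-robin distribution across cluster nodes, as referenced in the load-balancing item above; the node names and request list are invented for illustration, and real load balancers also track node health and current load.

    # Load-balancing sketch: incoming requests are spread across nodes
    # in round-robin order so no single node is overloaded.
    from itertools import cycle

    nodes = ["node-1", "node-2", "node-3"]      # hypothetical cluster nodes
    next_node = cycle(nodes)

    requests = [f"request-{i}" for i in range(7)]
    for req in requests:
        target = next(next_node)                # pick the next node in rotation
        print(f"{req} -> {target}")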
Cluster Computing Architecture :
It is designed with an array of interconnected individual computers and the computer
systems operating collectively as a single standalone system.
It is a group of workstations or computers working together as a single, integrated
computing resource connected via high speed interconnects.
A node – either a single-processor or multiprocessor system having memory, input and output
functions, and an operating system.
Two or more nodes are connected on a single line or every node might be connected
individually through a LAN connection.
Advantages of Cluster Computing :
1. High Performance :
The systems offer better and enhanced performance than that of mainframe computer networks.
2. Easy to manage :
Cluster Computing is manageable and easy to implement.
3. Scalable :
Resources can be added to the clusters accordingly.
4. Expandability :
Computer clusters can be expanded easily by adding additional computers to the network. Cluster
computing is capable of combining several additional resources or the networks to the existing
computer system.
5. Availability :
The other nodes remain active when one node fails and function as a proxy for the failed
node. This ensures enhanced availability.
6. Flexibility :
It can be upgraded to the superior specification or additional nodes can be added.
Applications of Cluster Computing :
Various complex computational problems can be solved.
It can be used in the applications of aerodynamics, astrophysics and in data mining.
Weather forecasting.
Image Rendering.
Various e-commerce applications.
Earthquake Simulation.
Petroleum reservoir simulation.
Grid Computing
Grid computing is a computing infrastructure that combines computer resources spread over different
geographical locations to work together on a common task.
For example, meteorologists use grid computing for weather modeling. Weather modeling is a
computation-intensive problem that requires complex data management and analysis. Processing
massive amounts of weather data on a single computer is slow and time consuming. That’s why
meteorologists run the analysis over geographically dispersed grid computing infrastructure and
combine the results.
Grid computing is more popular due to the following reasons:
• Its ability to make use of unused computing power, and thus, it is a cost-effective solution (reducing
investments, only recurring costs)
• As a way to solve problems in line with any HPC-based application
• Enables heterogeneous resources of computers to work cooperatively and collaboratively to solve a
scientific problem.
Researchers compare the grid to the way electricity is distributed in municipal areas for the
common man.
Grid nodes and middleware work together to perform the grid computing task. In grid operations, the
three main types of grid nodes perform three different roles.
A user node is a computer that requests resources shared by other computers in grid computing. When
the user node requires additional resources, the request goes through the middleware and is delivered to
other nodes on the grid computing system.
A control node administers the network and manages the allocation of the grid computing resources.
The middleware runs on the control node. When the user node requests a resource, the middleware
checks for available resources and assigns the task to a specific provider node.
A provider node is a computer that shares its resources for grid computing. When provider machines
receive resource requests, they perform subtasks for the user nodes, such as forecasting stock prices for
different markets. At the end of the process, the middleware collects and compiles all the results to
obtain a global forecast.
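The hedged sketch below mimics this three-role split in plain Python: a user node submits a job, the middleware on the control node splits it into subtasks and hands them to provider nodes, then compiles the results. All names, the job structure, and the "forecast" function are invented for illustration.

    # Grid computing sketch: user node -> middleware (control node) -> provider nodes.
    def provider_node(name, market):
        # Hypothetical subtask: "forecast" one market's stock index.
        return name, market, len(market) * 100   # dummy forecast value

    def middleware(job, providers):
        # Control node: assign one subtask (market) to each available provider node.
        results = []
        for provider, market in zip(providers, job["markets"]):
            results.append(provider_node(provider, market))
        # Compile all partial results into a global answer for the user node.
        return {market: value for _, market, value in results}

    # User node requests a resource-hungry job from the grid.
    job = {"markets": ["NYSE", "LSE", "NSE"]}
    print(middleware(job, ["provider-1", "provider-2", "provider-3"]))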
What are the types of grid computing?
Computational grid
A computational grid consists of high-performance computers. It allows researchers to use the combined
computing power of the computers. Researchers use computational grid computing to perform resource-
intensive tasks, such as mathematical simulations.
Scavenging grid
While similar to computational grids, CPU scavenging grids have many regular computers. The
term scavenging describes the process of searching for available computing resources in a network of
regular computers. While other network users access the computers for non-grid–related tasks, the grid
software uses these nodes when they are free. The scavenging grid is also known as CPU scavenging or
cycle scavenging.
Data grid
A data grid is a grid computing network that connects to multiple computers to provide large data
storage capacity. You can access the stored data as if on your local machine without having to worry
about the physical location of your data on the grid.
Cloud Computing
Cloud computing is a virtualization-based technology that allows us to create, configure, and customize
applications via an internet connection. The cloud technology includes a development platform, hard
disk, software application, and database.The term cloud refers to a network or the internet. It is a
technology that uses remote servers on the internet to store, manage, and access data online rather than
local drives. The data can be anything such as files, images, documents, audio, video, and more.
Using cloud computing, we can perform operations such as developing new applications, storing and backing up data, hosting websites, delivering software on demand, analyzing data, and streaming audio and video.
The term "Cloud Computing" refers to services provided by the cloud that are responsible for delivering
computing services such as servers, storage, databases, networking, software, analytics, intelligence, and
more over the Cloud (Internet).
Cloud computing applies a virtualized platform with elastic resources on demand by provisioning
hardware, software, and data sets dynamically
Cloud Computing provides an alternative to the on-premises data center. With an on-premises data
center, we must manage everything, such as purchasing and installing hardware, virtualization,
installing the operating system and any other required applications, setting up the network, configuring
the firewall, and setting up storage for data. After doing all the set-up, we become responsible for
maintaining it through its entire lifecycle.
However, if we choose Cloud Computing, a cloud vendor is responsible for the hardware purchase and
maintenance. They also provide a wide variety of software and platform as a service. We can take any
required services on rent. The cloud computing services are charged based on usage.
The cloud environment provides an easily accessible online portal that makes it handy for the user to
manage the compute, storage, network, and application resources. Some of the major cloud service providers
are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
(1)Infrastructure as a Service (IaaS)
IaaS delivers virtualized computing resources such as servers, storage, and networking over the internet.
Characteristics of IaaS include on-demand, highly scalable resources that are accessed over the internet and billed on a pay-per-use basis.
Companies providing IaaS are DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft
Azure, and Google Compute Engine (GCE).
Example: AWS provides full control of virtualized hardware, memory, and storage. Servers, firewalls,
and routers are provided, and a network topology can be configured by the tenant.
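As a hedged illustration of IaaS, the sketch below uses the boto3 library to request a virtual server from AWS EC2. It assumes AWS credentials are already configured on the machine, and the region, AMI ID, and instance type are placeholders rather than values from these notes.

    # IaaS sketch: rent virtualized hardware (a virtual machine) on demand.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # placeholder machine image ID
        InstanceType="t2.micro",     # small pay-as-you-go instance type
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Provisioned instance:", instance_id)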
(2)Platform as a Service (PaaS)
• A PaaS cloud computing platform is created for the programmer to develop, test, run, and
manage applications.
Characteristics of PaaS include ready-made development tools, runtimes, and middleware hosted by the provider, so developers manage only their applications and data.
Companies offering PaaS are AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google
App Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.
An example of PaaS: the World Wide Web (WWW) can be considered the operating system
for all our Internet-based applications. However, one has to understand that we will always need a local
operating system on our computer to access web-based applications.
The basic meaning of the term platform is that it is the support on which applications run or give results
to the users. For example, Microsoft Windows is a platform. But, a platform does not have to be an
operating system. Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends (applications) such as Office
2.0, more and more applications that were originally available on desktop computers are now being
converted into web–cloud applications. Word processors like Buzzword and office suites
like Google Docs are now available in the cloud alongside their desktop counterparts. All these kinds of trends
in providing applications via the cloud are turning cloud computing into a platform, or making it act as a
platform.
(3)Software as a Service (SaaS)
• SaaS is also known as "on-demand software". It is a software delivery model in which applications are
hosted by a cloud service provider. Users can access these applications with the help of an internet
connection and a web browser.
Characteristics of SaaS include centrally hosted applications that are accessed through a web browser and managed entirely by the provider.
Companies providing SaaS are BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco
WebEx, Slack, and GoToMeeting.
An example of SaaS: the simplest thing that any computer does is allow us to store and retrieve
information. We can store our family photographs, our favorite songs, or even save movies on it, which
is also the most basic service offered by cloud computing. Let us look at the example of a popular
application called Flickr to illustrate the meaning of this section.
While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to
store those images. In many ways, it is superior to storing the images on your computer:
1. First, Flickr allows us to easily access our images no matter where we are or what type of device we
are using. While we might upload the photos of our vacation from our home computer, later, we can
easily access them from our laptop at the office.
2. Second, Flickr lets us share the images. There is no need to burn them to a CD or save them on a flash
drive. We can just send someone our Flickr address to share these photos or images.
3. Third, Flickr provides data security. By uploading the images to Flickr, we are providing ourselves
with data security by creating a backup on the web. And, while it is always best to keep a local copy—
either on a computer, a CD, or a flash drive—the truth is that we are far more likely to lose the images
that we store locally than Flickr is to lose our images.
Types of Cloud
1. Public cloud
2. Private cloud
3. Community cloud
4. Hybrid cloud
(1)Public Cloud
Public clouds are managed by third parties which provide cloud services over the internet to
the public, these services are available as pay-as-you-go billing models.
They offer solutions for minimizing IT infrastructure costs and become a good option for
handling peak loads on the local infrastructure.
Public clouds are the go-to option for small enterprises, which can start their businesses
without large upfront investments by completely relying on public infrastructure for their IT
needs.
The fundamental characteristic of public clouds is multitenancy. A public cloud is meant to
serve multiple users, not a single customer. A user requires a virtual computing environment
that is separated, and most likely isolated, from other users.
Advantages of using a Public cloud are:
1. High Scalability
2. Cost Reduction
3. Reliability and flexibility
4. Disaster Recovery
Disadvantages of using a Public cloud are:
1. Loss of control over data
2. Data security and privacy
3. Limited Visibility
4. Unpredictable cost
(2)Private cloud
Private clouds are distributed systems that work on private infrastructure and provide the users with
dynamic provisioning of computing resources. Instead of a pay-as-you-go model, in private clouds
there could be other schemes that manage the usage of the cloud and proportionally bill the
different departments or sections of an enterprise. Private cloud providers are HP Data Centers,
Ubuntu, Elastic-Private cloud, Microsoft, etc.
Advantages of using a private cloud are:
1. Customer information protection: In the private cloud, security concerns are fewer since
customer data and other sensitive information do not flow out of private infrastructure.
2. Infrastructure ensuring SLAs: Private cloud provides specific operations such as
appropriate clustering, data replication, system monitoring, and maintenance, disaster
recovery, and other uptime services.
3. Compliance with standard procedures and operations: Specific procedures have to be
put in place when deploying and executing applications according to third-party
compliance standards. This is not possible in the case of the public cloud.
Disadvantages of using a private cloud are:
1. The restricted area of operations: Private cloud is accessible within a particular area. So
the area of accessibility is restricted.
2. Expertise required: The organization itself manages and operates the private cloud, hence skilled
people are required to manage and operate the cloud services.
(3)Community cloud
Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. But sharing
responsibilities among the organizations is difficult.
In the community cloud, the infrastructure is shared between organizations that have shared concerns
or tasks. An organization or a third party may manage the cloud.
Sectors that commonly use community clouds include the following:
1. Media industry: Media companies are looking for quick, simple, low-cost ways for increasing the
efficiency of content generation. Most media productions involve an extended ecosystem of partners.
In particular, the creation of digital content is the outcome of a collaborative process that includes the
movement of large data, massive compute-intensive rendering tasks, and complex workflow
executions.
2. Healthcare industry: In the healthcare industry, community clouds are used to share information
and knowledge on a global level while keeping sensitive data in the private infrastructure.
3. Energy and core industry: In these sectors, the community cloud is used to cluster a set of
solutions which collectively address the management, deployment, and orchestration of services and
operations.
4. Scientific research: In this sector, organizations with common interests in science share a large
distributed infrastructure for scientific computing.
(4)Hybrid cloud
A hybrid cloud combines public and private clouds, allowing data and applications to move between the two environments.
Advantages of using a hybrid cloud are:
1) Cost: Available at a cheaper cost than other clouds because it is formed using a distributed system.
2) Speed: It is efficient and fast at a lower cost, and it reduces the latency of the data transfer process.
3) Security: The most important thing is security. A hybrid cloud is safe and secure because it
works on a distributed system network.
Advantages of Cloud Computing :
1. Cost Savings
2. Security
3. Flexibility
4. Mobility
5. Insight
6. Increased Collaboration
7. Quality Control
8. Disaster Recovery
9. Loss Prevention
10. Automatic Software Updates
11. Competitive Edge
12. Sustainability
Disadvantages of Cloud Computing :
1. Internet Connection
Cloud based storage is dependent on having an internet connection. If you are on a slow network
you may have issues accessing your storage. In the event you find yourself somewhere without
internet, you won't be able to access your files.
2. Costs
There are additional costs for uploading and downloading files from the cloud. These can
quickly add up if you are trying to access lots of files often.
3. Hard Drives
Cloud storage is supposed to eliminate our dependency on hard drives right? Well some business
cloud storage providers require physical hard drives as well.
4. Support
Support for cloud storage isn't the best, especially if you are using a free version of a cloud
provider. Many providers refer you to a knowledge base or FAQs.
5. Privacy
When you use a cloud provider, your data is no longer on your physical storage. So who is
responsible for making sure that data is secure? That's a gray area that is still being figured out.
Bio Computing
The growing needs of mankind demand rapid development. However, the rapid advancement of computer technology will lose
its momentum when the silicon chip reaches the limits of its capacity and miniaturization, and there remain complex
problems which today's supercomputers are unable to solve in a stipulated period of time.
What could be a remedy to this concern?
Biological computers are computers which use synthesized biological components to store and
manipulate data, analogous to processes in the human body. The result is a small yet fast computer that
operates with great accuracy. The main biological component used in a biological computer is DNA.
DNA stands for deoxyribonucleic acid. It is the hereditary material found in almost all living organisms,
located inside the nucleus of a cell, and it helps in the long-term storage of information. Information in DNA is
stored as a code made of four chemical bases (A, T, G, and C). The order and sequence of these bases determine
the kind of information stored.
The DNA bases A, T, G, and C play the same role that bits play in a classical computer, but here they
perform many calculations in parallel on multiple strands. For example, to compute A = B + C + D + E and
A = B - C - D - E, the addition is performed on one set of strands while the subtraction is carried out on
another set of strands at the same time.
DNA computers are small, fast, and highly efficient computers with the following
properties: dense data storage, massively parallel computation, and extraordinary energy efficiency.
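To make the "bases as bits" idea concrete, here is a hedged Python sketch that encodes numbers using two bits per DNA base (A=00, C=01, G=10, T=11) and evaluates the two example expressions on separate worker processes, loosely imitating parallel strands; the encoding scheme and strand values are purely illustrative.

    # DNA computing sketch: bases A, C, G, T act as 2-bit symbols,
    # and independent "strands" evaluate expressions in parallel.
    from multiprocessing import Pool

    BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

    def decode(strand):
        # Turn a DNA strand back into the integer it encodes.
        return int("".join(BASE_BITS[b] for b in strand), 2)

    def evaluate(task):
        op, strands = task
        values = [decode(s) for s in strands]            # B, C, D, E
        if op == "add":
            return values[0] + values[1] + values[2] + values[3]
        return values[0] - values[1] - values[2] - values[3]

    if __name__ == "__main__":
        strands = ["AG", "AC", "AC", "AA"]   # B=2, C=1, D=1, E=0 (illustrative)
        with Pool(2) as pool:
            # One strand set performs the addition, the other the subtraction.
            print(pool.map(evaluate, [("add", strands), ("sub", strands)]))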
Applications:
Pattern recognition
Cryptography
Evaluating gene
Medical applications: developing treatments for diseases such as cancer
Mobile Computing
Mobile Computing is a technology that provides an environment that enables users to transmit data from
one device to another device without the use of any physical link or cables.
In other words, you can say that mobile computing allows transmission of data, voice and video via a
computer or any other wireless-enabled device without being connected to a fixed physical link. In this
technology, data transmission is done wirelessly with the help of wireless devices such as mobiles,
laptops etc.
This is only because of Mobile Computing technology that you can access and transmit data from any
remote locations without being present there physically. Mobile computing technology provides a vast
coverage diameter for communication. It is one of the fastest and most reliable sectors of the computing
technology field.
The concept of Mobile Computing can be divided into three parts:
o Mobile Hardware
o Mobile Software
o Mobile Communication
(1)Mobile Hardware
Mobile hardware consists of mobile devices or device components that can be used to receive or access
the service of mobility. Examples of mobile hardware can be smartphones, laptops, portable PCs, tablet
PCs, Personal Digital Assistants, etc.
These devices are inbuilt with a receptor medium that can send and receive signals. These devices are
capable of operating in full-duplex. It means they can send and receive signals at the same time. They
don't have to wait until one device has finished communicating for the other device to initiate
communications.
(2)Mobile Software
Mobile software is a program that runs on mobile hardware. It is designed to deal capably with the
characteristics and requirements of mobile applications. It is the operating system for the appliance of
mobile devices; in other words, you can say it is the heart of the mobile system. It is an essential
component that operates the mobile device.
(3)Mobile Communication
• FDM (Frequency Division Multiplexing) − Here each user is assigned a different frequency
from the complete spectrum. All the frequencies can then simultaneously travel on the data
channel.
• TDM (Time Division Multiplexing) − A single radio frequency is divided into multiple time slots
and each slot is assigned to a different user, so multiple users can be supported simultaneously (a small sketch of slot assignment follows this list).
• CDMA (Code Division Multiple Access) − Here several users share the same frequency spectrum
simultaneously. They are differentiated by assigning unique codes to them. The receiver has the
unique key to identify the individual calls.
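A minimal sketch of the TDM idea, referenced in the list above, with invented user names and slot counts: the single channel's time is chopped into slots, and each slot is handed to a different user in turn.

    # TDM sketch: one radio channel, time divided into slots,
    # each slot assigned to a different user in rotation.
    users = ["user-A", "user-B", "user-C"]    # hypothetical users

    for slot in range(6):
        owner = users[slot % len(users)]      # round-robin slot assignment
        print(f"time slot {slot} -> {owner}")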
Mobile communication refers to an infrastructure that ensures seamless and reliable communication
among wireless devices. This framework ensures the consistency and reliability of communication
between wireless devices. The mobile communication framework consists of elements
such as protocols, services, bandwidth, and portals necessary to facilitate and support the stated services.
These elements are responsible for delivering a smooth communication process.
Fixed and Wired: In Fixed and Wired configuration, the devices are fixed at a position, and they are
connected through a physical link to communicate with other devices.
For Example, Desktop Computer.
Fixed and Wireless: In Fixed and Wireless configuration, the devices are fixed at a position, and they
are connected through a wireless link to communicate with other devices.
For Example, Communication Towers, WiFi router
Mobile and Wired: In Mobile and Wired configuration, some devices are wired, and some are mobile.
Together they communicate with other devices.
For Example, Laptops.
Mobile and Wireless: In Mobile and Wireless configuration, the devices can communicate with each
other irrespective of their position. They can also connect to any network without the use of any wired
device.
For Example, WiFi Dongle.
CDMA
• CDMA stands for Code Division Multiple Access. It was first used by the British military during
World War II. After the war its use spread to civilian areas due to high service quality.
• As each user gets the entire spectrum all the time, voice quality is very high. Also, it is
automatically encrypted and hence provides high security against signal interception and
eavesdropping.
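The hedged sketch below shows the core CDMA trick with two users and short orthogonal codes: each user's bit is multiplied by a unique code, the signals share the channel by simply being added together, and the receiver recovers an individual user's bit by correlating with that user's code. The code length and values are illustrative, not those of any real system.

    # CDMA sketch: users share the whole spectrum at once and are separated
    # by unique orthogonal codes (here, simple +1/-1 chip sequences).
    code_a = [1, 1, 1, 1]        # unique code for user A
    code_b = [1, -1, 1, -1]      # unique code for user B (orthogonal to A)

    def spread(bit, code):
        # Represent bit 1 as +1 and bit 0 as -1, then multiply by the code.
        symbol = 1 if bit == 1 else -1
        return [symbol * chip for chip in code]

    def despread(signal, code):
        # Correlate the combined signal with one user's code to recover its bit.
        correlation = sum(s * c for s, c in zip(signal, code))
        return 1 if correlation > 0 else 0

    # Both users transmit at the same time over the same frequencies.
    channel = [a + b for a, b in zip(spread(1, code_a), spread(0, code_b))]

    print("user A sent:", despread(channel, code_a))   # recovers 1
    print("user B sent:", despread(channel, code_b))   # recovers 0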
WLL
• WLL stands for Wireless in Local Loop. It is a wireless local telephone service that can be
provided in homes or offices. The subscribers connect to their local exchange instead of the
central exchange wirelessly. Using wireless link eliminates last mile or first mile construction of
network connection, thereby reducing cost and set up time. As data is transferred over very short
range, it is more secure than wired networks.
• WLL system consists of user handsets and a base station. The base station is connected to the
central exchange as well as an antenna. The antenna transmits to and receives calls from users
through terrestrial microwave links. Each base station can support multiple handsets depending
on its capacity.
GPRS
• GPRS stands for General Packet Radio Services. It is a packet based wireless communication
technology that charges users based on the volume of data they send rather than the time
duration for which they are using the service. This is possible because GPRS sends data over the
network in packets and its throughput depends on network traffic. As traffic increases, service
quality may go down due to congestion, hence it is logical to charge the users as per data volume
transmitted.
• GPRS is the mobile communication protocol used by second (2G) and third generation (3G) of
mobile telephony. It pledges a speed of 56 kbps to 114 kbps, however the actual speed may vary
depending on network load.
Applications of Mobile Computing
Following is a list of some significant fields in which mobile computing is generally applied:
o Web or Internet access.
o Global Positioning System (GPS).
o Emergency services.
o Entertainment services.
o Educational services.
Quantum Computing
What Is Quantum Computing?
Quantum computing is an area of computer science that uses the principles of quantum theory.
Quantum theory explains the behaviour of energy and material on the atomic and subatomic levels.
Quantum computing uses subatomic particles, such as electrons or photons. Quantum bits, or qubits,
allow these particles to exist in more than one state (i.e., 1 and 0) at the same time.
Theoretically, linked qubits can "exploit the interference between their wave-like quantum states to
perform calculations that might otherwise take millions of years."1
Classical computers today employ a stream of electrical impulses (1 and 0) in a binary manner to
encode information in bits. This restricts their processing ability, compared to quantum computing.
Understanding Quantum Computing
The field of quantum computing emerged in the 1980s. It was discovered that certain computational
problems could be tackled more efficiently with quantum algorithms than with their classical
counterparts.
Quantum computing has the capability to sift through huge numbers of possibilities and extract
potential solutions to complex problems and challenges. Where classical computers store information
as bits with either 0s or 1s, quantum computers use qubits. Qubits carry information in a quantum state
that engages 0 and 1 in a multidimensional way.
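A hedged numerical sketch of a single qubit using numpy: applying a Hadamard gate to the state |0> puts it into an equal superposition of 0 and 1, and measurement probabilities come from squaring the amplitudes. This only simulates the underlying linear algebra on a classical machine; it is not quantum hardware, and the gate choice is just for illustration.

    # Qubit sketch: a qubit is a 2-component state vector; a Hadamard gate
    # puts |0> into an equal superposition of |0> and |1>.
    import numpy as np

    ket0 = np.array([1.0, 0.0])                   # state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0                              # superposition state
    probabilities = np.abs(state) ** 2            # Born rule: |amplitude|^2

    print("amplitudes:", state)                   # [0.707..., 0.707...]
    print("P(measure 0), P(measure 1):", probabilities)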
Such massive computing potential and the projected market size for its use have attracted the attention
of some of the most prominent companies. These include IBM, Microsoft, Google, D-Wave Systems,
Alibaba, Nokia, Intel, Airbus, HP, Toshiba, Mitsubishi, SK Telecom, NEC, Raytheon, Lockheed
Martin, Rigetti, Biogen, Volkswagen, and Amgen.
Optical Computing
Computers have revolutionized the world to a point where we cannot imagine our lives without them.
We have entered a digital era where most priority is given to enhancing the currently available
technologies to make them more advanced as well as accessible. There are a lot of questions going
around about which technology will take the throne to be the next big thing. Which technology has the
potential to catapult us into the next generation?
Many would agree that the next generation of computing will be optical
computing, which operates in parallel rather than in series. This sets it apart from electrical systems,
where classic computers face significant challenges, including signal reflection and bandwidth limitations, since they
often work in series.
It is widely considered that the future of information processors and problem-solving is in the
hands of optical computing. We need to be informed about the various technologies that have the
potential to make a huge impact in the coming years.
The central processing unit's (CPU) ability to retrieve and execute instructions depends on the processor
clock speed of the computer. The processor clock speed of a computer is also called the computation
speed of the computer. The computation speed is a function of the rate of transmission, processing, and
calculation of the input data.
A fast-developing technology called quantum computing uses the principles of quantum physics to solve
issues that are too complicated for conventional computers. Optical Computing is widely referred to as
the future of information processors and problem-solving.
A computer information processor manipulates data to produce comprehensible outputs. The gathering,
recording, assembling, retrieval, and transmission of information are all examples of processing. Optical
computing can also solve very complex network optimization problems that would take centuries for
classical computers.
Optical computing makes use of photons to utilize the wave interference pattern as well as wave
propagation to identify outputs. By this method, optical computing enables immediate, error-free
calculating. As the data is transferred, it gets processed. Processing does not require stopping movement
or information flow, revolutionizing the computer industry.
The term "optical computing" refers to a computing paradigm that uses photons produced by lasers and
diodes for digital computation. It is also commonly referred to as "optoelectronic computing" and
"photonic computing." As we have discussed earlier, optical computing utilizes photons to perform
various digital calculations to compute various tasks.
Photons can be thought of as tiny packets of light. It has been demonstrated that photons provide us with
a larger bandwidth than the electrons used in conventional computing devices. Therefore it is far more
favorable if we are able to harness the power of photons. The main purpose of developing the concept of
optical computing is to advance the future of information processors and the concepts of problem-
solving.
Here, very limited information is accessible because optical computing analyses data as it is moving.
This automatically results in providing higher security to the computer system as it is much harder to
intercept data for the wrong purposes this way.
The operating principle of an optical computer is identical to that of a conventional computer, except for
some sections that conduct functional activities in optical mode. Photons are produced by LEDs, lasers,
and a range of other technologies. They can be used to encode data in the same way that electrons can.
With the ultimate goal of developing an optical computer, optical transistor design and implementation
are now in development. Researchers are experimenting with optical transistors of multiple designs.
Dielectric substances with the capability of serving as polarizers are also used to create optical
transistors. While theoretically feasible, optical logic gates present several technical challenges. They
would consist of a single control beam and numerous other beams that together produce an accurate logical output.
4. Smart Pixels
Smart pixels assist optical systems in performing high-level electronic signal processing.
Light has the potential to address various problems with conventional computer systems:
Heat is generated as a result of the wires' resistance. Without a heat sensor in a microprocessor to
turn it off if it overheats and a fan to cool it down, this heat would destroy the chip in milliseconds due
to its extreme intensity. As a result, a processor's maximum clock speed is constrained.
However, difficulties like these can be avoided by using light.
A propagation delay is introduced by the unique inductance and capacitance that each wire and
transistor have. The delays mount up as billions of transistors are stacked, and they present the
chip designers with enormous difficulties. At the clock frequency at which modern
microprocessors operate, the inductive effect is extremely potent. It's challenging to accelerate
the clock speed any further.
As it dissipates into heat and electromagnetic radiation, electricity is an inefficient use of energy.
Without a repeater, an optical signal can be sent over long distances. The size, weight, and power
requirements of an equivalent electric wire would be substantially greater. The speed and
efficiency of an entirely photonic computer might likely surpass those of an electronic one.
The parts for traditional computers are inexpensive, primarily because they are produced
in factories whose sole purpose is to produce these parts. The cost of optical components is high
because there aren't any producers who focus exclusively on the production of these parts.
The size of optical components is greater than that of traditional computer components. Optical
parts that are small enough to fit together to make a motherboard have not yet been made by
researchers.
The computer's miniature components must be produced precisely for the device to function
properly. As previously stated, this has yet to be accomplished. Even the smallest changes can
cause the laser light beams to deviate, which can cause enormous issues. As a result, it is
reasonable to conclude that the manufacturing process is highly pricey.
The Von Neumann architecture is used to assemble conventional computers. This architecture
serves as the foundation for operating systems and application software. The parallelism of the
optical computing system, however, necessitates a new architecture for its construction.
Operating systems like Microsoft Windows could consequently perform poorly or perhaps cease
to function altogether.
Nano Computing
Nanotechnology is the use of science, engineering, and technology at the nanoscale, or between 1 and
100 nanometers. Richard Feynman, a physicist, is credited with founding nanotechnology.
Nanoscience and nanotechnology, which study and use very small objects, are applicable to all other
scientific disciplines, including chemistry, biology, physics, materials science, and engineering.
Nanotechnology, or nanotech as it is sometimes abbreviated, is the utilization of matter at the atomic,
molecular, and supramolecular scales for commercial applications.
The first and most popular definition of nanotechnology spoke of the specific technological goal of
precisely controlling atoms and molecules in order to produce macroscale goods. The National
Nanotechnology Initiative went on to describe nanotechnology as the manipulation of matter with at
least one dimension sized from 1 to 100 nanometers, which is a broader definition of the discipline.
History of Nanotechnology:
In 1959, renowned physicist Richard Feynman first outlined the ideas that would eventually give rise to
nanotechnology in his lecture There's Plenty of Room at the Bottom.
In this discussion, he described the prospect of synthesis by the direct manipulation of atoms. Norio
Taniguchi coined the phrase "nano-technology" for the first time in 1974, albeit it was not widely used
at the time.
In his 1986 book Engines of Creation: The Coming Era of Nanotechnology, K. Eric Drexler coined the
word "nanotechnology" and advocated the idea of a nanoscale "assembler" that would be able to
construct a replica of itself and of other items of arbitrary complexity with atomic control.
Top-down or bottom-up processes and dry or wet working environments are used to categorize the
many forms of nanotechnology:
Descending (top-down): Mechanisms and structures are miniaturized to the nanometer scale
(from 1 to 100 nanometers). This is the most frequent approach, especially in the electronics field.
Ascending (bottom-up): By mounting or self-assembling a nanometric structure, you form a
larger mechanism than what you started with.
Dry Nanotechnology: It does not require humidity to manufacture
structures made from coal, silicon, inorganic materials, metals, and semiconductors.
Wet Nanotechnology: This technology utilizes biological systems that exist in aqueous
environments, including genetic material, membranes, enzymes, and other aspects of the cell.
Nanotechnology Applications:
Various industrial sectors can benefit from nanotechnology and nanomaterials. The following areas are
usually where nanomaterials and nanotechnologies are found:
Nanotechnology in Electronics: There is a strong likelihood that carbon nanotubes will replace
silicon for microchips, devices, and quantum nanowires in a material that is smaller, faster, and
more efficient. Flexible touchscreens can be made with graphene due to their unique properties.
Nanotechnology in Energy: Solar panels can now be manufactured with a semiconductor
developed by Kyoto University that doubles the amount of electricity they produce from
sunlight. As a result of nanotechnology, wind turbines are stronger, lighter, and more energy-
efficient, and some nano components are thermally insulated, which can lead to increased
efficiency and a decrease in costs.
Nanotechnology in Biomedicine: Nanomaterials have a number of properties that make them
ideal for diagnosing and treating neurodegenerative diseases and cancer at an early stage. These
agents are able to selectively attack cancer cells without harming healthy ones. Pharmaceutical
products such as sunscreen can also be enhanced with nanoparticles.
Nanotechnology in Environment: Nanofiltration systems for heavy metals, water purification
through nanobubbles, and air purification with ions are some of the environmentally friendly
applications of nanotechnology. Chemical reactions can also be made more efficient and less
polluting by using nanocatalysts.
Nanotechnology in Food: Nano biosensors and nanocomposites might be used to detect
pathogens in food or to increase the thermal and mechanical resistance and decrease oxygen
transfer in packaged foods, respectively.
Nanotechnology in Textile: As a result of nanotechnology, smart fabrics that won't stain or
wrinkle can be produced, as well as lighter, stronger, and more durable materials for sports
equipment or motorcycle helmets.
Advantages
The use of nanotechnology can create materials that are unique, stronger, cheaper, and more
durable.
Cleaning of pollution utilizes such technology.
The manufacturing cost of using nanotechnology will be very low.
Body sculpturing is a great advantage of nanotechnology.
Nanotechnology leads to the mass production of food and consumables.
Disadvantages
The usage of this technology can cause issues related to health and safety. It can cause severe damage to
human bodies. The massive production can lead to the loss of employment in the sectors of farming and
manufacturing. The accessing of atomic weapons becomes easier and at the same time can be
destructive as well.
The government of India initiated the Nano mission in 2007. This program is an "umbrella
capacity-building program".
All institutions, businesses, and scientists in the nation will be the focus of the Mission's
programs.
Encouraging fundamental research, the growth of human resources, the development of research
infrastructure, and international partnerships, among other things, will also increase efforts in
nanoscience and technology.
The Department of Science and Technology will serve as its focal point, and a renowned
scientist will serve as the council's chair.
India is currently among the top five nations in the world for scientific publications in
nanoscience and technology because of the efforts made possible by the Nano Mission (moving
from 4th to the 3rd position).
About 5000 research articles and 900 PhDs have been produced as a result of the Nano Mission,
along with some practical goods like eye drops made of nano hydrogel, water filters that remove
fluoride and arsenic, and water filters that remove pesticides, etc.
Thus, the Nano Mission has aided in the development of a favorable environment in the nation
for the pursuit of cutting-edge basic research as well as the seeding and nurturing of R&D that is
application-oriented and centered on practical technologies and goods.