
Distributed Computing

By Junaid Muzaffar
• A collection of independent computers that appears to its users as a single coherent system.

• Examples:
  • Distributed object-based systems (CORBA, DCOM)
  • Distributed file systems (NFS), etc.
WHAT ARE DISTRIBUTED SYSTEMS AND DISTRIBUTED COMPUTING?

Computation started out on a single processor; this uniprocessor model can be termed centralized computing.
A distributed system is a collection of independent computers, interconnected via a network, capable of collaborating on a task. Distributed computing is computing performed in a distributed system.

• Distributed computing is widely used thanks to advances in machines and to faster, cheaper networks. In a distributed system the entire network is viewed as one computer: the multiple systems connected to the network appear as a single system to the user. Distributed systems thus hide the complexity of the underlying architecture from the user.
The definition of a distributed system covers two aspects:

• Hardware: the machines linked in a distributed system are autonomous.

• Software: a distributed system gives users the impression that they are dealing with a single system.
Centralized vs. Distributed Computing

[Figure: centralized computing (terminals connected over network links to a mainframe computer) vs. distributed computing (workstations and network hosts cooperating over a network)]
Features of Distributed Systems:

• Communication is hidden from users
• Applications interact in a uniform and consistent way
• High degree of scalability
• A distributed system is functionally equivalent to the systems of which it is composed
• Resource sharing is possible in distributed systems
• Distributed systems act as fault-tolerant systems
• Enhanced performance
The Weaknesses and Strengths of Distributed Computing

• Concurrency
• Distributed systems function in heterogeneous environments, so adaptability is a major issue
• Latency
• Memory considerations: distributed systems work with both local and shared memory
• Synchronization issues
• Applications must degrade gracefully on failure, without affecting other parts of the system

• Since they are widespread, security is a major issue

• Limits imposed on scalability
• They are less transparent
• Knowledge of the dynamic network topology is a must
Differences between centralized and distributed systems
SOME EXAMPLES OF DISTRIBUTED SYSTEMS

• Web Search
• (class task: study how web search works)
• Massively multiplayer online games
• Financial Trading
The distributed application architecture:
• Integrated applications
• Applications can share resources
• A single instance of functionality (service) can be reused.
• Common user interfaces
Why distributed computing?
• Economics: distributed systems allow the pooling of resources, including CPU cycles, data storage, input/output devices, and services.
• Reliability: a distributed system allows replication of resources and/or services, reducing service outages due to failures.
• The Internet has become a universal platform for distributed computing.
Goals of a Distributed System
To support heterogeneous computers and networks and to provide a single-system view, a distributed system is often organized by means of a layer of software called middleware that extends over multiple machines.

[Figure: a distributed system organized as middleware; the middleware layer extends over multiple machines and offers each application the same interface]
Goals of a distributed system:
 a distributed system should
 easily connect users with resources (printers, computers, storage
facilities, data, files, Web pages, ...)
 reasons: economics, to collaborate and exchange information
 be transparent: hide the fact that the resources and processes are
distributed across multiple computers
 be open
 be scalable
Transparency in a Distributed System
a distributed system that is able to present itself to users
and applications as if it were only a single computer system
is said to be transparent
different forms of transparency in a distributed system:

Transparency   Description
Access         Hide differences in data representation (endianness, file naming, ...) and in how a resource is accessed
Location       Hide where a resource is physically located; e.g., where is http://www.prenhall.com/index.html? (naming)
Migration      Hide that a resource may move to another location
Relocation     Hide that a resource may be moved to another location while in use; e.g., mobile users using their wireless laptops
Replication    Hide that a resource is replicated
Concurrency    Hide that a resource may be shared by several competing users; the resource must be left in a consistent state
Failure        Hide the failure and recovery of a resource
 But trying to achieve all distribution transparency may be impossible or may not be a good
idea
 Openness in a Distributed System
 a distributed system should be open
 we need well-defined interfaces
 interoperability
 components of different origin can communicate
 portability
 components work on different platforms
 another goal of an open distributed system is that it should
be flexible and extensible; easy to configure the system out
of different components; easy to add new components,
replace existing ones; easier said than done
 an Open Distributed System is a system that offers services
according to standard rules that describe the syntax and
semantics of those services; e.g., protocols in networks
 standards - a necessity
 should allow competition in non-normative areas
 in distributed systems, such services are often specified through interfaces, often described using an Interface Definition Language (IDL)
 an IDL specifies only syntax: the names of the functions, the types of parameters, return values, possible exceptions, ...
 semantics are given in an informal way, by means of natural language (a minimal sketch follows below)
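A minimal sketch of what such a syntax-only specification might look like, written here as a plain Java interface rather than a real IDL; the service name (PrinterService), its operations and the ServiceException type are illustrative assumptions, not part of any particular standard. Note that the interface fixes only names, parameter types, return types and exceptions; what the operations actually do would still be described informally.

```java
// Hypothetical, IDL-style interface: it fixes only the syntax of the service.
public interface PrinterService {
    // returns a job identifier for the queued document
    int submitJob(String printerName, byte[] document) throws ServiceException;

    // returns the number of jobs currently waiting on the named printer
    int queueLength(String printerName) throws ServiceException;
}

// checked exception declared by the interface; its meaning is documented in prose
class ServiceException extends Exception {
    public ServiceException(String message) { super(message); }
}
```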

 Scalability in Distributed Systems


 a distributed system should be scalable
 size: adding more users and resources to the system
 geographically: users and resources may be far apart
 administratively: should be easy to manage even if it
spans many administrative organizations
 but a scalable system may exhibit performance problems

 scalability problems

Concept                  Example
Centralized services     A single server for all users, mostly for security reasons
Centralized data         A single on-line telephone book
Centralized algorithms   Doing routing based on complete information

(examples of scalability limitations)

 Scaling Techniques
 how to solve scaling problems
 the problem is mainly performance, and arises as a result
of limitations in the capacity of servers and networks (for
geographical scalability)
 three possible solutions: hiding communication latencies,
distribution, and replication

a. Hide Communication Latencies
 try to avoid waiting for responses to remote service requests
 let the requester do other useful work in the meantime
 i.e., construct requesting applications that use only asynchronous communication instead of synchronous communication; when a reply arrives the application is interrupted (see the sketch below)
 good for batch processing and parallel applications, but not for interactive applications
 for interactive applications, move part of the job to the client to reduce communication; e.g., filling in a form and checking the entries
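A minimal sketch of the asynchronous style described above, using Java's CompletableFuture; the remoteLookup method only simulates a remote request with a delay, so its name and timing are assumptions for illustration. The requester issues the call, continues with other work, and handles the reply when it arrives.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncRequestSketch {
    // Stand-in for a remote service request; in a real system this goes over the network.
    static String remoteLookup(String key) {
        try { Thread.sleep(200); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        // Issue the request asynchronously instead of blocking on the reply.
        CompletableFuture<Void> handled =
                CompletableFuture.supplyAsync(() -> remoteLookup("user42"))
                                 .thenAccept(value -> System.out.println("reply: " + value));

        // The requester keeps doing other useful work while the request is in flight.
        System.out.println("doing other work while waiting...");

        handled.join(); // only so this demo does not exit before the reply is handled
    }
}
```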

[Figure: (a) a server checking the correctness of field entries; (b) the client doing the job]
• e.g., checking the completeness of mandatory fields
• shipping code is now supported in Web applications using Java applets and JavaScript

b. Distribution
• e.g., DNS - the Domain Name System
• divide the name space into nonoverlapping zones (a toy sketch follows below)
• for details, see later in Chapter 5 - Naming
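A toy sketch of the idea, assuming made-up zone and server names: the name space is divided into nonoverlapping zones, and a lookup is routed to the server responsible for the most specific matching zone. Real DNS resolution is iterative/recursive and far more involved; this only illustrates the partitioning.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ZoneLookupSketch {
    // Each zone (a suffix of the name space) is handled by its own name server.
    // Entries are ordered from most specific to least specific zone.
    static final Map<String, String> ZONES = new LinkedHashMap<>();
    static {
        ZONES.put("cs.example.edu", "ns.cs.example.edu");    // department zone
        ZONES.put("example.edu",    "ns.example.edu");       // university zone
        ZONES.put("edu",            "ns.tld-servers.net");   // top-level zone
    }

    // Route a query to the server of the most specific zone that matches the name.
    static String responsibleServer(String hostName) {
        for (Map.Entry<String, String> zone : ZONES.entrySet()) {
            if (hostName.endsWith(zone.getKey())) {
                return zone.getValue();
            }
        }
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(responsibleServer("mail.cs.example.edu")); // ns.cs.example.edu
        System.out.println(responsibleServer("www.example.edu"));     // ns.example.edu
    }
}
```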

[Figure: an example of dividing the DNS name space into zones]


c. Replication
 replicate components across a distributed system to increase availability and for load balancing, leading to better performance
 replication is decided by the owner of a resource
 caching (a special form of replication) also reduces communication latency; it is decided by the user (a minimal cache sketch follows below)
 but caching and replication may lead to consistency problems (see Chapter 7 - Consistency and Replication)
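A minimal read-through cache sketch in Java, assuming a hypothetical fetchFromOrigin call standing in for the expensive remote access; it shows why caching cuts communication latency and why it immediately raises the consistency question mentioned above.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReadThroughCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    // Stand-in for the remote access we are trying to avoid repeating.
    private String fetchFromOrigin(String key) {
        return "origin-value-for-" + key;
    }

    // Serve from the local copy when possible; otherwise fetch once and keep a replica.
    public String get(String key) {
        return cache.computeIfAbsent(key, this::fetchFromOrigin);
    }

    // Without invalidation (or leases, versioning, ...) cached copies can grow stale:
    // this is exactly the consistency problem that caching and replication introduce.
    public void invalidate(String key) {
        cache.remove(key);
    }
}
```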

Pitfalls when Developing Distributed Systems
False assumptions made by first-time developers (a sketch of one common countermeasure follows the list):
 The network is reliable
 The network is secure
 The network is homogeneous
 The topology does not change
 Latency is zero
 Bandwidth is infinite
 Transport cost is zero
 There is one administrator
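As an illustration of designing against the first fallacy (the network is reliable), here is a minimal retry sketch; the withRetries helper and its names are assumptions, not a library API, and a real system would add backoff, timeouts, and care about idempotence before retrying.

```java
import java.util.function.Supplier;

public class RetrySketch {
    // Retry a remote operation a bounded number of times; the network can drop or
    // delay messages, so a single failed attempt proves little. Assumes maxAttempts >= 1
    // and that the operation is safe to repeat (idempotent).
    static <T> T withRetries(Supplier<T> remoteCall, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return remoteCall.get();
            } catch (RuntimeException e) {
                last = e; // a real system would log and back off here
            }
        }
        throw last; // give up after maxAttempts failures
    }
}
```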

1.3 Types of Distributed Systems
 Three types: distributed computing systems, distributed
information systems, and distributed embedded systems
1. Distributed Computing Systems
 Used for high-performance computing tasks
 two types: cluster computing and grid computing
 Cluster Computing
 a collection of similar workstations or PCs
(homogeneous), closely connected by means of a
high-speed LAN
 each node runs the same operating system
 used for parallel programming in which a single
compute intensive program is run in parallel on
multiple machines

[Figure: an example of a cluster computing system]

 Grid Computing
 “Resource sharing and coordinated problem solving
in dynamic, multi-institutional virtual organizations”
(I. Foster)
 high degree of heterogeneity: no assumptions are
made concerning hardware, operating systems,
networks, administrative domains, security policies,
etc.
2. Distributed Information Systems
 problem: many networked applications need to interoperate
 at the lowest level: wrap a number of requests into a single larger request and have it executed as a distributed transaction
 how to let applications communicate directly with each other, i.e., Enterprise Application Integration (EAI)

 Transaction Processing Systems
 Consider database applications
 special primitives are required to program transactions,
supplied either by the underlying distributed system or
by the language runtime system
 exact list of primitives depends on the type of application

Primitive            Description
BEGIN_TRANSACTION    Mark the start of a transaction
END_TRANSACTION      Terminate the transaction and try to commit
ABORT_TRANSACTION    Kill the transaction and restore the old values
READ                 Read data from a file, a table, or otherwise
WRITE                Write data to a file, a table, or otherwise
 The Transaction Model
 the model for transactions comes from the world of
business
 a supplier and a retailer negotiate on
 price
 delivery date
 quality
 etc.
 until the deal is concluded they can continue
negotiating or one of them can terminate
 but once they have reached an agreement they are
bound by law to carry out their part of the deal
 transactions between processes are similar to this scenario

 e.g., assume the following banking operation
 withdraw an amount x from account 1
 deposit the amount x to account 2
 what happens if there is a problem after the first activity
is carried out?
 group the two operations into one transaction; either both are carried out or neither
 we need a way to roll back when a transaction is not completed (a minimal sketch follows)
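A minimal sketch of grouping the two operations into one transaction, using plain JDBC; the accounts table and its balance/id columns are assumptions for illustration. Either both updates commit together, or the rollback restores the old values and the withdrawal is undone.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferSketch {
    // Withdraw from one account and deposit into another as a single transaction.
    static void transfer(Connection conn, long from, long to, double amount) throws SQLException {
        conn.setAutoCommit(false);                 // mark the start of the transaction
        try (PreparedStatement withdraw =
                     conn.prepareStatement("UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement deposit =
                     conn.prepareStatement("UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            withdraw.setDouble(1, amount);
            withdraw.setLong(2, from);
            withdraw.executeUpdate();

            deposit.setDouble(1, amount);
            deposit.setLong(2, to);
            deposit.executeUpdate();

            conn.commit();                         // both changes become permanent together
        } catch (SQLException e) {
            conn.rollback();                       // restore the old values
            throw e;
        } finally {
            conn.setAutoCommit(true);
        }
    }
}
```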

 e.g. reserving a seat from White Plains to Malindi through
JFK and Nairobi airports

(a) transaction to reserve three flights commits:

BEGIN_TRANSACTION
    reserve WP → JFK;
    reserve JFK → Nairobi;
    reserve Nairobi → Malindi;
END_TRANSACTION

(b) transaction aborts when the third flight is unavailable:

BEGIN_TRANSACTION
    reserve WP → JFK;
    reserve JFK → Nairobi;
    reserve Nairobi → Malindi;   (full)
ABORT_TRANSACTION

 properties of transactions, often referred to as ACID
1. Atomic: to the outside world, the transaction happens
indivisibly; a transaction either happens completely or
not at all; intermediate states are not seen by other
processes
2. Consistent: the transaction does not violate system
invariants; e.g., in an internal transfer in a bank, the
amount of money in the bank must be the same as it
was before the transfer (the law of conservation of
money); this may be violated for a brief period of time,
but it must not be visible to other processes
3. Isolated or Serializable: concurrent transactions do not
interfere with each other; if two or more transactions
are running at the same time, the final result must look
as though all transactions run sequentially in some
order
4. Durable: once a transaction commits, the changes are
permanent; see later in Chapter 8
 Classification of Transactions
 a transaction could be flat, nested or distributed
 Flat Transaction
 consists of a series of operations that satisfy the ACID
properties
 simple and widely used but with some limitations
 do not allow partial results to be committed or aborted
 i.e., atomicity is also partly a weakness
 in our airline reservation example, we may want to
accept the first two reservations and find an
alternative one for the last
 some transactions may take too much time

 Nested Transaction
 constructed from a number of subtransactions; it is
logically decomposed into a hierarchy of
subtransactions
 the top-level transaction forks off children that run in
parallel, on different machines; to gain performance or
for programming simplicity
 each may also execute one or more subtransactions
 permanence (durability) applies only to the top-level transaction; commits by children must be undone if the top-level transaction aborts
 Distributed Transaction
 a flat transaction that operates on data that are
distributed across multiple machines
 problem: separate algorithms are needed to handle the
locking of data and committing the entire transaction;
see later in Chapter 8 for distributed commit

[Figure: (a) a nested transaction; (b) a distributed transaction]
 Enterprise Application Integration
 how to integrate applications independently of their databases
 transaction systems rely on request/reply
 how can applications communicate directly with each other?
[Figure: middleware as a communication facilitator in enterprise application integration]
 there are different communication models
 RPC (Remote Procedure Call)
 RMI (Remote Method Invocation)
 MOM (Message-Oriented Communication)
 see later in Chapter 4 (a minimal RMI-style sketch follows)
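A minimal Java RMI-flavoured sketch of the remote-call style; the StockQuoteService name, its method, and the registry name "quotes" are assumptions for illustration, not from the slides. The remote interface plays the role of the agreed contract; the commented lines show how an implementation would be exported, registered, and then called by a client.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface is the contract visible to clients; every remote
// method must be declared to throw RemoteException.
public interface StockQuoteService extends Remote {
    double latestPrice(String symbol) throws RemoteException;
}

// Server side (sketch): export an implementation and register it under a name.
//   StockQuoteService impl = new StockQuoteServiceImpl();
//   StockQuoteService stub =
//           (StockQuoteService) java.rmi.server.UnicastRemoteObject.exportObject(impl, 0);
//   java.rmi.registry.LocateRegistry.createRegistry(1099).rebind("quotes", stub);
//
// Client side (sketch): look up the stub by name and call it like a local object.
//   StockQuoteService quotes = (StockQuoteService)
//           java.rmi.registry.LocateRegistry.getRegistry("server.example.org").lookup("quotes");
//   double price = quotes.latestPrice("ACME");
```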
3. Distributed Pervasive Systems
 the distributed systems discussed so far are
characterized by their stability; fixed nodes having high-
quality connection to a network
 there are also mobile and embedded computing devices
with wireless connections

 three requirements for pervasive applications
 embrace contextual changes: a device is aware that
its environment may change all the time
 encourage ad hoc composition: devices are used in
different ways by different users
 recognize sharing as the default: devices join a
system to access or provide information
 examples of pervasive systems
 Home Systems
 Electronic Health Care Systems
 Sensor Networks
 read pages 27 - 30

CHALLENGES IN DISTRIBUTED SYSTEMS

Heterogeneity
Heterogeneity means the diversity of a distributed system in terms of hardware, software, platform, etc.
Modern distributed systems are likely to operate with different:
• Hardware devices: computers, tablets, mobile phones, embedded devices, etc.
• Operating systems: MS Windows, Linux, Mac OS, Unix, etc.
• Networks: local networks, the Internet, wireless networks, satellite links, etc.
• Programming languages: Java, C/C++, Python, PHP, etc.
• Roles of software developers, designers, and system managers
• Middleware: middleware is a software layer that provides a programming abstraction as well as masking the heterogeneity of the underlying networks, hardware, operating systems and programming languages. E.g.: CORBA, RMI.
• Heterogeneity in mobile code: mobile code refers to program code that can be transferred from one computer to another and run at the destination.
• E.g.: Java applets.
Cont..
• Heterogeneity refers to the ability of the system to operate across a variety of different hardware and software components. This is achieved through the implementation of middleware in the software layer.
• In a distributed system, heterogeneity is almost unavoidable, as different components may require different implementation technologies.
Heterogeneity
• Applies to all of the following:
• networks
• Internet protocols mask the differences between networks
• computer hardware
• e.g. data types such as integers can be represented differently (see the marshalling sketch after this list)
• operating systems
• e.g. the API to IP differs from one OS to another
• programming languages
• data structures (arrays, records) can be represented differently
• implementations by different developers
• they need agreed standards so as to be able to interwork
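A minimal sketch of masking one such hardware difference: integers are marshalled into an agreed external representation (big-endian, "network byte order") before being sent, so machines with different native byte orders interpret them identically. The class and method names are illustrative assumptions.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MarshalSketch {
    // Put the integer on the wire in the agreed order, whatever the sender's native order is.
    static byte[] marshalInt(int value) {
        return ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putInt(value).array();
    }

    // Read it back using the same agreed order on the receiver.
    static int unmarshalInt(byte[] wire) {
        return ByteBuffer.wrap(wire).order(ByteOrder.BIG_ENDIAN).getInt();
    }

    public static void main(String[] args) {
        byte[] onTheWire = marshalInt(42);
        System.out.println(unmarshalInt(onTheWire)); // prints 42 on any platform
    }
}
```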

CDK Ch. 1.4


Middleware
• A software layer that
• masks the heterogeneity of systems
• provides a convenient programming abstraction
• provides protocols for general-purpose services used by more specific applications, e.g.
• authentication protocols
• authorization protocols
• distributed commit protocols
• distributed locking protocols
• high-level communication protocols
• remote procedure calls (RPC)
• remote method invocation (RMI)

Middleware
• General structure of a distributed system as middleware.

Middleware and Openness

• In an open middleware-based distributed system, the protocols used by each middleware layer should be the same, as well as the interfaces they offer to applications.

TvS 1.25
Middleware programming models
• Remote Calls
• Remote Procedure Calls (RPC)
• distributed objects and Remote Method Invocation (RMI)
• e.g. Java RMI
• Common Object Request Broker Architecture (CORBA)
• cross-language RMI
• Other programming models
• remote event notification
• remote SQL access
• distributed transaction processing
CDK Ch 1
Challenges in Distributed systems
