Distributed Operating Systems
Jyotsna Singh
Department of Computer Sc.& Engineering
Why do we develop distributed systems?
availability of powerful yet cheap microprocessors (PCs, workstations)
continuing advances in communication technology
Examples:
Network of workstations
Distributed manufacturing system (e.g., automated assembly line)
Network of branch office computers
Economics: a collection of microprocessors offers a better price/performance ratio than mainframes; a cost-effective way to increase computing power.
Speed: a distributed system may have more total computing power than a mainframe. Ex. 10,000 CPU chips, each running at 50 MIPS, give 500,000 MIPS in total; a single 500,000-MIPS processor is not possible, since it would require a 0.002 nsec instruction cycle (see the arithmetic below). Enhanced performance through load distribution.
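Spelling out the arithmetic behind the speed example:

\[
10{,}000 \times 50\ \text{MIPS} = 5 \times 10^{11}\ \text{instructions/sec}
\;\Rightarrow\;
\text{cycle time} = \frac{1}{5 \times 10^{11}}\ \text{sec} = 0.002\ \text{nsec}
\]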
Inherent distribution: Some applications are inherently distributed. Ex. a supermarket chain.
Reliability: if one machine crashes, the system as a whole can still survive; higher availability and improved reliability.
Incremental growth: computing power can be added in small increments; modular expandability.
Another driving force: the existence of a large number of personal computers and the need for people to collaborate and share information.
Data sharing: allow many users to access a common database
Resource sharing: expensive peripherals like color printers
Communication: enhance human-to-human communication, e.g., email, chat
Flexibility: spread the workload over the available machines
Software: difficult to develop software for distributed systems
Network: saturation, lossy transmissions
Security: easy access also applies to secret data
HARDWARE CONCEPTS
Taxonomy
Tightly coupled systems (multiprocessors)
o shared memory
o intermachine delay short, data rate high
Loosely coupled systems (multicomputers)
o private memory
o intermachine delay long, data rate low
Each class can be further divided by interconnect: bus or switched.
SWITCHED MULTIPROCESSORS
for connecting a large number (say over 64) of processors
crossbar switch: n² switch points
omega network: 2x2 switches; for n CPUs and n memories, log₂ n switching stages, each with n/2 switches, so (n log₂ n)/2 switches in total
delay problem (worked out below): e.g., n = 1024 gives 10 switching stages from CPU to memory, so 20 switching stages round trip; at 100 MIPS (10 nsec instruction execution time) this requires a 0.5 nsec switching time
NUMA (Non-Uniform Memory Access): placement of program and data matters
building a large, tightly-coupled, shared-memory multiprocessor is possible, but it is difficult and expensive
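As a worked check of the switch counts and the delay requirement:

\[
\text{crossbar: } n^2 = 1024^2 \approx 10^6 \text{ crosspoints}, \qquad
\text{omega: } \frac{n \log_2 n}{2} = \frac{1024 \times 10}{2} = 5120 \text{ switches}
\]

\[
100\ \text{MIPS} \Rightarrow 10\ \text{nsec per instruction}; \quad
20 \text{ stages round trip} \Rightarrow \frac{10\ \text{nsec}}{20} = 0.5\ \text{nsec per switch}
\]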
MULTICOMPUTERS
Bus-Based Multicomputers
easy to build
communication volume much smaller
relatively slow-speed LAN (10-100 Mbps, compared to 300 Mbps and up for a backplane bus)
Switched Multicomputers
SOFTWARE CONCEPTS
Software more important for users
Three types:
1. Network Operating Systems
2. (True) Distributed Systems
3. Multiprocessor Timesharing Systems
Network Operating Systems
loosely-coupled software on loosely-coupled hardware
a network of workstations connected by LAN
each machine has a high degree of autonomy
File servers: client and server model
Clients mount directories on file servers
Best-known network OS: Sun's Network File System (NFS)
A few system-wide requirements: format and meaning of all the messages exchanged
NFS
NFS Architecture
NFS Protocols
one protocol for handling mounting
one protocol for read/write: no open/close calls; the server is stateless (sketched below)
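A minimal sketch of what "stateless" means for read/write (hypothetical request layout and names, not Sun's actual RPC protocol): each request carries the file identity, offset, and count, so the server holds no open-file state between calls.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical request: everything the server needs to satisfy a read
     * arrives in the request itself -- no open/close, no per-client state. */
    struct read_req {
        char   path[256];  /* stands in for an NFS file handle */
        off_t  offset;     /* explicit offset: the server remembers nothing */
        size_t count;
    };

    /* Serve one read; the file is opened and closed per request, so a
     * server crash and reboot loses no client state. */
    static ssize_t handle_read(const struct read_req *req, char *buf)
    {
        int fd = open(req->path, O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = pread(fd, buf, req->count, req->offset);
        close(fd);
        return n;
    }

    int main(void)
    {
        struct read_req req = { "/etc/hostname", 0, 64 };
        char buf[65] = { 0 };
        ssize_t n = handle_read(&req, buf);
        if (n >= 0)
            printf("read %zd bytes: %s\n", n, buf);
        return 0;
    }

Because the offset comes from the client on every call, a retried or reordered request is harmless; this is why the protocol can avoid open/close entirely.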
NFS Implementation
Multiprocessor Timesharing Systems
tightly-coupled software on tightly-coupled hardware
Examples: high-performance servers
shared memory
single run queue (see the sketch below)
traditional file system as on a single-processor system: central block cache
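A minimal sketch of the single-run-queue idea (hypothetical names; a fixed ring buffer stands in for the real scheduler structures): all CPUs share one ready queue behind one lock, so whichever CPU goes idle next picks up the next ready process, which balances load automatically.

    #include <pthread.h>
    #include <stdio.h>

    #define QSIZE 64
    static int queue[QSIZE];               /* the one system-wide run queue */
    static int head = 0, tail = 0;
    static pthread_mutex_t runq_lock = PTHREAD_MUTEX_INITIALIZER;

    static void enqueue(int pid)           /* make a process ready */
    {
        pthread_mutex_lock(&runq_lock);
        queue[tail++ % QSIZE] = pid;
        pthread_mutex_unlock(&runq_lock);
    }

    static int dequeue(void)               /* idle CPU takes next process; -1 if none */
    {
        pthread_mutex_lock(&runq_lock);
        int pid = (head == tail) ? -1 : queue[head++ % QSIZE];
        pthread_mutex_unlock(&runq_lock);
        return pid;
    }

    int main(void)
    {
        enqueue(101); enqueue(102);
        printf("CPU 0 runs pid %d\n", dequeue());  /* either CPU could take it */
        printf("CPU 1 runs pid %d\n", dequeue());
        return 0;
    }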
1. TRANSPARENCY
How to achieve the single-system image, i.e., how to make a collection of computers appear as a single computer?
Hiding all the distribution from the users as well as the application programs can be achieved at two levels:
1) hide the distribution from users
2) at a lower level, make the system look transparent to programs
Both 1) and 2) require uniform interfaces, such as access to files and communication.
TYPES OF TRANSPARENCY
Location Transparency: users cannot tell where hardware and software resources such as CPUs, printers, files, and databases are located.
Migration Transparency: resources must be free to move from one location to another without their names changing. E.g., /usr/lee, /central/usr/lee
Replication Transparency: the OS can make additional copies of files and resources without users noticing.
Concurrency Transparency: users are not aware of the existence of other users. The system must allow multiple users to concurrently access the same resource, using lock and unlock for mutual exclusion (see the sketch below).
Parallelism Transparency: automatic use of parallelism without having to program it explicitly. The holy grail for distributed and parallel system designers.
Users do not always want complete transparency: e.g., they may want to know that a fancy printer is 1000 miles away.
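A minimal sketch of lock/unlock for concurrency transparency, using POSIX threads as a single-machine stand-in for a distributed lock (build with cc -pthread): without the mutex, the two "users" would interleave and corrupt each other's updates.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared resource and the lock that serializes access to it. */
    static long balance = 0;
    static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each "user" updates the shared resource; the lock ensures no user
     * observes or clobbers another user's partial update. */
    static void *user(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&balance_lock);   /* lock ...            */
            balance += 1;                        /* ... critical section */
            pthread_mutex_unlock(&balance_lock); /* ... unlock           */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, user, NULL);
        pthread_create(&b, NULL, user, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("balance = %ld (expected 200000)\n", balance);
        return 0;
    }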
2. FLEXIBILITY
Make it easier to change.
Monolithic kernel: system calls are trapped and executed by the kernel; all system calls are served by the kernel, e.g., UNIX.
Microkernel: provides minimal services (Fig 9-15):
1) IPC (sketched below)
2) some memory management
3) some low-level process management and scheduling
4) low-level I/O
E.g., Mach can support multiple file systems and multiple system interfaces.
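The list above is essentially the whole kernel interface in such a design. A minimal sketch (hypothetical names and message layout, not Mach's actual API) of how everything else reduces to message passing between ports:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical microkernel IPC: the kernel only moves messages
     * between ports. File systems, pagers, etc. are user-level servers
     * exchanging messages like this one. */
    typedef uint32_t port_t;

    struct message {
        port_t   dest;      /* receiving port */
        port_t   reply_to;  /* port for the server's answer */
        uint32_t op;        /* operation code, interpreted by the server */
        char     data[64];  /* payload */
    };

    /* One-slot "kernel mailbox" standing in for the real IPC machinery. */
    static struct message mailbox;
    static void msg_send(const struct message *m) { mailbox = *m; }
    static void msg_receive(struct message *m)    { *m = mailbox; }

    int main(void)
    {
        /* A client turns a file-system request into a plain message... */
        struct message req = { .dest = 42, .reply_to = 7, .op = 1 };
        strcpy(req.data, "read /etc/motd");
        msg_send(&req);

        /* ...and a user-level file server receives and interprets it. */
        struct message in;
        msg_receive(&in);
        printf("server on port %" PRIu32 " got op=%" PRIu32 ": %s\n",
               in.dest, in.op, in.data);
        return 0;
    }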
3. RELIABILITY
A distributed system should be more reliable than a single system. Example: with 3 machines, each up with probability 0.95, the probability that at least one machine is up is 1 − 0.05³.
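Spelling out the arithmetic:

\[
1 - (1 - 0.95)^3 = 1 - 0.05^3 = 1 - 0.000125 = 0.999875
\]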
Availability: fraction of time the system is usable; redundancy improves it
Need to maintain consistency
Need to be secure
Fault tolerance: need to mask failures and recover from errors
4. PERFORMANCE
Without a gain here, why bother with distributed systems? Performance is lost to communication delays:
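As an order-of-magnitude illustration (the figures here are assumptions, not from the slides): sending a message over a LAN costs on the order of a millisecond, while a local memory reference costs on the order of tens of nanoseconds, so

\[
\frac{T_{\text{remote}}}{T_{\text{local}}} \approx \frac{10^{-3}\ \text{s}}{10^{-8}\ \text{s}} = 10^{5}
\]

Any computation split so finely that it communicates constantly will be dominated by this overhead.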
5. SCALABILITY
Systems grow with time or become obsolete. Techniques that require resources linearly in the size of the system are not scalable; e.g., a broadcast-based query won't work for large distributed systems (see the reasoning sketched after this list).
Examples of bottlenecks:
o Centralized components: a single mail server
o Centralized tables: a single URL address book
o Centralized algorithms: routing based on complete information
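One way to see why the broadcast-based query fails to scale: every query interrupts all n machines, so

\[
\text{work per query} = \Theta(n), \qquad
\text{total load when each of the } n \text{ machines issues queries} = \Theta(n^2)
\]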
Resource allocation
Distributed deadlock mechanisms
Protection and security
Managing communication resources
Amoeba: groups of threads (processor pool)
Mach: synchronous message-based IPC, threads
Argus ('79): "Guardians" for transactions
Clouds: object-based (like methods)
V-system (1984): IPC via kernel (messages only), true multi-threading (unavailable in UNIX '84)
Cedar: coded VM, IDE, concurrent execution
XDFS (ca. 1977): servers
Cambridge File Server (ca. 1982): indexes on servers used by the OS to build its directory
Sun NFS: networked UNIX FS