
UNIT – III

THE DATA LINK LAYER


Introduction to DLL

The Data Link Layer is the second layer from the bottom in the OSI (Open Systems
Interconnection) network architecture model. It is responsible for the node-to-node delivery
of data. Its major role is to ensure error-free transmission of information. The DLL is also
responsible for encoding, decoding and organizing the outgoing and incoming data. It is considered
the most complex layer of the OSI model, as it hides all the underlying complexities of the
hardware from the layers above.

Sub-layers of Data Link Layer:


The data link layer is further divided into two sub-layers, which are as follows:

Logical Link Control (LLC):


This sublayer of the data link layer deals with multiplexing and with the flow of data among
applications and other services. The LLC is also responsible for providing error messages and
acknowledgments.

Media Access Control (MAC):


The MAC sublayer manages the interaction of devices, is responsible for addressing frames, and
also controls access to the physical media.

The data link layer receives information in the form of packets from the network layer. It
divides the packets into frames and sends those frames bit by bit to the underlying physical
layer.

Design Issues with Data Link Layer


1. Services provided to the network layer: The data link layer acts as a service interface
to the network layer. The principal service is transferring data from the network layer on
the sending machine to the network layer on the destination machine. This transfer takes place
via the DLL (data link layer).
2. Frame synchronization: The source machine sends data in the form of blocks called
frames to the destination machine. The start and end of each frame must be
identifiable so that the frame can be recognized by the destination machine.
3. Flow control: Flow control prevents a fast sender from overwhelming a slow receiver.
The source machine must not send data frames at a rate faster than the capacity of the
destination machine to accept them.
4. Error control: Error control deals with damaged, lost and duplicate frames. The errors introduced
during transmission from the source to the destination machine must be detected and corrected
at the destination machine.

Error Detection
When data is transmitted from one device to another, the system does not
guarantee that the data received by the destination device is identical to the data transmitted by
the source device. An error is a situation in which the message received at the receiver end is not
identical to the message transmitted.

Types of Errors

Errors can be classified into two categories:


o Single-Bit Error
o Burst Error

Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

In the figure above, the transmitted message is corrupted by a single bit, i.e., a 0 bit is changed
to 1.
Single-bit errors are the least likely type of error in serial data transmission. For example, if a
sender transmits data at 10 Mbps, each bit lasts only 0.1 µs, and for a single-bit error to occur,
the noise must last no longer than 0.1 µs, which is very rare.

Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are
used to send the eight bits of a byte and one of the wires is noisy, then a single bit is corrupted
per byte.

Burst Error:
A burst error occurs when two or more bits are changed from 0 to 1 or from 1 to 0.
The burst length is measured from the first corrupted bit to the last corrupted bit.

The duration of noise in a burst error is longer than the duration of noise in a single-bit error.
Burst errors are most likely to occur in serial data transmission.
The number of affected bits depends on the duration of the noise and on the data rate.

How to Detect the Errors?


Error detection means deciding whether the received data is correct or not without
having a copy of the original data.
To detect or correct errors we need to send some extra bits along with the original message (data).
These extra bits are called redundant bits. To detect errors we use a Generator and a
Checker.
Generator: The generator is used at the sender end to generate the redundancy bits. These
redundancy bits are appended to the data being sent.
Checker: The checker is used at the receiver end; it verifies the data and the redundancy bits.
If the received information is unchanged, the data is accepted; otherwise it is rejected.
Error Detecting Techniques:
The most popular Error Detecting Techniques are:

Single Parity Check


o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at
the end of the data unit so that the number of 1s becomes even. For an 8-bit data unit, the total
number of transmitted bits is therefore 9.
o If the number of 1 bits is odd, then a parity bit of 1 is appended, and if the number of 1
bits is even, then a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o Because this technique makes the total number of 1s even, it is known as even-parity
checking.

Performance of Single Parity


 It can detect single-bit errors easily.
 It can detect a burst error only if the number of corrupted bits is odd; if an even number of
bits is changed, it cannot detect the error.
Drawbacks of Single Parity Checking
o It can reliably detect only single-bit errors (and other odd-numbered bit errors).
o If two bits are interchanged, it cannot detect the error.
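
The behaviour above can be illustrated with a minimal Python sketch (the 8-bit data value used is only an example):

    def add_even_parity(data_bits):
        # append a parity bit so that the total number of 1s becomes even
        parity = sum(data_bits) % 2          # 1 if the count of 1s is odd, else 0
        return data_bits + [parity]

    def check_even_parity(codeword):
        # accept only if the received codeword still has an even number of 1s
        return sum(codeword) % 2 == 0

    data = [1, 0, 1, 1, 0, 0, 1, 0]          # 8 data bits -> 9 transmitted bits
    sent = add_even_parity(data)
    print(sent, check_even_parity(sent))     # parity 0 appended, True (no error)

    corrupted = sent.copy()
    corrupted[2] ^= 1                        # flip a single bit
    print(check_even_parity(corrupted))      # False (single-bit error detected)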

Checksum
Checksum is an error-detection method in which the data is divided into
segments of equal size, the segments are added using 1's complement arithmetic, and the
complemented sum (the checksum) is transmitted with the data to the receiver. The receiver
repeats the same process; at the receiver side, a result of all zeros indicates that the data is correct.

Step-01:
At sender side,
 If m bit checksum is used, the data unit to be transmitted is divided into segments of m
bits.
 All the m bit segments are added.
 The result of the sum is then complemented using 1’s complement arithmetic.
 The value so obtained is called the checksum.

Step-02:
 The data along with the checksum value is transmitted to the receiver.
Step-03:
At receiver side,
 If m bit checksum is being used, the received data unit is divided into segments of m
bits.
 All the m bit segments are added along with the checksum value.
 The value so obtained is complemented and the result is checked.

Then, following two cases are possible-


Case-01: Result = 0
If the result is zero,
 Receiver assumes that no error occurred in the data during the transmission.
 Receiver accepts the data.

Case-02: Result ≠ 0
If the result is non-zero,
 Receiver assumes that an error occurred in the data during the transmission.
 Receiver discards the data and asks the sender for retransmission.

Example of Checksum Error Detection
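Since the worked figure for this example is not reproduced here, the following small Python sketch shows the sender and receiver steps using four hypothetical 8-bit segments:

    def ones_complement_sum(segments, m):
        # add m-bit segments with end-around carry (1's complement arithmetic)
        mask = (1 << m) - 1
        total = 0
        for seg in segments:
            total += seg
            while total >> m:                       # wrap carries back into the low m bits
                total = (total & mask) + (total >> m)
        return total

    def make_checksum(segments, m=8):
        # sender side: checksum = 1's complement of the segment sum
        return ((1 << m) - 1) ^ ones_complement_sum(segments, m)

    def verify(segments, checksum, m=8):
        # receiver side: the complement of (segments + checksum) must be zero
        return (((1 << m) - 1) ^ ones_complement_sum(segments + [checksum], m)) == 0

    data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]   # example 8-bit segments
    cs = make_checksum(data)
    print(bin(cs), verify(data, cs))                 # checksum, True -> data accepted
    print(verify([data[0] ^ 0b100] + data[1:], cs))  # corrupted segment -> False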


Cyclic Redundancy Check (CRC)
Let us try to understand the cyclic redundancy check using a real-life example.
Consider a situation where you are sending data to your friend. As you know, data is
transferred over the internet in the form of packets. Due to several issues on the internet, the data
you send to your friend might get changed or corrupted, and then the file received by your friend
will not be the same as the one you sent.
But how does your friend know whether the received file has an error? For this purpose,
we use the Cyclic Redundancy Check method to check whether the message received by the
receiver is correct or not.
In other words, a change in bits (from 0 to 1 or 1 to 0) in the transmitted data, caused by
a fault in the transmission media, leads to incorrect data on the receiver side. To solve this problem,
we use a cyclic redundancy check.
CRC is a network method designed to detect errors in the data and information
transmitted over the network. This is done by performing a binary division on the
transmitted data at the sender’s side and verifying the result at the receiver’s side.
The term CRC is used because Check represents the “data
verification,” Redundancy refers to the “recheck method,” and Cyclic points to the
“algorithmic formula.”
Now that we are aware of CRC, let's look at some terms and conditions related
to the CRC method.

Following are the steps used in CRC for error detection:


o In the CRC technique, a string of n 0s is appended to the data unit, where n is one
less than the number of bits of a predetermined divisor, which is n+1
bits long.
o Secondly, the newly extended data is divided by the divisor using a process known
as binary (modulo-2) division. The remainder generated from this division is known as the CRC
remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver treats
this whole unit as a single unit, and it is divided by the same divisor that was used to
find the CRC remainder.

If the result of this division is zero, the data has no error and is
accepted.

If the result of this division is not zero, the data contains an error and
is therefore discarded.
Let's understand this concept through an example:
Suppose the original data is 11100 and divisor is 1001.
CRC Generator
o A CRC generator uses modulo-2 division. Firstly, three zeroes are appended at the
end of the data, as the length of the divisor is 4 and the length of the
string of 0s to be appended is always one less than the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the divisor
1001.
o The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and
the final string would be 11100111 which is sent across the network.
CRC Checker
o The functionality of the CRC checker is similar to that of the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker
performs modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, the CRC checker generates a remainder of zero. Therefore, the data is
accepted.
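
A minimal Python sketch of the modulo-2 division used by both the CRC generator and the CRC checker, with the same data (11100) and divisor (1001) as above:

    def mod2_div(dividend_bits, divisor):
        # modulo-2 (XOR) long division; returns the remainder as a bit string
        bits = list(dividend_bits)
        for i in range(len(bits) - len(divisor) + 1):
            if bits[i] == "1":                       # divide only when the leading bit is 1
                for j, d in enumerate(divisor):
                    bits[i + j] = str(int(bits[i + j]) ^ int(d))
        return "".join(bits[-(len(divisor) - 1):])

    data, divisor = "11100", "1001"
    rem = mod2_div(data + "0" * (len(divisor) - 1), divisor)   # generator appends 3 zeros
    print(rem)                                                 # '111'
    codeword = data + rem                                      # '11100111' is transmitted
    print(mod2_div(codeword, divisor))                         # '000' -> accepted
    print(mod2_div("11101111", divisor))                       # non-zero -> error detected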

In error detection, the receiver only needs to know that the received codeword is
invalid; but in error correction the receiver needs to work out the original codeword that was
sent. Thus, error correction is much more difficult than error detection.
Error correction also needs more redundant bits than error detection.

Structure of the Encoder and Decoder in the Error Correction

In order to detect or correct the errors, there is a need to send some extra bits along with
the data. These extra bits are commonly known as Redundant bits.
As we learned in the previous topics, the original data is divided into blocks
of k bits; each block is referred to as a dataword. When we add r redundant bits to each block so
that the length becomes n = k + r, the result is referred to as a codeword.
There are two ways to handle the error correction:
1. Whenever an error is discovered, the receiver can ask the sender to retransmit
the entire data unit. This technique is known as the backward error correction
technique. It is simple and inexpensive in the case of wired transmission
such as fiber optics, where there is little cost in retransmitting the data. In the case of wireless
transmission, retransmission costs too much, so the forward error correction technique is
used instead.
2. The receiver can use an error-correcting code that automatically corrects certain errors.
This technique is known as the forward error correction technique.
In order to correct an error, one has to know its exact position. For
example, to correct a single-bit error in a 7-bit codeword, the error-correcting code must
determine which one of the seven bits is in error.
In order to achieve this, we have to add some redundant bits.
Suppose r is the number of redundant bits and d is the number of data bits. The
number of redundant bits r is the smallest value that satisfies:
2^r >= d + r + 1
Error correction is mainly done with the help of the Hamming code.
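
As a quick illustration, a few lines of Python can compute the smallest r satisfying the formula above (the data sizes chosen are only examples):

    def redundant_bits(d):
        # smallest r satisfying 2^r >= d + r + 1
        r = 0
        while 2 ** r < d + r + 1:
            r += 1
        return r

    for d in (4, 7, 11, 26):
        print(d, "data bits ->", redundant_bits(d), "redundant bits")
    # 4 -> 3, 7 -> 4, 11 -> 4, 26 -> 5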

History of Hamming code


 Hamming code is a technique built by R. W. Hamming to detect errors.
 Hamming code can be applied to data units of any length and uses the
relationship between data bits and redundancy bits.
 He worked on the problem of error correction and developed an
increasingly powerful array of algorithms called Hamming code.
 In 1950, he published the Hamming code, which is widely used today in applications
such as ECC memory.

How to Encode a message in Hamming Code


The process used by the sender to encode the message includes the following three steps:
 Calculation of the total number of redundant bits.
 Checking the positions of the redundant bits.
 Lastly, calculating the values of these redundant bits.
Once the redundant bits are embedded within the message, it is sent to the receiver.

Step 1) Calculation of the total number of redundant bits.


Let us assume that the message contains:
 n – the number of data bits
 p – the number of redundant bits added to it, so that (n + p) can indicate at least (n
+ p + 1) different states.
Here, (n + p) depicts the location of an error in each of the (n + p) bit positions, and one extra
state indicates no error. As p bits can indicate 2^p states, 2^p has to be at least equal to (n + p + 1).
Step 2) Placing the redundant bits in their correct position.
The p redundant bits should be placed at bit positions of powers of 2. For example, 1, 2, 4,
8, 16, etc. They are referred to as p1 (at position 1), p2 (at position 2), p3 (at position 4), etc.

Step 3) Calculation of the values of the redundant bit.


The redundant bits are parity bits, which make the number of 1s either even or odd.
The two types of parity are:
 Even parity - the total number of 1s in the message is made even.
 Odd parity - the total number of 1s in the message is made odd.

Each redundant bit is calculated as a parity bit covering a particular set of bit
positions, namely those whose binary representation includes a 1 at the position of that parity bit.
P1 is the parity bit for every data bit in positions whose binary representation includes a 1
in the least significant (first) position, not including position 1 itself, like (3, 5, 7, 9, ...).
P2 is the parity bit for every data bit in positions whose binary representation includes a 1 in the
second position from the right, not including position 2 itself, like (3, 6, 7, 10, 11, ...).
P3 is the parity bit for every bit in positions whose binary representation includes a 1 in the
third position from the right, not including position 4 itself, like (5-7, 12-15, ...).

Decoding a Message in Hamming Code


The receiver gets the incoming message and performs recalculations to find and
correct errors.
The recalculation is done in the following steps:
 Counting the number of redundant bits.
 Correctly positioning all the redundant bits.
 Parity check
Step 1) Counting the number of redundant bits
You can use the same formula as for encoding: 2^p >= n + p + 1,
where n is the number of data bits and p is the number of redundant bits.

Step 2) Correctly positioning all the redundant bits


Here, p is a redundant bit which is located at bit positions of powers of 2, For example, 1, 2,4,
8, etc.

Step 3) Parity check


Parity bits need to be calculated based on the data bits and the redundant bits:
p1 = parity(1, 3, 5, 7, 9, 11, ...)
p2 = parity(2, 3, 6, 7, 10, 11, ...)
p3 = parity(4-7, 12-15, 20-23, ...)
Let's understand the Hamming code concept with an example. Say you have received a
7-bit Hamming code: 1011011.

First, let us talk about the redundant bits.


The redundant bits are extra binary bits that are not part of the original data, but
are generated and added to the original data bits. All this is done to ensure that the data
bits don't get damaged, and if they do, we can recover them.
Now the question arises: how do we determine the number of redundant bits to be added?
We use the formula 2^r >= m + r + 1, where r = number of redundant bits and m = number of data bits.

From the formula we can make out that the received 7-bit Hamming code contains 4 data bits
and 3 redundancy bits.

What is Parity Bit?


To proceed further we need to know about the parity bit, which is a bit appended to the
data bits to ensure that the total number of 1s is even (even parity) or odd (odd parity).
While checking the parity, if the total number of 1s is odd then the value of the parity
check P1 (or P2, etc.) is written as 1 (which means an error is there), and if it is even then the
value of the parity check is 0 (which means no error).
Hamming Code in Computer Network for Error Detection
As we go through the example, the first step is to identify the bit positions of the data;
all the bit positions which are powers of 2 are marked as parity bits (e.g. 1, 2, 4, 8, etc.),
with position 1 on the right of the received 7-bit Hamming code.

First, we need to detect whether there are any errors in this received Hamming code.
Step 1: To check parity bit P1, use the check-one-skip-one method: starting from P1,
skip P2, take D3, skip P4, take D5, skip D6 and take D7. This gives the bits at
positions 1, 3, 5 and 7.

As we can observe, the total number of 1s in these positions is odd, so we write the value of the
parity check as P1 = 1. This means an error is there.
Step 2: Check for P2, but while checking for P2 we use the check-two-skip-two method,
which gives us the bits at positions 2, 3, 6 and 7. Remember, since we are checking for P2,
we have to start our count from P2 (P1 should not be considered).
As we can observe, the number of 1s is even, so we write the value of P2 = 0. This
means there is no error.
Step 3: Check for P4, but while checking for P4 we use the check-four-skip-four
method, which gives us the bits at positions 4, 5, 6 and 7. Remember, since we are checking for
P4, we start our count from P4 (P1, P2 and D3 should not be considered).

As we can observe, the number of 1s is odd, so we write the value of P4 = 1.
This means an error is there.
So, from the above parity analysis, P1 and P4 are not equal to 0, so we can clearly say that the
received Hamming code has errors.

Hamming Code in Computer Network For Error Correction


Since we found that the received code has an error, we must now correct it. To correct
the error, use the following steps:
The error word E is formed from the parity checks as P4 P2 P1 = 101.

The decimal value of this error word 101 is 5.
We get E = 5, which states that the error is in the fifth bit position. To correct it, just invert
the fifth bit.
So the corrected code will be 1001011.

Conclusion:
We have seen how the Hamming code technique works for error
detection and correction of a data packet transmitted over a network.
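
The whole example above can be reproduced with a short Python sketch; the bit numbering (position 1 on the right) follows the example, and the helper names are only illustrative:

    def hamming7_check_and_correct(codeword):
        # codeword: dict mapping bit position (1..7, numbered from the right) to its value
        p1 = (codeword[1] + codeword[3] + codeword[5] + codeword[7]) % 2
        p2 = (codeword[2] + codeword[3] + codeword[6] + codeword[7]) % 2
        p4 = (codeword[4] + codeword[5] + codeword[6] + codeword[7]) % 2
        error_pos = p4 * 4 + p2 * 2 + p1          # error word P4 P2 P1 as a number
        corrected = dict(codeword)
        if error_pos:
            corrected[error_pos] ^= 1             # invert the erroneous bit
        return error_pos, corrected

    received_str = "1011011"                      # position 7 on the left, position 1 on the right
    received = {pos: int(bit) for pos, bit in zip(range(7, 0, -1), received_str)}
    err, fixed = hamming7_check_and_correct(received)
    print("error at position", err)                               # 5
    print("".join(str(fixed[pos]) for pos in range(7, 0, -1)))    # 1001011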

Application of Hamming code


 Satellites
 Computer Memory
 Modems
 PlasmaCAM
 Open connectors
 Shielding wire
 Embedded Processor
Advantages of Hamming code
 The Hamming code method is effective on channels where the data streams are subject to
single-bit errors.
 Hamming code not only provides detection of a bit error but also helps you to
identify the bit containing the error so that it can be corrected.
 The ease of use of Hamming codes makes them suitable for use in computer
memory and single-error correction.

Disadvantages of Hamming code


 It is a single-bit error detection and correction code. However, if multiple bits are found in
error, the outcome may be that a bit which was actually correct gets changed.
This can cause the data to be corrupted further.
 The Hamming code algorithm can solve only single-bit issues.
Elementary Data Link Protocols
Protocols in the data link layer are designed so that this layer can perform its basic
functions: framing, error control and flow control. Framing is the process of dividing bit
streams from the physical layer into data frames whose size ranges from a few hundred to a
few thousand bytes. Error control mechanisms deal with transmission errors and the
retransmission of corrupted and lost frames. Flow control regulates the speed of delivery
so that a fast sender does not drown a slow receiver.

Types of Data Link Protocols


Data link protocols can be broadly divided into two categories, depending on whether the
transmission channel is noiseless or noisy.

Simplex Protocol
 In the simplex protocol, there is no flow control or error control mechanism. It is a
unidirectional protocol in which data frames travel in only one direction (from sender
to receiver).
 Also, the receiver can immediately handle any received frame, with a processing time
that is small enough to be negligible.
 The protocol consists of two distinct procedures: a sender and a receiver. The sender runs in
the data link layer of the source machine and the receiver runs in the data link layer of
the destination machine. No sequence numbers or acknowledgements are used here.

Stop and Wait Protocol / Straightforward method


 Transfer of frames over noiseless channels.
 Data is transferred in one direction only (unidirectional).
 In this protocol we have flow control but no error control.
 After transmitting one frame, the sender waits for an acknowledgement (ACK) before
transmitting the next frame.

Primitives/Rules
Sender side
Rule 1: Send one data packet at a time.
Rule 2: Send the next packet only after receiving acknowledgement of the previous
one.
Receiver side
Rule 1: Receive and consume the data packet.
Rule 2: After consuming the packet, send an acknowledgement (flow control).
(Timing diagram: Sender and Receiver exchanging one frame and one ACK at a time.)

Issues faced by the Stop and Wait protocol
1. Problem due to loss of a data packet.
2. Problem due to loss of an acknowledgement.
3. Problem due to delay in the acknowledgement or the data.
In each of these cases the plain Stop and Wait protocol has no way to recover, because it has
no timeout timer and no sequence numbers; the ARQ protocols below address this.

Noisy Channels
1. Stop and Wait ARQ protocol
2. Go-Back-N ARQ protocol
3. Selective Repeat ARQ
Stop and Wait ARQ protocol
 It is one of the most straightforward protocols.
 After sending a data packet, the sender waits for an acknowledgement before transferring
the next frame.
 If the acknowledgement does not arrive within a certain period of time, the sender times out
and retransmits the original frame.
 Stop and Wait ARQ = Stop and Wait + timeout timer + sequence numbers.
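
A rough Python sketch of the sender side of Stop and Wait ARQ, with a simulated lossy channel and a retry limit standing in for the timeout timer (the channel behaviour and limits are invented purely for illustration):

    import random

    MAX_RETRIES = 10                                 # stands in for repeated timeouts

    def unreliable_send(frame):
        # simulated channel: returns the ACK, or None when the frame/ACK is lost
        return frame["seq"] if random.random() > 0.3 else None

    def stop_and_wait_send(packets):
        seq = 0                                      # 1-bit sequence number: 0, 1, 0, 1, ...
        for payload in packets:
            frame = {"seq": seq, "payload": payload}
            for attempt in range(MAX_RETRIES):
                ack = unreliable_send(frame)         # send the frame and wait for the ACK
                if ack == seq:
                    break                            # expected ACK received, move on
                print("timeout for frame", seq, "- retransmitting")
            seq ^= 1
        print("all frames delivered")

    stop_and_wait_send(["pkt-A", "pkt-B", "pkt-C"])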

(Timing diagram: Sender and Receiver, with a timeout (TO) started for every frame sent.)

a) Problem due to a lost frame: the frame is lost in transit, so no acknowledgement arrives.
When the timeout (TO) expires, the sender retransmits the frame.

b) Problem due to a lost acknowledgement: the frame is delivered but its acknowledgement is
lost. On timeout the sender retransmits the frame; the sequence number lets the receiver
recognise and discard the duplicate.

c) Problem due to a delayed acknowledgement: neither the frame nor the acknowledgement is
lost, but the acknowledgement arrives only after the timeout has expired, so the sender has
already retransmitted the frame.

Drawbacks
 Only one frame is sent at a time (much of the bandwidth is wasted).
 Poor utilization of bandwidth.
 Poor performance.
Sliding Window Protocol
 Go-Back-N ARQ and Selective Repeat ARQ are both based on the sliding window
protocol.
 We can send multiple frames at a time.
 The number of frames to be sent is based on the window size.
 Each frame is numbered (sequence number).

Working of SWP (Sliding Window Protocol)


Window Size = 4

(Figure: the sequence of frames ... 10 9 8 7 6 5 4 3 2 1 0 with a window of size 4 sliding over
it. Frames on one side of the window have been sent and their ACKs received, frames inside the
window have been sent and are waiting for ACKs, and frames on the other side are still to be
sent. With a window size of 4 the sender can transmit frames 0, 1, 2 and 3 without waiting;
as acknowledgements arrive, the window slides forward.)
Go-Back-N ARQ
 Uses the concept of protocol pipelining, i.e. the sender can send multiple frames before
receiving the ACK for the first frame.
 There is a finite number of frames, and the frames are numbered in a sequential manner.
 The number of frames that can be sent depends on the window size of the sender.
 If the acknowledgement of a frame is not received within the agreed-upon time period,
all frames in the current sliding window are retransmitted.
 The size of the sending window determines the sequence numbers of the outbound frames.
 N = sender window size.
 Ex: If the sending window size is 4 (2^2), then the sequence numbers will be
0, 1, 2, 3, 0, 1, 2, 3, 0, 1, ...
 Two bits in the sequence number are enough to generate the binary sequence
00, 01, 10, 11.

(Figure: with a sliding window covering frames 0-3, the acknowledgement for frame 2 is
missing. When the timeout (TO) expires, the sender retransmits all the frames in the current
sliding window.)
Selective Repeat ARQ
 The sender sends/retransmits only the packet for which a NACK is received.
 Only the erroneous or lost frames are retransmitted, while correct frames are received
and buffered.
 The receiver, while keeping track of sequence numbers, buffers the frames in memory and
sends a NACK only for the frame which is missing or damaged.

(Figure: unlike Go-Back-N, when frame 2 is lost the sender retransmits only frame 2.)
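
The difference in retransmission behaviour can be summarised with a tiny Python sketch (the window contents follow the figure above; following these notes, Go-Back-N resends the whole outstanding window):

    def go_back_n_retransmit(window, lost_frame):
        # on timeout, resend every frame in the current (unacknowledged) window;
        # lost_frame is kept only for symmetry with selective repeat
        return list(window)

    def selective_repeat_retransmit(window, lost_frame):
        # resend only the frame for which a NACK was received
        return [lost_frame]

    window = [0, 1, 2, 3]                            # window size = 4, frame 2 is lost
    print(go_back_n_retransmit(window, 2))           # [0, 1, 2, 3]
    print(selective_repeat_retransmit(window, 2))    # [2]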
THE NETWORK LAYER
Introduction

The network layer controls the operation of the subnet. The main aim of this layer is to
deliver packets from source to destination across multiple links (networks). If two computers
(systems) are connected on the same link, then there is no need for a network layer. It routes
the signal through different channels to the other end and acts as a network controller.

It also divides outgoing messages into packets and assembles incoming packets into
messages for the higher layers.

In broadcast networks, the routing problem is simple, so the network layer is often thin or
even non-existent.

Functions of Network Layer


1. It translates logical network addresses into physical addresses. It is concerned with circuit,
message or packet switching.
2. Routers and gateways operate in the network layer. The network layer provides the
mechanism for routing packets to their final destination.
3. Connection services are provided, including network-layer flow control, network-layer
error control and packet sequence control.
4. It breaks larger packets into smaller packets.

Design Issues with Network Layer


 A key design issue is determining how packets are routed from source to destination.
Routes can be based on static tables that are wired into the network and rarely changed.
They can also be highly dynamic, being determined anew for each packet, to reflect the
current network load.
 If too many packets are present in the subnet at the same time, they will get into one
another's way, forming bottlenecks. The control of such congestion also belongs to the
network layer.
 Moreover, the quality of service provided (delay, transit time, jitter, etc.) is also a
network layer issue.
 When a packet has to travel from one network to another to get to its destination,
many problems can arise such as:
o The addressing used by the second network may be different from the first
one.
o The second one may not accept the packet at all because it is too large.
o The protocols may differ, and so on.
 It is up to the network layer to overcome all these problems to allow heterogeneous
networks to be interconnected.

Routing: It is the process of forwarding a packet in a network so that it reaches its intended
destination.

Main goals of routing algorithms:


o Correctness: Routing should be done correctly.
o Simplicity: Routing should be done in a simple manner, keeping the overhead low.
o Robustness: The algorithm should keep working correctly over the years, despite failures
and changes in topology and traffic.
o Stability: The routing algorithm should be stable under all circumstances.
o Fairness: Every node connected to the network should get a fair chance to transmit
its packets.
o Optimality: The algorithm should be optimal in terms of throughput and in minimizing
mean packet delays.

Classification of Routing algorithms

o Adaptive Routing (Dynamic)
o Non-Adaptive Routing (Static)

1) Adaptive Routing (Dynamic):


 Changes routes dynamically
 Gather information at run-time
o Locally
o From adjacent routers
o From all other routers
 Changes routes
o Every delta ‘T’ seconds
o When load changes
o When topology changes
2) Non-Adaptive Routing (Static):
The choice of route is computed in advance, offline, and downloaded to all the routers
when the network is booted.

Differences b/w Adaptive and Non-Adaptive Routing Algorithm


Basis of Comparison | Adaptive Routing algorithm | Non-Adaptive Routing algorithm
Definition | An algorithm that constructs the routing table based on the current network conditions. | An algorithm that constructs a static table to determine to which node the packet is sent.
Usage | Used by dynamic routing. | Used by static routing.
Routing decision | Routing decisions are made based on topology and network traffic. | Routing decisions are taken from static tables.
Categorization | The types of adaptive routing algorithm are centralized, isolated and distributed. | The types of non-adaptive routing algorithm are flooding and random walks.
Complexity | Adaptive routing algorithms are more complex. | Non-adaptive routing algorithms are simple.

Routing classifications:
Different Routing Algorithms:
1. Static Routing Algorithm
a. Shortest path routing
b. Flooding
c. Flow based routing
2. Dynamic Routing Algorithm
a. Distance vector Routing
b. Link State Routing
3. Hierarchical routing
4. Routing for mobile hosts
5. Broadcast routing
6. Multicast routing
1. Static Routing Algorithm

Shortest path routing algorithm:


Finds the shortest path between a given pair of routers
The cost of link may be a function of
 Distance
 Bandwidth
 Average traffic
 Communication cost
 Delay & etc
Flooding:
Flooding is a non-adaptive routing technique following this simple method: when a data
packet arrives at a router, it is sent to all the outgoing links except the one it has arrived on.

(Figure: a five-node network. A is the source; A is connected to B and C, B and C are connected
to D, C is also connected to E, and D is the destination. The packet (pkt) is flooded along every
outgoing link.)

Using flooding technique −


 An incoming packet to A, will be sent to B and C.
 B will send the packet to D.
 C will send the packet to D and E.
 D will send the packet to B and C.
 E will send the packet to C.

The major disadvantage is that a vast number of duplicate packets are generated.
How to stop and eliminate duplicate packets:
a) Using a hop counter
 The counter is decremented at each router.
 The packet is discarded when the counter reaches 0.

b) Applying a sequence number to each packet:
 This avoids sending the same packet a second time.
 Each router keeps, per source router, a list of the packets it has already seen.
c) Selective Flooding:
 Use only those lines that are going approximately in the right direction.
Advantages of Flooding
 It is very simple to set up and implement, since a router need only know its neighbours.
 It is extremely robust. Even if a large number of routers malfunction, the
packets can still find a way to reach the destination.
 All nodes which are directly or indirectly connected are visited, so no node is left
out. This is the main criterion in the case of broadcast messages.
 The shortest path is always found by flooding, since every possible path is tried in parallel.
Limitations of Flooding
 Flooding tends to create an infinite number of duplicate data packets, unless some
measures are adopted to damp packet generation.
 It is wasteful if a single destination needs the packet, since it delivers the data packet
to all nodes irrespective of the destination.
 The network may be clogged with unwanted and duplicate data packets. This may
hamper delivery of other data packets.
2. Dynamic Routing Algorithm:
 Distance vector routing algorithm:
o Selects the least-cost path between nodes
o Uses the Bellman-Ford algorithm
o One routing table per node is maintained
 Bellman-Ford Algorithm:
o Defines the distance at each node:
dx(y) = cost of the least-cost path from x to y, where x = source and y = destination.
o The distance is updated based on the neighbour nodes:
dx(y) = min over all neighbours v of { cost(x, v) + dv(y) }
where v is an intermediate (neighbour) node and dv(y) is the distance from v to y.

Source
9
1 B E

A 3 2

5 C D
4

Example:
 Lets us take B as source, and A as destination
Routing Table: B -> A = 1
B -> C -> A = 3+5=8
B -> E -> D -> A = 9+2+4+5 =20

Distance Vector - Routing table for B:

Destination | Cost | Next hop
A           | 1    | A
C           | 3    | C
D           | 7    | C
E           | 9    | E

In Other destination scenario i.e.,


B to C
B -> C = 3
B -> A -> C = 6
B -> E -> D -> C = 15

B to D
B -> A -> C -> D = 10
B -> C -> D = 7
B -> E -> D = 11

B to E
B -> E = 9
B -> C -> D -> E = 3 + 4 + 2 = 9
B -> A -> C -> D -> E = 12
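
These distances can be checked with a small Bellman-Ford style sketch in Python, using the link costs of the example network above:

    # undirected link costs of the example network
    links = {("A","B"): 1, ("A","C"): 5, ("B","C"): 3, ("B","E"): 9, ("C","D"): 4, ("D","E"): 2}

    def bellman_ford(source, nodes, links):
        cost = dict(links)
        cost.update({(v, u): c for (u, v), c in links.items()})   # both directions
        dist = {n: float("inf") for n in nodes}
        dist[source] = 0
        for _ in range(len(nodes) - 1):                           # repeated relaxation
            for (u, v), c in cost.items():
                dist[v] = min(dist[v], dist[u] + c)               # dx(y) = min{cost(x,v) + dv(y)}
        return dist

    print(bellman_ford("B", "ABCDE", links))
    # A = 1, C = 3, D = 7, E = 9 -- matching B's routing table above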

Link State Routing (Dijkstra’s Algorithm):


- Used to find the shortest path from one node to every other node in the network.
- In this, each router shares the knowledge of its neighbourhood with every other router
on the internetwork.
There are 2 phases:
1) Phase 01: Reliable flooding
- Each node knows the cost of its neighbours
- Each node knows the entire graph
2) Phase 02: Route calculation
- Each node uses Dijkstra’s algorithm to calculate the optimal path
Example: Take the network below.

(Figure: a six-node network A-F in which each node stores the cost to each of its neighbours.
The link costs are: A-B = 3, A-C = 2, A-D = 5, B-D = 1, B-E = 4, C-D = 2, C-F = 1,
D-E = 3, E-F = 2.)

Source is A
Iteration | Tree               | B | C | D | E | F
Initial   | {A}                | 3 | 2 | 5 | ∞ | ∞
1         | {A, C}             | 3 | - | 4 | ∞ | 3
2         | {A, C, B}          | - | - | 4 | 7 | 3
3         | {A, C, B, F}       | - | - | 4 | 5 | -
4         | {A, C, B, F, D}    | - | - | - | 5 | -
5         | {A, C, B, F, D, E} | - | - | - | - | -
Dijkstra’s Algorithm (pseudocode):
{
    Tree = { root }
    for (y = 1 to N)
    {
        if (y is the root)
            D[y] = 0
        else if (y is a neighbour of the root)
            D[y] = c[root][y]
        else
            D[y] = ∞
    }
    repeat
    {
        find a node w not in Tree with D[w] minimum
        Tree = Tree ∪ { w }
        for (every node x that is a neighbour of w and not in Tree)
        {
            D[x] = min(D[x], D[w] + c[w][x])
        }
    }
    until (Tree contains all the nodes)
}
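
For reference, here is a minimal runnable Python version of the same algorithm, applied to the example link costs listed above:

    import heapq

    graph = {                                        # link costs of the example network
        "A": {"B": 3, "C": 2, "D": 5},
        "B": {"A": 3, "D": 1, "E": 4},
        "C": {"A": 2, "D": 2, "F": 1},
        "D": {"A": 5, "B": 1, "C": 2, "E": 3},
        "E": {"B": 4, "D": 3, "F": 2},
        "F": {"C": 1, "E": 2},
    }

    def dijkstra(graph, root):
        dist = {root: 0}
        heap = [(0, root)]
        while heap:
            d, w = heapq.heappop(heap)               # node w with minimum D[w]
            if d > dist.get(w, float("inf")):
                continue                             # stale queue entry, node already settled
            for x, c in graph[w].items():            # relax every neighbour x of w
                if d + c < dist.get(x, float("inf")):
                    dist[x] = d + c                  # D[x] = min(D[x], D[w] + c[w][x])
                    heapq.heappush(heap, (dist[x], x))
        return dist

    print(dijkstra(graph, "A"))                      # B = 3, C = 2, D = 4, E = 5, F = 3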

Hierarchical Routing
 When the network size grows, the number of routers in the network increases;
consequently the size of the routing tables increases as well, and routers can no longer handle
network traffic efficiently.
o So, we use hierarchical routing to overcome this problem.
 This type of routing is essentially a “Divide and Conquer” strategy.
 The network is divided into different regions, and a router for a particular region
knows only about its own region and about the neighbouring routers.
 In this kind of routing, routers are classified into groups known as regions.
 Each router has information only about the routers in its own region and has no
information about routers in other regions. So, routers just save one record in their
table for every other region.
Example: If ‘A’ wants to send a packet to any router in region 2 (D, E, F and G), it sends the
packet to B, and so on. As can be seen, in this type of routing the tables can be summarised, so
network efficiency improves. The example below shows two-level hierarchical routing.

A’s routing table


Destination Line Weight
A - -
B B 1
C C 1
Region 2 B 2
Region 3 C 4
Region 4 C 3
Region 5 C 2

We can also use three or four level hierarchical routing


In three-level hierarchical routing, the network is classified into a number of clusters.
Each cluster is made up of a number of regions, and each region contains a number of routers.
Hierarchical routing is widely used in internet routing and makes use of several routing
protocols.

(Figure: three-level hierarchical routing with clusters, regions and routers; two-level
hierarchical routing; and the routing table kept by every router.)
Congestion in Computer Network
What is Congestion ?
Congestion is a state that may occur in the network layer as a consequence of the packet switching
technique of data transmission. It is a situation where the number of packets offered exceeds what
the network can carry, which results in message traffic building up and thus slows down the data
transmission rate.

In short, congestion means traffic in the network caused by the presence
of too many packets in the subnet.
For example, if we compare congestion with a real-life situation, it is much the
same as the road traffic we encounter occasionally while travelling. Both have almost
the same cause: the load is greater than the available resources.
Causes Of Network Congestion
There may be several reasons for congestion in a network. Let’s understand them so that
the necessary preventive steps can be taken. The network team uses NPM (Network
Performance Monitors) to find any problem that may arise during transmission.

1. Non-Compatible or Outdated Hardware: The network team should be aware of the
needs of the enterprise and should keep up with the latest hardware
components, so that devices like switches, routers and servers run on the
most suitable hardware.
2. Poor Network Design and Subnet: A poorly designed network can lead to
congestion. The network layout needs to be fully optimised so that every
part of the network can communicate effectively. Also, the subnet should be
appropriately sized for the expected traffic.
3. Too Many Devices: Too many devices connected to a network will also lead to
congestion, as every network operates with a specific capacity, limited
bandwidth and limited traffic. If there are more devices in your network than
originally planned for, the NPM will identify them and inform you so they can be handled.
4. Bandwidth Hog: A bandwidth hog is a user or device that
consumes more data than the other devices. A bandwidth hog utilizes more resources
and can lead to congestion. The NPM (Network Performance Monitor) will report when any
device drains bandwidth above the expected level.

Effect Of Congestion
The main function of the network gets affected by the congestion, which results in:
1. Slowing down the response time
2. Retransmission of data
3. Confidentiality Breach

Congestion Control
Congestion control is a method of monitoring the traffic over the network to keep it at an
acceptable level, so that congestion can be prevented before it occurs or, if it has already
occurred, removed.
We can deal with congestion either by increasing the resources or by reducing the load.
We will discuss a few techniques as well as algorithms for congestion control.

Congestion control algorithms


 Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.

There are two congestion control algorithms, which are as follows:


1. Leaky Bucket
2. Token Bucket

Leaky Bucket Algorithm


 The leaky bucket algorithm finds its use in the context of network traffic
shaping or rate-limiting.
 A leaky bucket implementation and a token bucket implementation are predominantly used as
traffic shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent to the network and to
shape bursty traffic into a steady traffic stream.
 The disadvantage of the leaky bucket algorithm is the inefficient use
of available network resources.
 A large share of network resources, such as bandwidth, may not be used effectively.
Let us consider an example to understand this.
Imagine a bucket with a small hole in the bottom. No matter at what rate water
enters the bucket, the outflow is at a constant rate. When the bucket is full,
any additional water entering spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket, and the following steps are involved
in the leaky bucket algorithm:
1. When host wants to send packet, packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
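
These steps can be sketched in a few lines of Python; the bucket size, leak rate and arrival pattern below are made up purely for illustration:

    from collections import deque

    def leaky_bucket(arrivals, bucket_size, leak_rate):
        # arrivals[t] = number of packets arriving at tick t;
        # the bucket is a finite queue that leaks at most leak_rate packets per tick
        queue, dropped, sent = deque(), 0, []
        for t, n in enumerate(arrivals):
            for _ in range(n):                       # packets thrown into the bucket
                if len(queue) < bucket_size:
                    queue.append(t)
                else:
                    dropped += 1                     # bucket full: the packet spills over
            out = min(leak_rate, len(queue))         # constant outflow per tick
            for _ in range(out):
                queue.popleft()
            sent.append(out)
        return sent, dropped

    # a burst of 5 packets becomes a steady 2-per-tick output; one packet is lost
    print(leaky_bucket([5, 0, 0, 0], bucket_size=4, leak_rate=2))   # ([2, 2, 0, 0], 1)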

Token bucket Algorithm


 The leaky bucket algorithm has a rigid output pattern at the average rate, independent
of how bursty the traffic is.
 In some applications, when large bursts arrive, the output should be allowed to speed up.
This calls for a more flexible algorithm, preferably one that never loses information.
Therefore, the token bucket algorithm finds its use in network traffic shaping or rate-limiting.
 It is a control algorithm that indicates when traffic should be sent, based
on the presence of tokens in the bucket.
 The bucket contains tokens. Each token allows a packet of a predetermined size to be sent;
a token is removed from the bucket when a packet is transmitted.
 When tokens are present, a flow is allowed to transmit its traffic.
 No tokens means no flow may send its packets. Hence, a flow can transmit traffic up to its
peak burst rate only if there are enough tokens in the bucket.

Need of token bucket Algorithm: -


The leaky bucket algorithm enforces output pattern at the average rate, no matter
how bursty the traffic is. So, in order to deal with the bursty traffic we need a flexible
algorithm so that the data is not lost. One such algorithm is token bucket algorithm.

Steps of this algorithm can be described as follows:


1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.

Let’s understand with an example,


In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure
(B) We see that three of the five packets have gotten through, but the other two are stuck
waiting for more tokens to be generated.

Ways in which the token bucket is superior to the leaky bucket: The leaky bucket algorithm
controls the rate at which packets are introduced into the network, but it is very conservative
in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket
algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be
transmitted, it must capture a token, and the transmission takes place at the same rate. Hence
some of the bursty packets are transmitted at that rate if tokens are available, which
introduces some amount of flexibility into the system.

Formula: M * S = C + ρ * S
where
S = burst length (time taken)
M = maximum output rate
ρ = token arrival rate
C = capacity of the token bucket in bytes
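
A toy Python sketch of the token bucket, reproducing the situation in the figures above (3 tokens in the bucket, 5 packets waiting); the capacity and token rate chosen here are only illustrative:

    def token_bucket(packets_waiting, tokens, capacity, token_rate, ticks):
        # each packet must capture (and destroy) one token; tokens arrive at
        # token_rate per tick, up to the bucket capacity
        sent_per_tick = []
        for _ in range(ticks):
            sent = min(packets_waiting, tokens)
            packets_waiting -= sent
            tokens -= sent
            sent_per_tick.append(sent)
            tokens = min(capacity, tokens + token_rate)   # new tokens generated each tick
        return sent_per_tick

    # 3 tokens, 5 packets waiting: 3 packets get through at once,
    # the other 2 wait for new tokens to be generated
    print(token_bucket(packets_waiting=5, tokens=3, capacity=4, token_rate=1, ticks=4))
    # [3, 1, 1, 0]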
Congestion Control Techniques
There are two techniques used for congestion control. One technique deals with prevention
while another deals with cure.

The two congestion control techniques are:


1. Open Loop Congestion Control
2. Closed Loop Congestion Control

Open Loop Congestion Control: This technique is used to prevent congestion before it
occurs. Either the sender or the receiver controls the congestion using this method. In open loop
congestion control, a few policies are used to prevent congestion before it happens.

Admission Control
In the admission policy, the availability of resources for the transmission is checked by the
switches. If there is congestion, or even a chance of it occurring, the router will
refuse to establish the virtual circuit. It is one of the techniques widely used in
virtual-circuit networks to keep congestion at bay. The idea is: do not set up a new virtual circuit
unless the network can carry the added traffic without becoming congested.
Admission control can also be combined with traffic-aware routing by considering routes
around traffic hotspots as part of the setup procedure.
Example
Take two networks: (a) a congested network and (b) the portion of the network that is not
congested. A virtual circuit from A to B is also shown below −

Explanation
Step 1 − Suppose a host attached to router A wants to set up a connection to a host attached
to router B. Normally this connection passes through one of the congested routers.
Step 2 − To avoid this situation, we can redraw the network as shown in figure (b),
removing the congested routers and all of their lines.
Step 3 − The dashed line indicates a possible route for the virtual circuit that avoids the
congested routers.

Closed Loop Congestion Control: This technique is used to remove congestion after it
has already occurred in the network. There are a few further techniques within the closed
loop approach to deal with congestion that has already occurred.
Choke Packets:
This approach can be used in virtual circuits as well as in the datagram subnets. In this
technique, each router associates a real variable with each of its output lines.

This real variable, say “u”, has a value between 0 and 1, and it indicates the utilization
of that line. If the value of “u” goes above a threshold, then that output line enters
a “warning” state.

Hop-by-hop choke packets:


This technique is an advancement over the choke packet method. At high speeds and over
long distances, sending a choke packet back to the source doesn’t help much, because by the
time the choke packet reaches the source, a lot of packets destined for the same
destination will already have left the source.

So, to help with this, hop-by-hop choke packets are used. Over long distances or at high speeds,
plain choke packets are not very effective; a more efficient method is to send choke packets
hop by hop.

This requires each hop to reduce its transmission even before the choke packet arrives at the
source.
