
3rd Module CN Notes

1) A router contains input ports, a switching fabric, output ports, and a routing processor. The input ports perform packet forwarding lookups to determine the correct output port. 2) The switching fabric connects the input ports to the output ports and allows packets to be transferred between them. Earlier routers used a shared memory and bus for switching, while modern routers use interconnection networks like crossbar switches. 3) The routing processor maintains routing tables and forwarding tables, which are distributed to the input ports for local lookups. This allows packets to be forwarded without involving the centralized routing processor.

5th sem, Computer Networks, Module-3: Network layer
 

Module – 3

The Network layer

4.3 What’s Inside a Router?


The forwarding function is the actual transfer of packets from a router's incoming links to the appropriate
outgoing links at that router. In computer networking, the terms forwarding and switching are often used
interchangeably by researchers and practitioners.
A high-level view of a generic router architecture is shown in Figure 4.6. Four router components can be
identified:

• Input ports. An input port performs several key functions. It performs the physical layer function of
terminating an incoming physical link at a router; this is shown in the leftmost box of the input port and the
rightmost box of the output port in Figure 4.6.
• An input port also performs link-layer functions needed to interoperate with the link layer at the
other side of the incoming link; this is represented by the middle boxes in the input and output
ports.
• Perhaps most crucially, the lookup function is also performed at the input port; this will occur in
the rightmost box of the input port.
• It is here that the forwarding table is consulted to determine the router output port to which an
arriving packet will be forwarded via the switching fabric.
• Control packets (for example, packets carrying routing protocol information) are forwarded from
an input port to the routing processor. Note that the term port here— referring to the physical
input and output router interfaces—is distinctly different from the software ports associated with
network applications.

• Switching fabric. The switching fabric connects the router’s input ports to its output ports. This
switching fabric is completely contained within the router— a network inside of a network router!

• Output ports. An output port stores packets received from the switching fabric and transmits these packets
on the outgoing link by performing the necessary link-layer and physical-layer functions.

• When a link is bidirectional (that is, carries traffic in both directions), an output port will typically be
paired with the input port for that link on the same line card (a printed circuit board containing one
or more input ports, which is connected to the switching fabric).

• Routing processor.

Dr Prakasha S  & Sudha V, Dept of ISE, RNSIT   Page 1 
 
• The routing processor executes the routing protocols, maintains routing tables and attached link state information, and computes the forwarding table for the router.

• It also performs the network management functions.

• A router's input ports, output ports, and switching fabric together implement the forwarding function and are almost always implemented in hardware, as shown in Figure 4.6.

• These forwarding functions are sometimes collectively referred to as the router forwarding plane.

Input port processing:

• A more detailed view of input processing is given in Figure 4.7. As discussed above, the input port's line termination function and link-layer processing implement the physical and link layers for that individual input link.

• The lookup performed in the input port is central to the router's operation: it is here that the router uses the forwarding table to look up the output port to which an arriving packet will be forwarded via the switching fabric.

 
• The forwarding table is computed and updated by the routing processor, with a shadow copy typically stored at each input port. The forwarding table is copied from the routing processor to the line cards over a separate bus (e.g., a PCI bus), indicated by the dashed line from the routing processor to the input line cards in Figure 4.6.

• With a shadow copy, forwarding decisions can be made locally, at each input port, without invoking the centralized routing processor on a per-packet basis, thus avoiding a centralized processing bottleneck.

• Given the existence of a forwarding table, lookup is conceptually simple: we just search through the forwarding table looking for the longest prefix match, as described earlier.
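As a concrete illustration, the longest-prefix-match lookup can be sketched in a few lines of Python. The prefixes, output ports, and default route below are hypothetical, and real routers use specialized hardware (e.g., TCAMs or compressed tries) rather than a linear scan:

```python
# Minimal longest-prefix-match sketch (illustrative only).
import ipaddress

FORWARDING_TABLE = [                                 # (prefix, output port)
    (ipaddress.ip_network("11.0.0.0/8"), 0),
    (ipaddress.ip_network("11.1.0.0/16"), 1),
    (ipaddress.ip_network("11.1.2.0/24"), 2),
]

def lookup(dest: str) -> int:
    """Return the output port of the longest matching prefix (port 3 = assumed default)."""
    addr = ipaddress.ip_address(dest)
    best_len, best_port = -1, 3
    for prefix, port in FORWARDING_TABLE:
        if addr in prefix and prefix.prefixlen > best_len:
            best_len, best_port = prefix.prefixlen, port
    return best_port

print(lookup("11.1.2.5"))   # matches all three prefixes; the /24 wins -> 2
print(lookup("11.3.4.5"))   # only the /8 matches -> 0
```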
 

• Once a packet's output port has been determined via the lookup, the packet can be sent into the switching fabric.

• In some designs, a packet may be temporarily blocked from entering the switching fabric if packets from other input ports are currently using the fabric.

• A blocked packet will be queued at the input port and then scheduled to cross the fabric at a later point in time.

• We'll take a closer look at the blocking, queuing, and scheduling of packets (at both input ports and output ports) in Section 4.3.4.

• Although "lookup" is arguably the most important action in input port processing, many other actions must be taken: (1) physical- and link-layer processing must occur, as discussed above; (2) the packet's version number, checksum, and time-to-live field must be checked, with the latter two fields rewritten; and (3) counters used for network management (such as the number of IP datagrams received) must be updated.

 
4.3.2 Switching
The switching fabric is at the very heart of a router, as it is through this fabric that the packets are actually
switched (that is, forwarded) from an input port to an output port. Switching can be accomplished in a
number of ways, as shown in Figure 4.8:
• Switching via memory. The simplest, earliest routers were traditional computers, with switching
between input and output ports being done under direct control of the CPU (routing processor). Input and
output ports functioned as traditional I/O devices in a traditional operating system.

• An input port with an arriving packet first signaled the routing processor via an interrupt. The packet was then copied from the input port into processor memory. The routing processor then extracted the destination address from the header, looked up the appropriate output port in the forwarding table, and copied the packet to the output port's buffers.

• In this scenario, if the memory bandwidth is such that B packets per second can be written into, or read from, memory, then the overall forwarding throughput (the total rate at which packets are transferred from input ports to output ports) must be less than B/2. Note also that two packets cannot be forwarded at the same time, even if they have different destination ports, since only one memory read/write over the shared system bus can be done at a time.

• Switching via a bus. In this approach, an input port transfers a packet directly to the output port over a
shared bus, without intervention by the routing processor. This is typically done by having the input port
pre-pend a switch-internal label (header) to the packet indicating the local output port to which this packet
is being transferred and transmitting the packet onto the bus.
• The packet is received by all output ports, but only the port that matches the label will keep the packet.

• The label is then removed at the output port, as this label is only used within the switch to cross the bus.

• If multiple packets arrive at the router at the same time, each at a different input port, all but one must wait, since only one packet can cross the bus at a time.

• Because every packet must cross the single bus, the switching speed of the router is limited to the bus speed; in our roundabout analogy, this is as if the roundabout could only contain one car at a time.

• Switching via an interconnection network. One way to overcome the bandwidth limitation of a single,
shared bus is to use a more sophisticated interconnection network, such as those that have been used in
the past to interconnect processors in multiprocessor computer architecture.

 
• A crossbar switch is an interconnection network consisting of 2N buses that connect N input ports to N output ports, as shown in Figure 4.8. Each vertical bus intersects each horizontal bus at a crosspoint, which can be opened or closed at any time by the switch fabric controller (whose logic is part of the switching fabric itself).

• When a packet arrives from port A and needs to be forwarded to port Y, the switch controller closes the crosspoint at the intersection of buses A and Y, and port A then sends the packet onto its bus, where it is picked up (only) by bus Y. Note that a packet from port B can be forwarded to port X at the same time, since the A-to-Y and B-to-X packets use different input and output buses. Thus, unlike the previous two switching approaches, crossbar networks are capable of forwarding multiple packets in parallel.

• However, if two packets from two different input ports are destined to the same output port, then one will have to wait at the input, since only one packet can be sent over any given bus at a time.

4.3.3 Output Processing


Output port processing, shown in Figure 4.9, takes packets that have been stored in the output port’s
memory and transmits them over the output link. This includes selecting and de-queueing packets for
transmission, and performing the needed link layer and physical-layer transmission functions.

 

4.3.4 Where Does Queueing Occur?


• Packet queues may form at both the input ports and the output ports.

• The location and extent of queuing (either at the input port queues or the output port queues) will depend on the traffic load, the relative speed of the switching fabric, and the line speed.

• Suppose that the input and output lines all have an identical transmission rate of Rline packets per second, and that there are N input ports and N output ports.

• Let's assume that all packets have the same fixed length and that packets arrive at input ports in a synchronous manner.

• That is, the time to send a packet on any link is equal to the time to receive a packet on any link, and during such an interval of time, either zero or one packet can arrive on an input link.

• Define the switching fabric transfer rate Rswitch as the rate at which packets can be moved from input port to output port.

• If Rswitch is N times faster than Rline, then only negligible queuing will occur at the input ports. This is because even in the worst case, where all N input lines are receiving packets and all packets are to be forwarded to the same output port, each batch of N packets (one packet per input port) can be cleared through the switch fabric before the next batch arrives.

• Output port queuing is illustrated in Figure 4.10. At time t, a packet has arrived at each of the incoming input ports, each destined for the uppermost outgoing port. Assuming identical line speeds and a switch operating at three times the line speed, one time unit later (that is, in the time needed to receive or send a packet), all three original packets have been transferred to the outgoing port and are queued awaiting transmission. In the next time unit, one of these three packets will have been transmitted over the outgoing link. In our example, two new packets have arrived at the incoming side of the switch; one of these packets is destined for this uppermost output port.

• Packet scheduling plays a crucial role in providing quality-of-service guarantees. If there is not enough memory to buffer an incoming packet, a decision must be made to either drop the arriving packet (a policy known as drop-tail) or remove one or more already-queued packets to make room for the newly arrived packet.

• In some cases, it may be advantageous to drop (or mark the header of) a packet before the buffer is full in order to provide a congestion signal to the sender. A number of packet-dropping and -marking policies are collectively known as active queue management (AQM) algorithms.

• One of the most widely studied and implemented AQM algorithms is the Random Early Detection (RED) algorithm. Under RED, a weighted average is maintained for the length of the output queue.

• If the average queue length is less than a minimum threshold, minth, when a packet arrives, the packet is admitted to the queue. Conversely, if the queue is full or the average queue length is greater than a maximum threshold, maxth, when a packet arrives, the packet is marked or dropped.

• Finally, if the packet arrives to find an average queue length in the interval [minth, maxth], the packet is marked or dropped with a probability that is typically some function of the average queue length, minth, and maxth.
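The three RED cases above can be sketched as follows. The threshold values and the linear marking-probability function below are illustrative assumptions; real RED implementations choose their own parameters and also maintain the weighted queue-length average (avg = (1 - w)·avg + w·q) between arrivals:

```python
import random

# Illustrative parameters (assumptions, not standard values):
MIN_TH, MAX_TH, MAX_P = 5.0, 15.0, 0.1

def red_decision(avg_qlen: float, queue_full: bool, rng=random.random) -> str:
    """Per-arrival RED decision, following the three cases in the text."""
    if queue_full or avg_qlen >= MAX_TH:
        return "drop"                      # queue full or average at/above maxth
    if avg_qlen < MIN_TH:
        return "admit"                     # average below minth
    # Average in [minth, maxth): drop/mark with a probability that grows
    # linearly with the average queue length (one common choice).
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return "drop" if rng() < p else "admit"

print(red_decision(3.0, False))    # below minth -> admit
print(red_decision(20.0, False))   # above maxth -> drop
```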

If the switch fabric is not fast enough (relative to the input line speeds) to transfer all arriving packets
through the fabric without delay, then packet queuing can also occur at the input ports, as packets must
join input port queues to wait their turn to be transferred through the switching fabric to the output port. To
illustrate an important consequence of this queuing, consider a crossbar switching fabric and suppose
that (1) all link speeds are identical, (2) that one packet can be transferred from any one input port to a
given output port in the same amount of time it takes for a packet to be received on an input link, and (3)
packets are moved from a given input queue to their desired output queue in an FCFS manner. Multiple
packets can be transferred in parallel, as long as their output ports are different. However, if two packets
at the front of two input queues are destined for the same output queue, then one of the packets will
be blocked and must wait at the input queue—the switching fabric can transfer only one packet to a given
output port at a time.

Figure 4.11 shows an example in which two packets (darkly shaded) at the front of their input queues are
destined for the same upper-right output port. Suppose that the switch fabric chooses to transfer the packet from the front of the upper-left queue. In this case, the darkly shaded packet in the lower-left
queue must wait. But not only must this darkly shaded packet wait, so too must the lightly shaded packet
that is queued behind that packet in the lower-left queue, even though there is no contention for the
middle-right output port (the destination for the lightly shaded packet). This phenomenon is known as
head-of-the-line (HOL) blocking in an input-queued switch—a queued packet in an input queue must
wait for transfer through the fabric (even though its output port is free) because it is blocked by another
packet at the head of the line.
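The Figure 4.11 situation can be sketched with a tiny discrete-time model, assuming FCFS input queues and a fabric that moves at most one packet per output port per slot. The queue contents are illustrative (each packet is represented simply by its destination output port):

```python
from collections import deque

# Input queue 0 and input queue 2 both have a head packet for output 0;
# queue 2 also holds a packet for output 1 behind it (the HOL situation).
inputs = [deque([0]), deque([]), deque([0, 1])]

def one_slot(inputs):
    """Transfer at most one head-of-queue packet per input and per output."""
    transferred, busy_outputs = [], set()
    for i, q in enumerate(inputs):
        if q and q[0] not in busy_outputs:   # only the HEAD may go (FCFS)
            busy_outputs.add(q[0])
            transferred.append((i, q.popleft()))
    return transferred

print(one_slot(inputs))   # [(0, 0)]: input 0 wins output 0; input 2 is HOL-blocked
print(list(inputs[2]))    # [0, 1]: the packet for output 1 waits, though output 1 was idle
```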

4.3.5 The Routing Control Plane


The routing control plane fully resides and executes in a routing processor within the router. The network-
wide routing control plane is thus decentralized—with different pieces (e.g., of a routing algorithm)
executing at different routers and interacting by sending control messages to each other. Additionally,
router and switch vendors bundle their hardware data plane and software control plane together into
closed (but inter-operable) platforms in a vertically integrated product.

 
4.4.4 IPv6
In the early 1990s, the Internet Engineering Task Force began an effort to develop a successor to the
IPv4 protocol. A prime motivation for this effort was the realization that the 32-bit IP address space was
beginning to be used up, with new subnets and IP nodes being attached to the Internet (and being
allocated unique IP addresses) at a breathtaking rate. To respond to this need for a large IP address
space, a new IP protocol, IPv6, was developed. The designers of IPv6 also took this opportunity to tweak
and augment other aspects of IPv4, based on the accumulated operational experience with IPv4.

IPv6 Datagram Format


The format of the IPv6 datagram is shown in Figure 4.24.

The most important changes introduced in IPv6 are evident in the datagram format:

• Expanded addressing capabilities. IPv6 increases the size of the IP address from 32 to 128 bits. This
ensures that the world won’t run out of IP addresses. Now, every grain of sand on the planet can be IP-
addressable. In addition to unicast and multicast addresses, IPv6 has introduced a new type of address,
called an anycast address, which allows a datagram to be delivered to any one of a group of hosts. (This
feature could be used, for example, to send an HTTP GET to the nearest of a number of mirror sites that
contain a given document.)

 
• A streamlined 40-byte header. As discussed below, a number of IPv4 fields have been dropped or made
optional. The resulting 40-byte fixed-length header allows for faster processing of the IP datagram. A new
encoding of options allows for more flexible options processing.

• Flow labeling and priority. IPv6 has an elusive definition of a flow. RFC 1752 and RFC 2460 state that
this allows “labeling of packets belonging to particular flows for which the sender requests special
handling, such as a nondefault quality of service or real-time service.” For example, audio and video
transmission might likely be treated as a flow.

On the other hand, the more traditional applications, such as file transfer and e-mail, might not be treated
as flows. It is possible that the traffic carried by a high-priority user (for example, someone paying for
better service for their traffic) might also be treated as a flow.

What is clear, however, is that the designers of IPv6 foresee the eventual need to be able to differentiate
among the flows, even if the exact meaning of a flow has not yet been determined. The IPv6 header also
has an 8-bit traffic class field. This field, like the TOS field in IPv4, can be used to give priority to certain
datagrams within a flow, or it can be used to give priority to datagrams from certain applications (for
example, ICMP) over datagrams from other applications (for example, network news). As noted above, a
comparison of Figure 4.24 with Figure 4.13 reveals the simpler, more streamlined structure of the IPv6
datagram.

The following fields are defined in IPv6:

• Version. This 4-bit field identifies the IP version number. Not surprisingly, IPv6 carries a value of 6 in
this field. Note that putting a 4 in this field does not create a valid IPv4 datagram.

• Traffic class. This 8-bit field is similar in spirit to the TOS field we saw in IPv4.

• Flow label. As discussed above, this 20-bit field is used to identify a flow of datagrams.

• Payload length. This 16-bit value is treated as an unsigned integer giving the number of bytes in the
IPv6 datagram following the fixed-length, 40-byte datagram header.

• Next header. This field identifies the protocol to which the contents (data field) of this datagram will be
delivered (for example, to TCP or UDP). The field uses the same values as the protocol field in the IPv4
header.

 

• Hop limit. The contents of this field are decremented by one by each router that forwards the datagram.
If the hop limit count reaches zero, the datagram is discarded.

• Source and destination addresses. The various formats of the IPv6 128-bit address are described in
RFC 4291.

• Data. This is the payload portion of the IPv6 datagram. When the datagram reaches its destination, the
payload will be removed from the IP datagram and passed on to the protocol specified in the next header
field.
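The 40-byte fixed-length header described above can be packed field by field. This is a sketch of the layout only, not an IPv6 stack; the addresses use the 2001:db8::/32 documentation prefix as placeholders:

```python
import struct
import ipaddress

def build_ipv6_header(payload_len, next_header, hop_limit, src, dst,
                      traffic_class=0, flow_label=0):
    """Pack the fixed 40-byte IPv6 header: version | traffic class | flow label,
    then payload length, next header, hop limit, and the two 128-bit addresses."""
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return (struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit)
            + ipaddress.ip_address(src).packed
            + ipaddress.ip_address(dst).packed)

hdr = build_ipv6_header(0, 17, 64, "2001:db8::1", "2001:db8::2")  # 17 = UDP
print(len(hdr))        # 40 -- the fixed-length header
print(hdr[0] >> 4)     # version field = 6
```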

The discussion above identified the purpose of the fields that are included in the IPv6 datagram.
Comparing the IPv6 datagram format in Figure 4.24 with the IPv4 datagram format that we saw in Figure
4.13, we notice that several fields appearing in the IPv4 datagram are no longer present in the IPv6
datagram:

• Fragmentation/Reassembly. IPv6 does not allow for fragmentation and reassembly at intermediate
routers; these operations can be performed only by the source and destination. If an IPv6 datagram
received by a router is too large to be forwarded over the outgoing link, the router simply drops the
datagram and sends a “Packet Too Big” ICMP error message (see below) back to the sender.

• Header checksum. Because the transport-layer (for example, TCP and UDP) and link-layer (for
example, Ethernet) protocols in the Internet perform checksumming, the designers of IP probably
felt that this functionality was sufficiently redundant in the network layer that it could be removed.

• Options. An options field is no longer a part of the standard IP header. However, it has not gone
away. Instead, the options field is one of the possible next headers pointed to from within the IPv6
header. That is, just as TCP or UDP protocol headers can be the next header within an IP packet,
so too can an options field. The removal of the options field results in a fixed-length, 40-byte IP
header.

Transitioning from IPv4 to IPv6


• How will the public Internet, which is based on IPv4, be transitioned to IPv6? The problem is that while new IPv6-capable systems can be made backward compatible, that is, can send, route, and receive IPv4 datagrams, already deployed IPv4-capable systems are not capable of handling IPv6 datagrams.

 
• Probably the most straightforward way to introduce IPv6-capable nodes is a dual-stack approach, where IPv6 nodes also have a complete IPv4 implementation. Such a node, referred to as an IPv6/IPv4 node in RFC 4213, has the ability to send and receive both IPv4 and IPv6 datagrams.

• When interoperating with an IPv4 node, an IPv6/IPv4 node can use IPv4 datagrams; when interoperating with an IPv6 node, it can speak IPv6. IPv6/IPv4 nodes must have both IPv6 and IPv4 addresses. They must furthermore be able to determine whether another node is IPv6-capable or IPv4-only.

Tunneling
An alternative to the dual-stack approach, also discussed in RFC 4213, is known as tunneling.
• Suppose two IPv6 nodes (for example, B and E in Figure 4.25) want to interoperate using IPv6 datagrams but are connected to each other by intervening IPv4 routers. We refer to the intervening set of IPv4 routers between two IPv6 routers as a tunnel, as illustrated in Figure 4.26.

• With tunneling, the IPv6 node on the sending side of the tunnel (for example, B) takes the entire IPv6 datagram and puts it in the data (payload) field of an IPv4 datagram.

• This IPv4 datagram is then addressed to the IPv6 node on the receiving side of the tunnel (for example, E) and sent to the first node in the tunnel (for example, C).

• The intervening IPv4 routers in the tunnel route this IPv4 datagram among themselves, just as they would any other datagram, blissfully unaware that the IPv4 datagram itself contains a complete IPv6 datagram. The IPv6 node on the receiving side of the tunnel eventually receives the IPv4 datagram (it is the destination of the IPv4 datagram!), determines that the IPv4 datagram contains an IPv6 datagram, extracts the IPv6 datagram, and then routes the IPv6 datagram exactly as it would if it had received the IPv6 datagram from a directly connected IPv6 neighbor.
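A minimal sketch of the sender-side encapsulation: the IPv6 datagram becomes the payload of an IPv4 datagram whose protocol field carries 41, the value RFC 4213 uses for an encapsulated IPv6 datagram. The addresses are hypothetical, and the IPv4 header checksum is left at zero for brevity (a real stack must compute it):

```python
import struct
import ipaddress

IPPROTO_IPV6 = 41   # IPv4 protocol number marking an encapsulated IPv6 datagram

def encapsulate(ipv6_datagram: bytes, v4_src: str, v4_dst: str) -> bytes:
    """Wrap an IPv6 datagram in a minimal 20-byte IPv4 header (checksum omitted)."""
    total_len = 20 + len(ipv6_datagram)
    header = struct.pack("!BBHHHBBH4s4s",
                         (4 << 4) | 5,       # version 4, header length 5 words
                         0, total_len,        # TOS, total length
                         0, 0,                # identification, flags/fragment offset
                         64, IPPROTO_IPV6,    # TTL, protocol = 41
                         0,                   # header checksum (skipped in this sketch)
                         ipaddress.ip_address(v4_src).packed,
                         ipaddress.ip_address(v4_dst).packed)
    return header + ipv6_datagram

# A fake 40-byte IPv6 datagram (just a version-6 first byte) as the payload.
pkt = encapsulate(b"\x60" + b"\x00" * 39, "10.0.0.2", "10.0.0.5")
print(pkt[9])    # 41: the receiving side knows an IPv6 datagram is inside
print(len(pkt))  # 60: 20-byte IPv4 header + 40-byte payload
```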

 

4.4.5 A Brief Foray into IP Security


The services provided by an IPsec session include:

• Cryptographic agreement. Mechanisms that allow the two communicating hosts to agree on
cryptographic algorithms and keys.

• Encryption of IP datagram payloads. When the sending host receives a segment from the transport
layer, IPsec encrypts the payload. The payload can only be decrypted by IPsec in the receiving host.

• Data integrity. IPsec allows the receiving host to verify that the datagram’s header fields and encrypted
payload were not modified while the datagram was en route from source to destination.

• Origin authentication. When a host receives an IPsec datagram from a trusted source (with a trusted
key—see Chapter 8), the host is assured that the source IP address in the datagram is the actual source
of the datagram. When two hosts have an IPsec session established between them, all TCP and UDP segments sent between them will be encrypted and authenticated. IPsec therefore provides blanket
coverage, securing all communication between the two hosts for all network applications.

4.5.1 The Link-State (LS) Routing Algorithm


Recall that in a link-state algorithm, the network topology and all link costs are known, that is, available as
input to the LS algorithm. In practice this is accomplished by having each node broadcast link-state
packets to all other nodes in the network, with each link-state packet containing the identities and costs of
its attached links. In practice (for example, with the Internet's OSPF routing protocol, discussed in Section
4.6.1) this is often accomplished by a link-state broadcast; each node can then run the LS algorithm and
compute the same set of least-cost paths as every other node.
The link-state routing algorithm we present below is known as Dijkstra’s algorithm.

Let us define the following notation:


• D(v): cost of the least-cost path from the source node to destination v as of this iteration of the algorithm.
• p(v): previous node (neighbor of v) along the current least-cost path from the source to v.
• N′: subset of nodes; v is in N′ if the least-cost path from the source to v is definitively known.
The global routing algorithm consists of an initialization step followed by a loop. The number of times the
loop is executed is equal to the number of nodes in the network. Upon termination, the algorithm will have
calculated the shortest paths from the source node u to every other node in the network.
Link-State (LS) Algorithm for Source Node u

 
As an example, let’s consider the network in Figure 4.27

and compute the least-cost paths from u to all possible destinations. A tabular summary of the
algorithm’s computation is shown in Table 4.3, where each line in the table gives the values of the
algorithm’s variables at the end of the iteration. Let’s consider the first few steps in detail.

• In the initialization step, the currently known least-cost paths from u to its directly attached neighbors, v,
x, and w, are initialized to 2, 1, and 5, respectively. Note in particular that the cost to w is set to 5 (even
though we will soon see that a lesser-cost path does indeed exist) since this is the cost of the direct (one
hop) link from u to w. The costs to y and z are set to infinity because they are not directly connected to u.

• In the first iteration, we look among those nodes not yet added to the set N′ and find the node with the least cost as of the end of the previous iteration. That node is x, with a cost of 1, and thus x is added to the set N′. Line 12 of the LS algorithm is then performed to update D(v) for all nodes v, yielding the results shown in the second line (Step 1) in Table 4.3. The cost of the path to v is unchanged. The cost of the path to w (which was 5 at the end of the initialization) through node x is found to have a cost of 4. Hence this lower-cost path is selected and w's predecessor along the shortest path from u is set to x. Similarly, the cost to y (through x) is computed to be 2, and the table is updated accordingly.

• In the second iteration, nodes v and y are found to have the least-cost paths (2); we break the tie arbitrarily and add y to the set N′, so that N′ now contains u, x, and y. The costs to the remaining nodes not yet in N′, that is, nodes v, w, and z, are updated via line 12 of the LS algorithm, yielding the results shown in the third row of Table 4.3.
• And so on. . . .

 

When the LS algorithm terminates, we have, for each node, its predecessor
along the least-cost path from the source node.
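The LS algorithm for source node u can be sketched with a priority queue. The link costs below are reconstructed from the worked steps in the text (c(u,v)=2, c(u,x)=1, c(u,w)=5, c(x,w)=3, c(x,y)=1, c(y,w)=1, c(y,z)=2); the remaining costs are assumptions that should be checked against Figure 4.27:

```python
import heapq

graph = {
    "u": {"v": 2, "x": 1, "w": 5},
    "v": {"u": 2, "x": 2, "w": 3},
    "x": {"u": 1, "v": 2, "w": 3, "y": 1},
    "w": {"u": 5, "v": 3, "x": 3, "y": 1, "z": 5},
    "y": {"x": 1, "w": 1, "z": 2},
    "z": {"w": 5, "y": 2},
}

def dijkstra(graph, source):
    """Return least cost D(v) and predecessor p(v) for every node."""
    D = {n: float("inf") for n in graph}
    D[source] = 0
    p, done, pq = {}, set(), [(0, source)]
    while pq:
        d, v = heapq.heappop(pq)
        if v in done:
            continue
        done.add(v)                      # v joins N': its cost is now definitive
        for w, c in graph[v].items():
            if d + c < D[w]:             # line 12 of the LS algorithm
                D[w], p[w] = d + c, v
                heapq.heappush(pq, (D[w], w))
    return D, p

# Ties may be broken in a different order than in Table 4.3; final costs agree.
D, p = dijkstra(graph, "u")
print(D)   # {'u': 0, 'v': 2, 'x': 1, 'w': 3, 'y': 2, 'z': 4}
print(p)   # predecessors, e.g. p['w'] = 'y' on the final least-cost path
```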

4.5.2 The Distance-Vector (DV) Routing Algorithm


Whereas the LS algorithm is an algorithm using global information, the distance-vector (DV) algorithm is
iterative, asynchronous, and distributed. It is distributed in that each node receives some information from
one or more of its directly attached neighbors, performs a calculation, and then distributes the results of
its calculation back to its neighbors. It is iterative in that this process continues on until no more
information is exchanged between neighbors. (Interestingly, the algorithm is also self-terminating: there
is no signal that the computation should stop; it just stops.) The algorithm is asynchronous in that it does
not require all of the nodes to operate in lockstep with each other.
Before we present the DV algorithm, it will prove beneficial to discuss an important relationship that exists
among the costs of the least-cost paths. Let dx(y) be the cost of the least-cost path from node x to node y.
Then the least costs are related by the celebrated Bellman-Ford equation, namely,

 
dx(y) = minv{c(x,v) + dv(y)}, (4.1)
where the minv in the equation is taken over all of x’s neighbors.
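Equation (4.1) can be checked numerically on a small hypothetical topology (three nodes; the link costs below are illustrative assumptions, not from a figure in the text):

```python
# Numerical check of the Bellman-Ford equation (4.1).
costs = {("x", "v"): 1, ("x", "w"): 5, ("v", "w"): 2}
c = lambda a, b: costs.get((a, b), costs.get((b, a), float("inf")))

# Least costs to destination w, found by inspection:
d = {"w": 0, "v": 2, "x": 3}   # x reaches w cheapest via v: 1 + 2 = 3

# dx(w) = min over x's neighbors v' of c(x, v') + dv'(w)
dx_w = min(c("x", n) + d[n] for n in ("v", "w"))
print(dx_w)   # 3, agreeing with d["x"]
```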
