
Computer Networks

MODULE 4
THE TRANSPORT LAYER
Introduction
The network layer serves as a crucial intermediary in facilitating end-to-end packet delivery within
a network infrastructure, employing either datagrams or virtual circuits to ensure efficient data
traversal. It acts as a conduit, coordinating the movement of packets across various network nodes,
thereby enabling communication between source and destination systems. By managing routing
and addressing, the network layer ensures that data packets reach their intended recipients reliably
and in a timely manner.

Building upon the foundation laid by the network layer, the transport layer enhances
communication capabilities by providing a more sophisticated and reliable means of data transport
between processes on source and destination machines. This layer extends beyond mere packet
delivery, incorporating mechanisms for error detection, flow control, and congestion management
to guarantee the integrity and efficiency of data transmission. By encapsulating data into segments
or messages, the transport layer establishes logical connections between application processes
running on different hosts, enabling them to exchange information seamlessly.

Figure: Environment of the transport layer.

Two prominent protocols that operate within the transport layer are TCP (Transmission Control
Protocol) and UDP (User Datagram Protocol). While both protocols facilitate communication
between applications, they exhibit distinct characteristics and serve different purposes. TCP offers
reliable, connection-oriented communication, ensuring that data is delivered accurately and in
the correct order through mechanisms such as acknowledgment, retransmission, and
sequencing. In contrast, UDP provides a simpler, connectionless communication model,
prioritizing speed and efficiency over reliability. UDP is often favored for real-time applications
or scenarios where occasional packet loss is acceptable, while TCP is preferred for applications
requiring reliable data transmission, such as file transfers or web browsing. Together, TCP and
UDP play vital roles in enabling diverse communication needs within networked environments,
each contributing its unique strengths to the overarching goal of efficient and dependable data
transport.

Transport Layer Services

The transport layer aims to deliver efficient, reliable, and cost-effective data transmission services
to users, typically processes operating in the application layer. It relies on the network layer's
services to achieve its objectives, leveraging underlying network infrastructure for packet routing
and delivery.

Figure 1.

Location of Transport Entity: The transport entity can be situated in various locations, including
the operating system kernel, library packages, separate user processes, or even on the network
interface card.

Types of Transport Services: Two primary types of transport services exist: connection-oriented
and connectionless transport services, mirroring their counterparts in the network layer. The key
distinction lies in where the code executes: the transport layer code operates on users' machines,
while the network layer primarily runs on routers.

Enhanced Reliability: The existence of the transport layer enables a more reliable service than
the underlying network. While the network service models real networks, which can be unreliable,
the connection-oriented transport service aims for reliability on top of an unreliable network. The
transport layer isolates upper layers from the technology, design, and imperfections of the network,
providing a more predictable and consistent communication environment.

Transport Service Primitives


The transport layer provides operations to application programs through transport service
primitives, facilitating communication between different processes.


Primitives Usage Example:


• Consider an application scenario with a server and remote clients:
1. The server initiates communication by executing a LISTEN primitive, which blocks
until a client arrives.
2. When a client wishes to communicate, it executes a CONNECT primitive, prompting
the transport entity to send a packet to the server with a transport layer message in the
payload.
3. Segments (transport layer) are encapsulated within packets (network layer), which, in
turn, are contained in frames (data link layer).
Client-Server - Connection Establishment:
• In connection establishment:
• The client's CONNECT call triggers the sending of a CONNECTION REQUEST
segment to the server.
• Upon reception, the server checks if it's blocked on a LISTEN and, if so, unblocks itself
and sends a CONNECTION ACCEPTED segment back to the client, establishing the
connection.
Connection Release Variants:
• Disconnection can occur in two variants: asymmetric and symmetric.
• Asymmetric disconnection allows either transport user to issue a DISCONNECT
primitive, resulting in a DISCONNECT segment sent to the remote transport entity,
thereby releasing the connection.
• In symmetric disconnection, each direction is closed independently, with
DISCONNECT initiated when one side has no more data to send but is still willing to
accept data from its partner.


Berkeley Sockets/Socket Programming:


Berkeley Sockets, a set of transport primitives used for TCP (Transmission Control Protocol), were
initially introduced as part of the Berkeley UNIX 4.2BSD software distribution in 1983. Since
their release, they have gained widespread popularity and are now extensively utilized for Internet
programming across various operating systems, particularly UNIX-based systems. Additionally, a
socket-style API named 'winsock' exists for Windows systems.
Socket Primitives:
The socket primitives offer enhanced features and flexibility. These primitives include:
1. SOCKET: Creates a new communication endpoint.
2. BIND: Associates a local address with a socket.
3. LISTEN: Announces willingness to accept connections and specifies the queue size.
4. ACCEPT: Passively establishes an incoming connection.
5. CONNECT: Actively attempts to establish a connection.
6. SEND: Sends data over the connection.
7. RECEIVE: Receives data from the connection.
8. CLOSE: Releases the connection.
Server-Side Execution:


Servers execute the first four primitives in sequence. The SOCKET primitive creates a new
endpoint and allocates table space for it within the transport entity. The BIND primitive assigns
network addresses to sockets, allowing remote clients to connect. LISTEN allocates space to queue
incoming calls, while ACCEPT blocks until an incoming connection request arrives.
Client-Side Execution:
On the client side, a socket is first created using the SOCKET primitive. However, BIND is not
needed, since the address the client uses does not matter to the server. The CONNECT primitive actively
initiates the connection process and blocks the caller until it completes. Once established, both
sides can use SEND and RECEIVE to transmit and receive data over the connection.
Connection Release:
Connection release with sockets is symmetric. Once both sides execute a CLOSE primitive, the
connection is released.
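
The primitive sequence can be made concrete with a small Python sketch. The port number
(12345) and the echo behavior are illustrative assumptions; the calls map one-to-one onto
the primitives listed above.

    import socket

    def server():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET: new endpoint
        s.bind(("0.0.0.0", 12345))       # BIND: attach a local address (port assumed)
        s.listen(5)                      # LISTEN: willing to accept; queue size 5
        conn, addr = s.accept()          # ACCEPT: block until a connection arrives
        data = conn.recv(1024)           # RECEIVE: read data from the connection
        conn.sendall(data)               # SEND: echo the data back
        conn.close()                     # CLOSE: this side's half of the release
        s.close()

    def client():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET (no BIND needed)
        s.connect(("127.0.0.1", 12345))  # CONNECT: actively establish the connection
        s.sendall(b"hello")              # SEND
        print(s.recv(1024))              # RECEIVE
        s.close()                        # CLOSE

Because release is symmetric, the connection is fully released only after both the client
and the server have executed CLOSE.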

Popularity and Standardization:


Berkeley Sockets have gained widespread popularity and are considered the de facto standard for
abstracting transport services to applications. The socket API, often used with the TCP protocol,
provides a reliable byte stream service. While TCP is commonly used, other protocols could be
implemented using the same API, providing flexibility to transport service users.


Compatibility with Various Transport Protocols:


Sockets can also be employed with transport protocols that provide a message stream rather than
a byte stream and may or may not include congestion control mechanisms. For instance, protocols
like DCCP (Datagram Congestion Controlled Protocol) offer congestion control similar to TCP
but operate on a connectionless basis, akin to UDP.
Potential Limitations and Evolution:
• Despite its strengths, the socket API may not be the ultimate solution for transport
interfaces, especially in scenarios where applications work with a group of related streams.
• Applications like web browsers, which request multiple objects from the same server, may
find the socket API's structure suboptimal as it handles each stream separately, placing the
burden of managing the set on the application.
• Newer protocols and interfaces, such as SCTP (Stream Control Transmission Protocol) and
SST (Structured Stream Transport), have been developed to address these limitations more
effectively. These protocols offer features like support for groups of related streams, a mix
of connection-oriented and connectionless traffic, and multiple network paths.
• These newer protocols may require slight modifications to the socket API to fully leverage
their benefits, and their success will ultimately depend on their adoption and effectiveness
in addressing the evolving needs of transport services.

ELEMENTS OF TRANSPORT PROTOCOLS


The transport service is realized through a transport protocol employed between two transport
entities. This protocol is responsible for managing various aspects of communication, akin to data
link protocols studied previously.
Similar to data link protocols, transport protocols address issues such as error control, sequencing,
and flow control, among others. Despite similarities, significant differences exist between
transport and data link protocols due to the distinct environments in which they operate.
• While data link protocols manage communication between two routers directly via a
physical channel, transport protocols operate over the entire network, leading to several
implications.
1. Need for Explicit Addressing
2. Complexity of Connection Establishment
3. Potential for Packet Delay and Duplication
4. Buffering and Flow Control Complexity
1. Need for Explicit Addressing:
Unlike data link protocols where routers communicate directly with specific counterparts, the
transport layer requires explicit addressing of destinations. Addressing mechanisms play a crucial
role in facilitating communication between application processes in networked environments.
Various strategies, such as stable TSAP addresses, portmappers, and proxy servers, are employed
to manage the complexities of address discovery and connection establishment.
When an application process intends to establish a connection with a remote process, it must
specify the destination process. This applies to both connection-oriented and connectionless
transport services.
Transport addresses, known as Transport Service Access Points (TSAPs), are defined for this
purpose. On the Internet, these endpoints are referred to as ports, serving as entry points for
communication. TSAPs represent specific endpoints in the transport layer, while Network Service
Access Points (NSAPs) denote endpoints in the network layer. IP addresses are examples of
NSAPs.
• Example Scenario:
• Consider a scenario where a mail server process on host 2 attaches itself to TSAP 1522
to await incoming calls. Meanwhile, an application process on host 1 attaches itself to
TSAP 1208 and issues a CONNECT request to establish a connection with TSAP 1522
on host 2.
• The subsequent steps involve data transmission, response acknowledgment, and
connection release.

Figure: TCP addressing


• Address Discovery Challenges:
• A challenge arises in how the user process on host 1 identifies the TSAP address of the
mail server (TSAP 1522). One approach involves servers maintaining stable TSAP
addresses listed in well-known files.
• However, this approach may not suffice for user processes wanting to communicate
with unknown or transient services.
• Portmapper Solution:

• An alternative scheme involves a special process called a portmapper. Users can
connect to the portmapper, request TSAP addresses corresponding to service names,
and establish connections accordingly.
• Newly created services must register themselves with the portmapper, providing their
service name and TSAP for future reference.
• Initial Connection Protocol:
• In the initial connection protocol, rather than having every server listen at a well-known
TSAP, each machine offering services has a proxy server (e.g., inetd on UNIX systems)
that listens for connection requests.
• Users initiate CONNECT requests specifying the TSAP address of the desired service.
If no server is available, the connection is established with the process server, which
spawns the requested server upon request arrival.
2. Complexity of Connection Establishment:
Connection establishment in the transport layer is more complicated compared to the data link
layer. While establishing connections over physical channels is straightforward, the process
becomes intricate in the transport layer. Connection establishment in networks poses challenges
due to network phenomena such as packet loss, delay, corruption, and duplication.
Protocols need to implement robust mechanisms to handle such scenarios and ensure reliable
communication. Techniques such as packet lifetime restriction and sequence number management
play crucial roles in mitigating the impact of delayed duplicates on network operations.
1. Addressing Delayed Duplicates:
• One approach to address delayed duplicates involves using throwaway transport
addresses, where each connection generates a new address. However, this approach
complicates the initial connection setup.
• Another method involves assigning a unique identifier to each connection, enabling
entities to identify and reject duplicates. However, this approach requires
maintaining historical information indefinitely, posing scalability challenges.
2. Managing Packet Lifetime:
• Packet lifetime can be restricted to a maximum using techniques such as restricted
network design, hop counters, or timestamping.
• By bounding packet lifetimes, it becomes possible to reject delayed duplicate
segments reliably.
3. Rejecting Delayed Duplicates:
• To reject delayed duplicates effectively, segments are labeled with sequence
numbers that won't be reused within a defined time period (T).
• This method ensures that delayed duplicates of old packets are discarded,
preventing confusion at the destination.

4. Clock-Based Solution:
• An alternative proposed by Tomlinson involves equipping each host with a time-
of-day clock, which operates as a binary counter.
• The counter has at least as many bits as the sequence numbers, and it continues
running even if the host crashes, ensuring reliable sequence number
management.
5. Buffering and Flow Control Complexity:
• Both buffering and flow control are essential in both data link and transport layers, but
the transport layer's complexity lies in managing a larger and varying number of
connections with fluctuating bandwidth.
• Unlike data link protocols, which may allocate a fixed number of buffers to each line,
the transport layer's numerous connections and variable bandwidth may require a
different approach to buffer allocation.
6. Three-way handshake:
Tomlinson (1975) introduced the three-way handshake as a solution to the problem of
establishing connections reliably in the presence of delayed duplicate control segments.
1. In this protocol, one peer verifies with the other that the connection request is current. The
typical setup procedure involves the initiating host sending a CONNECTION REQUEST
segment containing a sequence number (x) to the receiving host.
2. The receiving host responds with an ACK segment acknowledging the sequence number
(x) and announcing its own initial sequence number (y).
3. Finally, the initiating host acknowledges the receiving host's choice of an initial sequence
number in the first data segment it sends.

Figure: (a) Normal operation. (b) Old duplicate CONNECTION REQUEST appearing out of nowhere.
(c) Duplicate CONNECTION REQUEST and duplicate ACK.

• The three-way handshake works effectively in the presence of delayed duplicate control
segments by ensuring that any delayed duplicates do not cause damage.
• If a delayed duplicate CONNECTION REQUEST segment arrives, the receiving host
verifies with the initiating host before proceeding with the connection setup.

• In the worst-case scenario where both a delayed CONNECTION REQUEST and an ACK
are floating around, the protocol still ensures that the connection is not established
accidentally.
• TCP utilizes this three-way handshake for connection establishment. Additionally, within
a connection, a timestamp is used to extend the sequence number to prevent wrapping
within the maximum packet lifetime, as described in RFC 1323 (PAWS).
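
The exchange in the normal case can be sketched as a simple message trace in Python; x and
y stand for the two initial sequence numbers, and the segment names follow the figure above.

    def three_way_handshake(x, y):
        """Trace of the normal case; x and y are the initial sequence numbers."""
        trace = [
            ("H1 -> H2", f"CONNECTION REQUEST (seq={x})"),
            ("H2 -> H1", f"ACK (seq={y}, ack={x})"),    # accept and announce y
            ("H1 -> H2", f"DATA (seq={x}, ack={y})"),   # first data segment confirms y
        ]
        for direction, segment in trace:
            print(direction, segment)

    # A delayed duplicate CONNECTION REQUEST (seq=x_old) would elicit
    # ACK (seq=y, ack=x_old); host 1, which never sent that request, replies
    # with a REJECT (ack=y), so no connection is established.
    three_way_handshake(x=100, y=500)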

7. Connection Release
Releasing a connection can be more complex than establishing one, with various potential
pitfalls. Two styles of connection termination exist: asymmetric release and symmetric
release.
• Asymmetric release, akin to how the telephone system operates, terminates the connection
when one party hangs up.
• Symmetric release treats the connection as two separate unidirectional connections,
requiring each to be released independently.

Figure: Abrupt disconnection with loss of data.

Figure: (a) Normal case of three-way handshake. (b) Final ACK lost. (c) Response lost.
(d) Response lost and subsequent DRs lost.

Asymmetric release can lead to abrupt termination and potential data loss. For example, if
one party sends data while the other issues a DISCONNECT segment, data may be lost
when the connection is released. To avoid data loss, a more sophisticated release protocol
may be needed. Symmetric release allows each direction of the connection to be released
independently, enabling a host to continue receiving data even after sending a
DISCONNECT segment.

Error Control and Flow Control:


Error control ensures data delivery with the desired reliability. Flow control prevents a fast
transmitter from overwhelming a slow receiver.

Error Control Mechanisms:


• Error-Detecting Code: Frame carries CRC or checksum to check for correct reception.
• Automatic Repeat reQuest (ARQ): Frame carries sequence number; sender retransmits
until successful receipt acknowledged.

Flow Control Mechanisms:


• Maximum Number of Outstanding Frames: Sender limits number of unacknowledged
frames; pauses if receiver doesn't acknowledge quickly.
• Stop-and-Wait Protocol: Maximum one packet outstanding; sender waits for
acknowledgment before sending next packet.
• Sliding Window Protocol: Combines features of ARQ and flow control; supports
bidirectional data transfer.
• Larger windows enable pipelining, enhancing performance on long, fast links.
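
As a concrete illustration of ARQ with a one-packet window, here is a minimal stop-and-wait
sender sketched in Python over a UDP socket. The 1-byte sequence-number header and the
b"ACK" reply format are assumptions made for the sketch, not a standard wire format.

    import socket

    def stop_and_wait_send(sock, dest, messages, timeout=1.0):
        """One outstanding packet at a time; retransmit until its ACK arrives.
        Assumes the receiver replies b'ACK' + seq for each packet it accepts."""
        sock.settimeout(timeout)
        seq = 0
        for msg in messages:
            packet = bytes([seq]) + msg          # 1-byte sequence number header
            while True:
                sock.sendto(packet, dest)        # transmit (or retransmit)
                try:
                    reply, _ = sock.recvfrom(16)
                    if reply == b"ACK" + bytes([seq]):
                        break                    # acknowledged: send the next packet
                except socket.timeout:
                    pass                         # frame or ACK lost: retransmit
            seq ^= 1                             # alternate the 1-bit sequence number

A sliding window generalizes this by allowing several numbered packets to be outstanding
at once, which fills long, fast links far better.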

Difference in Function:
• Link layer checksum protects frame while crossing single link.
• Transport layer checksum protects segment while crossing entire network path, providing
end-to-end check.
• End-to-end check essential for correctness according to end-to-end argument.
• Link layer checks valuable for improving performance but not essential for correctness.
Difference in Degree:
• Wireless links often have small bandwidth-delay product, allowing only one frame
outstanding at a time.
• Small window size sufficient for good performance.
• Wired and optical fiber links have low error rates, allowing end-to-end retransmissions to
repair residual frame loss.
• TCP connections often have larger bandwidth-delay product, requiring larger sliding
windows.
Buffering Considerations:
• Hosts may need substantial buffering for sliding windows, at both sender and receiver.
• Sender buffers transmitted but unacknowledged segments for potential retransmission.
• Receiver may dynamically acquire buffers for incoming segments, potentially discarding
if unavailable.
• Permanent harm avoided by sender's ability to retransmit lost segments.
Trade-off in Buffering:
• For low-bandwidth bursty traffic, dynamic buffer acquisition may be reasonable.
• For high-bandwidth traffic like file transfer, dedicated window of buffers at receiver
preferred for maximum speed.

• TCP employs strategy of dedicating full window of buffers at receiver.

Dynamic Buffer Allocation:


• Adjusting buffer allocations dynamically is crucial as traffic patterns change and
connections open and close.
• The transport protocol should allow a sending host to request buffer space at the
receiver, either per connection or collectively for all connections between hosts.
• Alternatively, the receiver can inform the sender about reserved buffer space.
• TCP implements dynamic buffer management, decoupling buffering from
acknowledgements and using a variable-sized window.
Example Scenario:
• Illustrate dynamic window management in a datagram network with 4-bit sequence
numbers.
• Sender requests eight buffers but receives only four. It sends three segments, one of
which is lost, causing a deadlock due to improper buffer allocation handling.
Preventing Deadlock:
• To prevent deadlock, hosts should periodically send control segments with
acknowledgement and buffer status on each connection.
• This ensures that deadlock situations, caused by lost control segments, are resolved
eventually.
Changing Bottlenecks:
• While buffer space was once a bottleneck, falling memory prices have mitigated this
issue.
• Another bottleneck arises from the network's carrying capacity, limiting the maximum
flow between hosts.
• A mechanism is needed to limit transmissions based on the network's capacity rather
than the receiver's buffering capacity.
Dynamic Sliding Window:
• Belsnes (1975) proposed a sliding window flow-control scheme where the sender
adjusts the window size dynamically to match the network's carrying capacity.
• The sender's window size should be proportional to the network's capacity and adjusted
frequently to track changes in carrying capacity, as done in TCP.

Multiplexing in Transport Layer:


Multiplexing involves sharing multiple conversations over the same connections, virtual circuits, and
physical links. It plays a role in various layers of the network architecture, including the transport
layer.
Multiplexing is required when only one network address is available on a host, and all transport
connections on that host must use it. It helps distinguish which process should receive incoming
segments when multiple transport connections share the same network connection.
Demultiplexing is the process of directing incoming data packets to the appropriate application
or process based on their destination addresses or ports.


For example, if a host is running a web server (listening on port 80) and an email server (listening
on port 25), the demultiplexer ensures that HTTP requests destined for port 80 are routed to the
web server application, while SMTP emails destined for port 25 are routed to the email server
application.
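
That dispatch step can be pictured as a table lookup keyed by destination port. The handler
functions below are hypothetical stand-ins for the server processes; a real transport entity
performs the equivalent lookup inside the operating system.

    # Hypothetical handlers standing in for the web and mail server processes.
    def http_handler(segment):
        print("web server received", segment)

    def smtp_handler(segment):
        print("mail server received", segment)

    LISTENERS = {80: http_handler, 25: smtp_handler}

    def demultiplex(dst_port, segment):
        """Deliver an incoming segment to the process bound to its destination port."""
        handler = LISTENERS.get(dst_port)
        if handler is None:
            raise ConnectionRefusedError(f"no process listening on port {dst_port}")
        handler(segment)

    demultiplex(80, b"GET / HTTP/1.1")    # routed to the web server
    demultiplex(25, b"MAIL FROM:<a@b>")   # routed to the email server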
• The inverse of multiplexing is inverse multiplexing, used by protocols like SCTP (Stream
Control Transmission Protocol), which can utilize multiple network interfaces for a single
connection.


• TCP typically uses a single network endpoint, whereas protocols like SCTP support
multiplexing by utilizing multiple network interfaces.

Crash Recovery
Recovery from crashes is a critical issue in networked systems, especially for long-lived
connections such as large software or media downloads. Recovery from network or router crashes
is relatively straightforward within hosts as the transport entities expect lost segments and handle
them using retransmissions.
• Recovering from host crashes poses more challenges, particularly when clients need to
continue working even if servers crash and reboot.
• An example scenario illustrates the difficulty: a client sending a long file to a server using a
stop-and-wait protocol. If the server crashes midway, it loses track of its previous state upon
reboot.
• The server may attempt to recover by broadcasting a crash announcement and requesting
clients to inform it of their connection status. Clients, based on their state (S1 or S0), decide
whether to retransmit the most recent segment.
• Naive approaches to recovery encounter difficulties due to the asynchronous nature of events,
such as acknowledgments and writes, leading to missed or duplicate segments.
Various programming combinations for the server and client (acknowledge first or write first,
retransmit strategies) fail to ensure proper recovery in all situations, resulting in protocol failures
despite different configurations. The complexity of crash recovery underscores the need for robust
protocols and careful consideration of asynchronous events to ensure reliable communication in
networked systems.
Server Events: Sending Acknowledgement (A), Writing to Output Process (W), Crashing (C)
Orderings: AC(W), AWC, C(AW), C(WA), WAC, WC(A)

Outcome for Each Combination:


Server strategy \ Client strategy       Always retransmit   Never retransmit   Retransmit in S0   Retransmit in S1
First Acknowledge, then Write (AC(W))   OK                  DUP                OK                 DUP
First Write, then Acknowledge (AWC)     DUP                 OK                 LOST               OK
Crash before Acknowledgement (C(AW))    OK                  DUP                LOST               OK
Crash before Write (C(WA))              LOST                OK                 OK                 DUP
Write before Acknowledge (WAC)          OK                  LOST               DUP                OK
Write before Crash (WC(A))              LOST                OK                 OK                 DUP


This table summarizes the different combinations and their outcomes. Each strategy has scenarios
where the protocol functions correctly, generates duplicate messages, or loses messages. The
complexity of the interactions demonstrates the challenge of achieving transparent recovery from
host crashes at the transport layer.
Recovery from a layer N crash can only be done by layer N + 1 if enough status information is
retained to reconstruct the previous state.

Congestion Control at Transport Layer


Congestion control in the transport layer is crucial for maintaining network stability and
performance by preventing network congestion and ensuring fair resource allocation. It involves
various techniques to regulate the flow of data between sender and receiver, dynamically adjusting
transmission rates based on network conditions such as packet loss, delay, and throughput.
Effective congestion control mechanisms help optimize network utilization, minimize packet loss,
and maintain smooth data transmission, enhancing overall network reliability and efficiency. These
techniques include:
1. Desirable Bandwidth Allocation
2. Regulating the Sending Rate- AIMD (Additive Increase Multiplicative Decrease)
3. Wireless Issues in congestion control
In network communication, desirable bandwidth allocation is essential to ensure fair and efficient
utilization of available resources. This involves distributing bandwidth among different users or
applications in a manner that optimizes overall network performance while addressing individual
needs. To achieve this, regulating the sending rate is crucial, and one common approach is AIMD
(Additive Increase Multiplicative Decrease). AIMD dynamically adjusts the sending rate based on
network conditions, incrementing it gradually under favorable conditions and reducing it
multiplicatively in response to congestion signals like packet loss. However, wireless networks
introduce additional challenges to congestion control due to factors like varying signal strength,
interference, and mobility. These issues necessitate adaptive congestion control mechanisms
tailored to wireless environments, such as adjusting transmission power, rate control, and mobility-
aware routing algorithms, to ensure stable and efficient data transmission.
1. Desirable Bandwidth Allocation
Goal of Congestion Control is achieving an optimal allocation of bandwidth among transport
entities and optimal allocation ensures:
• Utilization of available bandwidth without causing congestion.
• Fairness among competing transport entities.
• Rapid adaptation to changes in traffic demands.
Efficiency and Power in Bandwidth Allocation: The goodput initially increases with load but
saturates as the network approaches capacity due to occasional traffic bursts causing losses at
buffers. Optimal performance occurs when bandwidth allocation is balanced to prevent rapid
increases in delay, just below the onset of congestion.


Kleinrock (1979) proposed the metric of "power," defined as power = load / delay. Power
initially increases with load but peaks and then decreases as delay rises rapidly. The load
corresponding to the highest power signifies an efficient allocation for transport entities.
Max-Min Fairness: Congestion control mechanisms dynamically allocate bandwidth to
competing connections based on demand. An allocation is max-min fair if the bandwidth of one
flow cannot be increased without decreasing the bandwidth of another flow whose allocation is
no larger.
Example: In Fig. 6-20, flows A, B, C, and D are allocated bandwidth ensuring no flow can increase
its bandwidth without reducing others'.

• Three flows compete for the bottom-left link between routers R4 and R5, resulting in each
flow receiving 1/3 of the link's capacity.
• Flow A competes with B on the link from R2 to R3. Since B has an allocation of 1/3, A
receives the remaining 2/3 of the link's capacity.
• Spare capacity exists on other links, but reallocating it to any flow would decrease the
capacity allocated to another flow with a smaller allocation.
• For instance, if more bandwidth on the link between R2 and R3 is allocated to flow B, flow
A's capacity would decrease.
• This allocation maintains fairness as increasing bandwidth for one flow would decrease the
bandwidth for others, ensuring a max-min fair distribution.
Max-min allocations require global knowledge of the network. Flows' rates are slowly increased
until reaching bottlenecks, ensuring fair distribution of available capacity.
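
That procedure, often called progressive filling, can be sketched in Python. The link names
and unit capacities below mirror the example above and are illustrative; rates are raised in
small steps, so results are accurate only to the step size, and the sketch assumes every flow
eventually hits a saturated link.

    def max_min_allocation(link_capacity, flows, step=0.001):
        """Progressive filling: raise all unfrozen flows together and freeze a
        flow once any link it crosses is saturated.
        link_capacity: {link: capacity}; flows: {flow: [links it crosses]}."""
        rate = {f: 0.0 for f in flows}
        active = set(flows)
        while active:
            for f in active:
                rate[f] += step
            for link, cap in link_capacity.items():
                load = sum(rate[f] for f in flows if link in flows[f])
                if load >= cap:              # link saturated: freeze its flows
                    active -= {f for f in active if link in flows[f]}
        return rate

    # Three flows share link R4-R5 (1/3 each); flow A then takes the remaining
    # 2/3 of link R2-R3, matching the example above.
    print(max_min_allocation(
        {"R4-R5": 1.0, "R2-R3": 1.0},
        {"A": ["R2-R3"], "B": ["R4-R5", "R2-R3"], "C": ["R4-R5"], "D": ["R4-R5"]}))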
Convergence in Congestion Control: Bandwidth needs fluctuate over time due to factors like
browsing web pages or downloading large videos, requiring the network to adapt continuously. A
good congestion control algorithm should swiftly converge to the ideal operating point and adapt
as it evolves.
Example:
• Fig. 6-21 illustrates a bandwidth allocation that changes over time and converges quickly.


• Initially, flow 1 monopolizes all bandwidth. When flow 2 starts after one second, the
allocation swiftly adjusts to allocate half the bandwidth to each flow.
• At 4 seconds, a third flow joins but utilizes only 20% of the bandwidth, prompting flows 1
and 2 to adjust and each receive 40% of the bandwidth.
• When flow 2 leaves at 9 seconds, flow 1 quickly captures 80% of the bandwidth.
• Throughout, total allocated bandwidth remains close to 100%, ensuring full network
utilization while providing equal treatment to competing flows without excessive
bandwidth usage.

Regulating the Sending Rate


Flow control occurs when the receiver lacks sufficient buffering capacity, while congestion arises
from insufficient network capacity, as shown in the figure.

1. Dual Solutions: Transport protocols need to address both flow control and congestion
control, employing variable-sized window solutions for flow control and congestion
control algorithms for network capacity issues.

2. Feedback Mechanisms: The method of regulating sending rates depends on the feedback
provided by the network, which can be explicit or implicit, precise or imprecise.
a. XCP (eXplicit Congestion Protocol): Explicit and precise, where routers inform
sources of the rate at which they may send.
b. TCP with ECN (Explicit Congestion Notification): Explicit but imprecise, using
packet markings to signal congestion without specifying how much to slow down.
c. FAST TCP: Implicit and precise, utilizing round-trip delay as a signal to avoid
congestion.
d. Compound TCP and CUBIC TCP: Implicit and imprecise, relying on packet loss
as an indication of congestion.

3. Congestion Control Protocols:


Protocol       Signal                           Explicit?   Precise?
XCP            Rate to use                      Yes         Yes
TCP with ECN   Congestion warning               Yes         No
FAST TCP       End-to-end delay                 No          Yes
Compound TCP   Packet loss & end-to-end delay   No          Yes
CUBIC TCP      Packet loss                      No          No
TCP            Packet loss                      No          No

• XCP: Explicitly signals the rate to use, providing precise feedback.


• TCP with ECN: Provides a congestion warning explicitly but not precisely, as it doesn't
specify the extent of rate adjustment.
• FAST TCP: Uses end-to-end delay implicitly, offering precise feedback without explicit
signaling.
• Compound TCP: Utilizes both packet loss and end-to-end delay, providing implicit but
precise feedback on network conditions.
• CUBIC TCP: Responds solely to packet loss implicitly, without explicit or precise
signaling.
• TCP: Like CUBIC TCP, reacts to packet loss implicitly without precise feedback.

AIMD (Additive Increase Multiplicative Decrease):


Chiu and Jain (1989) advocated for Additive Increase Multiplicative Decrease (AIMD) as the
appropriate control law for achieving an efficient and fair operating point in binary congestion
feedback scenarios.
• They constructed a graphical representation for the case of two connections competing for
a single link's bandwidth, as depicted in Fig. 6-24.
• The x-axis represents the bandwidth allocated to user 1, and the y-axis represents the
bandwidth allocated to user 2.


• The dotted fairness line represents equal bandwidth allocation to both users, while the
dotted efficiency line represents the maximum capacity of the link.
• When the sum of allocations reaches 100%, indicating full link utilization, a congestion
signal is given to both users.
• As both users incrementally increase their bandwidth allocations over time, eventually, the
operating point crosses the efficiency line, triggering a congestion signal from the network.
• Simply reducing allocations additively would lead to oscillation along an additive line, as
depicted in Fig. 6-24.
• While this behavior keeps the operating point close to efficiency, it may not ensure fairness.

Figure: Sawtooth behavior of AIMD


The sawtooth behavior of AIMD is an effective mechanism for achieving both efficiency and
fairness in network congestion control. It allows senders to explore the available bandwidth by
increasing their rates incrementally while ensuring that they respond appropriately to congestion
events by reducing their rates proportionally. This dynamic adaptation helps maintain stability and
optimal performance in network communication.


If the window size is denoted as W and the round-trip time as RTT, the equivalent sending rate
is W/RTT. Adjusting the window size therefore controls, albeit indirectly, the rate at which
packets are sent.
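
A minimal sketch of the AIMD control law, using TCP's classic constants (add one segment per
RTT, halve on loss); the congestion input is simply a list of per-RTT loss indications.

    def aimd(congestion_signals, w0=1.0, increase=1.0, decrease=0.5):
        """Evolve the window W once per RTT: additive increase when the RTT was
        congestion-free, multiplicative decrease when congestion was signaled.
        The sending rate at any point is W / RTT."""
        w = w0
        trace = []
        for congested in congestion_signals:
            w = w * decrease if congested else w + increase
            trace.append(w)
        return trace

    # Sawtooth: grow additively for 9 RTTs, halve on a loss, and repeat.
    print(aimd([False] * 9 + [True] + [False] * 9 + [True]))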

Wireless Network
Wireless Network Challenges:
1. Packet loss is frequently used as a congestion signal in transport protocols like TCP, but
wireless networks often experience packet loss due to transmission errors.
2. The throughput of TCP connections is inversely proportional to the square root of the packet
loss rate, meaning that high throughput requires very low levels of packet loss.
3. Wireless LANs such as 802.11 commonly have frame loss rates of at least 10%,
significantly higher than what TCP can efficiently handle.
4. The sender might be unaware that the path includes a wireless link, complicating
congestion control efforts.
5. When a loss occurs, only one mechanism should take action, either addressing a
transmission error or responding to a congestion signal, to prevent unnecessarily slow
performance over wireless links.
6. Internet paths often feature a mix of wired and wireless segments, presenting a challenge
as there's no standardized method for the sender to detect the types of links in the path.

Wireless Network Solutions:


1. Link layer retransmissions, operating at the microsecond to millisecond scale for wireless
links like 802.11, occur much faster than loss timers in transport protocols, which typically
operate on the millisecond to second timescale.
2. This significant difference in timescales enables wireless links to promptly detect and
address frame losses through retransmissions before the transport layer infers packet loss.
3. The masking strategy, primarily reliant on link layer retransmissions, generally allows
most transport protocols to function effectively over various wireless links.


4. However, for wireless links with long round-trip times, Forward Error Correction (FEC)
may be necessary to mitigate losses, or non-loss signals may need to be utilized for
congestion control.
Forward Error Correction (FEC)

Forward Error Correction (FEC) is a technique used to enhance the reliability of data
transmission by adding redundant information to the original data stream. This redundancy
enables the receiver to detect and correct errors that may occur during transmission,
without the need for retransmissions from the sender. Here's how FEC works for the
original stream, redundancy, packet loss, and reconstructed stream:
• Original Stream: The original data stream consists of the information intended for
transmission from the sender to the receiver. This stream contains the actual data that needs
to be transmitted.
• Redundancy: FEC adds redundant information to the original data stream before
transmission. This redundant information is derived from the original data through
mathematical algorithms. The redundant data is carefully crafted to provide error
correction capabilities at the receiver end.
• Packet Loss: During transmission, packets may get lost or corrupted due to various factors
such as noise, interference, or network congestion. In the event of packet loss, the
redundant information added by FEC allows the receiver to reconstruct the original data
stream, even if some packets are missing or corrupted.
• Reconstructed Stream: Using the redundant information received along with the original
data packets, the receiver can reconstruct the original data stream. By analyzing the
redundant data and applying error correction algorithms, the receiver can detect and correct
errors, filling in the gaps left by lost or corrupted packets. The result is a reconstructed data
stream that closely matches the original data transmitted by the sender, even in the presence
of packet loss.
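
One of the simplest redundancy schemes is a single XOR parity packet per group of
equal-length data packets, which lets the receiver rebuild any one lost packet in the group
without retransmission. This sketch is purely illustrative; real FEC codes (for example,
Reed-Solomon) tolerate more losses per group.

    def xor_parity(packets):
        """Redundancy: one parity packet, the XOR of all equal-length packets."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover(received, parity):
        """Reconstruct the single missing packet (marked None) from the rest."""
        missing = bytearray(parity)
        for pkt in received:
            if pkt is not None:
                for i, b in enumerate(pkt):
                    missing[i] ^= b
        return bytes(missing)

    group = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(group)
    print(recover([b"AAAA", None, b"CCCC"], parity))   # reconstructs b'BBBB'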


The Internet Transport Protocols: UDP & TCP


• Two Main Protocols in the Transport Layer: The Internet features two primary protocols in
the transport layer, each serving different purposes: a connectionless protocol and a
connection-oriented one. These protocols work together to facilitate communication between
networked applications.
• Connectionless Protocol (UDP): UDP (User Datagram Protocol) is the connectionless
protocol in the transport layer. It operates by simply sending packets between applications
without establishing a connection. UDP provides minimal services beyond basic packet
delivery, allowing applications to implement their own protocols on top of it as needed.
• Connection-Oriented Protocol (TCP): TCP (Transmission Control Protocol) is the
connection-oriented protocol counterpart to UDP. Unlike UDP, TCP handles various aspects
of communication comprehensively. It establishes connections, ensures reliability through
mechanisms like retransmissions, and manages flow control and congestion control on behalf
of the applications that utilize it.

The User Datagram Protocol (UDP)


• UDP allows applications to transmit encapsulated IP datagrams without establishing a connection
beforehand. UDP segments comprise an 8-byte header followed by the payload. The header
includes source and destination ports, which serve as identifiers for the endpoints within the
source and destination machines. These ports facilitate demultiplexing, directing incoming
packets to the correct application processes. The source port aids in replying to the sender by
specifying the process to receive the response.

Figure: UDP Header.


1. Header Structure: UDP segments consist of an 8-byte header followed by the payload.
The header contains fields for source port, destination port, length, and an optional
checksum.
2. Port Usage: Ports serve as endpoints within the source and destination machines, allowing
the transport layer to deliver segments to the correct application. Ports act as "mailboxes"
that applications rent to receive packets.
3. Length Field: The UDP length field includes both the 8-byte header and the data. The
minimum length is 8 bytes, covering the header, while the maximum length is limited by
the size of IP packets.
4. Checksum: UDP provides an optional checksum for extra reliability. It covers the header,
data, and a conceptual IP pseudoheader. The checksum algorithm involves adding up all
16-bit words in one's complement and taking the one's complement of the sum. When the
receiver repeats the same calculation over the entire segment, including the checksum
field, the result is 0 for a valid segment.
5. Pseudoheader: The UDP checksum computation includes a pseudoheader that contains
IPv4 addresses of the source and destination machines, the protocol number for UDP, and
the byte count for the UDP segment. This aids in detecting misdelivered packets but
violates the protocol hierarchy since IP addresses belong to the IP layer.
6. Functionalities: UDP does not provide flow control, congestion control, or retransmission
of bad segments. These tasks are left to user processes. UDP's primary functions include
providing an interface to the IP protocol, demultiplexing multiple processes using ports,
and optional end-to-end error detection.
7. Use Cases: UDP is suitable for applications that require precise control over packet flow,
error control, or timing. It is commonly used in client-server situations where short requests
and replies are exchanged, such as in the Domain Name System (DNS), where a client
sends a request to a DNS server and expects a short reply back.
Overall, UDP offers a lightweight, efficient, and simple communication mechanism for
applications that prioritize speed and simplicity over reliability and error handling.
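
The one's complement checksum and pseudoheader described above can be sketched in Python as
follows. The layout follows the IPv4 pseudoheader (source address, destination address, a
zero byte, protocol number 17 for UDP, and the UDP length); treat this as an illustration
rather than a validated implementation.

    import struct

    def ones_complement_sum(data):
        """Sum 16-bit words in one's complement, folding carries back in."""
        if len(data) % 2:
            data += b"\x00"                  # pad odd-length data with a zero byte
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return total

    def udp_checksum(src_ip, dst_ip, udp_segment):
        """Checksum over the conceptual IPv4 pseudoheader plus the UDP segment.
        src_ip and dst_ip are 4-byte addresses; 17 is UDP's protocol number."""
        pseudoheader = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
        return (~ones_complement_sum(pseudoheader + udp_segment)) & 0xFFFF

When the receiver repeats the sum over the segment including the transmitted checksum, the
one's complement of the result is 0 for a valid segment.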

Remote Procedure Call:


Remote Procedure Call (RPC) is a concept that allows programs to call procedures located on
remote hosts, making network interactions resemble local function calls. This approach simplifies
network programming by abstracting away the details of networking, making it more familiar and
intuitive for developers.
Here's a breakdown of the key points about RPC:
1. Conceptual Similarity to Function Calls: RPC is akin to making a function call in a
programming language. A client program can call procedures located on remote hosts as if
they were local procedures, passing parameters and receiving results.
2. Basis and Development: RPC was pioneered by Birrell and Nelson in 1984. It enables
processes on one machine to invoke procedures located on another machine, with
parameters passed from caller to callee and results returned.


3. Client-Server Model: In RPC, the calling process is termed the client, while the called
process is termed the server.
4. Stub Procedures: RPC involves the use of stub procedures, both on the client and server
side, to hide the complexities of remote communication. The client stub represents the
server procedure in the client's address space, and vice versa.
5. RPC Process: The RPC process involves several steps:
• The client calls the client stub, which internally marshals the parameters into a
message.
• The message is sent from the client machine to the server machine by the operating
system.
• The server stub unpacks the parameters and calls the server procedure.
• The server procedure executes and returns results back to the client in a similar
fashion.
6. Challenges and Solutions:
• Passing pointer parameters between client and server can be problematic due to
different address spaces. Techniques like call-by-copy-restore are used to overcome
this limitation.
• Weakly typed languages like C pose challenges in marshaling parameters,
especially when parameter sizes are not explicitly defined.
• Deduction of parameter types can be difficult, especially in languages like C with
flexible parameter specifications.
• Global variables lose their shared nature when procedures are moved to remote
machines, impacting communication.
7. Implementation and Transport Protocols: RPC can be implemented using UDP as a base
protocol, with requests and replies sent as UDP packets. However, additional mechanisms
are needed for reliability, handling large messages, and managing concurrent requests.
8. Idempotent Operations: RPC operations must consider idempotency, ensuring that
repeated executions yield the same result. Operations like DNS requests are idempotent,
but others with side-effects may require stronger semantics, possibly necessitating the use
of TCP for communication.
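
A toy client stub and server stub over UDP make the marshaling steps visible. The procedure
name add, the JSON marshaling, and port 9000 are illustrative assumptions; a real RPC system
adds retransmission, at-most-once semantics, and service registration.

    import json
    import socket

    def server_stub(port=9000):
        """Server stub: unmarshal the request, call the local procedure, reply."""
        procedures = {"add": lambda a, b: a + b}   # hypothetical remote procedure
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        request, client_addr = sock.recvfrom(4096)
        call = json.loads(request)                 # unmarshal the parameters
        result = procedures[call["proc"]](*call["args"])
        sock.sendto(json.dumps({"result": result}).encode(), client_addr)

    def client_stub(proc, args, server=("127.0.0.1", 9000), timeout=1.0):
        """Client stub: marshal the call into a message, so it looks local."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)                   # a lost request or reply times out
        sock.sendto(json.dumps({"proc": proc, "args": args}).encode(), server)
        reply, _ = sock.recvfrom(4096)
        return json.loads(reply)["result"]

    # client_stub("add", [2, 3]) returns 5, as if add() had been called locally.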

Real-Time Transport Protocols


1. Real-time Transport Protocol (RTP)
2. The Real-time Transport Control Protocol (RTCP)


1. Real-time Transport Protocol (RTP)


RTP primarily functions as a transport protocol, providing facilities for the transmission of real-
time multimedia data, but it is typically implemented in the application layer. This is because
RTP operates in user space, alongside the multimedia application, handling tasks such as
multiplexing and packetization of audio and video streams.

Figure: (a)The position of RTP in the protocol stack. (b) Packet nesting.

Packet nesting in the context of RTP involves encapsulating RTP packets within UDP (User
Datagram Protocol) packets, which are then further encapsulated within IP (Internet Protocol)
packets for transmission over the network. This nesting ensures that the RTP packets, containing
the multimedia data, are appropriately formatted and transported across the network, ultimately
reaching the intended destination for playback as shown in figure.
1. RTP for Real-time Multimedia: RTP (Real-time Transport Protocol) was developed to
provide a generic real-time transport protocol for multimedia applications like internet
radio, telephony, and video streaming, reducing the need for reinventing similar protocols.
2. Protocol Implementation: RTP typically operates over UDP in user space, with the RTP
library handling multiplexing and encoding of audio, video, and text streams into RTP
packets, which are then encapsulated in UDP packets by the operating system for
transmission over the network.
3. Protocol Layering: Due to its implementation in user space and its role in providing
transport facilities, RTP blurs the distinction between application and transport layer
protocols, often described as a transport protocol implemented in the application layer.
4. Functionalities: RTP facilitates the transmission of multimedia data packets and ensures
timely playback at the receiver end, contributing to the overall protocol stack by providing
transport capabilities alongside user-level multiplexing and encoding.
5. Contribution to Multimedia Applications: RTP's widespread adoption in multimedia
applications underscores its importance in providing reliable and timely delivery of audio
and video data over networks, contributing to a seamless user experience.
6. Multiplexing and Transmission: RTP multiplexes real-time data streams onto UDP
packets, which can be sent to single or multiple destinations (unicasting or multicasting).
Routers treat RTP packets as standard UDP traffic unless specific IP quality-of-service
features are enabled.


7. Packet Numbering and Loss Handling: Each RTP packet is numbered sequentially to aid
receivers in detecting missing packets. Upon packet loss, receivers can choose appropriate
actions such as skipping video frames or approximating missing audio values, as
retransmission is impractical without acknowledgements or retransmission requests.
8. Payload and Encoding: RTP payloads can contain multiple samples and may be encoded
in various formats defined by application-specific profiles. RTP provides a header field for
specifying encoding, allowing flexibility in how encoding is performed.
9. Timestamping and Synchronization: RTP allows sources to associate timestamps with
packet samples, enabling receivers to buffer and play samples at precise intervals relative
to the start of the stream. Timestamps aid in reducing network delay variation (jitter) and
synchronizing multiple streams, facilitating scenarios like synchronized video and audio
playback from different physical devices.
RTP Header Structure: The RTP header consists of three 32-bit words and potential extensions.
Fields include version, padding, extension, contributing sources count, marker bit, payload type,
sequence number, timestamp, synchronization source identifier, and contributing source
identifiers, facilitating packet sequencing, timing, and stream identification as shown in figure.
Let's delve into the details of each field:

Figure: RTP Header.

1. Version (V): The version field indicates the version of the RTP protocol being used.
Currently, the version is set to 2, which is the most widely deployed version. It's worth
noting that future versions may utilize the remaining code points.
2. Padding (P): The padding bit (P) is set to indicate that the RTP packet has been padded to
a multiple of 4 bytes. The last padding byte specifies the number of padding bytes added,
allowing the receiver to remove them and interpret the packet properly.
3. Extension (X): The extension bit (X) indicates whether an extension header is present in
the RTP packet. If set, the extension header follows the standard RTP header and provides
additional information or functionalities. However, the format and meaning of the
extension header are not standardized, providing flexibility for future requirements.


4. Contributing Sources Count (CC): This field specifies the number of contributing
sources present in the RTP packet, ranging from 0 to 15. Contributing sources are typically
used in scenarios involving mixers, where multiple streams are combined. The count
indicates the number of sources contributing to the packet.
5. Marker Bit (M): The marker bit (M) is an application-specific flag that can be used to
mark significant events within the multimedia data stream. Its interpretation depends on
the specific application and may indicate the start of a video frame, audio segment, or other
relevant events.
6. Payload Type: The payload type field specifies the encoding algorithm used for the data
payload within the RTP packet. It indicates the format of the multimedia data, such as
uncompressed audio, MP3, H.264 video, etc. The payload type is crucial for the receiver
to interpret and decode the data correctly.
7. Sequence Number: The sequence number field is a monotonically increasing counter that
increments with each RTP packet sent. It aids the receiver in detecting lost or out-of-order
packets and ensures proper sequencing of the data stream.
8. Timestamp: The timestamp field is generated by the source to indicate the time at which
the first sample in the packet was captured. It allows the receiver to synchronize the
playback of multimedia data by compensating for network delays and jitter. Timestamps
are relative to the start of the stream, with only the differences between timestamps being
significant.
9. Synchronization Source Identifier (SSRC): The SSRC field identifies the
synchronization source to which the RTP packet belongs. It is used to multiplex and
demultiplex multiple data streams onto a single stream of UDP packets. Each stream is
assigned a unique SSRC identifier for identification and synchronization purposes.
10. Contributing Source Identifiers (CSRC): If multiple contributing sources are present in
the RTP packet (indicated by the CC field), the CSRC field lists the identifiers of these
contributing sources. This information is particularly useful in scenarios involving mixers,
where multiple audio or video streams are combined.
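
The fixed 12-byte header can be assembled with Python's struct module; the payload type 96
(a dynamic type) and the SSRC value used here are illustrative.

    import struct

    def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0,
                   padding=0, extension=0, csrc_count=0):
        """Pack the fixed 12-byte RTP header (version 2, no CSRC list).
        Byte 0: V(2) P(1) X(1) CC(4); byte 1: M(1) PT(7); then the 16-bit
        sequence number, 32-bit timestamp, and 32-bit SSRC identifier."""
        byte0 = (2 << 6) | (padding << 5) | (extension << 4) | csrc_count
        byte1 = (marker << 7) | payload_type
        return struct.pack("!BBHII", byte0, byte1,
                           seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

    header = rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD)
    assert len(header) == 12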

The Real-time Transport Control Protocol (RTCP)


RTCP, complements RTP by managing feedback, synchronization, and user interface aspects
without transporting any media samples:
1. Feedback Mechanism: RTCP provides feedback on network properties like delay, jitter,
bandwidth, and congestion to sources. This data aids encoding algorithms in adjusting data
rates dynamically for optimal quality, such as switching encoding algorithms based on
network conditions.
2. Bandwidth Regulation: In multicast scenarios, RTCP reports are distributed to all
participants, potentially consuming substantial bandwidth. To mitigate this, RTCP senders
limit report rates to a fraction of the media bandwidth collectively, using estimates of
participants and media bandwidth.
3. Interstream Synchronization: RTCP addresses synchronization challenges arising from
different streams using disparate clocks, granularities, and drift rates. It helps maintain
synchronization among multiple streams.
4. Source Naming: RTCP facilitates source identification by assigning names, typically in
ASCII text format. This enables receivers to display information about active participants,
enhancing user experience.

Playout with Buffering and Jitter Control: Streaming multimedia


• Media information must be played out at the right time upon reaching the receiver.
• Packets experience different transit times, causing jitter, which can lead to media artifacts if
played out immediately.
• Buffers are used at the receiver to mitigate jitter by storing incoming packets.
Example scenario: Packets arrive with varying delays, buffered on the client until playback.
Playback begins after a delay to allow for buffering, ensuring smooth play. Delayed packets
may cause gaps in playback, which can be addressed by skipping or pausing playback.

Figure: Buffering and playout.


A. Playback Point Selection:


• The playback point determines how long to wait before playing out buffered media. The
selection of the playback point depends on the level of jitter in the connection. High-jitter
connections require a further playback point to capture most packets. Applications can measure
jitter by comparing RTP timestamps and arrival times.
• Playback points may need to be adapted dynamically based on changing network conditions.
• Glitches in playback can occur if the adaptation is not handled smoothly.
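
The jitter measurement mentioned above is standardized in the RTP specification (RFC 3550):
the estimate J is smoothed with the change D in transit time between consecutive packets,
J += (|D| - J) / 16. Setting the playback point a few jitter units behind the newest data is
then one common heuristic; the factor of 4 below is an illustrative choice, not a mandated
value.

    def update_jitter(jitter, transit_prev, transit_now):
        """RFC 3550 interarrival jitter: J += (|D| - J) / 16, where the transit
        time of a packet is its arrival time minus its RTP timestamp."""
        d = abs(transit_now - transit_prev)
        return jitter + (d - jitter) / 16.0

    def playback_delay(jitter, safety_factor=4):
        """Illustrative playback point: buffer a few jitter units before playout."""
        return safety_factor * jitter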
B. Managing Absolute Delay:
• Excessive delay impacts live applications negatively. Propagation delay cannot be reduced if
a direct path is already in use.
• To reduce absolute delay, the playback point can be pulled in, accepting more late-arriving
packets.
• Alternatively, improving network quality, e.g., using expedited forwarding differentiated
service, can reduce jitter and delay.
High jitter degrades quality of service, especially in real-time applications; it can exacerbate
packet loss, particularly if the network is already congested, and it makes it harder to monitor
network performance and diagnose issues accurately.


THE INTERNET TRANSPORT PROTOCOLS: TCP


The Transmission Control Protocol (TCP) provides reliable end-to-end byte-stream
communication over an unreliable internetwork. It was designed to address the need for reliable, sequenced
delivery of data over internetworks. TCP is responsible for ensuring reliable data delivery by
managing data transmission, handling congestion, and retransmitting lost data. It also deals with
issues such as out-of-order delivery and reassembling data into the correct sequence. While IP
provides the basic mechanism for packet delivery, it does not guarantee proper delivery or indicate
the optimal transmission rate. TCP complements IP by ensuring reliable and orderly data delivery,
thereby providing the performance and reliability required by most applications.
• Sockets and Port Numbers: TCP communication is facilitated through endpoints called
sockets, each identified by a unique combination of an IP address and a 16-bit port number.
Port numbers below 1024 are reserved for standard services (well-known ports), while those
from 1024 through 49151 can be registered for use by unprivileged users.
• Well-Known Ports: Some examples of well-known ports and their associated protocols
include FTP (File Transfer Protocol) on ports 20 and 21, SSH (Secure Shell) on port 22, SMTP
(Simple Mail Transfer Protocol) on port 25, HTTP (Hypertext Transfer Protocol) on port 80,
and HTTPS (HTTP over SSL/TLS) on port 443, among others.
• Port Assignment and Management: Applications can choose their own ports, but commonly,
a single daemon (such as inetd in UNIX systems) listens on multiple ports and delegates
incoming connections to appropriate daemons based on the port used. This approach optimizes
system resources by activating daemons only when needed.
• TCP Connection Characteristics: All TCP connections are full duplex, allowing bidirectional
communication simultaneously. They are also point-to-point, meaning each connection
involves exactly two endpoints. However, TCP does not support multicasting or broadcasting.
• TCP Connection as Byte Stream: TCP treats a connection as a byte stream rather than a
message stream, so message boundaries are not preserved end to end. Data sent in separate
write operations may be delivered to the receiver in differently sized chunks, and the
receiver cannot recover the original message boundaries from the received data alone (a
short sketch after this list demonstrates this).
• Sequence Numbers: Every byte on a TCP connection has its own 32-bit sequence number.
This feature is crucial for maintaining the order and integrity of data transmission.
• TCP Segments: Data exchange in TCP occurs in the form of segments, each consisting of a
fixed 20-byte header (plus an optional part) followed by zero or more data bytes. The size of
segments is determined by TCP software, considering factors like IP payload size and
Maximum Transfer Unit (MTU).
• MTU and Path MTU Discovery: TCP segments must fit within the MTU at the sender and
receiver to avoid fragmentation. Modern TCP implementations perform Path MTU Discovery
to adjust segment size dynamically and prevent fragmentation, improving performance.
• Sliding Window Protocol: TCP uses the sliding window protocol with a dynamic window
size for flow control. Senders transmit segments and start a timer, while receivers send back
acknowledgements indicating the next expected sequence number and remaining window size.
• Handling Network Issues: TCP must deal with challenges such as out-of-order arrival of
segments and retransmissions due to timeouts. TCP implementations optimize performance by
carefully managing retransmissions and keeping track of received bytes using sequence
numbers.
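The byte-stream behavior noted above is easy to observe with ordinary sockets: two separate writes by the sender may be read back as one chunk, or as several, because TCP preserves only byte order, not write boundaries. A minimal sketch on the loopback interface (standard Python library only; no particular chunking outcome is guaranteed):

import socket, threading

def receiver(lsock):
    conn, _ = lsock.accept()
    chunks = []
    while True:
        data = conn.recv(4096)       # chunk sizes here are arbitrary:
        if not data:                 # TCP guarantees byte order only,
            break                    # not the sender's write boundaries
        chunks.append(data)
    print("chunks as received:", chunks)
    conn.close()

lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
lsock.listen(1)
t = threading.Thread(target=receiver, args=(lsock,))
t.start()

csock = socket.create_connection(lsock.getsockname())
csock.sendall(b"first message ")     # two separate writes...
csock.sendall(b"second message")     # ...may arrive as one chunk or several
csock.close()
t.join()
lsock.close()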
The TCP Segment Header

1. Source Port: This field identifies the source port number, which is a 16-bit number
representing the sender's port on the local host. Together with the sender's IP address, it
forms a unique endpoint for the connection.
2. Destination Port: Similar to the source port, this field identifies the destination port
number, which is a 16-bit number representing the receiver's port on the remote host.
Together with the receiver's IP address, it forms the destination endpoint for the connection.
3. Sequence Number: This 32-bit field indicates the sequence number of the first data byte
in the segment. It is used to maintain the correct order of data transmission and reception.
4. Acknowledgment Number: Also a 32-bit field, it indicates the next sequence number that
the sender of the segment expects to receive from the receiver. This field is used for
acknowledging received data and facilitating flow control.
5. Data Offset: This 4-bit field specifies the size of the TCP header in 32-bit words. Because
the Options field makes the header length variable, this field is needed to determine where
the data begins in the segment.
6. Reserved: This 4-bit field is reserved for future use and must be set to zero.
7. Control Bits: These eight flags, each occupying 1 bit, control various aspects of the TCP
connection:
• CWR and ECE: signal congestion when ECN (Explicit Congestion Notification)
is used. The receiver sets ECE to signal congestion to the TCP sender, and the
sender sets CWR to confirm that it has reduced its congestion window.
• URG: Indicates urgent data in the segment.
• ACK: Indicates that the Acknowledgment Number field is valid.
• PSH: Indicates that the data should be pushed to the receiving application.
• RST: Indicates a reset request to terminate the connection abruptly due to various
reasons such as host crashes or invalid segments.
• SYN: Synchronizes sequence numbers to initiate a connection. SYN = 1 with
ACK = 0 indicates a connection request, while SYN = 1 with ACK = 1 indicates a
connection acceptance, distinguishing requests from accepted connections.
• FIN: Indicates the end of data transmission.
8. Window Size: This 16-bit field specifies the size of the receive window, which is the
amount of data the sender can transmit before receiving further acknowledgment from the
receiver.
9. Checksum: This 16-bit field is used for error checking of the TCP header and data. It
ensures the integrity of the transmitted data.
10. Urgent Pointer: If the URG flag is set, this 16-bit field gives a byte offset from the
current sequence number at which urgent data are to be found in the segment.
11. Options: This field, if present, contains additional header options, such as timestamps,
maximum segment size (MSS), window scale factor, etc. Options are of variable length,
filled to a multiple of 32 bits by padding with zeros, and can extend to 40 bytes to
accommodate the longest TCP header.
12. Data: TCP segments can carry up to 65,495 data bytes, calculated by subtracting the
20-byte IP header and the 20-byte TCP header from the maximum IP packet size of
65,535 bytes. Internet hosts are required to accept TCP segments of up to 556 bytes:
the default maximum segment size (MSS) of 536 bytes plus the 20-byte TCP header.
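The fixed header described above can be packed with Python's struct module. The sketch below builds a 20-byte header with no options (data offset = 5 words); the checksum is left at zero because a real implementation must compute it over a pseudo-header that includes the IP addresses. Function and constant names are illustrative.

import struct

# Fixed 20-byte TCP header: ports, sequence, acknowledgment, offset/flags,
# window, checksum, urgent pointer ("!HHIIBBHHH" packs exactly 20 bytes).
TCP_HDR = struct.Struct("!HHIIBBHHH")

FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04, "PSH": 0x08,
         "ACK": 0x10, "URG": 0x20, "ECE": 0x40, "CWR": 0x80}

def build_tcp_header(src, dst, seq, ack, flags, window,
                     checksum=0, urgent=0):
    offset_byte = 5 << 4                # 5 x 32-bit words, reserved bits 0
    flag_byte = 0
    for name in flags:
        flag_byte |= FLAGS[name]
    return TCP_HDR.pack(src, dst, seq, ack, offset_byte, flag_byte,
                        window, checksum, urgent)

syn = build_tcp_header(49152, 80, seq=1000, ack=0,
                       flags=("SYN",), window=65535)
print(len(syn), syn.hex())              # 20-byte header with the SYN bit set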
TCP Connection Establishment
TCP connections are established using a three-way handshake.
• Server Side:
• The server passively waits for an incoming connection by executing the LISTEN and
ACCEPT primitives in that order.
• It specifies either a specific source or waits for connections from any source.
• Client Side:
• The client executes a CONNECT primitive, specifying the destination IP address and
port, the maximum TCP segment size it accepts, and optionally user data.
• The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off,
waiting for a response.
• Connection Establishment Process:
• Upon receiving the SYN segment at the destination, the TCP entity checks if there's a
process listening on the specified port.
• If no process is listening, the destination sends a reply with the RST bit on to reject the
connection.
• If a process is listening, it receives the incoming TCP segment and can accept or reject
the connection.
• If accepted, an acknowledgment segment is sent back to the client.
• Sequence of Events:
• Normal Case: SYN segment sent, acknowledgment received (Fig. 6-37(a)).
• Simultaneous Connection Attempt: Both hosts attempt to establish a connection
simultaneously, resulting in a single established connection (Fig. 6-37(b)).
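In practice these primitives map directly onto the socket API: listen()/accept() perform the passive open and connect() the active open, while the operating system's TCP stack carries out the SYN, SYN+ACK, ACK exchange itself. A minimal sketch on the loopback interface (standard Python library only):

import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))              # port 0: let the OS pick one
srv.listen(1)                           # passive open: LISTEN
addr = srv.getsockname()

def accept_side():
    conn, peer = srv.accept()           # ACCEPT returns once the kernel
    print("server: established with", peer)  # has completed the handshake
    conn.close()

t = threading.Thread(target=accept_side)
t.start()
cli = socket.create_connection(addr)    # active open: CONNECT sends the SYN
print("client: connected from", cli.getsockname())
cli.close()
t.join()
srv.close()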

TCP Connection Release:


• TCP connections, although full duplex, are best understood as a pair of simplex
connections.
• To release a connection, either party sends a TCP segment with the FIN bit set, indicating
no more data to transmit.
• Upon acknowledgment of the FIN, that direction is shut down for new data, but data may
continue flowing in the other direction.
• Connection release requires four TCP segments in total (one FIN and one ACK for each
direction), but it's possible for the first ACK and second FIN to be in the same segment,
reducing the count to three.
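The "pair of simplex connections" view corresponds to a half-close in the socket API: shutdown(SHUT_WR) sends a FIN for one direction while the other direction stays open. A minimal sketch (loopback, standard Python library only):

import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def peer():
    conn, _ = srv.accept()
    while True:
        data = conn.recv(1024)
        if not data:                    # recv() returning b"" means the
            break                       # other direction has sent its FIN
    conn.sendall(b"still open this way")  # our direction remains usable
    conn.close()                        # now send our own FIN

t = threading.Thread(target=peer)
t.start()
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"last data")
cli.shutdown(socket.SHUT_WR)            # send FIN: done sending, can still receive
print(cli.recv(1024))                   # data keeps flowing the other way
cli.close()
t.join()
srv.close()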


TCP Connection Management Modeling:


• TCP connection management can be represented using a finite state machine with 11
states.
• Each connection starts in the CLOSED state and transitions to either LISTEN or SYN
SENT based on passive or active open requests.
• Connection release can be initiated by either side, and upon completion, the state returns
to CLOSED.
• The finite state machine includes states such as LISTEN, SYN RCVD, SYN SENT,
ESTABLISHED, FIN WAIT 1, FIN WAIT 2, TIME WAIT, CLOSING, CLOSE WAIT,
and LAST ACK.
• Transitions between states are triggered by various events/actions, including user-initiated
system calls, segment arrivals (SYN, FIN, ACK, RST), or timeouts.
• The diagram illustrates the sequence of events for both client and server perspectives,
detailing the establishment and release of TCP connections through the three-way
handshake and data transmission phases.
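A compact way to capture this machine is a transition table keyed by (state, event). The fragment below covers only the common open and close paths described above, with the segment sent on each transition noted in comments; it is a sketch, not the complete 11-state machine with all corner cases.

# (state, event) -> next state, for the common client/server paths.
TRANSITIONS = {
    ("CLOSED",      "passive_open"): "LISTEN",
    ("CLOSED",      "active_open"):  "SYN_SENT",     # send SYN
    ("LISTEN",      "rcv_SYN"):      "SYN_RCVD",     # send SYN+ACK
    ("SYN_SENT",    "rcv_SYN_ACK"):  "ESTABLISHED",  # send ACK
    ("SYN_RCVD",    "rcv_ACK"):      "ESTABLISHED",
    ("ESTABLISHED", "close"):        "FIN_WAIT_1",   # send FIN
    ("ESTABLISHED", "rcv_FIN"):      "CLOSE_WAIT",   # send ACK
    ("FIN_WAIT_1",  "rcv_ACK"):      "FIN_WAIT_2",
    ("FIN_WAIT_2",  "rcv_FIN"):      "TIME_WAIT",    # send ACK
    ("CLOSE_WAIT",  "close"):        "LAST_ACK",     # send FIN
    ("LAST_ACK",    "rcv_ACK"):      "CLOSED",
    ("TIME_WAIT",   "timeout"):      "CLOSED",       # after twice the MSL
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)    # ignore unknown events

# Client-side walk-through of an open followed by a close:
s = "CLOSED"
for e in ("active_open", "rcv_SYN_ACK", "close", "rcv_ACK", "rcv_FIN", "timeout"):
    s = step(s, e)
    print(e, "->", s)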
