CN vtu edge module 4
Computer Networks
MODULE 4
THE TRANSPORT LAYER
Introduction
The network layer serves as a crucial intermediary in facilitating end-to-end packet delivery within
a network infrastructure, employing either datagrams or virtual circuits to ensure efficient data
traversal. It acts as a conduit, coordinating the movement of packets across various network nodes,
thereby enabling communication between source and destination systems. By managing routing
and addressing, the network layer ensures that data packets reach their intended recipients reliably
and in a timely manner.
Building upon the foundation laid by the network layer, the transport layer enhances
communication capabilities by providing a more sophisticated and reliable means of data transport
between processes on source and destination machines. This layer extends beyond mere packet
delivery, incorporating mechanisms for error detection, flow control, and congestion management
to guarantee the integrity and efficiency of data transmission. By encapsulating data into segments
or messages, the transport layer establishes logical connections between applications/processes
running on different hosts, enabling them to exchange information seamlessly.
Two prominent protocols that operate within the transport layer are TCP (Transmission Control
Protocol) and UDP (User Datagram Protocol). While both protocols facilitate communication
between applications, they exhibit distinct characteristics and serve different purposes. TCP offers
reliable, connection-oriented communication, ensuring that data is delivered accurately and in
the correct order through mechanisms such as acknowledgment, retransmission, and
sequencing. In contrast, UDP provides a simpler, connectionless communication model,
prioritizing speed and efficiency over reliability. UDP is often favored for real-time applications
or scenarios where occasional packet loss is acceptable, while TCP is preferred for applications
requiring reliable data transmission, such as file transfers or web browsing. Together, TCP and
UDP play vital roles in enabling diverse communication needs within networked environments,
each contributing its unique strengths to the overarching goal of efficient and dependable data
transport.
The transport layer aims to deliver efficient, reliable, and cost-effective data transmission services
to users, typically processes operating in the application layer. It relies on the network layer's
services to achieve its objectives, leveraging underlying network infrastructure for packet routing
and delivery.
Figure 1.
Location of Transport Entity: The transport entity can be situated in various locations, including
the operating system kernel, library packages, separate user processes, or even on the network
interface card.
Types of Transport Services: Two primary types of transport services exist: connection-oriented
and connectionless transport services, mirroring their counterparts in the network layer. The key
distinction lies in where the code executes: the transport layer code operates on users' machines,
while the network layer primarily runs on routers.
Enhanced Reliability: The existence of the transport layer enables a more reliable service than
the underlying network. While the network service models real networks, which can be unreliable,
the connection-oriented transport service aims for reliability on top of an unreliable network. The
transport layer isolates upper layers from the technology, design, and imperfections of the network,
providing a more predictable and consistent communication environment.
Servers execute the first four primitives in sequence. The SOCKET primitive creates a new
endpoint and allocates table space for it within the transport entity. The BIND primitive assigns
network addresses to sockets, allowing remote clients to connect. LISTEN allocates space to queue
incoming calls, while ACCEPT blocks until an incoming connection request arrives.
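The server-side sequence above maps directly onto the Berkeley socket API. A minimal Python sketch (port 0 asks the OS for any free port; the ACCEPT step is shown but commented out so the fragment does not block):

```python
import socket

# SOCKET: create a new endpoint (a TCP stream socket).
lst = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# BIND: assign a network address so remote clients can connect.
lst.bind(("127.0.0.1", 0))      # port 0 lets the OS pick a free port
# LISTEN: allocate queue space for incoming connection requests.
lst.listen(5)
port = lst.getsockname()[1]
print("listening on port", port)
# ACCEPT: would block here until a connection request arrives:
# conn, addr = lst.accept()
lst.close()
```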
Client-Side Execution:
On the client side, a socket is first created using the SOCKET primitive. However, BIND is not
necessary, since the client's address does not matter to the server. The CONNECT primitive actively
initiates the connection process and blocks the caller until it completes. Once established, both
sides can use SEND and RECEIVE to transmit and receive data over the connection.
Connection Release:
Connection release with sockets is symmetric. Once both sides execute a CLOSE primitive, the
connection is released.
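Both sides can be exercised together on the loopback interface. A sketch (the tiny echo thread exists only so the client has a peer to CONNECT to):

```python
import socket
import threading

# A small loopback server so the client sketch has a peer to CONNECT to.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()            # ACCEPT blocks until the client arrives
    conn.sendall(conn.recv(1024))     # RECEIVE, then SEND the bytes back
    conn.close()                      # CLOSE the server's side

threading.Thread(target=echo_once, daemon=True).start()

# Client side: SOCKET, then CONNECT; no BIND is needed, because the
# client's address does not matter to the server.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")                 # SEND
reply = cli.recv(1024)                # RECEIVE
cli.close()                           # symmetric release: each side CLOSEs
srv.close()
```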
Various strategies, such as stable TSAP addresses, portmappers, and proxy servers, are employed
to manage the complexities of address discovery and connection establishment.
When an application process intends to establish a connection with a remote process, it must
specify the destination process. This applies to both connection-oriented and connectionless
transport services.
Transport addresses, known as Transport Service Access Points (TSAPs), are defined for this
purpose. On the Internet, these endpoints are referred to as ports, serving as entry points for
communication. TSAPs represent specific endpoints in the transport layer, while Network Service
Access Points (NSAPs) denote endpoints in the network layer. IP addresses are examples of
NSAPs.
• Example Scenario:
• Consider a scenario where a mail server process on host 2 attaches itself to TSAP 1522
to await incoming calls. Meanwhile, an application process on host 1 attaches itself to
TSAP 1208 and issues a CONNECT request to establish a connection with TSAP 1522
on host 2.
• The subsequent steps involve data transmission, response acknowledgment, and
connection release.
4. Clock-Based Solution:
• An alternative proposed by Tomlinson involves equipping each host with a time-
of-day clock, which operates as a binary counter.
• The clock, whose counter has at least as many bits as the sequence numbers,
continues running even if the host crashes, ensuring reliable sequence number
management.
5. Buffering and Flow Control Complexity:
• Both buffering and flow control are essential in both data link and transport layers, but
the transport layer's complexity lies in managing a larger and varying number of
connections with fluctuating bandwidth.
• Unlike data link protocols, which may allocate a fixed number of buffers to each line,
the transport layer's numerous connections and variable bandwidth may require a
different approach to buffer allocation.
6. Three-way handshake:
Tomlinson (1975) introduced the three-way handshake as a solution to the problem of
establishing connections reliably in the presence of delayed duplicate control segments.
1. In this protocol, one peer verifies with the other that the connection request is current. The
typical setup procedure involves the initiating host sending a CONNECTION REQUEST
segment containing a sequence number (x) to the receiving host.
2. The receiving host responds with an ACK segment acknowledging the sequence number
(x) and announcing its own initial sequence number (y).
3. Finally, the initiating host acknowledges the receiving host's choice of an initial sequence
number in the first data segment it sends.
Figure: (a) Normal operation. (b) Old duplicate CONNECTION REQUEST appearing out of nowhere.
(c) Duplicate CONNECTION REQUEST and duplicate ACK.
• The three-way handshake works effectively in the presence of delayed duplicate control
segments by ensuring that any delayed duplicates do not cause damage.
• If a delayed duplicate CONNECTION REQUEST segment arrives, the receiving host
verifies with the initiating host before proceeding with the connection setup.
• In the worst-case scenario where both a delayed CONNECTION REQUEST and an ACK
are floating around, the protocol still ensures that the connection is not established
accidentally.
• TCP utilizes this three-way handshake for connection establishment. Additionally, within
a connection, a timestamp is used to extend the sequence number to prevent wrapping
within the maximum packet lifetime, as described in RFC 1323 (PAWS).
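The essential check in step 3 can be illustrated with a toy function (the names are illustrative, not from any real TCP stack): the initiator only confirms an ACK that echoes the sequence number of a request it actually has outstanding, which is what defuses the delayed-duplicate cases (b) and (c).

```python
def respond_to_ack(outstanding_seq, acked_seq):
    """Host 1's decision in step 3 of the three-way handshake.

    outstanding_seq: sequence number of the CONNECTION REQUEST host 1
    currently has in flight, or None if it has none.
    acked_seq: the sequence number the peer's ACK claims to acknowledge.
    """
    if outstanding_seq is not None and acked_seq == outstanding_seq:
        return "SEND DATA"   # normal case (a): the request is current
    return "REJECT"          # cases (b)/(c): ACK for an old duplicate

print(respond_to_ack(100, 100))   # normal operation
print(respond_to_ack(None, 77))   # old duplicate ACK appears from nowhere
```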
7. Connection Release
Releasing a connection can be more complex than establishing one, with various potential
pitfalls. Two styles of connection termination exist: asymmetric release and symmetric
release.
• Asymmetric release, akin to how the telephone system operates, terminates the connection
when one party hangs up.
• Symmetric release treats the connection as two separate unidirectional connections,
requiring each to be released independently.
Figure: (a) Normal case of three-way handshake. (b) Final ACK lost. (c) Response lost. (d) Response lost and
subsequent DRs lost.
Asymmetric release can lead to abrupt termination and potential data loss. For example, if
one party sends data while the other issues a DISCONNECT segment, data may be lost
when the connection is released. To avoid data loss, a more sophisticated release protocol
may be needed. Symmetric release allows each direction of the connection to be released
independently of the other.
Difference in Function:
• Link layer checksum protects frame while crossing single link.
• Transport layer checksum protects segment while crossing entire network path, providing
end-to-end check.
• End-to-end check essential for correctness according to end-to-end argument.
• Link layer checks valuable for improving performance but not essential for correctness.
Difference in Degree:
• Wireless links often have small bandwidth-delay product, allowing only one frame
outstanding at a time.
• Small window size sufficient for good performance.
• Wired and optical fiber links have low error rates, allowing end-to-end retransmissions to
repair residual frame loss.
• TCP connections often have larger bandwidth-delay product, requiring larger sliding
windows.
Buffering Considerations:
• Hosts may need substantial buffering for sliding windows, at both sender and receiver.
• Sender buffers transmitted but unacknowledged segments for potential retransmission.
• Receiver may dynamically acquire buffers for incoming segments, potentially discarding
if unavailable.
• Permanent harm avoided by sender's ability to retransmit lost segments.
Trade-off in Buffering:
• For low-bandwidth bursty traffic, dynamic buffer acquisition may be reasonable.
• For high-bandwidth traffic like file transfer, dedicated window of buffers at receiver
preferred for maximum speed.
For example, if a host is running a web server (listening on port 80) and an email server (listening
on port 25), the demultiplexer ensures that HTTP requests destined for port 80 are routed to the
web server application, while SMTP emails destined for port 25 are routed to the email server
application.
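That routing decision is just a table lookup keyed by destination port. A toy sketch (the handler functions are hypothetical stand-ins for the two server processes):

```python
# Hypothetical handlers standing in for the web and mail server processes.
def http_handler(data):
    return "web:" + data

def smtp_handler(data):
    return "mail:" + data

# The demultiplexer's table maps destination ports to bound applications.
port_table = {80: http_handler, 25: smtp_handler}

def demultiplex(dst_port, payload):
    handler = port_table.get(dst_port)
    if handler is None:
        return "ICMP port unreachable"   # no process bound to that port
    return handler(payload)

print(demultiplex(80, "GET /"))        # -> web:GET /
print(demultiplex(25, "MAIL FROM:"))   # -> mail:MAIL FROM:
```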
• The inverse of this, inverse multiplexing, spreads one connection's traffic across
multiple paths; protocols like SCTP (Stream Control Transmission Protocol) can utilize
multiple network interfaces for a single connection.
• TCP typically uses a single network endpoint, whereas protocols like SCTP support
multiplexing by utilizing multiple network interfaces.
Crash Recovery
Recovery from crashes is a critical issue in networked systems, especially for long-lived
connections such as large software or media downloads. Recovery from network or router
crashes is straightforward: the transport entities within the hosts expect lost segments
and handle them using retransmissions.
• Recovering from host crashes poses more challenges, particularly when clients need to
continue working even if servers crash and reboot.
• An example scenario illustrates the difficulty: a client sending a long file to a server using a
stop-and-wait protocol. If the server crashes midway, it loses track of its previous state upon
reboot.
• The server may attempt to recover by broadcasting a crash announcement and requesting
clients to inform it of their connection status. Clients, based on their state (S1 or S0), decide
whether to retransmit the most recent segment.
• Naive approaches to recovery encounter difficulties due to the asynchronous nature of events,
such as acknowledgments and writes, leading to missed or duplicate segments.
Various programming combinations for the server and client (acknowledge first or write first,
retransmit strategies) fail to ensure proper recovery in all situations, resulting in protocol failures
despite different configurations. The complexity of crash recovery underscores the need for robust
protocols and careful consideration of asynchronous events to ensure reliable communication in
networked systems.
Server Events: Sending Acknowledgement (A), Writing to Output Process (W), Crashing (C)
Orderings: AC(W), AWC, C(AW), C(WA), WAC, WC(A)
This table summarizes the different combinations and their outcomes. Each strategy has scenarios
where the protocol functions correctly, generates duplicate messages, or loses messages. The
complexity of the interactions demonstrates the challenge of achieving transparent recovery from
host crashes at the transport layer.
Recovery from a layer N crash can only be done by layer N + 1 if enough status information is
retained to reconstruct the previous state.
Kleinrock (1979) proposed the metric of "power," where power initially increases with load but
peaks and decreases as delay rises rapidly. The load corresponding to the highest power signifies
an efficient allocation for transport entities.
Max-Min Fairness: Congestion control mechanisms dynamically allocate bandwidth to
competing connections based on demand. An allocation is max-min fair if the bandwidth
given to one flow cannot be increased without decreasing the bandwidth given to another
flow that has an equal or smaller allocation. In other words, a flow may only grow if
doing so harms no flow that is already worse off.
Example: In Fig. 6-20, flows A, B, C, and D are allocated bandwidth ensuring no flow can increase
its bandwidth without reducing others'.
• Three flows compete for the bottom-left link between routers R4 and R5, resulting in each
flow receiving 1/3 of the link's capacity.
• Flow A competes with B on the link from R2 to R3. Since B has an allocation of 1/3, A
receives the remaining 2/3 of the link's capacity.
• Spare capacity exists on other links, but reallocating it to any flow would decrease the
allocation of another flow with an equal or smaller share.
• For instance, if more bandwidth on the link between R2 and R3 is allocated to flow B, flow
A's capacity would decrease.
• This allocation maintains fairness as increasing bandwidth for one flow would decrease the
bandwidth for others, ensuring a max-min fair distribution.
Max-min allocations can be computed given global knowledge of the network: flows' rates are
slowly increased in lockstep until each reaches a bottleneck, ensuring a fair distribution of
the available capacity.
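The "slowly increase all rates together" procedure is the progressive filling algorithm. A sketch for the single-link case (demands and capacity are in arbitrary bandwidth units):

```python
def max_min_fair(demands, capacity):
    """Progressive filling: raise all unsatisfied flows' rates together
    until capacity runs out or every demand is met."""
    alloc = [0.0] * len(demands)
    remaining = capacity
    active = [i for i, d in enumerate(demands) if d > 0]
    while active and remaining > 1e-12:
        share = remaining / len(active)
        # Each active flow gets the smaller of the equal share and the
        # smallest leftover demand, so satisfied flows freeze first.
        grant = min(share, min(demands[i] - alloc[i] for i in active))
        for i in active:
            alloc[i] += grant
            remaining -= grant
        active = [i for i in active if demands[i] - alloc[i] > 1e-12]
    return alloc

print(max_min_fair([0.5, 2, 2], 3))   # -> [0.5, 1.25, 1.25]
```

Flows with small demands are satisfied and frozen first; the remaining capacity is then divided among the still-hungry flows, which is exactly the max-min property.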
Convergence in Congestion Control: Bandwidth needs fluctuate over time due to factors like
browsing web pages or downloading large videos, requiring the network to adapt continuously. A
good congestion control algorithm should swiftly converge to the ideal operating point and adapt
as it evolves.
Example:
• Fig. 6-21 illustrates a bandwidth allocation that changes over time and converges quickly.
• Initially, flow 1 monopolizes all bandwidth. When flow 2 starts after one second, the
allocation swiftly adjusts to allocate half the bandwidth to each flow.
• At 4 seconds, a third flow joins but utilizes only 20% of the bandwidth, prompting flows 1
and 2 to adjust and each receive 40% of the bandwidth.
• When flow 2 leaves at 9 seconds, flow 1 quickly captures 80% of the bandwidth.
• Throughout, total allocated bandwidth remains close to 100%, ensuring full network
utilization while providing equal treatment to competing flows without excessive
bandwidth usage.
1. Dual Solutions: Transport protocols need to address both flow control and congestion
control, employing variable-sized window solutions for flow control and congestion
control algorithms for network capacity issues.
2. Feedback Mechanisms: The method of regulating sending rates depends on the feedback
provided by the network, which can be explicit or implicit, precise or imprecise.
a. XCP (eXplicit Congestion Protocol): Explicit and precise, where routers inform
sources of the rate at which they may send.
b. TCP with ECN (Explicit Congestion Notification): Explicit but imprecise, using
packet markings to signal congestion without specifying how much to slow down.
c. FAST TCP: Implicit and precise, utilizing round-trip delay as a signal to avoid
congestion.
d. Compound TCP and CUBIC TCP: Implicit and imprecise, relying on packet loss
as an indication of congestion.
• The dotted fairness line represents equal bandwidth allocation to both users, while the
dotted efficiency line represents the maximum capacity of the link.
• When the sum of allocations reaches 100%, indicating full link utilization, a congestion
signal is given to both users.
• As both users incrementally increase their bandwidth allocations over time, eventually, the
operating point crosses the efficiency line, triggering a congestion signal from the network.
• Simply reducing allocations additively would lead to oscillation along an additive line, as
depicted in Fig. 6-24.
• While this behavior keeps the operating point close to efficiency, it may not ensure fairness.
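The standard remedy, used by TCP, is AIMD (additive increase, multiplicative decrease): probe additively while there is no congestion, but cut the rate multiplicatively when the efficiency line is crossed. A toy two-user simulation (capacity and step sizes are arbitrary illustrative values):

```python
def aimd(steps, capacity=100.0, add=1.0, mult=0.5):
    """Two users sharing one link: additive increase while the link is
    uncongested, multiplicative decrease when total load exceeds capacity."""
    x, y = 80.0, 10.0                  # start far from the fairness line
    for _ in range(steps):
        if x + y > capacity:           # congestion signal from the network
            x *= mult
            y *= mult
        else:                          # no congestion: both probe for more
            x += add
            y += add
    return x, y

x, y = aimd(500)
assert abs(x - y) < 1.0   # allocations have converged toward fairness
assert x + y <= 102.0     # load never strays far above the efficiency line
```

Additive steps preserve the gap between the two users while multiplicative cuts halve it, so repeated cycles pull the operating point toward the fairness line without abandoning the efficiency line.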
If the window size is denoted as W and the round-trip time is RTT, the equivalent sending
rate is W/RTT, since at most one full window of packets can be sent per round trip.
Adjusting the window size therefore indirectly controls the rate at which packets are sent.
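As a quick numeric check of the rate = W/RTT relationship (the window size and round-trip time below are arbitrary example values):

```python
def sending_rate(window_bytes, rtt_seconds):
    """At most one full window can be in flight per round trip."""
    return window_bytes / rtt_seconds

# A 64 KB window over a 100 ms round trip:
rate = sending_rate(64 * 1024, 0.100)
print(rate)   # about 655,360 bytes/s, roughly 5.2 Mbit/s
```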
Wireless Networks
Challenges:
1. Packet loss is frequently used as a congestion signal in transport protocols like TCP, but
wireless networks often experience packet loss due to transmission errors.
2. The throughput of a TCP connection varies inversely with the square root of the packet
loss rate, meaning that high throughput requires very low levels of packet loss.
3. Wireless LANs such as 802.11 commonly have frame loss rates of at least 10%,
significantly higher than what TCP can efficiently handle.
4. The sender might be unaware that the path includes a wireless link, complicating
congestion control efforts.
5. When a loss occurs, only one mechanism should take action, either addressing a
transmission error or responding to a congestion signal, to prevent unnecessarily slow
performance over wireless links.
6. Internet paths often feature a mix of wired and wireless segments, presenting a challenge
as there's no standardized method for the sender to detect the types of links in the path.
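Point 2 is often written as the "inverse square-root law": throughput ≈ C · MSS / (RTT · √p), the approximation of Mathis et al. A numeric sketch (the constant C ≈ 1.22 and the example numbers are illustrative):

```python
import math

def tcp_throughput(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. approximation: throughput ~ C * MSS / (RTT * sqrt(p))."""
    return c * mss_bytes / (rtt_s * math.sqrt(loss_rate))

t1 = tcp_throughput(1460, 0.1, 0.0001)   # 0.01% loss
t2 = tcp_throughput(1460, 0.1, 0.0004)   # 4x the loss rate
assert abs(t1 / t2 - 2.0) < 1e-6         # quadrupling loss halves throughput
```

Quadrupling the loss rate halves the achievable throughput, which is why a 10% wireless frame-loss rate is crippling if TCP mistakes it for congestion.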
4. However, for wireless links with long round-trip times, Forward Error Correction (FEC)
may be necessary to mitigate losses, or non-loss signals may need to be utilized for
congestion control.
Forward Error Correction (FEC)
Forward Error Correction (FEC) is a technique used to enhance the reliability of data
transmission by adding redundant information to the original data stream. This redundancy
enables the receiver to detect and correct errors that may occur during transmission,
without the need for retransmissions from the sender. Here's how FEC works for the
original stream, redundancy, packet loss, and reconstructed stream:
• Original Stream: The original data stream consists of the information intended for
transmission from the sender to the receiver. This stream contains the actual data that needs
to be transmitted.
• Redundancy: FEC adds redundant information to the original data stream before
transmission. This redundant information is derived from the original data through
mathematical algorithms. The redundant data is carefully crafted to provide error
correction capabilities at the receiver end.
• Packet Loss: During transmission, packets may get lost or corrupted due to various factors
such as noise, interference, or network congestion. In the event of packet loss, the
redundant information added by FEC allows the receiver to reconstruct the original data
stream, even if some packets are missing or corrupted.
• Reconstructed Stream: Using the redundant information received along with the original
data packets, the receiver can reconstruct the original data stream. By analyzing the
redundant data and applying error correction algorithms, the receiver can detect and correct
errors, filling in the gaps left by lost or corrupted packets. The result is a reconstructed data
stream that closely matches the original data transmitted by the sender, even in the presence
of packet loss.
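The simplest FEC scheme illustrates the idea: add one XOR parity packet per group, so any single lost packet can be rebuilt from the survivors. A sketch (fixed-size packets assumed; real codes such as Reed-Solomon tolerate more losses):

```python
def make_parity(packets):
    """Append one XOR parity packet to a group of equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(received, lost_index):
    """Rebuild the single missing packet by XOR-ing every survivor,
    including the parity packet."""
    length = len(next(p for p in received if p is not None))
    missing = bytes(length)
    for i, p in enumerate(received):
        if i != lost_index and p is not None:
            missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

group = make_parity([b"abcd", b"efgh", b"ijkl"])  # original stream + redundancy
group[1] = None                                   # packet 1 lost in transit
rebuilt = recover(group, 1)                       # reconstructed stream
assert rebuilt == b"efgh"
```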
4. Checksum: The checksum is computed by adding the segment's 16-bit words in one's
complement arithmetic and taking the one's complement of the sum. A computed
checksum of 0 indicates a valid segment.
5. Pseudoheader: The UDP checksum computation includes a pseudoheader that contains
IPv4 addresses of the source and destination machines, the protocol number for UDP, and
the byte count for the UDP segment. This aids in detecting misdelivered packets but
violates the protocol hierarchy since IP addresses belong to the IP layer.
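The pseudoheader computation can be sketched as follows (a simplified model of the RFC 768 checksum; the example addresses and ports are arbitrary):

```python
import struct

def ones_complement_sum16(data):
    """Sum 16-bit words with end-around carry (one's complement addition)."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return total

def udp_checksum(src_ip, dst_ip, udp_segment):
    """Checksum over the IPv4 pseudoheader plus the UDP segment.

    src_ip/dst_ip are 4-byte addresses; protocol number 17 is UDP.
    (A real implementation transmits a computed 0 as 0xFFFF, because 0
    in the header means "no checksum".)
    """
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF

# Fill in the checksum, then verify: a valid segment re-checksums to 0.
src, dst = bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2])
seg = struct.pack("!HHHH", 1024, 53, 10, 0) + b"hi"   # checksum field = 0
ck = udp_checksum(src, dst, seg)
filled = struct.pack("!HHHH", 1024, 53, 10, ck) + b"hi"
assert udp_checksum(src, dst, filled) == 0
```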
6. Functionalities: UDP does not provide flow control, congestion control, or retransmission
of bad segments. These tasks are left to user processes. UDP's primary functions include
providing an interface to the IP protocol, demultiplexing multiple processes using ports,
and optional end-to-end error detection.
7. Use Cases: UDP is suitable for applications that require precise control over packet flow,
error control, or timing. It is commonly used in client-server situations where short requests
and replies are exchanged, such as in the Domain Name System (DNS), where a client
sends a request to a DNS server and expects a short reply back.
Overall, UDP offers a lightweight, efficient, and simple communication mechanism for
applications that prioritize speed and simplicity over reliability and error handling.
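The DNS-style exchange in point 7 is easy to sketch over the loopback interface; the query string and answer below are made up for illustration, not real DNS wire format:

```python
import socket
import threading

# A toy DNS-style responder: one short request, one short reply, no connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

def answer_once():
    data, addr = srv.recvfrom(512)          # wait for one datagram
    answer = b"192.0.2.7" if data == b"example.com?" else b"NXDOMAIN"
    srv.sendto(answer, addr)                # reply to wherever it came from

threading.Thread(target=answer_once, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"example.com?", ("127.0.0.1", port))   # no connection setup
reply, _ = cli.recvfrom(512)
cli.close()
srv.close()
```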
3. Client-Server Model: In RPC, the calling process is termed the client, while the called
process is termed the server.
4. Stub Procedures: RPC involves the use of stub procedures, both on the client and server
side, to hide the complexities of remote communication. The client stub represents the
server procedure in the client's address space, and vice versa.
5. RPC Process: The RPC process involves several steps:
• The client calls the client stub, which internally marshals the parameters into a
message.
• The message is sent from the client machine to the server machine by the operating
system.
• The server stub unpacks the parameters and calls the server procedure.
• The server procedure executes and returns results back to the client in a similar
fashion.
6. Challenges and Solutions:
• Passing pointer parameters between client and server can be problematic due to
different address spaces. Techniques like call-by-copy-restore are used to overcome
this limitation.
• Weakly typed languages like C pose challenges in marshaling parameters,
especially when parameter sizes are not explicitly defined.
• Deduction of parameter types can be difficult, especially in languages like C with
flexible parameter specifications.
• Global variables lose their shared nature when procedures are moved to remote
machines, impacting communication.
7. Implementation and Transport Protocols: RPC can be implemented using UDP as a base
protocol, with requests and replies sent as UDP packets. However, additional mechanisms
are needed for reliability, handling large messages, and managing concurrent requests.
8. Idempotent Operations: RPC operations must consider idempotency, ensuring that
repeated executions yield the same result. Operations like DNS requests are idempotent,
but others with side-effects may require stronger semantics, possibly necessitating the use
of TCP for communication.
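Steps 1-5 can be sketched in miniature. Here the "network hop" is a direct function call and the remote procedure just adds two integers; the stubs marshal parameters into fixed 32-bit encodings (all names are illustrative):

```python
import struct

def marshal(a, b):
    """Client stub: pack two 32-bit ints into a request message."""
    return struct.pack("!ii", a, b)

def add(a, b):
    """The remote procedure itself."""
    return a + b

def server_stub(message):
    """Server stub: unmarshal parameters, call the real procedure,
    and marshal the result into a reply message."""
    a, b = struct.unpack("!ii", message)
    return struct.pack("!i", add(a, b))

def client_call(a, b):
    """Looks like a local call; really a message round trip."""
    reply = server_stub(marshal(a, b))   # stands in for the network hop
    (result,) = struct.unpack("!i", reply)
    return result

assert client_call(2, 40) == 42
```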
Figure: (a) The position of RTP in the protocol stack. (b) Packet nesting.
Packet nesting in the context of RTP involves encapsulating RTP packets within UDP (User
Datagram Protocol) packets, which are then further encapsulated within IP (Internet Protocol)
packets for transmission over the network. This nesting ensures that the RTP packets, containing
the multimedia data, are appropriately formatted and transported across the network, ultimately
reaching the intended destination for playback as shown in figure.
1. RTP for Real-time Multimedia: RTP (Real-time Transport Protocol) was developed to
provide a generic real-time transport protocol for multimedia applications like internet
radio, telephony, and video streaming, reducing the need for reinventing similar protocols.
2. Protocol Implementation: RTP typically operates over UDP in user space, with the RTP
library handling multiplexing and encoding of audio, video, and text streams into RTP
packets, which are then encapsulated in UDP packets by the operating system for
transmission over the network.
3. Protocol Layering: Due to its implementation in user space and its role in providing
transport facilities, RTP blurs the distinction between application and transport layer
protocols, often described as a transport protocol implemented in the application layer.
4. Functionalities: RTP facilitates the transmission of multimedia data packets and ensures
timely playback at the receiver end, contributing to the overall protocol stack by providing
transport capabilities alongside user-level multiplexing and encoding.
5. Contribution to Multimedia Applications: RTP's widespread adoption in multimedia
applications underscores its importance in providing reliable and timely delivery of audio
and video data over networks, contributing to a seamless user experience.
6. Multiplexing and Transmission: RTP multiplexes real-time data streams onto UDP
packets, which can be sent to single or multiple destinations (unicasting or multicasting).
Routers treat RTP packets as standard UDP traffic unless specific IP quality-of-service
features are enabled.
7. Packet Numbering and Loss Handling: Each RTP packet is numbered sequentially to aid
receivers in detecting missing packets. Upon packet loss, receivers can choose appropriate
actions such as skipping video frames or approximating missing audio values, as
retransmission is impractical without acknowledgements or retransmission requests.
8. Payload and Encoding: RTP payloads can contain multiple samples and may be encoded
in various formats defined by application-specific profiles. RTP provides a header field for
specifying encoding, allowing flexibility in how encoding is performed.
9. Timestamping and Synchronization: RTP allows sources to associate timestamps with
packet samples, enabling receivers to buffer and play samples at precise intervals relative
to the start of the stream. Timestamps aid in reducing network delay variation (jitter) and
synchronizing multiple streams, facilitating scenarios like synchronized video and audio
playback from different physical devices.
RTP Header Structure: The RTP header consists of three 32-bit words and potential extensions.
Fields include version, padding, extension, contributing sources count, marker bit, payload type,
sequence number, timestamp, synchronization source identifier, and contributing source
identifiers, facilitating packet sequencing, timing, and stream identification as shown in figure.
Let's delve into the details of each field:
1. Version (V): The version field indicates the version of the RTP protocol being used.
Currently, the version is set to 2, which is the most widely deployed version. It's worth
noting that future versions may utilize the remaining code points.
2. Padding (P): The padding bit (P) is set to indicate that the RTP packet has been padded to
ensure a multiple of 4 bytes. The last byte of the padding field specifies the number of
padding bytes added, allowing the receiver to properly interpret the packet.
3. Extension (X): The extension bit (X) indicates whether an extension header is present in
the RTP packet. If set, the extension header follows the standard RTP header and provides
additional information or functionalities. However, the format and meaning of the
extension header are not standardized, providing flexibility for future requirements.
4. Contributing Sources Count (CC): This field specifies the number of contributing
sources present in the RTP packet, ranging from 0 to 15. Contributing sources are typically
used in scenarios involving mixers, where multiple streams are combined. The count
indicates the number of sources contributing to the packet.
5. Marker Bit (M): The marker bit (M) is an application-specific flag that can be used to
mark significant events within the multimedia data stream. Its interpretation depends on
the specific application and may indicate the start of a video frame, audio segment, or other
relevant events.
6. Payload Type: The payload type field specifies the encoding algorithm used for the data
payload within the RTP packet. It indicates the format of the multimedia data, such as
uncompressed audio, MP3, H.264 video, etc. The payload type is crucial for the receiver
to interpret and decode the data correctly.
7. Sequence Number: The sequence number field is a monotonically increasing counter that
increments with each RTP packet sent. It aids the receiver in detecting lost or out-of-order
packets and ensures proper sequencing of the data stream.
8. Timestamp: The timestamp field is generated by the source to indicate the time at which
the first sample in the packet was captured. It allows the receiver to synchronize the
playback of multimedia data by compensating for network delays and jitter. Timestamps
are relative to the start of the stream, with only the differences between timestamps being
significant.
9. Synchronization Source Identifier (SSRC): The SSRC field identifies the
synchronization source to which the RTP packet belongs. It is used to multiplex and
demultiplex multiple data streams onto a single stream of UDP packets. Each stream is
assigned a unique SSRC identifier for identification and synchronization purposes.
10. Contributing Source Identifiers (CSRC): If multiple contributing sources are present in
the RTP packet (indicated by the CC field), the CSRC field lists the identifiers of these
contributing sources. This information is particularly useful in scenarios involving mixers,
where multiple audio or video streams are combined.
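The first three mandatory 32-bit words of the header can be packed from those fields directly. A sketch (payload type 96 is a common dynamic-profile value; the SSRC and other numbers are arbitrary):

```python
import struct

def rtp_header(payload_type, seq, timestamp, ssrc,
               version=2, padding=0, extension=0, cc=0, marker=0):
    """Pack the three mandatory 32-bit words of an RTP header."""
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | cc
    byte1 = (marker << 7) | payload_type      # payload type fits in 7 bits
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(payload_type=96, seq=1, timestamp=48000, ssrc=0xDEADBEEF)
assert len(hdr) == 12        # three 32-bit words
assert hdr[0] >> 6 == 2      # version field is 2
```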
2. Report Rate Limiting: To keep control traffic from overwhelming the media, participants
limit report rates to a fraction of the media bandwidth collectively, using estimates of the
number of participants and the media bandwidth.
3. Interstream Synchronization: RTCP addresses synchronization challenges arising from
different streams using disparate clocks, granularities, and drift rates. It helps maintain
synchronization among multiple streams.
4. Source Naming: RTCP facilitates source identification by assigning names, typically in
ASCII text format. This enables receivers to display information about active participants,
enhancing user experience.
• Sliding Window Protocol: TCP uses the sliding window protocol with a dynamic window
size for flow control. Senders transmit segments and start a timer, while receivers send back
acknowledgements indicating the next expected sequence number and remaining window size.
• Handling Network Issues: TCP must deal with challenges such as out-of-order arrival of
segments and retransmissions due to timeouts. TCP implementations optimize performance by
carefully managing retransmissions and keeping track of received bytes using sequence
numbers.
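The sliding-window behaviour described above can be illustrated with a toy byte-oriented model. This is a simplification with no timers, loss, or retransmission, and the class and method names are invented for illustration:

```python
# Toy model of TCP-style sliding-window flow control (byte-oriented).
# The receiver's ACK carries the next expected byte number and its
# remaining window; the sender may only keep `window` unacknowledged
# bytes in flight.

class SlidingWindowSender:
    def __init__(self, data: bytes, window: int):
        self.data = data
        self.window = window          # receiver-advertised window
        self.base = 0                 # oldest unacknowledged byte
        self.next_seq = 0             # next byte to send

    def send_ready(self, mss: int = 4) -> bytes:
        """Send up to one segment's worth of bytes within the window."""
        limit = min(self.base + self.window, len(self.data))
        seg = self.data[self.next_seq:min(self.next_seq + mss, limit)]
        self.next_seq += len(seg)
        return seg

    def on_ack(self, ack_num: int, window: int) -> None:
        """ACK names the next expected byte and the new window size."""
        self.base = max(self.base, ack_num)
        self.window = window

sender = SlidingWindowSender(b"hello world!", window=8)
first = sender.send_ready()     # 4 bytes sent
second = sender.send_ready()    # 4 more bytes; window now full
third = sender.send_ready()     # empty: nothing may be sent
sender.on_ack(8, 8)             # receiver acknowledges first 8 bytes
fourth = sender.send_ready()    # window slides, remaining bytes go out
```

The empty third send shows flow control in action: until an acknowledgment slides the window forward, the sender must stop even though it has data queued.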
The TCP Segment Header
1. Source Port: This field identifies the source port number, which is a 16-bit number
representing the sender's port on the local host. Together with the sender's IP address, it
forms a unique endpoint for the connection.
2. Destination Port: Similar to the source port, this field identifies the destination port
number, which is a 16-bit number representing the receiver's port on the remote host.
Together with the receiver's IP address, it forms the destination endpoint for the connection.
3. Sequence Number: This 32-bit field indicates the sequence number of the first data byte
in the segment. It is used to maintain the correct order of data transmission and reception.
4. Acknowledgment Number: Also a 32-bit field, it indicates the next sequence number that
the sender of the segment expects to receive from the receiver. This field is used for
acknowledging received data and facilitating flow control.
5. Data Offset: This 4-bit field specifies the size of the TCP header in 32-bit words. It is needed because the Options field is of variable length, so the header length varies; the field tells where the data begins in the segment.
6. Reserved: This 4-bit field is reserved for future use and must be set to zero.
7. Control Bits: These 8 flags, each occupying 1 bit, control various aspects of the TCP
connection:
• CWR and ECE: signal congestion when ECN (Explicit Congestion Notification)
is used: ECE is set by a receiver to echo a congestion indication back to the sender,
and CWR is set by the sender to signal that it has reduced its congestion window
in response.
• URG: Indicates urgent data in the segment.
• ACK: Indicates that the Acknowledgment Number field is valid.
• PSH: Indicates that the data should be pushed to the receiving application.
• RST: Indicates a reset request to terminate the connection abruptly due to various
reasons such as host crashes or invalid segments.
• SYN: Synchronizes sequence numbers to initiate a connection. SYN = 1 with
ACK = 0 indicates a connection request, while SYN = 1 with ACK = 1 indicates a
connection acceptance, distinguishing requests from accepted connections.
• FIN: Indicates the end of data transmission.
8. Window Size: This 16-bit field specifies the size of the receive window, which is the
amount of data the sender can transmit before receiving further acknowledgment from the
receiver.
9. Checksum: This 16-bit field is used for error checking of the TCP header and data. It
ensures the integrity of the transmitted data.
10. Urgent Pointer: If the URG flag is set, this 16-bit field gives a byte offset from the
sequence number at which urgent data are to be found in the segment.
11. Options: This field, if present, contains additional header options, such as timestamps,
maximum segment size (MSS), window scale factor, etc. Options are of variable length,
filled to a multiple of 32 bits by padding with zeros, and can extend to 40 bytes to
accommodate the longest TCP header.
12. Data: TCP segments can carry up to 65,495 data bytes, calculated by subtracting the sizes
of the IP header (20 bytes) and the TCP header (20 bytes) from the maximum IP packet
size (65,535 bytes). Internet hosts are required to accept TCP segments of up to 556
bytes, i.e., the default maximum segment size (MSS) of 536 bytes plus the 20-byte TCP
header.
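Putting the fields above together, an options-free 20-byte TCP header can be packed in a few lines. This is a minimal sketch in Python with the checksum left at zero and the function name invented for illustration:

```python
import struct

# Flag bit positions in the flags byte, low bit first (FIN .. CWR).
FIN, SYN, RST, PSH, ACK, URG, ECE, CWR = (1 << i for i in range(8))

def build_tcp_header(src_port, dst_port, seq, ack_num, flags,
                     window, checksum=0, urgent=0):
    """Pack a 20-byte options-free TCP header in network byte order.

    Data offset is fixed at 5 (5 x 32-bit words = 20 bytes) because no
    options are included; a real stack would compute the checksum over
    the pseudo-header, header, and data.
    """
    offset_byte = 5 << 4              # data offset in the high nibble
    return struct.pack("!HHIIBBHHH", src_port, dst_port, seq, ack_num,
                       offset_byte, flags, window, checksum, urgent)

# A SYN segment as a client might send to open a connection
# (port numbers and sequence number are illustrative).
syn = build_tcp_header(49152, 80, seq=1000, ack_num=0,
                       flags=SYN, window=65535)
```

The `"!HHIIBBHHH"` format string mirrors fields 1 through 10 above: two 16-bit ports, two 32-bit sequence/acknowledgment numbers, the offset and flags bytes, then the 16-bit window, checksum, and urgent pointer.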
TCP Connection Establishment
Connections in TCP are established using a three-way handshake.
• Server Side:
• The server passively waits for an incoming connection by executing the LISTEN and
ACCEPT primitives in that order.
• It specifies either a specific source or waits for connections from any source.
• Client Side:
• The client executes a CONNECT primitive, specifying the destination IP address and
port, the maximum TCP segment size it accepts, and optionally user data.
• The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off,
waiting for a response.
• Connection Establishment Process:
• Upon receiving the SYN segment at the destination, the TCP entity checks if there's a
process listening on the specified port.
• If no process is listening, the destination sends a reply with the RST bit on to reject the
connection.
• If a process is listening, it receives the incoming TCP segment and can accept or reject
the connection.
• If accepted, an acknowledgment segment is sent back to the client.
• Sequence of Events:
• Normal Case: SYN segment sent, acknowledgment received (Fig. 6-37(a)).
• Simultaneous Connection Attempt: Both hosts attempt to establish a connection
simultaneously, resulting in a single established connection (Fig. 6-37(b)).
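The normal handshake above exchanges exactly three segments, with each side acknowledging the other's initial sequence number plus one. A small sketch, with invented function name and illustrative initial sequence numbers rather than real randomly chosen ISNs:

```python
# Minimal model of the segments exchanged in a normal three-way
# handshake. Each entry is (flags, sequence number, acknowledgment).

def three_way_handshake(client_isn: int, server_isn: int):
    """Return the three segments of a normal TCP connection open."""
    syn     = ("SYN",     client_isn,     None)            # client -> server
    syn_ack = ("SYN+ACK", server_isn,     client_isn + 1)  # server -> client
    ack     = ("ACK",     client_isn + 1, server_isn + 1)  # client -> server
    return [syn, syn_ack, ack]

segments = three_way_handshake(client_isn=100, server_isn=300)
```

Because a SYN consumes one sequence number, each acknowledgment is the peer's initial sequence number plus one, which is why the server's SYN+ACK carries ack = client_isn + 1.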