Unit 04: Computer Networks – Complete Notes

Semester 06

Unit IV: Transport Layer

Process-to-Process Delivery

Define: Process-to-Process Delivery

Process-to-process delivery (प्रक्रिया-से-प्रक्रिया वितरण) is a core responsibility of the Transport Layer. It involves ensuring that data originating from a specific application program (process) on the source host is delivered to the correct specific application program (process) on the destination host. It uses port numbers to distinguish between multiple applications running simultaneously on the same device.

Key Points of Process-to-Process Delivery:

  • Application Identification: Distinguishes between multiple applications running on the same host.
  • Port Numbers: Uses 16-bit logical port numbers (0-65535) to identify unique application processes.
  • Transport Layer Role: A fundamental service provided by protocols like TCP and UDP.
  • End-to-End Logical Communication: Creates a logical communication path between sending and receiving application processes.
  • Demultiplexing/Multiplexing: Handles both demultiplexing (delivering incoming segments to correct application) and multiplexing (combining data from multiple applications into one stream).
Components:
  • Port Numbers: 16-bit integers assigned to application processes.
  • Socket Address: Combination of IP address and port number (`IP_address:port_number`).
  • Multiplexer: At the sender, combines data from multiple application processes into a single transport layer stream.
  • Demultiplexer: At the receiver, distributes incoming transport layer segments to the correct application process based on port number.
Applications:
  • ✨ Web Browsing (HTTP requests go to specific web server port, e.g., 80 or 443).
  • ✨ Email communication (SMTP uses port 25, POP3 uses port 110, IMAP uses port 143).
  • ✨ File Transfer (FTP uses ports 20 & 21).
  • ✨ DNS queries (typically UDP port 53).
  • ✨ Online gaming sessions connecting specific game processes.
Advantages:
  • 👍 Enables multiple applications to run concurrently on the same host and use the network.
  • 👍 Ensures that data is delivered to the *correct* application, not just the correct host.
  • 👍 Simplifies application development by providing a clear interface for network communication.
Disadvantages:
  • 👎 Requires proper management of port numbers to avoid conflicts.
  • 👎 Some security concerns if unused ports are left open or exposed.
  • 👎 Application errors (e.g., invalid port usage) can lead to communication failures.
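The port-based multiplexing/demultiplexing described above can be observed directly with Python's socket API. A minimal sketch, using two UDP sockets on the loopback address to stand in for two application processes (the OS assigns the port numbers):

```python
import socket

# Two "application processes" on the same host, distinguished only by port.
app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 0))                     # port 0: OS picks a free port
app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 0))

port_a = app_a.getsockname()[1]
port_b = app_b.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for A", ("127.0.0.1", port_a))   # demultiplexed to app_a
sender.sendto(b"for B", ("127.0.0.1", port_b))   # demultiplexed to app_b

msg_a, _ = app_a.recvfrom(1024)
msg_b, _ = app_b.recvfrom(1024)
print(msg_a, msg_b)

for s in (app_a, app_b, sender):
    s.close()
```

Each datagram reaches only the socket bound to its destination port — the OS-level demultiplexer in action.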

Transport Layer Protocols: UDP, TCP

Define: Transport Layer Protocols

Transport Layer Protocols (ट्रांसपोर्ट लेयर प्रोटोकॉल) are communication rules used at the Transport Layer of the TCP/IP model. Their primary function is to provide logical end-to-end communication services (process-to-process delivery) between application processes running on different hosts. They offer varying levels of reliability, flow control, and error control mechanisms depending on the application’s needs.

Key Points of Transport Layer Protocols:

  • End-to-End Communication: Facilitate logical communication between processes on source and destination hosts.
  • Port Addressing: Use port numbers to identify specific application processes.
  • Segmentation: Break down application-layer messages into smaller segments for transmission.
  • Reliability & Flow Control: Some (like TCP) offer guarantees for ordered and error-free delivery and manage data flow; others (like UDP) do not.
  • Protocol Multiplexing/Demultiplexing: Manage multiple application streams over a single network connection.

Types of Transport Layer Protocols:

1. UDP (User Datagram Protocol – यूज़र डेटाग्राम प्रोटोकॉल)

Define: UDP (User Datagram Protocol) is a connectionless, unreliable transport layer protocol in the TCP/IP suite. It provides a bare-bones, low-overhead method for transmitting data packets (datagrams) without establishing a connection beforehand, retransmitting lost packets, or guaranteeing delivery order. It’s often called “fire-and-forget” due to its minimal features.

  • Connectionless: Does not establish a connection; sends data packets independently.
  • Unreliable: No guarantees for delivery, order, or duplication.
  • Low Overhead: Very simple header and minimal processing, making it fast.
  • Fast: Favored by applications that prioritize speed and efficiency over absolute reliability.
  • Multiplexing/Demultiplexing: Only provides basic port addressing for process-to-process delivery.
UDP Segment Header (Simplified):

    +------------------------+------------------------+
    |  Source Port (16 bits) |  Dest. Port (16 bits)  |
    +------------------------+------------------------+
    |    Length (16 bits)    |   Checksum (16 bits)   |
    +------------------------+------------------------+
    |                 Data (Payload)                  |
    +-------------------------------------------------+

*Diagram suggestion: UDP header structure highlighting Source Port, Dest Port, Length, Checksum.*

Applications:
  • DNS (Domain Name System): Primarily uses UDP for quick queries due to small request/response sizes.
  • VoIP (Voice over IP) & Video Streaming: Tolerates packet loss for real-time performance.
  • ✨ Online Gaming: Prioritizes low latency over guaranteed packet delivery.
  • ✨ SNMP (Simple Network Management Protocol): For network monitoring with basic messaging.
  • ✨ Broadcast/Multicast applications where sending to many efficiently is key.
Advantages:
  • 👍 Low overhead and very fast data transfer due to minimal features.
  • 👍 No connection setup/teardown time, ideal for small, quick transactions.
  • 👍 Highly efficient for real-time applications that can tolerate some packet loss.
Disadvantages:
  • 👎 No guarantee of delivery, order, or avoidance of duplicate packets.
  • 👎 Does not provide flow control or congestion control, potentially leading to network congestion.
  • 👎 Error checking is minimal (the checksum is optional in IPv4), offering little reliability.
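The fixed 8-byte UDP header shown above can be built with Python's `struct` module. A sketch assuming the standard layout (source port, destination port, length, checksum), with the checksum left at 0 as IPv4 permits; the port and payload values are illustrative:

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the fixed 8-byte UDP header; checksum 0 means 'unused' in IPv4."""
    length = 8 + len(payload)          # Length field covers header + data
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

hdr = build_udp_header(5353, 53, b"query")
src, dst, length, checksum = struct.unpack("!HHHH", hdr)
print(src, dst, length, checksum)      # 5353 53 13 0
```

Four 16-bit fields, nothing more — which is exactly why UDP's overhead is so low.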
2. TCP (Transmission Control Protocol – ट्रांसमिशन कंट्रोल प्रोटोकॉल)

Define: TCP (Transmission Control Protocol) is a reliable, connection-oriented transport layer protocol in the TCP/IP suite. It provides full-duplex, stream-oriented data transfer by establishing a virtual connection (a session) between two application processes. TCP ensures ordered, error-free delivery of data, provides flow control, and performs sophisticated congestion control.

  • Connection-Oriented: Establishes a virtual connection using a three-way handshake before data transfer.
  • Reliable: Guarantees delivery, order, and no duplication through acknowledgments and retransmissions.
  • Stream-Oriented: Treats data as a continuous stream of bytes, not as individual packets.
  • Full-Duplex: Allows simultaneous bidirectional data flow.
  • Flow Control & Congestion Control: Manages sender’s rate to prevent receiver overflow and network congestion.
TCP Segment Header (Simplified):

    +------------------------+------------------------+
    |  Source Port (16 bits) |  Dest. Port (16 bits)  |
    +------------------------+------------------------+
    |             Sequence Number (32 bits)           |
    +-------------------------------------------------+
    |          Acknowledgment Number (32 bits)        |
    +-------------------------------------------------+
    | HLEN | Res. | FLAGS |   Window Size (16 bits)   |
    +------+------+-------+---------------------------+
    |   Checksum (16 bits)   | Urgent Ptr (16 bits)   |
    +------------------------+------------------------+
    |               Options (Variable)                |
    +-------------------------------------------------+
    |                 Data (Payload)                  |
    +-------------------------------------------------+

*Diagram suggestion: TCP header structure, highlighting Sequence Number, ACK Number, Window Size, Flags.*

Applications:
  • Web Browsing (HTTP/HTTPS): For reliable transmission of web pages.
  • ✨ Email (SMTP, POP3, IMAP): Ensures email delivery without loss or corruption.
  • ✨ File Transfer Protocol (FTP): For reliable and ordered file transfers.
  • ✨ SSH (Secure Shell) & Telnet: For reliable remote terminal access.
  • ✨ Database connections: Ensures data integrity during transactions.
Advantages:
  • 👍 Guarantees reliable, ordered, and error-free data delivery to applications.
  • 👍 Provides sophisticated flow control, preventing receiver buffer overflow.
  • 👍 Implements robust congestion control, adapting to network conditions and preventing congestion collapse.
Disadvantages:
  • 👎 Adds significant overhead (header size, connection setup, acknowledgments, retransmissions), increasing latency.
  • 👎 Slower than UDP due to its extensive reliability and control features.
  • 👎 Not suitable for real-time applications that can tolerate some loss but require very low latency.

TCP Segment

Define: TCP Segment

A TCP Segment (टीसीपी सेगमेंट) is the basic protocol data unit that TCP exchanges between two hosts. It consists of a TCP header, which carries control information for reliable data transfer and connection management, followed by a payload of application data. The Transport Layer encapsulates application data into TCP segments before passing them to the Network Layer for IP routing.

Key Points of TCP Segment:

  • Basic Data Unit: The fundamental unit of data exchanged by the TCP protocol.
  • Header + Data: Composed of a TCP header (control info) and the application payload (data).
  • Reliability & Control: The header fields contain information vital for TCP’s reliable, flow-controlled, and congestion-controlled data transfer.
  • Varying Size: Segment size varies depending on the amount of application data and Maximum Segment Size (MSS) negotiations.
  • Sent in IP Datagram: Each TCP segment is then encapsulated within an IP datagram (packet) at the Network Layer for routing.
Detailed Components of TCP Segment Header:
  • Source Port (16 bits): Identifies the application process sending the data on the source host.
  • Destination Port (16 bits): Identifies the application process intended to receive the data on the destination host.
  • Sequence Number (32 bits): Identifies the byte number of the first byte of data in the current segment. Used for ordered delivery and reassembly.
  • Acknowledgment Number (32 bits): Contains the next expected sequence number from the other party. It’s a cumulative ACK, acknowledging all bytes up to this number.
  • Data Offset/HLEN (4 bits): Indicates the number of 32-bit words in the TCP header, showing where the data begins.
  • Reserved (6 bits): Reserved for future use; set to 0.
  • Flags (6 bits):
    • URG: Urgent Pointer field is significant.
    • ACK: Acknowledgment Number field is significant (ACK is active).
    • PSH: Push function (requests immediate data delivery to application).
    • RST: Reset the connection (terminate abruptly).
    • SYN: Synchronize sequence numbers (used during connection establishment).
    • FIN: No more data from sender (used to close connection gracefully).
  • Window Size (16 bits): Specifies the number of bytes the sender is willing to receive from the remote host, used for flow control.
  • Checksum (16 bits): Used for error detection; computed over the TCP header, the data, and a pseudo-header that includes the source and destination IP addresses from the IP header.
  • Urgent Pointer (16 bits): If URG flag is set, this indicates the offset from the sequence number where urgent data ends.
  • Options (Variable length): Optional fields like Maximum Segment Size (MSS), Selective ACK options, Window Scale factor.
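The fixed 20-byte header layout above maps directly onto a `struct` format string. A minimal parser sketch — the hand-built SYN segment and its field values (ports, sequence number) are illustrative, not from the notes:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header (RFC 793 layout)."""
    (src, dst, seq, ack, off_res, flags, window,
     checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "data_offset": off_res >> 4,          # header length in 32-bit words
        "flags": {name for bit, name in
                  [(0x01, "FIN"), (0x02, "SYN"), (0x04, "RST"),
                   (0x08, "PSH"), (0x10, "ACK"), (0x20, "URG")]
                  if flags & bit},
        "window": window,
    }

# A hand-built SYN segment: ports 49152 -> 80, seq 1000, offset 5 words, SYN set.
syn = struct.pack("!HHIIBBHHH", 49152, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```

A data offset of 5 words (20 bytes) means no options are present; real SYN segments usually carry MSS and window-scale options, making the offset larger.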

Figure: Detailed structure of the TCP segment header.
Applications:
  • ✨ Transporting web page data via HTTP/HTTPS.
  • ✨ Encapsulating email messages (SMTP, POP3, IMAP).
  • ✨ Sending and receiving files via FTP.
  • ✨ Underlying data unit for secure shell (SSH) sessions.
  • ✨ Database transactions requiring reliable data transfer.
Advantages:
  • 👍 Essential for TCP’s reliable, ordered, and error-free data transfer.
  • 👍 Contains all necessary information for connection management, flow control, and congestion control.
  • 👍 Its sequence numbers and ACK mechanisms ensure data integrity and delivery.
Disadvantages:
  • 👎 Adds 20 bytes (plus options) of overhead to each segment, increasing network traffic.
  • 👎 Processing all header fields (flags, sequence/ack numbers) adds latency.
  • 👎 Can be relatively large for small application data units, reducing efficiency for chat-like applications.

TCP Connection

Define: TCP Connection

A TCP connection (टीसीपी कनेक्शन) is a logical, full-duplex, point-to-point virtual communication link established between two application processes running on different hosts by the Transmission Control Protocol (TCP) at the Transport Layer. It provides a reliable, ordered byte-stream service, hiding the unreliability of the underlying network from higher layers.

Key Points of TCP Connection:

  • Logical Link: Not a physical cable, but a virtual pathway between processes.
  • Full-Duplex: Data can flow in both directions simultaneously between the two connected processes.
  • Point-to-Point: Involves exactly two communicating entities (client and server processes).
  • Connection-Oriented: Requires a three-way handshake to establish a connection before any data is sent.
  • Reliable Stream: Guarantees that data sent will arrive in order, complete, and without duplication.

Phases of TCP Connection:

1. Connection Establishment (कनेक्शन स्थापना – Three-Way Handshake)

Define: TCP connection establishment is a three-step process, known as the three-way handshake, used to synchronize sequence numbers and establish parameters for the reliable communication between two TCP hosts. It involves the exchange of SYN (Synchronize) and ACK (Acknowledgment) segments.

  • SYN-SYN/ACK-ACK: Client sends SYN, Server replies with SYN-ACK, Client sends ACK.
  • Sequence Number Sync: Both client and server exchange their initial sequence numbers (ISN).
  • Parameter Negotiation: TCP options like Maximum Segment Size (MSS) can be negotiated.
  • Client State: Moves from CLOSED -> SYN-SENT -> ESTABLISHED.
  • Server State: Moves from CLOSED -> LISTEN -> SYN-RECEIVED -> ESTABLISHED.
Example:

1. Client wants to open connection, sends SYN segment with its Initial Sequence Number (ISN_client) to server.
2. Server receives SYN, sends SYN-ACK segment with its ISN_server, and ACK (ISN_client + 1) to client.
3. Client receives SYN-ACK, sends ACK (ISN_server + 1) to server. Both are now ESTABLISHED and can send data.
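The sequence/acknowledgment arithmetic of the three steps above can be tabulated in a few lines. The ISN values are illustrative only (real TCP stacks randomize the ISN):

```python
# Sequence/ACK arithmetic of the three-way handshake, with made-up ISNs.
isn_client, isn_server = 3000, 7000   # illustrative initial sequence numbers

syn       = {"flags": "SYN",     "seq": isn_client}
syn_ack   = {"flags": "SYN+ACK", "seq": isn_server,     "ack": isn_client + 1}
final_ack = {"flags": "ACK",     "seq": isn_client + 1, "ack": isn_server + 1}

for seg in (syn, syn_ack, final_ack):
    print(seg)
```

Note that each SYN consumes one sequence number even though it carries no data, which is why both ACK fields are ISN + 1.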

Figure: Connection establishment via the TCP three-way handshake.
Applications:
  • ✨ All TCP-based applications like HTTP/HTTPS (web browsing), FTP (file transfer).
  • ✨ Email client connections (SMTP, POP3, IMAP).
  • ✨ SSH (Secure Shell) sessions for remote access.
  • ✨ Any database application establishing a connection.
  • ✨ Building reliable data transfer mechanisms from scratch.
Advantages:
  • 👍 Ensures both sender and receiver are ready and synchronized before data transfer.
  • 👍 Negotiates initial sequence numbers, guaranteeing reliable and ordered delivery.
  • 👍 Prevents accidental data segments from previous connections interfering with new ones.
Disadvantages:
  • 👎 Adds latency: Requires one full round trip (three segments) before data can be sent, increasing connection setup time.
  • 👎 Vulnerable to SYN flood attacks, where malicious actors flood a server with SYN requests to exhaust resources.
  • 👎 Increases overhead due to handshake packets.
2. Data Transfer (डेटा ट्रांसफर)

Define: Once a TCP connection is established via the three-way handshake, the data transfer phase begins. During this phase, application data is segmented, encapsulated in TCP segments, and reliably exchanged between the client and server processes using sequence numbers, acknowledgments, windows for flow control, and timers for retransmission.

  • Data Flow: Both client and server can send and receive data simultaneously (full-duplex).
  • Reliable & Ordered: Sequence numbers ensure in-order delivery; ACKs and retransmissions ensure no data loss.
  • Sliding Window: Flow control is managed by a receiver window that dictates how much data the sender can transmit without awaiting an ACK.
  • Pipelining: Sender can send multiple segments before waiting for ACK.
  • Congestion Control: TCP continuously adjusts the sending rate to prevent network congestion.
Example:

You browse a webpage. After TCP handshake, your browser sends an HTTP GET request within a TCP segment. The web server then responds with TCP segments containing parts of the HTML webpage data, and your browser acknowledges receipt of each segment.

Applications:
  • ✨ Streaming high-quality video content (e.g., Netflix where reliability matters).
  • ✨ Transferring large files over FTP.
  • ✨ Browsing content on the World Wide Web.
  • ✨ Online banking transactions to ensure data integrity.
  • ✨ Real-time collaboration on documents (e.g., Google Docs).
Advantages:
  • 👍 Guarantees reliable and in-order delivery of data, abstracting network unreliability.
  • 👍 Provides sophisticated flow control, preventing receivers from being overwhelmed.
  • 👍 Implements robust congestion control, crucial for global internet stability.
Disadvantages:
  • 👎 Inherent latency and overhead due to acknowledgments and retransmissions.
  • 👎 Can experience delays due to congestion control mechanisms reducing send rates.
  • 👎 Less suitable for highly delay-sensitive applications that can tolerate minor data loss.
3. Connection Termination (कनेक्शन समापन – Four-Way Handshake)

Define: TCP connection termination is typically a four-step process used to gracefully close a TCP connection between two hosts, ensuring that all data from both sides has been properly sent and acknowledged. It is a full-duplex tear-down process where each side can independently close its sending direction.

  • FIN-ACK-FIN-ACK: Four segments are exchanged for graceful closure.
  • Full-Duplex Closure: Each side of the connection must independently initiate closure.
  • Graceful Shutdown: Ensures all data in transit is delivered before closing.
  • Client/Server State: Moves through various states (FIN_WAIT_1, TIME_WAIT, CLOSE_WAIT, LAST_ACK).
  • Avoids Data Loss: Prevents any remaining buffered data from being discarded prematurely.
Example:

1. Client sends FIN segment (no more data from client).
2. Server receives FIN, sends ACK to client, then may send its own remaining data.
3. Server eventually sends FIN segment (no more data from server).
4. Client receives Server’s FIN, sends final ACK to server, then enters TIME_WAIT before closing completely. Server enters CLOSED after receiving final ACK.

Figure: The four-way handshake for TCP connection termination.
Applications:
  • ✨ Ensuring proper session termination in web browsers.
  • ✨ Graceful shutdown of file transfers to confirm all bytes received.
  • ✨ Closing secure shell (SSH) or database connections.
  • ✨ Any application where maintaining data integrity until the very end of communication is crucial.
  • ✨ Preventing lingering connections from consuming resources.
Advantages:
  • 👍 Guarantees a clean and graceful termination of the connection from both ends.
  • 👍 Ensures that all data buffered or in transit from both sides is delivered before the connection closes.
  • 👍 Helps prevent data loss during connection termination.
Disadvantages:
  • 👎 Adds overhead (four segments) compared to an abrupt reset (RST flag).
  • 👎 The `TIME_WAIT` state can hold up ports on busy servers, delaying reuse.
  • 👎 More complex state management for the TCP stack.

Flow Control and Error Control

Define: Flow Control (प्रवाह नियंत्रण)

Flow Control (प्रवाह नियंत्रण) is a mechanism implemented at the Transport Layer (primarily by TCP) to manage the data transmission rate between a sender and a receiver, ensuring that the sender does not overwhelm the receiver’s buffer capacity. It prevents faster senders from swamping slower receivers, thus avoiding packet loss due to buffer overflow.

Key Points of Flow Control:

  • Receiver Buffer Management: Relies on the receiver’s available buffer space.
  • Window Mechanism: TCP uses a ‘receiver window’ advertised by the receiver to inform the sender of available buffer.
  • Sender Rate Adjustment: Sender adjusts its transmission rate based on the receiver’s advertised window.
  • Preventing Data Loss: Avoids packet loss that would occur if the receiver’s buffers filled up.
  • Peer-to-Peer Control: It’s a local mechanism between the two communicating ends of a TCP connection.
Mechanism (TCP’s Sliding Window Flow Control):
  • Receiver Window: The receiver indicates how much buffer space it has available (the ‘Receiver Window’ or ‘rwnd’ size) in the ACK segments it sends back.
  • Sender’s Window: The sender maintains its own ‘send window’, whose size is capped by the smallest of the Congestion Window (cwnd) and the Receiver Window (rwnd). This effectively limits the amount of unacknowledged data the sender can have in transit.
  • Zero Window: If the receiver’s buffer fills up, it can advertise a zero window, forcing the sender to stop transmitting data until buffer space becomes available.
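The interaction of rwnd, cwnd, and unacknowledged in-flight data condenses into a single rule. A sketch with made-up byte counts:

```python
def usable_window(rwnd: int, cwnd: int, bytes_in_flight: int) -> int:
    """Bytes the sender may still transmit: min(rwnd, cwnd) minus unACKed data."""
    return max(0, min(rwnd, cwnd) - bytes_in_flight)

# cwnd is the tighter limit here; 4096 of 8192 bytes are already in flight.
print(usable_window(rwnd=16384, cwnd=8192, bytes_in_flight=4096))   # 4096

# Zero window advertised by the receiver: the sender must stop entirely.
print(usable_window(rwnd=0, cwnd=8192, bytes_in_flight=0))          # 0
```

The `min` captures the key idea: flow control (rwnd) and congestion control (cwnd) each cap the sender independently, and the stricter one wins.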
Example:

Imagine filling a bottle with water. If the tap is running too fast (sender), and the bottle (receiver’s buffer) is small, water will overflow. Flow control involves the bottle telling the tap to slow down, or the tap only opening fully when there’s space.

Applications:
  • ✨ Any TCP-based data transfer (web browsing, file transfers, email).
  • ✨ Ensuring reliable communication between devices with differing processing speeds.
  • ✨ Preventing application layer processes from dropping data due to full buffers.
Advantages:
  • 👍 Prevents data loss due to receiver buffer overflow.
  • 👍 Adapts to varying processing capabilities of receivers.
  • 👍 Ensures optimal utilization of receiver resources without overwhelming it.
Disadvantages:
  • 👎 Can reduce throughput if the receiver frequently advertises small windows.
  • 👎 Only controls flow between two communicating end hosts, not network congestion.
  • 👎 Vulnerable to malicious advertising of false window sizes.

Define: Error Control (त्रुटि नियंत्रण)

Error control (त्रुटि नियंत्रण) is a mechanism primarily implemented at the Transport Layer (by TCP) to ensure reliable data delivery from the source application process to the destination application process. It handles lost segments, corrupted segments, out-of-order segments, and duplicate segments through the use of acknowledgments, sequence numbers, timers, and retransmission strategies.

Key Points of Error Control:

  • Reliable Delivery: Guarantees that all segments sent are ultimately received, correctly, and in order.
  • Loss/Corruption Handling: Detects and recovers from segments lost or corrupted during transmission (e.g., due to noisy links or network congestion).
  • Duplication & Ordering: Handles duplicate segments and ensures that segments are delivered to the application layer in their original sequence.
  • ARQ (Automatic Repeat Request): TCP uses ARQ protocols that involve acknowledgments, timers, and retransmissions.
  • Sequence Numbers & ACKs: Fundamental tools for managing segment order and confirming reception.
Mechanisms (TCP’s Reliable Data Transfer):
  • Sequence Numbers: TCP assigns a unique sequence number to each byte of data it sends. Segments contain the sequence number of their first byte. This ensures proper reassembly at the receiver.
  • Acknowledgments (ACKs): The receiver sends acknowledgment (ACK) segments back to the sender, indicating the sequence number of the next byte it expects to receive. ACKs are cumulative, confirming receipt of all bytes up to that number.
  • Timers and Retransmission: The sender sets a timer for each segment it sends. If an ACK is not received before the timer expires, the segment is presumed lost and retransmitted.
  • Duplicate ACKs: If a sender receives three duplicate ACKs for the same byte, it infers that the segment immediately following the acknowledged one was lost and retransmits it without waiting for a timeout (fast retransmit).
  • Checksum: TCP calculates a checksum over its header and data. If the checksum fails at the receiver, the segment is silently dropped, leading to a timeout and retransmission by the sender.
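The checksum mechanism above is the Internet checksum of RFC 1071: a 16-bit ones'-complement sum over 16-bit words. A minimal implementation — the 4-byte sample input is arbitrary:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum over 16-bit words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF

chk = internet_checksum(b"\x45\x00\x00\x3c")       # arbitrary sample bytes
print(hex(chk))
# A receiver verifies by checksumming data + checksum: the result must be 0.
print(internet_checksum(b"\x45\x00\x00\x3c" + chk.to_bytes(2, "big")))
```

The verification property in the last line is what lets the receiver silently drop a corrupted segment, triggering the sender's timeout-and-retransmit path.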
Example:

When you download a file over TCP, if a part of the file (a segment) gets corrupted or lost during transmission (e.g., due to network interference), TCP’s error control mechanisms ensure that the sender detects this (via no ACK or corrupted checksum/NAK) and retransmits that missing segment until it is successfully received and acknowledged. The application doesn’t see any missing bytes.

Applications:
  • ✨ File downloads and uploads (FTP).
  • ✨ Web page loading (HTTP/HTTPS).
  • ✨ Email transfer (SMTP/POP3/IMAP).
  • ✨ Secure Shell (SSH) and Telnet sessions.
  • ✨ Online banking and financial transactions.
Advantages:
  • 👍 Guarantees complete, accurate, and in-order delivery of all data bytes to the application.
  • 👍 Makes the underlying network’s unreliability transparent to higher layers and applications.
  • 👍 Crucial for applications where data integrity is paramount.
Disadvantages:
  • 👎 Adds significant overhead (acknowledgments, retransmissions, timers) and latency compared to unreliable protocols.
  • 👎 Not suitable for applications that prioritize very low delay over perfect reliability (e.g., live video, VoIP).
  • 👎 Retransmissions consume network bandwidth, which can contribute to congestion if errors are frequent.

TCP Transmission Policy

Define: TCP Transmission Policy

TCP Transmission Policy (टीसीपी ट्रांसमिशन पॉलिसी) refers to the set of rules and algorithms that TCP uses to decide *when* and *how much* data to send, taking into account reliability (acknowledgments, retransmissions), flow control (receiver’s buffer limits), and congestion control (network’s capacity limits). It determines the overall sending behavior of TCP, balancing efficiency, reliability, and network fairness.

Key Points of TCP Transmission Policy:

  • Reliability: Ensures all data is delivered and acknowledged.
  • Flow Control: Avoids overwhelming the receiver.
  • Congestion Control: Adapts to network conditions to prevent congestion collapse.
  • Sliding Window based: Uses a sliding window mechanism, limited by both receiver’s advertised window and congestion window.
  • Dynamic: The sending rate is continuously adjusted based on network feedback.
Key Aspects of TCP Transmission Policy:
  • 1. Sliding Window (Sender’s Window Management):
    • ● TCP’s sender maintains a “send window” that defines the range of bytes that can be transmitted. The size of this window dynamically adjusts based on:
      • ● The receiver’s advertised window (for flow control – how much receiver can buffer).
      • ● The congestion window (for congestion control – how much network can handle).
    • ● The actual window size used is the minimum of these two. This ensures the sender does not exceed either the receiver’s capacity or the network’s capacity.
  • 2. Acknowledgment (ACK) Strategy:
    • ● TCP relies on cumulative acknowledgments, meaning an ACK for sequence number N implicitly acknowledges all bytes up to N-1.
    • ● This allows for efficient confirmation of multiple segments at once, improving throughput.
    • ● Modern TCP versions also use ‘Selective Acknowledgments (SACK)’ to indicate specifically which segments were received, further optimizing retransmissions.
  • 3. Retransmission Strategy (Timeout and Fast Retransmit):
    • ● TCP uses timers for each segment. If an acknowledgment for a segment is not received before its timer expires, the segment is considered lost and retransmitted.
    • Fast Retransmit: If the sender receives three duplicate acknowledgments (meaning the receiver got subsequent data but is waiting for a missing segment), it immediately retransmits the missing segment without waiting for a timeout, speeding up recovery from single packet losses.
  • 4. Nagle’s Algorithm:
    • ● This algorithm aims to reduce the number of small segments sent over the network, thereby improving efficiency, especially for interactive applications.
    • ● If there’s unacknowledged data in flight, Nagle’s Algorithm holds small new data from the application until either the previously sent data is acknowledged, or a full segment (MSS) can be sent.
    • Impact: Reduces congestion but can sometimes introduce small delays for interactive traffic.
  • 5. Delayed ACK (Acknowledgment) Algorithm:
    • ● To reduce the number of ACKs sent, TCP’s delayed ACK algorithm specifies that a receiver does not immediately send an ACK for every incoming segment. Instead, it waits for a short period (e.g., 50-200 ms) to see if it can “piggyback” the ACK with an outgoing data segment for the sender.
    • ● If no outgoing data becomes available within that period, or a second full-sized segment arrives (TCP acknowledges at least every second segment), the ACK is sent on its own.
    • Impact: Improves network efficiency by combining ACKs with data or other ACKs, but can slightly increase latency in some cases.
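Nagle's decision rule from point 4 fits in one function. A simplified sketch that ignores corner cases such as the `TCP_NODELAY` socket option that disables the algorithm:

```python
def nagle_should_send(data_len: int, mss: int, unacked_bytes: int) -> bool:
    """Nagle's rule: send now only for a full MSS, or when nothing is in flight."""
    return data_len >= mss or unacked_bytes == 0

print(nagle_should_send(data_len=1460, mss=1460, unacked_bytes=500))  # full segment
print(nagle_should_send(data_len=10,   mss=1460, unacked_bytes=0))    # pipe is idle
print(nagle_should_send(data_len=10,   mss=1460, unacked_bytes=500))  # hold & coalesce
```

The third case is the interesting one: the small write is buffered and coalesced with later writes until the outstanding data is acknowledged, trading a little latency for far fewer tiny segments.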
Applications:
  • ✨ Core functionality of all TCP-based Internet applications (Web, FTP, Email, SSH).
  • ✨ Managing data transfer on high-latency or high-loss networks.
  • ✨ Implementing network applications that require guaranteed reliability.
  • ✨ Basis for file sharing and large data transfers over the Internet.
  • ✨ Enhancing the user experience for applications requiring stable connectivity.
Advantages:
  • 👍 Ensures high reliability, ordered delivery, and absence of data loss or duplication.
  • 👍 Dynamically adapts sending rate based on network conditions (congestion control).
  • 👍 Optimizes channel utilization through pipelining and efficient acknowledgment schemes.
Disadvantages:
  • 👎 Introduces latency due to delays in acknowledgments and potential retransmissions.
  • 👎 Complexity of implementation and management for these various intertwined policies.
  • 👎 Nagle’s algorithm and delayed ACKs can sometimes impact real-time interactive performance.

Principles of Congestion Control

Define: Principles of Congestion Control

Principles of Congestion Control (भीड़ नियंत्रण के सिद्धांत) refer to the fundamental strategies and algorithms used by transport layer protocols (primarily TCP) to manage and prevent network congestion. Congestion occurs when too many data packets are trying to use a network segment’s capacity, leading to router buffer overflows, packet loss, and degraded network performance. The goal of congestion control is to regulate the sender’s data injection rate to match the network’s current capacity, ensuring network stability and fairness among users.

Key Principles of Congestion Control:

  • Resource Allocation: How to fairly allocate limited network resources (bandwidth, buffer space) among competing data flows.
  • Detection: How to detect the onset of congestion (e.g., via packet loss, increased RTT).
  • Reaction/Prevention: How to reduce the sending rate once congestion is detected (reaction), or predictively avoid it (prevention).
  • Fairness: Ensure all flows get a fair share of network bandwidth.
  • Efficiency: Maximize network throughput while maintaining stability.
Goals of Congestion Control:
  • Avoid Congestion: Implement strategies to prevent congestion from occurring in the first place.
  • Recover from Congestion: If congestion occurs, gracefully reduce traffic to bring the network back to normal operation.
  • Efficiency: Maximize network throughput and resource utilization.
  • Fairness: Ensure all competing flows receive an equitable share of the available bandwidth.
  • Rapid Adaptation: Respond quickly to dynamic changes in network conditions.
High-Level Approaches to Congestion Control:
1. End-to-End Congestion Control (एंड-टू-एंड भीड़ नियंत्रण)

Define: End-to-end congestion control (TCP primarily) is performed by the end hosts themselves (sender and receiver), without direct assistance from intermediate network devices like routers. Senders infer network congestion by observing implicit signals such as packet loss (due to timeouts or duplicate ACKs) or increasing Round Trip Times (RTTs) for acknowledgments, and then adjust their transmission rate accordingly.

  • No Router Assistance: Routers do not explicitly tell hosts about congestion.
  • Implicit Signals: Hosts infer congestion from packet loss, duplicate ACKs, RTT changes.
  • Sender-Controlled: Senders adjust their transmission rates autonomously.
  • Common for TCP: The core mechanism of TCP congestion control (Additive Increase, Multiplicative Decrease).
  • Reactive: Tends to react to congestion rather than prevent it proactively.
Mechanism:

  • Congestion Window: The sender maintains a ‘congestion window’ (cwnd) that limits data in flight.
  • Increase Strategy: The sender probes for more available bandwidth.
  • Decrease Strategy: The sender reduces its window size when congestion is detected.
  • Retransmissions: Indicate packet loss, leading to a window reduction.
  • Timeout/Duplicate ACKs: Act as key signals for inferring congestion.
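The increase/decrease policy above can be sketched as a few lines of Python. This is an illustrative model only (function names are hypothetical; real TCP stacks implement this inside the kernel):

```python
# Minimal AIMD (Additive Increase, Multiplicative Decrease) sketch.
# Window is counted in segments (MSS units) for simplicity.

MSS = 1

def on_rtt_without_loss(cwnd):
    """Additive increase: probe for more bandwidth, +1 MSS per RTT."""
    return cwnd + MSS

def on_loss_detected(cwnd):
    """Multiplicative decrease: halve the window when loss signals congestion."""
    return max(MSS, cwnd // 2)

cwnd = 10
cwnd = on_rtt_without_loss(cwnd)   # 11: no loss this RTT, keep probing
cwnd = on_loss_detected(cwnd)      # 5: loss inferred -> halve the window
print(cwnd)
```

The sawtooth pattern of TCP throughput comes directly from alternating these two rules.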

Applications:
  • ✨ TCP’s ubiquitous congestion control across the entire Internet.
  • ✨ Ensuring stable operation of web browsing, file transfers, and email.
  • ✨ Foundation for ensuring network stability even in an inherently best-effort IP environment.
Advantages:
  • 👍 Highly scalable; works globally without requiring changes to millions of routers.
  • 👍 Flexible and adaptive to varying network conditions without central control.
  • 👍 Robust in the face of diverse traffic patterns.
Disadvantages:
  • 👎 Reacts to congestion rather than preventing it (leading to some packet loss before reduction).
  • 👎 May not always achieve optimal performance for all applications due to its general nature.
  • 👎 Performance can be poor for highly lossy networks.
2. Network-Assisted Congestion Control (नेटवर्क-सहायता भीड़ नियंत्रण)

Define: Network-assisted congestion control involves active participation from intermediate network devices (like routers and switches) to assist end-hosts in managing congestion. Routers can explicitly signal congestion to senders, or use scheduling and discarding algorithms to manage traffic proactively. This approach aims for more explicit and often faster congestion avoidance.

  • Explicit Feedback: Routers actively send signals to senders about congestion status.
  • Active Queue Management (AQM): Routers proactively drop packets or mark them when queues are building up.
  • Router Discarding/Scheduling: Routers manage packet queues (e.g., RED – Random Early Detection) and scheduling policies (e.g., fair queuing).
  • More Proactive: Aims to avoid congestion before buffer overflows occur.
  • Better Resource Control: Allows for finer control over network resources by the infrastructure.
Mechanism:

Routers may use:
  • Explicit Congestion Notification (ECN): The router marks packets when it detects incipient congestion (queues filling) rather than dropping them; the receiver relays this mark to the sender, which reduces its rate.
  • Backward Explicit Congestion Notification (BECN): Routers send explicit messages directly to senders to slow down.
  • Fair Queueing: The router gives each flow a fair share of bandwidth, preventing greedy flows from monopolizing the link.

Example:

A router configured with RED (Random Early Detection) starts dropping packets randomly as its queue length increases, even before the queue is full. This subtly signals congestion to senders (TCP treats loss as a congestion signal), leading them to reduce their rates earlier and preventing full buffer overflows and global synchronization problems.
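The core of RED is a drop probability that ramps up with the average queue length. A minimal sketch (threshold values are illustrative; real routers tune these per interface):

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """RED drop probability as described above:
    below min_th never drop; between the thresholds ramp linearly
    up to max_p; at or above max_th always drop."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(5,  10, 30, 0.1))   # queue short: no drops
print(red_drop_probability(20, 10, 30, 0.1))   # halfway up the ramp
print(red_drop_probability(35, 10, 30, 0.1))   # queue past max_th: drop all
```

Because drops begin gradually, different TCP flows back off at different times instead of all at once.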

Applications:
  • ✨ Enterprise networks with advanced QoS (Quality of Service) requirements.
  • ✨ ISPs implementing active queue management (e.g., RED gateways) to improve overall network performance.
  • ✨ Networks where specific performance guarantees or fairness levels are critical.
  • ✨ Data centers and cloud infrastructure for optimized traffic management.
  • ✨ Certain types of multimedia or real-time communication where explicit congestion signals are beneficial.
Advantages:
  • 👍 Can achieve more efficient congestion avoidance, leading to less packet loss.
  • 👍 Allows for faster and more precise responses to network congestion.
  • 👍 Enables network administrators to implement specific QoS policies and fairness.
Disadvantages:
  • 👎 Requires routers and other network devices to be explicitly configured and upgraded.
  • 👎 Can be complex to implement and manage across large, distributed networks.
  • 👎 End hosts must be able to understand and react to explicit congestion signals.

TCP Congestion Control

Define: TCP Congestion Control

TCP Congestion Control (टीसीपी भीड़ नियंत्रण) is a set of sophisticated algorithms used by the Transmission Control Protocol to prevent network congestion (packet overload in routers and links). It achieves this by dynamically adjusting the sender’s transmission rate (its ‘congestion window’) based on inferred network conditions, primarily using packet loss and Round Trip Time (RTT) as signals to prevent network collapse and ensure fair bandwidth sharing among all TCP flows.

Key Principles of TCP Congestion Control:

  • Inferring Congestion: TCP assumes that packet loss (via timeout or duplicate ACKs) or significantly increased RTT indicates network congestion.
  • Congestion Window (cwnd): The sender maintains a variable ‘congestion window’ (cwnd), which limits the number of unacknowledged bytes it can have in flight, independently of the receiver’s window.
  • Rate Adjustment: TCP’s sending rate is effectively the minimum of the congestion window and the receiver’s advertised window.
  • Fairness: Aims to distribute bandwidth fairly among all active TCP connections.
  • Self-regulating: Works autonomously on each TCP connection, without central network control.
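The rate-adjustment principle above (the sending rate is bounded by both windows) reduces to a single `min`. A tiny sketch with illustrative byte values:

```python
def effective_window(cwnd, rwnd):
    """TCP's usable send window: limited by BOTH the congestion
    window (cwnd) and the receiver's advertised window (rwnd)."""
    return min(cwnd, rwnd)

print(effective_window(cwnd=32_000, rwnd=16_000))  # receiver is the bottleneck
print(effective_window(cwnd=8_000,  rwnd=64_000))  # network is the bottleneck
```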

Phases/Algorithms of TCP Congestion Control (Traditional TCP Tahoe/Reno):

1. Slow Start (धीमी शुरुआत)

Define: Slow Start is the initial phase of TCP’s congestion control. When a TCP connection begins, the sender’s congestion window (cwnd) is initialized to a small value (typically 1 or 2 MSS – Maximum Segment Size). The cwnd then exponentially increases by 1 MSS for every acknowledgment (ACK) received (doubling approximately every RTT), allowing TCP to rapidly probe for available network bandwidth.

  • Exponential Growth: cwnd doubles per RTT (grows by MSS per ACK).
  • Initial Probing: Aims to quickly find out how much bandwidth is available without causing immediate congestion.
  • ssthresh: A ‘slow start threshold’ (ssthresh) variable limits this exponential growth phase.
  • Rapid Expansion: Allows the TCP connection to ramp up its sending rate very quickly.
  • Per-ACK Increase: cwnd increases after each segment is acknowledged.
Example:

If initial cwnd = 1 MSS: send 1 segment. Get ACK -> cwnd = 2 MSS. Send 2 segments. Get ACKs -> cwnd = 4 MSS. Send 4 segments… This continues until cwnd reaches `ssthresh` (slow start threshold) or a loss occurs.
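The doubling in the example above can be traced in a few lines (the `ssthresh` value of 16 is illustrative):

```python
# Trace the exponential growth of cwnd during Slow Start (values in MSS).
cwnd, ssthresh = 1, 16
trace = []
while cwnd < ssthresh:
    trace.append(cwnd)
    cwnd *= 2          # cwnd doubles roughly every RTT (+1 MSS per ACK)
trace.append(cwnd)     # growth is capped once cwnd reaches ssthresh
print(trace)           # [1, 2, 4, 8, 16]
```

After the last step the connection switches to Congestion Avoidance and grows linearly instead.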

Figure: Slow Start Phase of TCP Congestion Control.

Applications:
  • ✨ Every new TCP connection initiation for web pages, file downloads, etc.
  • ✨ Re-initiation of a TCP connection after a long timeout.
  • ✨ Initial phase of data transfer for any TCP-based application.
Advantages:
  • 👍 Allows for rapid initial probing of available network bandwidth.
  • 👍 Helps a new TCP flow quickly ramp up to a substantial sending rate.
  • 👍 Essential for efficiently starting short-lived TCP connections.
Disadvantages:
  • 👎 Can be aggressive and might contribute to initial network congestion if many flows start simultaneously.
  • 👎 Does not work well if the network suffers significant loss during the initial ramp-up.
  • 👎 Ends abruptly once ssthresh is reached or a loss event occurs.
2. Congestion Avoidance (भीड़ से बचाव)

Define: Congestion Avoidance is the phase in TCP congestion control that takes over after Slow Start (when cwnd reaches ssthresh or a loss event occurs). In this phase, the sender increases its congestion window (cwnd) linearly by 1 MSS per Round Trip Time (RTT), or equivalently by `MSS^2 / cwnd` bytes per ACK. This additive increase (the ‘AI’ in AIMD) is a conservative approach designed to avoid congestion by slowly probing for more bandwidth without triggering losses.

  • Linear Growth: cwnd increases by 1 MSS per RTT.
  • After Slow Start: Kicks in when cwnd > ssthresh.
  • Additive Increase: Slow, cautious increase in sending rate.
  • Probes for Loss: Slowly increases rate, waiting for loss as an explicit signal for network capacity limits.
  • Resilience to Minor Losses: Designed to maintain throughput even with some minor packet loss.
Example:

If cwnd = 10 MSS. For the next RTT, 10 segments are sent. If all ACKs received, cwnd becomes 11 MSS for the next RTT. This slow, steady increase helps avoid overflowing network buffers prematurely.

Applications:
  • ✨ During the main data transfer phase of most TCP connections.
  • ✨ Managing stable long-lived TCP flows on the Internet.
  • ✨ Essential for preventing congestion collapse in highly utilized networks.
  • ✨ Ensuring fairness among multiple competing TCP flows.
Advantages:
  • 👍 Much less aggressive than Slow Start, preventing rapid onset of congestion.
  • 👍 Balances efficiency (slowly increasing throughput) with stability (avoiding collapse).
  • 👍 Enables TCP to operate robustly under various network loads.
Disadvantages:
  • 👎 Slower to increase sending rate than Slow Start, especially on underutilized links.
  • 👎 Requires continuous monitoring of packet loss to trigger congestion reaction.
  • 👎 Can be inefficient in environments with very sparse packet loss (waits for cumulative loss).
3. Fast Retransmit and Fast Recovery (तेज रीट्रांसमिट और तेज रिकवरी)

Define: Fast Retransmit and Fast Recovery are TCP congestion control enhancements designed to improve performance by detecting packet loss more quickly and recovering more gracefully than waiting for a full timeout. These mechanisms react specifically to the reception of duplicate acknowledgments (DUP ACKs), which imply a packet loss but also suggest that other data is still arriving, indicating the network is not severely congested.

  • Faster Loss Detection: Detects packet loss before retransmission timer expires.
  • Duplicate ACKs: Triggered when sender receives three duplicate ACKs for the same segment.
  • Fast Retransmit: Immediately resends the missing segment (indicated by the DUP ACK) without waiting for a timeout.
  • Fast Recovery: A phase entered after Fast Retransmit. cwnd is reduced more gracefully (multiplicative decrease) than Slow Start’s aggressive reduction, aiming to continue data flow efficiently.
  • Optimizes Recovery: Improves throughput by avoiding unnecessary delays.
How it Works:

1. Fast Retransmit: Sender receives three duplicate ACKs for sequence number N. It immediately retransmits segment N.
2. Fast Recovery: The sender sets `ssthresh` to half the current `cwnd`, then sets `cwnd` to `ssthresh + 3 * MSS` and enters this phase. For each additional duplicate ACK, `cwnd` is incremented by 1 MSS. When a new ACK arrives (acknowledging the retransmitted segment and possibly more), `cwnd` is set back to `ssthresh` and Congestion Avoidance resumes its linear growth. If a timeout occurs instead, TCP falls back to Slow Start from scratch.
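The two steps above can be sketched as small state-transition functions (TCP Reno style; values in MSS, function names are illustrative):

```python
def on_triple_dup_ack(cwnd):
    """Fast Retransmit fires: halve ssthresh, then inflate cwnd
    by 3 MSS for the three segments the dup ACKs prove have left
    the network."""
    ssthresh = max(2, cwnd // 2)
    cwnd = ssthresh + 3
    return cwnd, ssthresh

def on_new_ack(ssthresh):
    """Recovery ends: deflate cwnd back to ssthresh and resume
    Congestion Avoidance."""
    return ssthresh

cwnd = 16
cwnd, ssthresh = on_triple_dup_ack(cwnd)   # cwnd = 11, ssthresh = 8
cwnd = on_new_ack(ssthresh)                # cwnd = 8
print(cwnd, ssthresh)
```

Note how the window drops to half its old value rather than collapsing to 1 MSS, which is what a timeout (and a return to Slow Start) would cause.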

Applications:
  • ✨ Standard in almost all modern TCP implementations.
  • ✨ Improves throughput on high-bandwidth, high-latency links prone to occasional packet loss.
  • ✨ Essential for efficient file transfers and multimedia streaming.
  • ✨ Enhances the overall robustness of TCP over the Internet.
  • ✨ Provides a faster recovery path than simple timeout retransmission.
Advantages:
  • 👍 Significantly reduces the recovery time from packet loss events.
  • 👍 Prevents drastic drops in throughput that would occur with full timeouts.
  • 👍 Allows network to utilize available bandwidth more effectively during recovery.
Disadvantages:
  • 👎 Does not apply to all loss scenarios (e.g., if multiple packets are lost such that no triple duplicate ACKs are generated, a timeout is still needed).
  • 👎 Can be less effective on very short flows that may not generate enough duplicate ACKs.
  • 👎 Adds complexity to the TCP congestion control algorithm implementation.

Figure: TCP Congestion Control’s Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery phases.


Quality of Service (QoS)

Define: Quality of Service (QoS)

Quality of Service (QoS – सेवा की गुणवत्ता) refers to a set of technologies and mechanisms used in networks to guarantee a certain level of performance for specific types of network traffic, rather than treating all traffic equally. It’s about managing network resources (like bandwidth, latency) to prioritize critical applications (e.g., voice, video) over less critical ones (e.g., file downloads), ensuring predictable and consistent delivery for latency-sensitive applications.

Key Principles of QoS:

  • Traffic Prioritization: Assigning higher priority to certain types of data (e.g., VoIP packets over email).
  • Resource Management: Managing bandwidth, buffer space, and processing time on routers/switches.
  • Performance Guarantees: Aiming to meet specific performance targets (e.g., max latency for VoIP, min bandwidth for video).
  • Congestion Avoidance: Preventing congestion that would degrade critical traffic.
  • User Experience: Crucial for ensuring satisfactory performance for real-time and business-critical applications.
QoS Parameters (What QoS aims to guarantee or improve):
  • 1. Bandwidth (बैंडविड्थ): The maximum data transfer rate of a network link. QoS can guarantee a minimum bandwidth allocation for critical services.
  • 2. Latency (लेटेंसी – Delay): The time it takes for a data packet to travel from source to destination. QoS aims to minimize latency for real-time applications.
  • 3. Jitter (जिटर): The variation in packet delay (inconsistent arrival times). QoS minimizes jitter to ensure smooth playback for streaming media and VoIP.
  • 4. Packet Loss (पैकेट हानि): The percentage of packets that fail to reach their destination. QoS aims to minimize packet loss for sensitive applications.
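Of the parameters above, jitter is the least intuitive; one common way to quantify it is the average variation between consecutive packet delays. A small sketch with illustrative sample delays:

```python
# Computing jitter (variation in one-way delay) from per-packet
# delays in milliseconds — sample values are illustrative.
delays_ms = [20, 24, 19, 30, 22]

# Jitter here: mean absolute difference between consecutive delays.
diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter = sum(diffs) / len(diffs)
print(jitter)  # 7.0 ms of average delay variation
```

A QoS mechanism targeting VoIP would try to drive this number down (e.g., below a few tens of milliseconds) even if the absolute latency stays the same.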

QoS Mechanisms/Models:

1. IntServ (Integrated Services – इंट्सर्व)

Define: IntServ (Integrated Services) is a QoS model that aims to provide explicit guarantees for specific applications by establishing a dedicated, reserved amount of network resources (bandwidth, buffer space) along the entire path from source to destination before data transmission begins. It is analogous to circuit switching at the network layer.

  • Hard Guarantees: Provides strict, quantitative guarantees for bandwidth, latency, and jitter.
  • Resource Reservation: Resources are explicitly reserved for each individual flow.
  • RSVP Protocol: Relies on the Resource ReSerVation Protocol (RSVP) for signaling reservations along the path.
  • Stateful Routers: Routers maintain ‘state’ for each individual reserved flow, leading to scalability issues.
  • Analogous to Circuit Switching: Creates virtual circuits with guaranteed resources.
How it Works:

1. An application sends a resource request (via RSVP) specifying its QoS requirements (e.g., X bandwidth, Y latency).
2. Each router along the path attempts to reserve the requested resources. If successful at all hops, the connection is established.
3. Once reserved, the router provides guaranteed resources to that flow. If any router cannot reserve, the connection setup fails.
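The all-or-nothing admission decision in the steps above can be sketched as follows (hop capacities and the function name are illustrative; real IntServ signaling runs over RSVP):

```python
def reserve_path(available_bw_per_hop, requested_bw):
    """IntServ-style admission control: the reservation commits only
    if EVERY router on the path has the requested bandwidth spare."""
    if all(avail >= requested_bw for avail in available_bw_per_hop):
        # Commit: subtract the reservation at every hop.
        return [avail - requested_bw for avail in available_bw_per_hop]
    return None  # a single failing hop aborts the whole reservation

path = [100, 50, 80]           # spare Mbps at each router on the path
print(reserve_path(path, 40))  # all hops can reserve -> commit
print(reserve_path(path, 60))  # middle router lacks capacity -> fail
```

The per-flow bookkeeping this implies at every router is exactly the scalability problem noted in the disadvantages below.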

Applications:
  • ✨ Mission-critical real-time applications in smaller, controlled networks.
  • ✨ Voice over IP (VoIP) and Video conferencing applications (in controlled environments).
  • ✨ Tactical military communication systems with strict performance needs.
  • ✨ Scientific grid computing applications requiring guaranteed resource allocation.
Advantages:
  • 👍 Provides the strongest, explicit guarantees of QoS (e.g., definite bandwidth, maximum latency).
  • 👍 Ensures predictable and consistent performance for prioritized applications.
  • 👍 Prevents congestion by pre-allocating resources.
Disadvantages:
  • 👎 Does not scale well for large networks like the Internet due to per-flow state management at every router (high overhead).
  • 👎 Complex to implement and manage across vast network infrastructure.
  • 👎 High overhead in signaling and managing individual resource reservations.
2. DiffServ (Differentiated Services – डिफ्सर्व)

Define: DiffServ (Differentiated Services) is a QoS model designed for scalability in large networks like the Internet. Instead of per-flow guarantees, it classifies network traffic into a limited number of ‘behavior aggregates’ (classes of service). Routers then provide different levels of service (different forwarding treatments) to these aggregates based on the ‘Differentiated Services Code Point’ (DSCP) marked in the IP packet header.

  • Scalable: Designed for large, IP-based networks; does not maintain per-flow state at core routers.
  • Traffic Aggregates: Classifies traffic into a few forwarding classes (e.g., Expedited Forwarding, Assured Forwarding).
  • DSCP Marking: Packets are marked with a 6-bit DSCP value in the IP header to indicate their service class.
  • Edge-to-Edge: QoS differentiation primarily happens at network edges; core routers only differentiate based on DSCP.
  • No Hard Guarantees: Provides ‘soft’ or probabilistic QoS rather than strict guarantees (unlike IntServ).
How it Works:

1. Classification & Marking: At the network edge, incoming packets are classified based on policies (e.g., source, application, port) and then marked with a DSCP value in their IP header.
2. Per-Hop Behavior (PHB): Each router (including core routers) provides a specific ‘Per-Hop Behavior’ (PHB) to packets based on their DSCP marking (e.g., Expedited Forwarding (EF) for low latency/jitter, Assured Forwarding (AF) for guaranteed bandwidth).
3. Congestion Management: Routers manage queues and allocate bandwidth differently based on DSCP values, prioritizing critical traffic.
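Step 1 (marking) can even be done from user space. A minimal Linux-oriented sketch, assuming the Expedited Forwarding code point (DSCP 46, defined in RFC 3246); the DSCP occupies the upper 6 bits of the IP TOS byte:

```python
import socket

DSCP_EF = 46                  # Expedited Forwarding code point
tos = DSCP_EF << 2            # shift DSCP into the TOS byte -> 0xB8

# Mark all packets sent on this UDP socket with the EF code point.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(hex(tos))               # 0xb8
sock.close()
```

Whether routers along the path honor the mark depends entirely on the DiffServ policies the network operator has configured.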

Applications:
  • ✨ Main QoS model for Internet Service Providers (ISPs) to offer tiered services.
  • ✨ Large enterprise networks with multiple applications requiring different service levels.
  • ✨ VoIP and video conferencing deployment over wide area networks.
  • ✨ Data center networking for differentiating storage, management, and application traffic.
  • ✨ Cloud computing services that offer varying network performance tiers.
Advantages:
  • 👍 Highly scalable; well-suited for large, complex networks like the Internet.
  • 👍 Does not require routers to maintain per-flow state, reducing overhead.
  • 👍 Provides flexible and adaptable service differentiation without excessive signaling.
Disadvantages:
  • 👎 Provides ‘soft’ QoS (probabilistic assurance) rather than strict, guaranteed service levels.
  • 👎 Requires careful classification and marking policies at the network edge.
  • 👎 Less precise control than IntServ, as differentiation is applied to aggregates, not individual flows.

Figure: Comparison of IntServ and DiffServ Models.
