Transport Mechanisms
If you are familiar
with TCP and UDP, you probably know that both work at the Transport layer and
that TCP is a reliable service while UDP is not. This gives application
developers a choice between the two protocols when working with the TCP/IP
suite. The Transport layer is responsible for providing mechanisms for
multiplexing upper-layer applications, establishing sessions, and tearing down
virtual circuits. It also hides the details of any network-dependent
information from the higher layers by providing transparent data transfer.
Flow Control
Data integrity is
ensured at the Transport layer by maintaining flow control and by allowing
users to request reliable data transport between systems. Flow control prevents
a sending host on one side of the connection from overflowing the buffers in
the receiving host—an event that can result in lost data. Reliable data
transport employs a connection-oriented communications session between systems,
and the protocols involved ensure that the following will be achieved:
* The segments delivered are acknowledged
back to the sender upon their reception.
* Any segments not acknowledged are
retransmitted.
* Segments are sequenced back into their
proper order upon arrival at their destination.
* A manageable data flow is maintained in
order to avoid congestion, overloading, and data loss.
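The sequencing guarantee above can be sketched in a few lines. This is an illustrative simulation, not real TCP code: segments are modeled as hypothetical (sequence number, payload) pairs, and sorting by sequence number restores the original byte stream no matter the arrival order.

```python
# Illustrative sketch: reassembling out-of-order segments by sequence number.
# The (seq, payload) tuple format is an assumption for this example.
def reassemble(segments):
    """Sort received (seq, payload) segments back into their proper order."""
    return b"".join(payload for _, payload in sorted(segments))

# Segments may arrive out of order; sequencing restores the original stream.
received = [(2, b"lo "), (1, b"hel"), (3, b"world")]
print(reassemble(received))  # b'hello world'
```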
Fig. 3
Connection-Oriented Communication
In reliable transport
operation, a device that wants to transmit sets up connection-oriented
communication with a remote device by creating a session. The transmitting
device first establishes a connection-oriented session with its peer system
through what is called a three-way handshake. Data is then transferred; when
the transfer finishes, a call termination takes place to tear down the virtual
circuit.
Fig. 3 depicts a
typical reliable session taking place between sending and receiving systems.
Looking at it, you can see that both hosts’ application programs begin by
notifying their individual operating systems that a connection is about to be
initiated. The two operating systems communicate by sending messages over the
network confirming that the transfer is approved and that both sides are ready
for it to take place. After all of this required synchronization takes place, a
connection is fully established and the data transfer begins.
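The synchronization described above can be sketched as a toy simulation of the three-way handshake. The message tuples and state names here are illustrative assumptions, not real TCP internals; the point is only the pattern of SYN, SYN-ACK, and ACK, each acknowledging the other side's initial sequence number (ISN) plus one.

```python
# Toy simulation of the three-way handshake (SYN, SYN-ACK, ACK).
# Message tuples are hypothetical, not real TCP segment formats.
import random

def three_way_handshake():
    client_isn = random.randrange(2**32)           # client's initial seq number
    syn = ("SYN", client_isn)                      # step 1: client -> server

    server_isn = random.randrange(2**32)
    syn_ack = ("SYN-ACK", server_isn, syn[1] + 1)  # step 2: server -> client

    ack = ("ACK", syn_ack[1] + 1)                  # step 3: client -> server
    # Connection established: each side has acknowledged the other's ISN.
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
assert syn_ack[2] == syn[1] + 1   # server acknowledges the client's ISN
assert ack[1] == syn_ack[1] + 1   # client acknowledges the server's ISN
```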
Sometimes during a
transfer, congestion can occur because a high-speed computer is generating data
traffic a lot faster than the network can handle transferring. A bunch of
computers simultaneously sending datagrams through a single gateway or
destination can also botch things up nicely. In the latter case, a gateway or
destination can become congested even though no single source caused the
problem. In either case, the problem is basically akin to a freeway bottleneck:
too much traffic for too small a capacity.
A machine that
receives a flood of datagrams stores them in a memory section called a buffer.
But this buffering action can solve the problem only if the datagrams are part
of a small burst. If the datagram deluge continues, a device's memory will
eventually be exhausted, its flood capacity will be exceeded, and it will react
by discarding any additional datagrams that arrive.
Because of the
transport function, network flood-control systems actually work quite well.
Instead of dumping resources and allowing data to be lost, the transport can
issue a "not ready" indicator to the sender, or source, of the flood. This
mechanism works kind of like a stoplight, signaling the sending device to stop
transmitting segment traffic to its overwhelmed peer. After the peer receiver
processes the segments already in its memory reservoir (its buffer), it sends
out a "ready" transport indicator. When the machine waiting to transmit the
rest of its datagrams receives this "go" indicator, it resumes its
transmission.
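The stoplight behavior can be sketched as a receiver with a bounded buffer that signals "not ready" when the buffer fills and "ready" once it has drained. The class and method names below are illustrative assumptions, not any real library's API.

```python
# Sketch of "ready"/"not ready" transport indicators: a receiver with a
# bounded buffer pauses the sender when full, resumes it after draining.
# All names here are hypothetical, for illustration only.
from collections import deque

class Receiver:
    def __init__(self, capacity):
        self.buffer = deque()
        self.capacity = capacity

    def accept(self, segment):
        """Buffer a segment; signal 'not ready' once the buffer is full."""
        self.buffer.append(segment)
        return "not ready" if len(self.buffer) >= self.capacity else "ready"

    def process_all(self):
        """Drain the buffer, then signal the sender to resume."""
        self.buffer.clear()
        return "ready"

rx = Receiver(capacity=3)
signals = [rx.accept(s) for s in ("seg1", "seg2", "seg3")]
print(signals)           # ['ready', 'ready', 'not ready'] -> sender pauses
print(rx.process_all())  # 'ready' -> sender resumes
```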
A service is considered
connection-oriented if it has the following characteristics:
* A virtual circuit is set up (e.g., a
three-way handshake).
* It uses sequencing.
* It uses acknowledgments.
* It uses flow control.
Windowing
Ideally, data
throughput happens quickly and efficiently. And as you can imagine, it would be
slow if the transmitting machine had to wait for an acknowledgment after
sending each segment. But because there’s time available after the sender
transmits the data segment and before it finishes processing acknowledgments
from the receiving machine, the sender uses the break as an opportunity to
transmit more data. The quantity of data segments (measured in bytes) that the
transmitting machine is allowed to send without receiving an acknowledgment for
them is called a window.
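A simplified sketch of this windowing behavior: with a window of three, the sender transmits up to three segments before stopping to wait for an acknowledgment that covers them. The function below is a hypothetical simulation, not a real sliding-window implementation (it ignores loss and per-segment timers).

```python
# Sketch of windowing: the sender may have up to window_size unacknowledged
# segments in flight before it must stop and wait for an acknowledgment.
# Simplified simulation; function name and structure are illustrative.
def send_with_window(segments, window_size):
    """Group segments into bursts, one burst per acknowledged window."""
    in_flight, bursts = [], []
    for seg in segments:
        in_flight.append(seg)
        if len(in_flight) == window_size:
            # Window full: wait for an ACK covering these segments.
            bursts.append(list(in_flight))
            in_flight.clear()
    if in_flight:                       # flush any trailing partial window
        bursts.append(list(in_flight))
    return bursts

print(send_with_window([1, 2, 3, 4, 5, 6, 7], window_size=3))
# [[1, 2, 3], [4, 5, 6], [7]]
```

With a window of one, the sender waits after every segment; larger windows let more data flow between acknowledgments, which is exactly the throughput gain the paragraph above describes.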
Acknowledgments
Reliable data delivery
ensures the integrity of a stream of data sent from one machine to the other through
a fully functional data link. It guarantees that the data won’t be duplicated
or lost. This is achieved through something called positive acknowledgment with
retransmission, a technique that requires a receiving machine to communicate
with the transmitting source by sending an acknowledgment message back to the
sender when it receives data. The sender documents each segment it sends and
waits for this acknowledgment before sending the next segment. When it sends a
segment, the transmitting machine starts a timer and retransmits if it expires
before an acknowledgment is returned from the receiving end.
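Positive acknowledgment with retransmission can be sketched as follows. This is an illustrative simulation under stated assumptions: the per-segment timer expiry is modeled as the hypothetical `ack_of` callback returning False, and the lossy channel is simulated, not real.

```python
# Sketch of positive acknowledgment with retransmission: the sender resends
# a segment until an ACK comes back, giving up after max_tries attempts.
# ack_of(seq, attempt) is a hypothetical channel callback; returning False
# models the retransmission timer expiring with no ACK received.
def send_reliably(segments, ack_of, max_tries=5):
    """Send each segment in order, retransmitting until acknowledged."""
    delivered = []
    for seq, data in enumerate(segments):
        for attempt in range(max_tries):
            if ack_of(seq, attempt):
                delivered.append((seq, data))
                break
        else:
            raise TimeoutError(f"segment {seq} never acknowledged")
    return delivered

# Simulated channel: the first transmission of every segment is lost,
# so each segment needs exactly one retransmission before its ACK arrives.
lossy_ack = lambda seq, attempt: attempt >= 1
print(send_reliably([b"a", b"b"], lossy_ack))  # [(0, b'a'), (1, b'b')]
```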