***Welcome to ashrafedu.blogspot.com * * * This website is maintained by ASHRAF***

Friday, November 27, 2020

SWITCHING - Circuit, Packet, Message Switching

Switched communication networks are those in which data transferred from source to destination is routed between various intermediate nodes. Switching is the technique by which nodes control or switch data to transmit it between specific points on a network.

In large networks, there can be multiple paths from sender to receiver. The switching technique decides the best route for data transmission and connects systems for one-to-one communication.

Various switching techniques are-


1. Circuit Switching

The circuit switching technique establishes a dedicated path, or channel, between the sender and receiver before data transmission begins. Once established, the path is not released until the connection between the two endpoints terminates.

Circuit switching in a network operates in a similar way as the telephone works. A complete end-to-end path must exist before the communication takes place.

In circuit switching, when a user wants to send data, a request signal is sent to the receiver, which returns an acknowledgment to confirm the availability of a dedicated path. Only after this acknowledgment is received is the data transferred over the dedicated path.

Circuit switching is used in the public telephone network, primarily for voice transmission. Data is transferred at a fixed rate over the dedicated channel.

Communication through circuit switching has 3 phases:

  • Circuit establishment
  • Data transfer
  • Circuit Disconnect
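The three phases above can be sketched as a small state machine (an illustrative model only; the class and method names are made up for this example):

```python
# Minimal sketch of the three circuit-switching phases as explicit states.
# This is illustrative, not a real signaling protocol.

class Circuit:
    def __init__(self, sender, receiver):
        self.sender = sender
        self.receiver = receiver
        self.established = False
        self.log = []

    def establish(self):
        # Phase 1: reserve a dedicated end-to-end path before any data moves.
        self.established = True
        self.log.append(f"circuit {self.sender}->{self.receiver} established")

    def transfer(self, data):
        # Phase 2: data flows only over the already-reserved path.
        if not self.established:
            raise RuntimeError("no dedicated path: establish the circuit first")
        self.log.append(f"sent {data!r}")

    def disconnect(self):
        # Phase 3: release the path so its capacity can be reused.
        self.established = False
        self.log.append("circuit disconnected")

circuit = Circuit("A", "B")
circuit.establish()
circuit.transfer("hello")
circuit.disconnect()
print(circuit.log)
```

Transfer fails unless the circuit has been established first, mirroring the rule that data moves only over a dedicated path.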

Advantages of Circuit Switching:

  • A dedicated channel ensures a continuous, guaranteed transfer
  • Improves the data transmission rate
  • Reduces data loss
  • Reduces delay in the data flow

Disadvantages of Circuit Switching:

  • Establishing a dedicated channel sometimes takes a very long duration of time.
  • The amount of bandwidth required is more for establishing a dedicated channel.
  • Even when the dedicated channel is idle, it cannot be used to carry data from any other source.

2. Packet Switching

The packet switching technique breaks the data down into several packets for more efficient transfer, making use of whatever network resources are vacant. Network devices route each packet toward the destination, where the receiving device collects the packets and reassembles them into the original message.

Packet switching is a switching technique in which the message is not sent in one go; it is divided into smaller pieces that are sent individually.

The message is split into smaller pieces known as packets, and each packet is given a sequence number so its order can be identified at the receiving end. Every packet carries information in its header, such as the source address, destination address, and sequence number.

Packets travel across the network independently, each taking the shortest available path. All the packets are reassembled at the receiving end in the correct order.

If any packet is missing or corrupted, the receiver sends a message asking the sender to resend it.

If all packets arrive correctly and in order, the receiver sends an acknowledgment message.
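The splitting, numbering, and reassembly described above can be sketched as follows (a simplified illustration; real packet headers also carry source and destination addresses and error-detection fields):

```python
import random

# Illustrative sketch: split a message into numbered packets, deliver them
# out of order (as independently routed packets might arrive), and
# reassemble them by sequence number at the receiving end.

def packetize(message, size):
    """Split message into (sequence_number, chunk) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort by sequence number and join the chunks back together."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("packet switching sends pieces independently", 8)
random.shuffle(packets)            # packets may arrive in any order
message = reassemble(packets)
print(message)
```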




There are two approaches to Packet Switching:

Datagram Packet switching:

  • It is a packet switching technique in which each packet, known as a datagram, is treated as an independent entity. Each packet carries information about its destination, and the switch uses this information to forward the packet toward the correct destination.
  • The packets are reassembled at the receiving end in correct order.
  • In Datagram Packet Switching technique, the path is not fixed.
  • Intermediate nodes take the routing decisions to forward the packets.
  • Datagram Packet Switching is also known as connectionless switching.

Virtual Circuit Switching

  • Virtual Circuit Switching is also known as connection-oriented switching.
  • In the case of Virtual circuit switching, a preplanned route is established before the messages are sent.
  • Call request and call accept packets are used to establish the connection between sender and receiver.
  • In this method the path is fixed for the duration of a logical connection.

Advantages of Packet Switching:

  • Cost-effective: In packet switching, switching devices do not require massive secondary storage to hold packets, which keeps costs down. Packet switching is therefore a cost-effective technique.
  • Reliable: If any node is busy, packets can be rerouted, so packet switching provides reliable communication.
  • Efficient: Packet switching is an efficient technique. It does not require any path to be established prior to transmission, and many users can share the same communication channel simultaneously, so available bandwidth is used very efficiently.

Disadvantages of Packet Switching:

  • Packet switching is not suitable for applications that require low delay and high-quality service.
  • The protocols used in packet switching are complex and costly to implement.
  • If the network is overloaded or a packet is corrupted, lost packets must be retransmitted. Unrecovered errors can lead to the loss of critical information.

3. Message Switching

The message switching technique was developed as an alternative to circuit switching, before packet switching was introduced.

Message Switching is a switching technique in which a message is transferred as a complete unit and routed through intermediate nodes at which it is stored and forwarded.

In Message Switching technique, there is no establishment of a dedicated path between the sender and receiver.

The destination address is appended to the message. Message Switching provides a dynamic routing as the message is routed through the intermediate nodes based on the information available in the message.

Message switches are programmed so that they can provide the most efficient routes. Each node stores the entire message and then forwards it to the next node. This type of network is known as a store-and-forward network.

Message switching treats each message as an independent entity. These message switched data networks are also known as a hop-by-hop system.
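The hop-by-hop, store-and-forward behaviour can be illustrated with a short sketch (the node names and route are invented for this example):

```python
# Hedged sketch of store-and-forward message switching: each intermediate
# node stores the *entire* message before forwarding it to the next hop.

def store_and_forward(message, route):
    """Carry the message hop by hop; return the trace of store/forward events."""
    trace = []
    for node, next_node in zip(route, route[1:]):
        trace.append(f"{node} stored {len(message)} bytes")   # whole message buffered
        trace.append(f"{node} forwarded to {next_node}")
    trace.append(f"{route[-1]} delivered message")
    return trace

trace = store_and_forward(b"a complete message, sent as one unit",
                          ["A", "R1", "R2", "B"])
for event in trace:
    print(event)
```

Note how every node on the route must buffer the full message, which is why message switches need large storage and why long delays can accumulate.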



Advantages of Message Switching

  • Data channels are shared among the communicating devices that improve the efficiency of using available bandwidth.
  • Traffic congestion can be reduced because the message is temporarily stored in the nodes.
  • Message priority can be used to manage the network.
  • The size of a message sent over the network can vary; message switching therefore supports messages of unlimited size.

Disadvantages of Message Switching

  • The message switches must be equipped with sufficient storage to enable them to store the messages until the message is forwarded.
  • Long delays can occur due to the storing and forwarding performed by the message switching technique.

Token Ring, Token Bus

Token Ring

Token ring (IEEE 802.5) is a communication protocol in a local area network (LAN) where all stations are connected in a ring topology and pass one or more tokens for channel acquisition. A token is a special frame of 3 bytes that circulates along the ring of stations. A station can send data frames only if it holds a token. The tokens are released on successful receipt of the data frame.

Token Passing Mechanism in Token Ring

If a station has a frame to transmit when it receives a token, it sends the frame and then passes the token to the next station; otherwise it simply passes the token to the next station. Passing the token means receiving the token from the preceding station and transmitting to the successor station. The data flow is unidirectional in the direction of the token passing. In order that tokens are not circulated infinitely, they are removed from the network once their purpose is completed. 

This is shown in the following diagram −
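The token passing rule can be sketched as a simple simulation (this models only the "holder may send" rule, not the IEEE 802.5 frame format or token timers):

```python
# Simplified sketch of token passing on a ring: the token visits stations
# in one direction, and only the current holder may transmit a frame.

def token_ring_round(stations, pending_frames):
    """One circulation of the token; returns the frames actually sent."""
    sent = []
    for station in stations:                 # token travels in ring order
        if pending_frames.get(station):      # holder with queued data sends
            sent.append((station, pending_frames[station].pop(0)))
        # otherwise the station simply passes the token to its successor
    return sent

stations = ["S1", "S2", "S3", "S4"]
queues = {"S2": ["frame-a"], "S4": ["frame-b"]}
sent = token_ring_round(stations, queues)
print(sent)
```

Each station sends at most one queued frame per token visit here; the real standard additionally bounds how long a station may hold the token.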


Token Bus

Token Bus (IEEE 802.4) is a standard for implementing token ring over virtual ring in LANs. The physical media has a bus or a tree topology and uses coaxial cables. A virtual ring is created with the nodes/stations and the token is passed from one node to the next in a sequence along this virtual ring. Each node knows the address of its preceding station and its succeeding station. A station can only transmit data when it has the token. The working principle of token bus is similar to Token Ring.

Token Passing Mechanism in Token Bus

A token is a small message that circulates among the stations of a computer network providing permission to the stations for transmission. If a station has data to transmit when it receives a token, it sends the data and then passes the token to the next station; otherwise, it simply passes the token to the next station. 

This is depicted in the following diagram −


Differences between Token Ring and Token Bus

  • Token Ring: the token is passed over the physical ring formed by the stations and the coaxial cable network. Token Bus: the token is passed along the virtual ring of stations connected to a LAN.
  • Token Ring: the stations are connected in a ring topology, or sometimes a star topology. Token Bus: the underlying topology connecting the stations is either bus or tree.
  • Token Ring is defined by the IEEE 802.5 standard. Token Bus is defined by the IEEE 802.4 standard.
  • Token Ring: the maximum time for a token to reach a station can be calculated. Token Bus: it is not feasible to calculate the time for token transfer.




Wednesday, November 25, 2020

CSMA/CD (carrier sense multiple access/collision detection)

In ETHERNET, CSMA is used on a tapped coaxial cable to which all the communicating devices are connected. On the coaxial cable, in addition to sensing carrier, it is possible for the transceivers to detect collisions. 

This variation of CSMA is referred to as carrier sense multiple access with collision detection (CSMA-CD).

CSMA/CD (carrier sense multiple access/collision detection) is a MAC (media access control) protocol. It defines how network devices respond when two devices attempt to use a data channel simultaneously and encounter a data collision. The CSMA/CD rules define how long the device should wait if a collision occurs.

CSMA/CD is a modification of pure carrier-sense multiple access (CSMA). CSMA/CD is used to improve CSMA performance by terminating transmission as soon as a collision is detected, thus shortening the time required before a retry can be attempted.

The collision detection technology detects collisions by sensing transmissions from other stations. On detection of a collision, the station stops transmitting, sends a jam signal, and then waits for a random time interval before retransmission.

The algorithm of CSMA/CD is:

  • When a frame is ready, the transmitting station checks whether the channel is idle or busy.
  • If the channel is busy, the station waits until the channel becomes idle.
  • If the channel is idle, the station starts transmitting and continually monitors the channel to detect collision.
  • If a collision is detected, the station starts the collision resolution algorithm.
  • If the frame is transmitted without a collision, the station resets its retransmission counters and completes the transmission.

The algorithm of Collision Resolution is:

  • The station continues transmission of the current frame for a specified time along with a jam signal, to ensure that all the other stations detect collision.
  • The station increments the retransmission counters.
  • If the maximum number of retransmission attempts is reached, then the station aborts transmission.
  • Otherwise, the station waits for a backoff period, which is generally a function of the number of collisions, and restarts the main algorithm.
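The backoff step above is commonly binary exponential backoff; the sketch below assumes the classic 10 Mbps Ethernet values (51.2 µs slot time, attempt limit of 16, backoff exponent capped at 10):

```python
import random

# Sketch of CSMA/CD binary exponential backoff, the usual realization of
# "a backoff period which is a function of the number of collisions".
# Slot time and limits follow classic 10 Mbps Ethernet.

SLOT_TIME_US = 51.2    # classic Ethernet slot time in microseconds
MAX_ATTEMPTS = 16      # transmission is aborted after this many attempts

def backoff_slots(collision_count, rng=random):
    """After the n-th collision, wait a random number of slots in
    [0, 2**k - 1], where k = min(n, 10)."""
    k = min(collision_count, 10)
    return rng.randrange(2 ** k)

def backoff_delay_us(collision_count, rng=random):
    return backoff_slots(collision_count, rng) * SLOT_TIME_US

random.seed(1)
for n in (1, 3, 16):
    print(f"after collision {n}: wait {backoff_delay_us(n):.1f} us")
```

Doubling the backoff window after each collision spreads retries out, which is how CSMA/CD shortens the time wasted on repeated collisions.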

This algorithm detects collisions but it does not reduce the number of collisions. It is not appropriate for large networks; performance degrades exponentially when more stations are added.




Carrier Sense Multiple Access (CSMA)

This method was developed to decrease the chance of collisions when two or more stations start sending their signals over the shared medium. Carrier sense multiple access requires that each station first check the state of the medium before sending.

A station senses, or listens, to check whether the shared transmission channel is busy, and transmits only if the channel is not busy. Using CSMA protocols, multiple users or nodes send and receive data through a shared medium, which may be a single cable or optical fiber connecting multiple nodes, or a portion of the wireless spectrum.

The CSMA access modes are:


(i) 1-persistent CSMA

• In this method, a station that wants to transmit data continuously senses the channel to check whether it is idle or busy.

• If the channel is busy, the station waits until it becomes idle.

• When the station detects an idle channel, it immediately transmits the frame with probability 1. Hence it is called 1-persistent CSMA.

• This method has the highest chance of collision because two or more stations may find the channel idle at the same time and transmit their frames.

• When a collision occurs, the stations wait a random amount of time and start all over again.


Drawback of 1-persistent CSMA
• The propagation delay time greatly affects this protocol.

Suppose that just after station 1 begins its transmission, station 2 also becomes ready to send its data and senses the channel. If station 1's signal has not yet reached station 2, station 2 will sense the channel to be idle and begin its own transmission, resulting in a collision.

Even if the propagation delay time were zero, collisions would still occur: if two stations become ready in the middle of a third station's transmission, both will wait until that transmission ends and then begin transmitting exactly simultaneously, which also results in a collision.

(ii) Non-persistent CSMA

• In this scheme, if a station wants to transmit a frame and finds that the channel is busy (some other station is transmitting), it waits a random interval of time. After this time, it checks the channel again and transmits if the channel is free.

• A station that has a frame to send senses the channel.

• If the channel is idle, it sends immediately.

• If the channel is busy, it waits a random amount of time and then senses the channel again.


Advantage of non-persistent

• It reduces the chance of collision because the stations wait a random amount of time. It is unlikely that two or more stations will wait for the same amount of time and retransmit simultaneously.

Disadvantage of non-persistent

• It reduces the efficiency of network because the channel remains idle when there may be stations with frames to send. This is due to the fact that the stations wait a random amount of time after the collision.

(iii) p-persistent CSMA

• This method is used when channel has time slots such that the time slot duration is equal to or greater than the maximum propagation delay time.

• Whenever a station becomes ready to send, it senses the channel.

• If channel is busy, station waits until next slot.

• If channel is idle, it transmits with a probability p.

• With probability q = 1 − p, the station waits for the beginning of the next time slot.

• If the next slot is also idle, it either transmits or waits again with probabilities p and q.

• This process is repeated till either frame has been transmitted or another station has begun transmitting.

• In case of the transmission by another station, the station acts as though a collision has occurred and it waits a random amount of time and starts again.


Advantage of p-persistent

• It reduces the chance of collision and improves the efficiency of the network.
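The three persistence rules can be summarized as decision functions (an abstract sketch: channel sensing and slot timing are simplified away, and the probability p is a free parameter, not a value from any standard):

```python
import random

# Illustrative comparison of the three CSMA persistence rules as
# decision functions. `channel_idle` abstracts the carrier-sense result.

def one_persistent(channel_idle):
    # transmit with probability 1 the moment the channel is sensed idle
    return "transmit" if channel_idle else "keep sensing"

def non_persistent(channel_idle):
    # if busy, back off for a random time instead of sensing continuously
    return "transmit" if channel_idle else "wait random time, sense again"

def p_persistent(channel_idle, p, rng=random):
    # in an idle slot, transmit with probability p; otherwise defer
    if not channel_idle:
        return "wait for next slot"
    return "transmit" if rng.random() < p else "defer to next slot"

random.seed(7)
print(one_persistent(True))
print(non_persistent(False))
print(p_persistent(True, p=0.3))
```

Lowering p in p-persistent CSMA trades some channel utilization for a smaller chance that several waiting stations transmit in the same slot.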



Ethernet

The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC).

Ethernet is a set of technologies and protocols that are used primarily in LANs. Ethernet can also be used in MANs and even WANs. It was first standardized in the 1980s as IEEE 802.3 standard.


Classic Ethernet is the original form of Ethernet, providing data rates from 3 to 10 Mbps. The varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum throughput (10 Mbps), BASE denotes baseband transmission, and X is the type of medium used.

There are a number of versions of IEEE 802.3 protocol. The most popular ones are -

IEEE 802.3: This was the original standard, given for 10BASE-5. It used a single thick coaxial cable into which a connection could be tapped by drilling into the cable to its core. Here, 10 is the maximum throughput (10 Mbps), BASE denotes baseband transmission, and 5 refers to the maximum segment length of 500 m.

IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner variety where the segments of coaxial cables are connected by BNC connectors. The 2 refers to the maximum segment length of about 200m (185m to be precise).

IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses unshielded twisted pair (UTP) copper wires as physical layer medium. The further variations were given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and 100BASE-FX.

IEEE 802.3j: This gave the standard for Ethernet over fiber (10BASE-F), which uses fiber optic cables as the transmission medium.


Frame Format of Classic Ethernet

 

The main fields of a frame of classic Ethernet are -

  • Preamble: It is the starting field that provides alert and timing pulse for transmission. In case of classic Ethernet it is an 8 byte field and in case of IEEE 802.3 it is of 7 bytes.
  • Start of Frame Delimiter: It is a 1 byte field in a IEEE 802.3 frame that contains an alternating pattern of ones and zeros ending with two ones.
  • Destination Address: It is a 6 byte field containing physical address of destination stations.
  • Source Address: It is a 6 byte field containing the physical address of the sending station.
  • Length: It is a 2-byte field that stores the number of bytes in the data field.
  • Data: This is a variable-sized field that carries the data from the upper layers. The maximum size of the data field is 1500 bytes.
  • Padding: This is added to the data to bring its length to the minimum requirement of 46 bytes.
  • CRC: CRC stands for cyclic redundancy check. It contains the error detection information.

Frame Format of IEEE 802.3 Ethernet


Preamble: The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and 1s that alert the receiving system to the coming frame and enable it to synchronize its input timing. The pattern provides only an alert and a timing pulse. The 56-bit pattern allows the stations to miss some bits at the beginning of the frame. The preamble is actually added at the physical layer and is not (formally) part of the frame.

Start frame delimiter (SFD): The second field (1 byte: 10101011) signals the beginning of the frame. The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11, alerting the receiver that the next field is the destination address.

Destination address (DA): The DA field is 6 bytes and contains the physical address of the destination station or stations to receive the packet.

Source address (SA): The SA field is also 6 bytes and contains the physical address of the sender of the packet.

Length or type: This field is defined as a type field or length field. The original Ethernet used this field as the type field to define the upper-layer protocol using the MAC frame. The IEEE standard used it as the length field to define the number of bytes in the data field. Both uses are common today.

Data: This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a maximum of 1500 bytes.

CRC: The last field contains error detection information, in this case a CRC-32.
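The field sizes above can be checked with a short frame-building sketch (illustrative only: the preamble and SFD are omitted since the physical layer adds them, and Python's `binascii.crc32` stands in for the hardware CRC-32 computation):

```python
import struct
import binascii

# Sketch: assemble an IEEE 802.3 frame from the fields described above.
# Layout: destination (6) + source (6) + length (2) + data (46-1500) + CRC (4).

def build_frame(dst, src, payload):
    """dst/src: 6-byte MAC addresses; payload padded to the 46-byte minimum."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    if len(payload) < 46:
        payload = payload + b"\x00" * (46 - len(payload))   # padding field
    if len(payload) > 1500:
        raise ValueError("data field exceeds the 1500-byte maximum")
    header = dst + src + struct.pack("!H", len(payload))    # 2-byte length
    fcs = struct.pack("!I", binascii.crc32(header + payload))
    return header + payload + fcs

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(len(frame))   # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum frame size
```

Padding a short payload up to 46 bytes is what yields the well-known 64-byte minimum Ethernet frame (excluding preamble and SFD).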




IEEE 802

In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to enable intercommunication among equipment from a variety of manufacturers.

IEEE 802 is a family of Institute of Electrical and Electronics Engineers (IEEE) standards for local area networks (LAN), personal area network (PAN), and metropolitan area networks (MAN). The IEEE 802 LAN/MAN Standards Committee (LMSC) maintains these standards.

The services and protocols specified in IEEE 802 map to the lower two layers (data link and physical) of the seven-layer Open Systems Interconnection (OSI) networking reference model.

IEEE 802 LAN/MAN  Standards:

IEEE 802.1     Standards for LAN/MAN bridging and management and remote media access control (MAC) bridging

IEEE 802.2     Standards for Logical Link Control (LLC) standards for connectivity

IEEE 802.3     Ethernet Standards for Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

IEEE 802.4     Standards for token passing bus access

IEEE 802.5     Standards for token ring access and for communications between LANs and MANs

IEEE 802.6     Standards for metropolitan area networks (DQDB)

IEEE 802.7     Standards for broadband LAN cabling

IEEE 802.8     Fiber-optic connection

IEEE 802.9     Standards for integrated services, like voice and data

IEEE 802.10   Standards for LAN/MAN security implementations

IEEE 802.11  Wireless Networking – "WiFi"

IEEE 802.12   Standards for demand priority access method

IEEE 802.14   Standards for cable television broadband communications

IEEE 802.15.2            Bluetooth and Wi-Fi coexistence mechanism

IEEE 802.15.4            Wireless Sensor/Control Networks – "ZigBee"

IEEE 802.15.6            Wireless Body Area Network (BAN) – (e.g. Bluetooth low energy)

IEEE 802.16   Wireless Networking – "WiMAX"

IEEE 802.24   Vertical Applications Technical Advisory Group (e.g. smart grid)

The relationship of the 802 Standard to the traditional OSI model is


The IEEE has subdivided the data link layer into two sublayers:

- logical link control (LLC) and

- media access control (MAC).

Logical link control (LLC) : 

In IEEE Project 802, flow control, error control, and part of the framing duties are collected into one sublayer called the logical link control. Framing is handled in both the LLC sublayer and the MAC sublayer.

A single LLC protocol can provide interconnectivity between different LANs because it makes the MAC sublayer transparent.

LLC defines a protocol data unit (PDU). The header contains a control field used for flow and error control. The two other header fields define the upper-layer protocol at the source and destination that uses LLC. These fields are called the destination service access point (DSAP) and the source service access point (SSAP).

Media access control (MAC): 

Media access control defines the specific access method for each LAN. For example, it defines CSMA/CD as the media access method for Ethernet LANs and the token-passing method for Token Ring and Token Bus LANs.

In contrast to the LLC sublayer, the MAC sublayer contains a number of distinct modules; each defines the access method and the framing format specific to the corresponding LAN protocol.