Error Detection and Correction

     In data communication, whenever data bits flow from one point to another, there is always a possibility that some bits will get corrupted or changed. Many factors can alter the shape of the signal that carries these bits, so a signal that carries a 0 may arrive representing a 1, and a signal that carries a 1 may arrive representing a 0.

     Let us understand this with an example. Say the sender sends the data 1011, but while this data is being transmitted, noise alters the shape of the signal. Because of this, a signal that was representing 1 is now representing 0, and the receiver receives 1010 instead of 1011. The sent data and the received data do not match, and this mismatch between sent and received data is called an error in the data transmission.
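     As a quick illustration of what a mismatch looks like at the bit level, the short Python sketch below (my addition, not part of the original example) XORs the sent and received patterns; every 1 in the result marks a corrupted bit position.

    sent = 0b1011
    received = 0b1010

    # XOR marks each position where the two patterns differ.
    diff = sent ^ received
    print(f"sent:     {sent:04b}")      # 1011
    print(f"received: {received:04b}")  # 1010
    print(f"errors:   {diff:04b}")      # 0001 -> the last bit flipped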

Types of Errors in Data Communication

There are basically two types of errors:

  • Single-bit error
  • Burst error

1. Single Bit Error

     When only one bit of a data unit is changed from 1 to 0 or from 0 to 1, it is called a single-bit error. For example, the sender sends 0100 0001, transmitted as an ASCII character; it represents the character ‘A’. Suppose the receiver receives 0100 1001, which means the fourth bit (counting from the right) has changed from 0 to 1, so there is a single-bit error.

     It is important to understand the impact of a single-bit error on data transmission. When the receiver receives 0100 1001, it reads the character ‘I’ instead of the character ‘A’, because this pattern of bits (0100 1001) represents the character ‘I’ in the ASCII character set.
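     A minimal Python sketch of this example (my addition) shows how flipping just one bit turns ‘A’ into a completely different character:

    sent = ord('A')             # 0b01000001 = 65
    received = sent ^ (1 << 3)  # flip the fourth bit from the right

    print(f"{sent:08b} -> {chr(sent)}")          # 01000001 -> A
    print(f"{received:08b} -> {chr(received)}")  # 01001001 -> I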

2. Burst Error

    A burst error means two or more bits in the data unit are changed from 1 to 0 or from 0 to 1. For example, if the sender sends 0100 0001 and the receiver receives 1100 1001, two bits are in error.

     When we measure the size of a burst error, we count the bits from the first corrupted bit to the last corrupted bit. In this case the burst is 5 bits long, even though three bits within that span are unchanged.
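     The burst length can be computed directly: find the first and last positions where the received bits differ from the sent bits and count everything in between. A minimal Python sketch (my addition, using the example above):

    sent     = "01000001"
    received = "11001001"

    # Positions where the received bit differs from the sent bit.
    errors = [i for i, (s, r) in enumerate(zip(sent, received)) if s != r]
    burst_length = errors[-1] - errors[0] + 1
    print(burst_length)  # 5, even though only 2 bits actually changed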

     Let me ask you one question: which of these two errors occurs most frequently? To understand this, assume that the sender sends data at 1 kbps, so one bit lasts one millisecond. When will only one bit get corrupted? Only when the noise duration is also about one millisecond.

 

     Now suppose the sender sends data at 1 Mbps, which means one bit lasts only one microsecond. When will only one bit get corrupted? Only when the noise duration is also about one microsecond, which is very rare. Noise almost always lasts longer than one microsecond, so whenever it occurs it affects more than one bit. Therefore, the burst error is the most frequently occurring error in data transmission.
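     A back-of-the-envelope calculation makes this concrete: the number of bits hit by a noise spike is roughly the data rate multiplied by the noise duration. A small Python sketch (my addition, assuming a 1 ms spike):

    noise_duration = 0.001            # a 1 ms noise spike (assumed)

    for rate in (1_000, 1_000_000):   # 1 kbps and 1 Mbps
        bits_hit = int(rate * noise_duration)
        print(f"{rate:>9} bps -> up to {bits_hit} bit(s) corrupted")
    # 1 kbps -> 1 bit (a single-bit error is possible)
    # 1 Mbps -> 1000 bits (the same noise causes a burst error)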

How Can an Error Be Detected?

     To understand this, let us assume that the sent data is 1011 and the received data is 1010; there is a single-bit error. But how does the receiver know that a bit is in error? Clearly, unless the receiver knows what the sender sent, there is no way to detect an error in the received data. One simple error detection mechanism is to send every data unit twice: when the receiver receives the two copies of a data unit, it performs a bit-for-bit comparison between them, and any mismatch indicates the presence of an error in the received data.

     For example, if the sender must send 1011, it sends two copies: 1011 1011. Say the receiver receives 1011 1010; there is a single-bit error. To detect it, the receiver performs a bit-for-bit comparison between the two copies. The fourth bit does not match, which means there is an error in the received data.
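     The comparison itself is trivial to express in code. A minimal Python sketch of this double-transmission scheme (my addition):

    def detect_error(copy1: str, copy2: str) -> bool:
        """Bit-for-bit comparison of the two received copies."""
        return any(a != b for a, b in zip(copy1, copy2))

    print(detect_error("1011", "1011"))  # False: copies match
    print(detect_error("1011", "1010"))  # True: the fourth bit differs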

     This system of error detection would be quite reliable, but very inefficient and slow. If the sender wants to send 500 MB of data, it must transmit two copies, meaning 1 GB of data. It is also slow on the receiving side, because the receiver must perform a bit-for-bit comparison between the two copies, which takes a long time.

Redundancy

     A better solution is that, instead of sending the entire data again for error detection, the sender appends a few extra bits to the data. This technique is called redundancy, because only a few bits are sent with the original data as extra, or redundant, bits. These extra bits help the receiver detect an error in the received data, and as soon as the receiver determines that the transmission was accurate, they are discarded.
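     To make the flow concrete, here is a minimal Python sketch of the redundancy idea (my addition), using a single even-parity bit as the simplest possible choice of redundant bit; the concrete methods are listed below.

    def add_redundancy(data: str) -> str:
        """Sender side: append one redundant (even-parity) bit."""
        parity = str(data.count("1") % 2)
        return data + parity

    def check_and_strip(frame: str):
        """Receiver side: verify the extra bit, then discard it."""
        data, parity = frame[:-1], frame[-1]
        if str(data.count("1") % 2) != parity:
            return None       # error detected in the received data
        return data           # transmission accurate: extra bit dropped

    frame = add_redundancy("1011")   # -> "10111"
    print(check_and_strip(frame))    # -> "1011"
    print(check_and_strip("10101"))  # -> None (parity mismatch)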

 

Based on the concept of redundancy, there are three error detection methods:

  1. Simple parity check
  2. Checksum (see the sketch below)
  3. Cyclic redundancy check (CRC)
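     As a preview of how the second method works, here is a highly simplified Python sketch of a checksum (my addition; real checksums such as the Internet checksum use ones' complement arithmetic, which is omitted here for brevity):

    def make_checksum(words):
        """Sum the data words, truncated to one byte for brevity."""
        return sum(words) & 0xFF

    data = [0x41, 0x42, 0x43]
    frame = data + [make_checksum(data)]  # sender appends the checksum

    # Receiver recomputes the checksum and compares it.
    ok = make_checksum(frame[:-1]) == frame[-1]
    print(ok)  # True; a corrupted word would make this False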