
# Asynchronous connection question

+1
−0

Assume we have a channel and we send the 7-bit data 0100100 asynchronously with 1 parity bit (0). Suppose the start bit is 1. The end bits are at the same logic level as the idle state of the channel (0). How does the receiver recognize the end bits? In my example, the last two 0s could have been the end bits, but instead the receiver understands that the last two 0s of 0100100(0) are actually part of the data and not the end bits. Help appreciated.


+3
−0

Assuming you speak of UART, then the receiver must be told in advance the number of data bits, the number of parity bits and the number of stop bits. The start bit is a logic zero (low) and the stop bit(s) are logic ones (high), which is also the idle state.

The data bit stream itself isn't asynchronous, only the time between bytes is. Asynchronous in this context rather means "no separate clock signal provided".

The receiver needs to know not only whether parity is used but also what baud rate to expect. It will clock from the edge of the start bit to where the stop bit(s) are expected. If an incorrect signal level is encountered where the stop bit(s) are supposed to be, then it is a so-called framing error, and the UART controller will raise an error flag.
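That stop-bit check can be sketched in a few lines of Python. This is purely illustrative (the function name and its list-of-levels argument are my own invention, not a real UART API): given the line levels sampled at the center of each expected bit period, it reports whether the frame is valid.

```python
# Illustrative sketch, not a real UART driver: 'samples' holds the line level
# at the center of each expected bit period, starting with the start bit.
def check_framing(samples, databits=8, stopbits=1):
    start = samples[0]
    stops = samples[1 + databits : 1 + databits + stopbits]
    if start != 0:
        return "no valid start bit"
    if any(level != 1 for level in stops):
        return "framing error"   # a real UART would set an error flag here
    return "ok"

# 'A' (0x41) sent LSB first with a valid stop bit, then with a bad one:
print(check_framing([0, 1,0,0,0,0,0,1,0, 1]))  # → ok
print(check_framing([0, 1,0,0,0,0,0,1,0, 0]))  # → framing error
```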

There's a de facto standard nomenclature used to describe UART communication format:

baudrate/databits-parity-stopbits

For example, one would say 9600/8-N-1, meaning a baud rate of 9600 bps, 8 data bits, no parity, and 1 stop bit. With parity, the letter would be E or O for even or odd, although parity is mostly regarded as an obsolete form of error detection and is not often used nowadays.
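Parsing that nomenclature is straightforward. A minimal sketch (the helper name and the returned dictionary layout are just for illustration):

```python
# Minimal sketch: parse a "baudrate/databits-parity-stopbits" string.
# The function name and return format are illustrative, not a standard API.
def parse_uart_format(spec):
    rate_part, frame_part = spec.split("/")
    databits, parity, stopbits = frame_part.split("-")
    if parity not in ("N", "E", "O"):
        raise ValueError("parity must be N, E, or O")
    return {
        "baudrate": int(rate_part),
        "databits": int(databits),
        "parity": parity,       # N = none, E = even, O = odd
        "stopbits": int(stopbits),
    }

print(parse_uart_format("9600/8-N-1"))
# → {'baudrate': 9600, 'databits': 8, 'parity': 'N', 'stopbits': 1}
```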


+1
−0
> How does the receiver understand the end bits?

It doesn't. The purpose of stop bits (what you seem to be calling "end bits") is to leave a guaranteed gap before the next start bit.

The leading edges of start bits are always idle to non-idle transitions. The line must therefore always be at idle before any new character. The stop bit(s) of the previous character guarantee that.

For modern digital communication, a single stop bit is sufficient. Early equipment that received asynchronous serial data used mechanical means for decoding, such as the common ASR-35 teletype of the 1970s. Those mechanisms sometimes needed more than one bit time to reset and be ready for the start of the next character, so it was common then to send two stop bits. Some operations that took more time, like a carriage return after a long text line, might require 3 stop bits. There were algorithms to optimize the number of stop bits, taking into account the characteristics of the receiving equipment, like an ASR-35.

> In my case the last 2 0s could have been the end bits but instead the receiver understands the last two 0s from 0100100(0) are actually part of the data

Right, because receivers don't "decode" stop bits. Transmitters and receivers have to agree ahead of time on how many bits follow the start bit, and how long each of those bits takes. Receivers then just grab the value of the line near the center of where each bit is supposed to be. Nothing in the data encodes where the data ends. Everything is relative to the leading edge of the start bit.
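A toy decoder makes this concrete. This is my own sketch, not a real driver; the 16x oversampling mirrors what many hardware UARTs do internally. It finds the idle-to-start falling edge, then samples at the center of each bit period, all timed from that edge:

```python
# Illustrative 8-N-1 receiver over a line signal oversampled 16x per bit.
OVERSAMPLE = 16

def decode(line):
    # Find the idle-to-start transition (falling edge of the start bit).
    edge = next(i for i in range(1, len(line))
                if line[i - 1] == 1 and line[i] == 0)
    # Sample at the center of each of the 10 bit periods
    # (start + 8 data + stop), all measured from the start-bit edge.
    bits = [line[edge + n * OVERSAMPLE + OVERSAMPLE // 2] for n in range(10)]
    assert bits[0] == 0 and bits[9] == 1        # start low, stop high
    return sum(b << i for i, b in enumerate(bits[1:9]))  # data is LSB first

# Idle line, then 'A' (0x41) framed as: start, 10000010 (LSB first), stop, idle.
wave = [1] * 20 \
     + sum(([b] * OVERSAMPLE for b in [0, 1,0,0,0,0,0,1,0, 1]), []) \
     + [1] * 20
print(decode(wave))  # → 65
```

Note that the decoder never looks for an "end marker" in the data; it simply counts bit periods from the start edge and checks that the line is high where the stop bit belongs.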

This is why you have to specify protocol details when using RS-232. You may have seen specs like "9600 baud, 8 data bits, no parity bit, 1 stop bit". This is often abbreviated as "9600 8-N-1". Your example uses 7-E-1, meaning 7 data bits, even parity, and 1 stop bit.

Note that the stop bits spec is a minimum. There can be any arbitrary additional gap between characters. The receiver can't distinguish between 8-N-1 with an additional 1-bit delay between two characters, and two characters sent with 8-N-2.
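This indistinguishability is easy to demonstrate. In the sketch below (the framing helper is my own, not a standard API), one character sent with 2 stop bits produces exactly the same bit stream as the same character sent with 1 stop bit followed by one bit time of idle:

```python
# Illustrative transmitter: frame one byte as start + 8 data bits (LSB first)
# + stop bit(s). The idle line level and stop bits are both high (1).
def frame(byte, stopbits=1):
    bits = [0]                                    # start bit (low)
    bits += [(byte >> i) & 1 for i in range(8)]   # data, LSB first
    bits += [1] * stopbits                        # stop bit(s), high = idle
    return bits

with_gap   = frame(0x41, stopbits=1) + [1] + frame(0x42, stopbits=1)
with_2stop = frame(0x41, stopbits=2) + frame(0x42, stopbits=1)
print(with_gap == with_2stop)  # → True
```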
