You have to decide what you really mean by "bandwidth". The analog bandwidth of the channel is quite different from the bit rate, which is different from the effective bit rate of the actual transferred data.
Let's take the common UART protocol of 9600 8-N-1 (9600 baud, 8 data bits, no parity, one stop bit) as an example. The raw bit rate is 9600 bits/second. Each byte includes one start bit and one stop bit, so you only get 8 data bits transferred for each 10 raw bits. That makes the effective data bandwidth 7680 b/s.
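The 8-bits-per-10 arithmetic above is simple enough to sketch in a few lines of Python (the frame layout is the 8-N-1 format from the example; nothing here is specific to any particular UART library):

```python
# Effective data rate of a 9600 8-N-1 UART stream.
baud = 9600              # raw bit rate, bits/second
frame_bits = 1 + 8 + 1   # start bit + 8 data bits + stop bit
data_bits = 8

effective_rate = baud * data_bits / frame_bits
print(effective_rate)    # 7680.0 bits/second
```

Change `frame_bits` to match other formats (a parity bit or a second stop bit each add one raw bit per byte) and the effective rate drops accordingly.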
Analog bandwidth is difficult to correlate to data bandwidth. Shannon and Nyquist did quantify the maximum theoretical data rate a channel with finite bandwidth and signal-to-noise ratio can support. The theoretical maximum is useful to know where you stand, but real systems usually operate significantly below it.
To keep things simple, let's model your transmission channel as having a dominant single pole. Put another way, we'll pretend it acts like a simple R-C low pass filter. Let's see what rolloff frequency it would need to support the example 9600 stream.
Bits are ideally sampled in the middle of each bit time. Let's say the receiver simply thresholds the signal at about the half-way value. That means at bare minimum, the signal needs to decay to ½ of its final value in ½ bit time. A bit time is 1/(9600 baud) = 104 µs. Half a bit time is therefore 52.1 µs. Decaying to ½ the final value takes 0.69 time constants. The time constant of the dominant pole must therefore be less than 75.1 µs. The reciprocal of that is the radians/s frequency, and that divided by 2π is the rolloff frequency in Hz, which is 2.12 kHz.
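That chain of conversions is easy to get backwards, so here it is spelled out in Python (same numbers as above, nothing new assumed):

```python
import math

baud = 9600
bit_time = 1 / baud                 # 104.2 us per bit
half_bit = bit_time / 2             # signal must settle to 1/2 within this
tau_max = half_bit / math.log(2)    # decay to 1/2 takes ln(2) ~= 0.69 time constants
f_rolloff = 1 / (2 * math.pi * tau_max)

print(f_rolloff)                    # ~2118 Hz, the 2.12 kHz from the text
```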
But before you claim that's the required bandwidth, remember all the assumptions and limitations built into this answer. That's the single-pole rolloff frequency, which is right on the guaranteed-to-fail edge. That's certainly not where you want to be operating. A more reasonable requirement might be that the signal reaches the ½ threshold in ¼ bit time. That allows for some margin at the midpoint of the bit, and/or some error in the sampling time. It should be obvious that half the decay time requires twice the bandwidth, or 4.2 kHz in this case. See how just one assumption changed the answer by a factor of 2?
And then, this was the single-pole rolloff frequency, not necessarily "bandwidth" as everyone would measure it. Bandwidth is obvious when you have a filter that cuts off suddenly in frequency space, but no real-world transmission channel does that.
How much data you can get thru a channel also depends on the signal-to-noise ratio, which we haven't talked about at all. Suppose you run a much faster digital signal thru the same channel. Something will come out the other end, although it will be attenuated, with its high-frequency components proportionally more attenuated. Still, if you could measure the result with infinite accuracy, you could reconstruct this much faster digital bit stream. The signal-to-noise ratio is essentially a limit on the "infinite accuracy" the receiver can measure the signal with. In the context of the above, eventually the signal gets so attenuated that it's buried in the noise, and you can no longer figure out what the original was.
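The Shannon capacity formula quantifies exactly this tradeoff between bandwidth and signal-to-noise ratio. A quick sketch (the 3 kHz / 30 dB numbers are illustrative values I picked, roughly a voice-grade phone line, not anything from the example above):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)            # 30 dB SNR -> 1000 as a linear power ratio
c = shannon_capacity(3000, snr)  # 3 kHz channel
print(c)                         # ~29900 bits/second
```

Note that's a hard theoretical ceiling, not a rate any real modem on that channel actually achieves.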
So, there isn't something as simple as a single "bandwidth" value required by your signal.