Re: oversampling
On Sun, 4 Jan 1998, KC5TJA <kc5tja@topaz.axisinternet.com> wrote
>> This signal has _very well_ determined spectral characteristics. If we
>> forget about the local line noise for a while, a second of this (analog)
>> signal lives in a 2x(3600-300) = 6600 dimensional subspace of the space of
>> all analog functions one second long, whereby 3300 sines and 3300 cosines
>> serve as the basis. So, the signal is determined by 6600 numbers, which
>> may be the projections to the sines and cosines, but can be determined by
>> _any_ 6600 linearly independent projections to _that subspace_.
>
>This makes sense... :)
>
>> This is usually achieved with regular interval sampling, but in principle
>> can be achieved by _any_ sampling -- both in the spatial, in the
>> frequency, in the time domain or a combination thereof.
>
>hmmm...what's the difference between spatial and time domains? Perhaps
>this is where I'm getting confused...
>
Me too. And I've built these things!
They are also known as Sigma-Delta coders or Pulse Density
Modulators. There are plenty of references - but few good explanations.
Let me try to give one!
Look at it from the point of view of the D/A first. The best D/A is
a real low-pass filter made with capacitors and inductors (those things
wound with wire).
    If you feed it lots of ones and zeros at a rate much faster than
its cut-off frequency, its output will settle at a level proportional
to the fraction of ones in the stream. If you change the ratio of ones
to zeros, the output level changes. The rate at which you can make the
output change level depends on the cut-off frequency of the low-pass
filter.
    The accuracy to which you can set a level at the output of the
filter depends on the bit rate you have available on its input side.
If you give it a 101010... sequence at double the cut-off frequency
or more you can get an output level at 50% of the '1' level. But if you
want to set the level at 51% of the '1' level you must raise the bit
rate 50-fold or more, otherwise the pattern you need to establish that
level will have a low-frequency component and get through the filter.
You then have noise. So for lots of levels you need a very high
oversampling rate. In practice you will make a compromise between bit
rate and noise.
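A toy numerical illustration of that point (mine, not from any real design; a plain average stands in for the analogue low-pass filter):

```python
def dac_level(pattern):
    """Mean of a repeating bit pattern = the DC level the filter settles at."""
    return sum(pattern) / len(pattern)

# 50% of the '1' level: the two-bit pattern 1,0 is enough.
print(dac_level([1, 0]))               # 0.5

# 51%: the shortest repeating pattern needs 100 bits (51 ones, 49 zeros).
# Its repetition rate is 1/100 of the bit rate, so the bit rate must rise
# enormously to keep that component above the filter's cut-off frequency.
print(dac_level([1] * 51 + [0] * 49))  # 0.51
```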
So why use them? Well real inductors don't produce distortion, and
are also damn difficult to overload. You CAN use op-amps or even
digital filters so long as you take the hazards of overload, non-
linearity and rounding error into account in your design.
    But the filter can be very simple. This reduces cost. The
reconstruction filter for a normal D/A has to cut off very sharply, and
this makes it an expensive analogue building block.
    Now for the A/D. Assume it is already producing nearly the right
bitstream. You can therefore low-pass filter those bits and compare the
result with the actual waveform you are trying to represent. OK, do it
with a binary comparator! The output of the comparator is sampled at a
regular clock rate, and we then have the 1-bit stream of 1s and 0s we
presumed in the first place. So we can use that! Voila, we have a
feedback loop that pumps out the one-bit code we want. Easy!
Well not quite so easy. The feedback loop contains a filter, a
binary comparator and a temporal sampler. Now you need a control loop
expert to both make it 'stable' and make the error at the input to the
comparator as small as possible to get a low error in that stream of
bits.
    And a control loop with an 'infinite-gain element' (the comparator),
a time non-linearity (the clock sampler) and a low-pass filter is
physically simple and fairly easy to simulate, but pretty intractable
mathematically. Besides that, the poor analogue engineer who takes on
the problem is confronted with massive noise from all those 1s and 0s
getting into circuitry that he normally assumes is quiescent.
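"Fairly easy to simulate" can be shown in a few lines. This is a minimal first-order sketch of such a loop (my simplification, not any particular real circuit): an accumulator stands in for the low-pass filter, a threshold for the comparator, and one iteration per input sample for the regular clock.

```python
def sigma_delta(samples):
    """Toy first-order sigma-delta loop (integrator + clocked comparator)."""
    integ, fb, bits = 0.0, 0.0, []
    for x in samples:                  # x expected in the range 0..1
        integ += x - fb                # accumulate error vs. the fed-back bit
        bit = 1 if integ > 0 else 0    # clocked binary comparator
        fb = float(bit)                # 1-bit 'D/A' closing the loop
        bits.append(bit)
    return bits

# A constant input of 0.25 yields a stream whose density of 1s is 0.25:
stream = sigma_delta([0.25] * 1000)
print(sum(stream) / len(stream))       # 0.25
```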
But a job well done produces a codec that is far more linear than
one that tries to set a huge number of equal levels (e.g. by a chain of
256 resistors). An 8 bit A/D has to set all 256 levels EQUALLY SPACED.
You can change a 1 bit A/D (with only two levels) to an 8 bit A/D
with 256 levels by some digital magic called decimation. You have no
problem getting equally spaced levels this way. In the process the
sampling rate is greatly reduced - for example from 4 MHz to 8 kHz for a
64 kbit/s voice codec.
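A rough sketch of that "digital magic" (real decimators use cascaded filter stages, not a bare block average; this only shows the idea):

```python
def decimate(bits, factor):
    """Boxcar decimator: average each block of `factor` one-bit samples
    into a single multi-bit word, reducing the sample rate by `factor`."""
    return [sum(bits[i:i + factor]) / factor
            for i in range(0, len(bits) - factor + 1, factor)]

# A 512-bit alternating stream at a high rate becomes 8 words, each
# hitting the 50% level exactly -- and the levels a block average can
# produce are equally spaced by construction, with no resistor chain.
words = decimate([1, 0] * 256, 64)
print(words)  # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```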
The noise of a 1 bit D/A is frequency dependent. It is much better
than other types as you go down to very low frequencies. The noise
rises as you approach the cut-off of the low-pass filter.
    Conventional A/D converters also produce noise, but it should not
vary with frequency. A good 1 bit A/D converter will have most of its
noise shaped to fall into frequency bands where the ear (or eye) is more
tolerant of noise. This is an incidental benefit of the technique.
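You can see that shaping in a simulation. Below is a hypothetical first-order loop of my own, probing the error spectrum with a crude single-bin DFT; it is not a measurement of any real converter, just an illustration of where the error energy goes.

```python
import math

def sigma_delta(samples):
    """Toy first-order sigma-delta loop (integrator + clocked comparator)."""
    integ, fb, bits = 0.0, 0.0, []
    for x in samples:
        integ += x - fb
        bit = 1 if integ > 0 else 0
        fb = float(bit)
        bits.append(bit)
    return bits

def bin_power(sig, k, n):
    """Power at DFT bin k -- a crude single-frequency spectrum probe."""
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(sig))
    return (re * re + im * im) / n

n = 4096
tone = [0.5 + 0.4 * math.sin(2 * math.pi * 8 * i / n) for i in range(n)]
err = [b - x for b, x in zip(sigma_delta(tone), tone)]  # quantisation error
low = sum(bin_power(err, k, n) for k in range(1, 50))
high = sum(bin_power(err, k, n) for k in range(n // 2 - 49, n // 2 + 1))
print(low < high)  # the error energy sits in the high-frequency bins
```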
You can't get rid of noise in either approach. It comes from
rounding errors in a real system. For a normal A/D you'd need an
infinite string of resistors to define an infinite number of levels.
For a 1 bit codec you'd need an infinitely high bit rate.
    So how about DSP? Well, all the DSPs I know of use 8, 12, 16, 24 or
32 bit or even floating-point words. Here you need ONE-bit words at
maybe 100 times the speed. Maybe a DSP chip is useful. But the
decimator is generally a special-purpose DSP ASIC that feeds a
conventional DSP once it has changed the data from 1 to 8 bit words.
The speed of a MISC chip might make it useful for taking on part of
the job. But I doubt if it would be a good idea to devote it to
decimation of video or voice.
--
Bill Powell. ( MIME, UU )
Atherstone, Warks., CV9 3AR. | Tel: +44 1827-718 945
<whpowell@iee.org> | Fax: -714 884