Here is another reference that mentions both the first and last step
being only 1/2 LSB wide and the voltage per step (1 LSB) being Vref/(2^n - 1):
http://focus.ti.com/lit/an/slaa013/slaa013.pdf
(begin excerpt)
The width of one step is defined as 1 LSB (one least significant bit)
and this is often used as the reference unit for other
quantities in the specification. It is also a measure of the
resolution of the converter since it defines the number of
divisions or units of the full analog range. Hence, 1/2 LSB represents
an analog quantity equal to one half of the analog
resolution.
The resolution of an ADC is usually expressed as the number of bits in
its digital output code. For example, an ADC
with an n-bit resolution has 2^n possible digital codes which define 2^n
step levels. However, since the first (zero) step
and the last step are only one half of a full width, the full-scale
range (FSR) is divided into 2^n - 1 step widths.
Hence
1 LSB = FSR/(2^n - 1) for an n bit converter.
(end excerpt)
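As a quick sanity check of the quoted formula (a sketch of my own, not from the app note; the function name is made up), the step width under that convention is just FSR divided by 2^n - 1:

```python
# Step width (1 LSB) under the TI convention quoted above:
# 1 LSB = FSR / (2^n - 1) for an n-bit converter.
def lsb_size(fsr, n_bits):
    """Width of one step, in the same units as fsr."""
    return fsr / (2**n_bits - 1)

# A 12-bit converter with a 2.5 V full-scale range:
step = lsb_size(2.5, 12)  # roughly 0.61 mV per step
```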
You appear to be right about the first transition occurring at 1/2 LSB
above 0. This makes the staircase on the graph of input voltage vs.
output codes deviate less from the 'theoretical' straight line, thus
making the digitization error smaller.
However, they are wrong about that divisor, at least for some ADC chips.
It is pretty confusing, I'll agree, but if you just make yourself a graph
of input voltage vs output codes, you see that the last step is 3/2 LSB
wide. (This is also pointed out in an Analog Devices datasheet for the
AD9226, pages 7 and 19.)
The voltage is exactly represented at the midpoint of each step, so each
output code carries a possible error of up to +/- 1/2 LSB. Now, imagine a
2-bit converter (so to speak...) with a full-scale Vref of 1V. It'll have
output transitions at 1/8, 3/8, and 5/8V. That means that for an output
value of 1, the input voltage was 1/4 +/- 1/8V; for an output value of 0,
the input voltage was 0 +/- 1/8V; etc. So, the input voltage is within
1/2 LSB (1/8V) of output/4.
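To make that concrete, here is a toy model of that 2-bit converter (my own sketch; the function names are made up, and it ignores out-of-range behavior):

```python
# Toy 2-bit converter with a 1 V full scale and 1 LSB = 1/4 V.
# Output transitions sit at 1/8, 3/8, and 5/8 V, i.e. 1/2 LSB below
# each code's midpoint.
def encode(v):
    """Output code (0..3) for an input voltage v."""
    return sum(v >= t for t in (1/8, 3/8, 5/8))

def decode(code):
    """Voltage represented by a code: the midpoint of its step."""
    return code / 4

# Over the valid range (up to FS - 1/2 LSB = 7/8 V), the input is
# always within 1/2 LSB (1/8 V) of decode(encode(v)).
worst = max(abs(v / 1000 - decode(encode(v / 1000))) for v in range(876))
```

Sweeping the valid input range in 1 mV steps, `worst` comes out to exactly 1/8 V, matching the +/- 1/2 LSB bound.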
The AD9226 actually considers input voltages greater than -1/2 LSB to be
valid inputs, and considers input voltages above FS - 1/2 LSB to be
*invalid*. Its out-of-range output pins are defined this way.
However, it is possible that the scale is stretched on other converters.
If the scale is stretched, then for those converters the LSB may actually
be as you say. For our 2-bit converter, that would put the transitions at
1/6, 3/6, and 5/6V. This would result in a maximum error of about 167mV
(1/6V) as opposed to 125mV with the other encoding scheme, and thus a
bigger encoding error.
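For what it's worth, the difference between the two schemes boils down to the size of 1 LSB (my arithmetic, on the same hypothetical 2-bit, 1V converter):

```python
# In both schemes the error within a code is +/- 1/2 LSB; the stretched
# scale simply has a wider LSB.
unstretched_lsb = 1 / 4   # transitions at 1/8, 3/8, 5/8 V
stretched_lsb = 1 / 3     # transitions at 1/6, 3/6, 5/6 V, FSR/(2^n - 1)

max_err_unstretched_mv = unstretched_lsb / 2 * 1000  # 125 mV
max_err_stretched_mv = stretched_lsb / 2 * 1000      # roughly 167 mV
```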
--
Regards,
Bob Monsen
We should take care not to make the intellect our god; it has, of
course, powerful muscles, but no personality.
Albert Einstein