
#### Quadibloc



The forthcoming revision of the IEEE 754 floating-point standard will include the definition of a decimal floating-point format.

This has some interesting features. Decimal digits are usually compressed using the Densely Packed Decimal format, developed by Mike Cowlishaw at IBM on the basis of the earlier Chen-Ho encoding. The three formats offered all have a number of decimal digits of precision that is 3n+1 for some n, so a five-bit field combines the one odd digit with the most significant portion of the exponent (for a range of exponents that has 3*2^m values for some integer m) to keep the coding efficient. Its most unusual, and potentially controversial, feature is the use it makes of unnormalized values.
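As an illustration, here is a sketch (in Python) of the digit-compression step, following Cowlishaw's published case table, with the three input digits' bits named a..d, e..h, i..m as in his description. Treat it as an illustrative transcription, not a reference implementation.

```python
def dpd_encode(d2, d1, d0):
    """Pack three decimal digits (0-9 each) into one 10-bit
    Densely Packed Decimal declet, per Cowlishaw's case table."""
    a, b, c, d = (d2 >> 3) & 1, (d2 >> 2) & 1, (d2 >> 1) & 1, d2 & 1
    e, f, g, h = (d1 >> 3) & 1, (d1 >> 2) & 1, (d1 >> 1) & 1, d1 & 1
    i, j, k, m = (d0 >> 3) & 1, (d0 >> 2) & 1, (d0 >> 1) & 1, d0 & 1
    # Each case is keyed on which of the three digits are "large" (8 or 9).
    if (a, e, i) == (0, 0, 0):   bits = (b, c, d, f, g, h, 0, j, k, m)
    elif (a, e, i) == (0, 0, 1): bits = (b, c, d, f, g, h, 1, 0, 0, m)
    elif (a, e, i) == (0, 1, 0): bits = (b, c, d, j, k, h, 1, 0, 1, m)
    elif (a, e, i) == (0, 1, 1): bits = (b, c, d, 1, 0, h, 1, 1, 1, m)
    elif (a, e, i) == (1, 0, 0): bits = (j, k, d, f, g, h, 1, 1, 0, m)
    elif (a, e, i) == (1, 0, 1): bits = (f, g, d, 0, 1, h, 1, 1, 1, m)
    elif (a, e, i) == (1, 1, 0): bits = (j, k, d, 0, 0, h, 1, 1, 1, m)
    else:                        bits = (0, 0, d, 1, 1, h, 1, 1, 1, m)
    out = 0
    for bit in bits:
        out = (out << 1) | bit
    return out

# All 1000 digit triples map to 1000 distinct 10-bit declets:
codes = {dpd_encode(n // 100, n // 10 % 10, n % 10) for n in range(1000)}
assert len(codes) == 1000 and max(codes) < 1024
```

A pleasant property of the encoding is that triples of small digits (all 0-7) code as themselves in three 3-bit groups, so 765 becomes the bits 111 110 0 101.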

One possible objection to a decimal floating-point format, if it were used in general for all the purposes for which floating-point is used, is that the precision of numbers in such a format can vary by a whole decimal digit, which is more than three times as large as a binary bit, the amount by which the precision varies in a binary format with a radix-2 exponent. (There were binary formats with radix-16 exponents, on the IBM System/360 and the Telefunken TR 440, and these were found objectionable, although the radix-8 exponents of the Atlas and the Burroughs B5500 were found tolerable.)
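This "whole digit" variation is the classic radix wobble: for radix b, the relative spacing of adjacent representable values varies by a factor of b across a decade (or octave), which is log2(b) bits. A quick comparison of the radices mentioned above:

```python
import math

# For radix-b floating point, the relative spacing between adjacent
# representable values "wobbles" by a factor of b as the significand
# sweeps from its minimum to its maximum.  Measured in bits, that
# wobble is log2(b): 1 bit for binary, over 3.3 bits for decimal.
for radix in (2, 8, 10, 16):
    print(f"radix {radix:2}: precision wobbles over log2({radix}) = "
          f"{math.log2(radix):.2f} bits")
```

This is the sense in which a decimal digit of wobble is "more than three times" a binary bit: log2(10) is about 3.32.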

I devised a scheme, described along with the proposed new formats on

http://www.quadibloc.com/comp/cp020302.htm

by which a special four-bit field would describe both the most significant digit of a decimal significand (or coefficient, or fraction, or, horrors, mantissa) and a least significant digit which would be restricted in the values which it could take:

- MSD 1: the appended LSD can be 0, 2, 4, 6, or 8.
- MSD 2 or 3: the appended LSD can be 0 or 5.
- MSD 4 to 9: the appended LSD is always 0.

In this way, the distance between representable values increases in steps of factors of 2 or 2.5 instead of factors of 10, making decimal floating-point as "nice" as binary floating-point.
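The arithmetic of the scheme is easy to check: the allowed (MSD, LSD) pairs number exactly 15, so they fit in a four-bit field, and across one decade the spacing of leading values steps up by 2 and 2.5 rather than jumping by 10. A short enumeration:

```python
# Enumerate the 15 (MSD, appended-LSD) combinations of the scheme --
# they fit exactly in a 4-bit field (15 of the 16 codes used).
combos = ([(1, lsd) for lsd in (0, 2, 4, 6, 8)]
          + [(msd, lsd) for msd in (2, 3) for lsd in (0, 5)]
          + [(msd, 0) for msd in range(4, 10)])
assert len(combos) == 15

# The representable "leading values" within one decade, 10..99:
leading = sorted(10 * msd + lsd for msd, lsd in combos)
print(leading)
# [10, 12, 14, 16, 18, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90]

# Spacing between adjacent representable values grows by a factor of
# 2.5 at 20 and 2 at 40 (2 * 2.5 * 2 = 10 across the whole decade):
steps = [b - a for a, b in zip(leading, leading[1:])]
print(steps)
# [2, 2, 2, 2, 2, 5, 5, 5, 5, 10, 10, 10, 10, 10]
```

One hedged way to see the "hidden bit" bonus: a bare nonzero MSD has only 9 useful states in 4 bits, while this field carries 15, recovering roughly log2(15/9) ≈ 0.74 bits of otherwise wasted coding space.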

As another bonus, when you compare the precision of the field to its length in bits, you discover that I have managed to achieve the same benefit for decimal floating-point as was obtained for binary floating-point by hiding the first bit of a normalized significand!

Well, I went on from there.

If one can, by this contrivance, make the exponent move in steps of 1/3 of a digit instead of whole digits, why not try to make the exponent move in steps of about 1/3 of a bit, or 1/10 of a digit?

And so, on the next page,

http://www.quadibloc.com/comp/cp020303.htm

I note that if, instead of appending one digit restricted to being either even or a multiple of five, I append values in a *six-digit* field, I can let the distance between representable points increase by gentle factors of 1.25 or 1.28.
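Those gentle factors can tile a decade exactly. As a back-of-envelope check of my own (the actual step sequence is on the page linked above), seven steps of 1.25 and three of 1.28 multiply out to precisely 10, with no rounding, since (5/4)^7 * (32/25)^3 = 5 * 2:

```python
from fractions import Fraction

# Exact rational check: 1.25^7 * 1.28^3 == 10 exactly.
step_a = Fraction(5, 4)    # 1.25
step_b = Fraction(32, 25)  # 1.28
assert step_a ** 7 * step_b ** 3 == 10
print(step_a ** 7 * step_b ** 3)
```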

But such a format is rather complicated. I go on to discuss using an even smaller factor with sexagesimal floating point... in a format more suited to an announcement *tomorrow*.

But I also mention how this scheme could be _simplified_ to a minimum.

Let's consider normalized binary floating-point numbers. The first two bits of the significand (or mantissa) might be 10 or 11.

In the former case, let's append a fraction to the end of the significand that might be 0, 1/3 or 2/3... except, so that we can stay in binary, we'll go with either 5/16 or 11/16. In the latter case, the choice is 0 or 1/2.

Then, the coding scheme effectively makes the precision of the number move in small jumps, as if the exponent were in units of *half* a bit instead of whole bits.
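A toy enumeration (with a 4-bit significand of my own choosing, not the page's actual format) shows the effect: the gap between neighbours is about a third of a unit in the last place where the significand starts 10, and half a ULP where it starts 11, so precision degrades in roughly half-bit steps across each octave:

```python
from fractions import Fraction

# Toy model: significand 1.bcd (sig8/8 for sig8 in 8..15) with a
# sub-ULP fraction appended at the end.  Leading bits "10": fraction
# in {0, 5/16, 11/16} (standing in for 0, 1/3, 2/3); leading bits
# "11": fraction in {0, 1/2}.
ULP = Fraction(1, 8)              # value of the last significand bit
values = []
for sig8 in range(8, 16):
    sig = Fraction(sig8, 8)
    fracs = ((0, Fraction(5, 16), Fraction(11, 16))
             if sig < Fraction(3, 2) else (0, Fraction(1, 2)))
    values += [sig + f * ULP for f in fracs]
values.sort()

gaps = [b - a for a, b in zip(values, values[1:])]
print(min(gaps), max(gaps))       # 5/128 vs 1/16: ~1/3 ULP vs 1/2 ULP
```

The largest gap is only 1.6 times the smallest within the octave, instead of the factor of 2 of an ordinary binary format.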

But now a nagging feeling haunts me.

This sounds vaguely familiar - as if, instead of *inventing* this scheme, bizarre though it may sound to many, I just *remembered* it, say from the pages of an old issue of Electronics or Electronic Design magazine.

Does anyone here remember what I'm thinking of?

John Savard