John Larkin
OK, I've got a 64-bit unsigned binary number, in a pair of 32-bit
registers, in an MC68332 CPU. Its maximum value is 1e13, which is 10
seconds measured in picoseconds, so the MS longword can only get up to
about 2328 or something like that. I'm looking for an efficient way to
turn this into a decimal ASCII string, so I basically need to convert it
into bcd. The bcd output will need to have 14 digits. "Numerical
Recipes" doesn't address mundane problems like this.
One way is to create a BCD buffer, zero it, and map each nibble of the
input through a 16-entry bcd lookup table, and sum all the bcd terms.
The 68332 has a fairly efficient BCD add instruction, so adding a bcd
table entry to the running sum isn't ghastly. The lookup table needs
12 entries (one per active nibble), each holding 16 bcd numbers, each
being a 14-digit bcd number packed into 7 bytes (call it 8 with
padding), grand total 1536 bytes.
Creating this table would be a minor nuisance; I'd need a
jillion-digit calculator. Well, PowerBasic does have a 64-bit integer
type, so maybe I can write a little Basic program to gen the tables.
There's even a BCD data type, up to 18 digits.
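For reference, the table generator could be a little C program instead of Basic; this is just a sketch (to_bcd14 and gen_table are names I made up), emitting assembler DC.B lines with the 8-byte entries described above:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack v into 14 BCD digits: a zero pad byte, then 7 packed bytes,
   MS digit first, so each table entry is 8 bytes. */
static void to_bcd14(uint64_t v, uint8_t out[8])
{
    out[0] = 0;                         /* pad byte */
    for (int i = 7; i >= 1; i--) {
        unsigned lo = (unsigned)(v % 10); v /= 10;
        unsigned hi = (unsigned)(v % 10); v /= 10;
        out[i] = (uint8_t)((hi << 4) | lo);
    }
}

/* Emit the 12 x 16 entry table as assembler DC.B lines:
   entry [pos][n] holds BCD(n * 16^pos).  For inputs up to 1e13 the
   pos-11 nibble is always zero, so those nonzero entries are never
   looked up (they'd overflow 14 digits anyway). */
static void gen_table(void)
{
    for (int pos = 0; pos < 12; pos++) {
        for (unsigned n = 0; n < 16; n++) {
            uint8_t bcd[8];
            to_bcd14((uint64_t)n << (4 * pos), bcd);
            printf("  DC.B ");
            for (int i = 0; i < 8; i++)
                printf("$%02X%s", bcd[i], i < 7 ? "," : "\n");
        }
    }
}
```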
One nice algorithm for binary-to-bcd is to just keep dividing by 10
and posting the remainders as the bcd digits, in reverse order of
course. The 68332 can divide a 64-bit value by 10, with a remainder,
but the quotient is limited to 32 bits, not enough. I could use this
mode to convert the low 32-bit part to bcd, then use four nibble table
lookups on the high half to finish things off. Since I also need a
32-bit-to-bcd thing somewhere else, maybe that isn't such a gross
idea.
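Worth noting: there's a standard long-division trick that gets a full 64-bit quotient out of a divide whose quotient is limited to 32 bits. Divide the high longword by 10 first; then the remainder glued onto the low longword is under 10*2^32, so its quotient fits in 32 bits. A C model (function names hypothetical, uint64_t standing in for the register pair):

```c
#include <stdint.h>

/* Divide hi:lo (a 64-bit value in two 32-bit halves, like the 68332
   register pair) by 10 using only divides whose quotient fits in 32
   bits, the way DIVU.L works.  Returns the decimal digit (remainder). */
static unsigned div10_pair(uint32_t *hi, uint32_t *lo)
{
    uint32_t qhi = *hi / 10;
    uint32_t r   = *hi % 10;               /* r <= 9 */
    /* r:lo < 10 * 2^32, so this quotient fits in 32 bits -- on the
       68332 it would be one 64/32 DIVU.L. */
    uint64_t d   = ((uint64_t)r << 32) | *lo;
    uint32_t qlo = (uint32_t)(d / 10);
    unsigned rem = (unsigned)(d % 10);
    *hi = qhi;
    *lo = qlo;
    return rem;
}

/* Pump out all 14 digits, LS digit first. */
static void digits14(uint64_t v, uint8_t out[14])
{
    uint32_t hi = (uint32_t)(v >> 32), lo = (uint32_t)v;
    for (int i = 0; i < 14; i++)
        out[i] = (uint8_t)div10_pair(&hi, &lo);
}
```

That's two divides per digit, 14 digits, and no table at all.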
If the raw input could be scaled to 64-bit fractional format,
successive multiplies by 10 will pump out digits, MS digit first.
Gotta think about that one... it may have embarrassing rounding
problems.
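A sketch of how the fractional scheme might behave, in C (unsigned __int128 is a GCC-ism standing in for the widening multiply; frac_to_ascii is a made-up name). Rounding the initial scale up appears to tame the rounding problem: the error stays positive and below 10^14/2^64, which is well under one digit for all 14 places since 10^14 < 2^64:

```c
#include <stdint.h>

#define TEN14 100000000000000ULL   /* 10^14: one past the largest 14-digit value */

/* Fractional-multiply conversion: scale v (< 10^14) to a 64-bit binary
   fraction of 10^14, then each multiply by 10 pushes the next decimal
   digit out above bit 63, MS digit first. */
static void frac_to_ascii(uint64_t v, char out[15])
{
    /* frac = ceil(v * 2^64 / 10^14): the ceiling keeps the error
       positive and under 1 ulp, so truncation never drops a digit */
    uint64_t frac =
        (uint64_t)((((unsigned __int128)v << 64) + TEN14 - 1) / TEN14);
    for (int i = 0; i < 14; i++) {
        unsigned __int128 p = (unsigned __int128)frac * 10;
        out[i] = (char)('0' + (int)(p >> 64));  /* digit pops out the top */
        frac = (uint64_t)p;                     /* keep the fractional part */
    }
    out[14] = '\0';
}
```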
Or maybe I could program our Xilinx FPGA to do a hardware divide,
divide a 64 bit integer by 10 and feed me the remainders. Hell, maybe
it could do the entire ascii conversion in hardware.
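If the FPGA route wins, the divider would presumably be a bit-serial compare-and-subtract job: 64 clocks per divide, with only a 5-bit remainder register. A C model of what that state machine might do (no real hardware implied here):

```c
#include <stdint.h>

/* Bit-serial restoring division by 10, the way a small FPGA state
   machine might do it: shift the dividend in MS bit first, keeping a
   5-bit remainder (it never exceeds 19 before the compare). */
static uint64_t serial_div10(uint64_t v, unsigned *rem)
{
    uint64_t q = 0;
    unsigned r = 0;
    for (int bit = 63; bit >= 0; bit--) {
        r = (r << 1) | (unsigned)((v >> bit) & 1);  /* shift in next bit */
        q <<= 1;
        if (r >= 10) { r -= 10; q |= 1; }           /* compare/subtract */
    }
    *rem = r;
    return q;
}
```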
Oh, I'm programming in bare-metal assembly.
Anybody have any tricks here?
(You comp.arch guys take it easy on me, please. I'm just a simple
engineer.)
John