Jerry Avins
Jerry said: was found to be unobjectionable in times past.
Objectionable! Damned spell checker!
Jerry
Jerry said: glen herrmannsfeldt wrote:
But in either case, the granularity is much coarser than binary. That
was found to be unobjectionable in times past.
MooseFET wrote: [....] If you have a multiplier, it can be used to do the bit aligning and
the normalization steps of an add. Since one of the terms going to
the multiplier has only a single bit high, you can route either
through the FFT's bit order reverser if you need to shift in the
right (vs. left) direction.
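The multiplier-as-shifter trick MooseFET describes can be sketched in software. This is my own illustration, not anything from the thread: the 8-bit width and the function names are assumptions. Multiplying by a one-hot operand 2**k is a left shift by k, and wrapping that multiply in a bit-order reverser (as an FFT address reverser provides) turns it into a right shift.

```python
WIDTH = 8  # assumed datapath width for this sketch

def bit_reverse(x, width=WIDTH):
    """Reverse the bit order of x, as an FFT address reverser would."""
    r = 0
    for _ in range(width):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def shift_left_via_multiply(x, k):
    """Left shift by k done as a multiply by a one-hot operand (2**k)."""
    return (x * (1 << k)) & ((1 << WIDTH) - 1)

def shift_right_via_multiply(x, k):
    """Right shift by k: reverse bit order, shift left, reverse back."""
    return bit_reverse(shift_left_via_multiply(bit_reverse(x), k))
```

In hardware the point is that no separate barrel shifter is needed: the existing multiplier and the existing bit-reverser do double duty.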
In many FPGA implementations, an adder or multiplier is used for one
task only. Think hardware. A gate can be used for many different tasks
before it is connected, but only one afterward.
I understand normalizing binary with a barrel shifter. How is decimal
normalized?
FFTs in binary have reversed bit order addressing at one stage. What is
the storage order of a decimal FFT?
Do decimal trees make as efficient use of time and space as binary
trees? What does it mean to branch on a digit?
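For what it's worth, the standard answer to the storage-order question is that a radix-r FFT produces digit-reversed order in base r, of which binary bit reversal is just the r = 2 case. A sketch (the function name is mine):

```python
def digit_reverse(i, ndigits, radix=10):
    """Reverse the base-`radix` digits of index i (ndigits digits wide)."""
    r = 0
    for _ in range(ndigits):
        i, d = divmod(i, radix)
        r = r * radix + d
    return r

# A radix-10 FFT of 10**3 points would store element 123 at address 321;
# with radix=2 this reduces to the familiar bit-reversed addressing.
```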
MooseFET wrote:
(snip)
It can, if you don't need them for the multiply. Note that
you need both prenormalization (align the radix point before
add/subtract) and post normalization (remove high order
zero digits).
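The two normalization steps glen names can be sketched on a toy decimal format. This is purely illustrative (a 4-digit mantissa of my choosing, truncating rather than rounding), not any real machine's format:

```python
W = 4                              # assumed mantissa width in decimal digits
LO, HI = 10 ** (W - 1), 10 ** W    # normalized mantissa range: LO <= |m| < HI

def fp_add(m1, e1, m2, e2):
    """Add two toy decimal floats (value = m * 10**e), truncating."""
    # Prenormalization: align the radix points by denormalizing the
    # operand with the smaller exponent.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    shift = e1 - e2
    m2 = m2 // 10 ** shift if shift < W else 0
    m, e = m1 + m2, e1
    # Postnormalization: fold a carry digit back in, or shift out the
    # high-order zero digits left behind by cancellation.
    while abs(m) >= HI:
        m //= 10
        e += 1
    while m != 0 and abs(m) < LO:
        m *= 10
        e -= 1
    return m, e
```

For binary the postnormalization shift is what the barrel shifter does; here it is a decimal digit shift instead.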
This is not true in all cases. A logic section can connect to a bus
or a MUX so that more than one thing can be routed onto its input.
I was, in this case, referring to a bit-slice implementation that reused
some of the hardware in strange ways.
Jerry said: But in either case, the granularity is much coarser than binary. That
was found to be unobjectionable in times past.
If you want to beat a more conventional processor you have to
do many operations per cycle. If you can't do that, then there
isn't much reason to use the FPGA.
My preference is a systolic
array, though there are other possibilities. The routing slows
the FPGA down compared to a top end DSP, so you want at least
20 or so operations per clock cycle per FPGA, more likely closer
to 100. You might do that with a bus and MUX, but that would
be rare.
Another reason is that you have something else to do that needs about
half of the FPGA you can buy, and you want a low chip/pin count. This is
more along the lines I was thinking of. Besides, I was speaking of
what could be done, not what was the best idea.
A FIR filter is very much a systolic array with some non-ALUed steps
in it, so I suspect that you would find a huge number of folks who
agree with you on that.
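The FIR-as-systolic-array point can be made concrete with a behavioural model of the transposed form: the input is broadcast to every tap, each tap fires one multiply-accumulate per cycle, and partial sums march through a register chain. A sketch of mine, not any particular FPGA implementation:

```python
def fir_transposed(coeffs, samples):
    """Transposed-form FIR: y[n] = sum over k of coeffs[k] * x[n-k]."""
    n = len(coeffs)
    regs = [0] * n            # one pipeline register per adder; regs[n-1] stays 0
    out = []
    for x in samples:
        # Output adder: tap 0's product plus the arriving partial sum.
        out.append(coeffs[0] * x + regs[0])
        # All taps update in parallel in hardware; modelled as one clock tick.
        regs = [coeffs[i + 1] * x + regs[i + 1] for i in range(n - 1)] + [0]
    return out
```

Feeding in an impulse returns the coefficients in order, which is a quick sanity check that the register chain is wired the right way round.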
Dropping back a level, one of the reasons that I disbelieve in the
current dogma that floating-point should be separated from integer
arithmetic at the logical unit level is the following:
The most important basic floating-point primitives at a level above
the individual operations are dense matrix multiply and FFT. Both
have the property that their scaling can be generally determined in
advance, and the classic approach of converting from and to floating-
point at the beginning and end and actually doing the work in fixed-
point loses no accuracy.
This would mean that only one set of arithmetic units was needed, and
would SIGNIFICANTLY simplify the logic actually obeyed, with the obvious
reduction in time and power consumption. It used to be done, back
in the days of discrete logic, and is really a return to regarding
floating-point as scaled fixed-point!
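The convert-at-the-edges scheme can be sketched with a dot product (the inner loop of a dense matrix multiply). The Q16 scale factor here is my own choice for illustration; the point is that once the scaling is fixed in advance, the whole accumulation runs in integer arithmetic with a single rescale at the end:

```python
SCALE = 1 << 16    # assumed Q16 fixed point: value = integer / SCALE

def to_fixed(xs):
    """Convert floating-point inputs to fixed point once, up front."""
    return [round(x * SCALE) for x in xs]

def fixed_dot(a, b):
    """Dot product done entirely in integer arithmetic."""
    acc = sum(x * y for x, y in zip(a, b))   # products carry SCALE**2
    return acc / SCALE ** 2                  # one conversion back at the end

xs = [0.5, -1.25, 2.0]
ys = [4.0, 0.75, 0.125]
# fixed_dot(to_fixed(xs), to_fixed(ys)) agrees with the float dot product,
# since these values are exactly representable at this scaling.
```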
Now, I think that you embedded DSP people can see the advantages of
being able to do that, efficiently. The program gets the convenience
of floating-point, but key library routines get the performance of
fixed point, and the conversion isn't the major hassle that it usually
is at present.
I don't expect to see it :-(
Nick said: |>
|> > |> Do decimal trees make as efficient use of time and space as binary
|> > |> trees? What does it mean to branch on a digit?
|> >
|> > Irrelevant. Both of those are independent of the storage representation.
|>
|> So memory addresses are to remain binary? How do you expect pointer
|> arithmetic to be implemented?
Eh? Why?
I am old enough to remember when the properties of branching were
often tied to the representation, but there are fewer and fewer
people who are. Conditional branching almost always follows a
comparison, and comparisons are as implementable in decimal as in
binary.
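One way to see that comparison is base-independent: comparing packed BCD operands (one decimal digit per nibble) is the same most-significant-first digit scan as a binary compare. A sketch of mine:

```python
def bcd_compare(a, b, ndigits):
    """Return -1, 0, or 1 comparing two unsigned packed-BCD values."""
    for i in reversed(range(ndigits)):
        da = (a >> (4 * i)) & 0xF    # extract decimal digit i of a
        db = (b >> (4 * i)) & 0xF
        if da != db:
            return -1 if da < db else 1
    return 0
```

In fact, because BCD nibbles 0-9 preserve ordering, an ordinary unsigned integer compare of the packed words gives the same answer, so the branch logic downstream is literally unchanged.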
Similarly, you can implement binary, decimal or ternary trees on
computers with any pointer representation - INCLUDING ones where
pointers are opaque objects and have NO representation as far as
the program is concerned! And it's been done, too.
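A small illustration of trees over opaque pointers (my own sketch): here a "pointer" is just a handle into a node table, with no arithmetic on it anywhere, yet insertion and search work exactly as with machine addresses.

```python
class Tree:
    """Binary search tree whose 'pointers' are opaque handles."""

    def __init__(self):
        self.nodes = []   # handle -> [key, left_handle, right_handle]

    def insert(self, handle, key):
        if handle is None:
            self.nodes.append([key, None, None])
            return len(self.nodes) - 1   # the handle is an opaque token
        node = self.nodes[handle]
        side = 1 if key < node[0] else 2
        node[side] = self.insert(node[side], key)
        return handle

    def contains(self, handle, key):
        while handle is not None:
            k, left, right = self.nodes[handle]
            if key == k:
                return True
            handle = left if key < k else right
        return False
```

Nothing here cares whether handles are binary, decimal, or row numbers in a table, which is the point being made above.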
The assumption was that integer
arithmetic would remain binary. Is that still a good assumption?
Since integers are integers, and since there really are a lot of useful
things that you can do, and algorithms that can be easily implemented iff
the representation is binary, I can't see that we will ever see decimal
integer representations, particularly not for address arithmetic.
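Two examples of the "easy iff binary" algorithms alluded to above (my own illustrations): address alignment and power-of-two testing are single mask operations in binary, with no comparably cheap decimal counterpart.

```python
def align_down(addr, alignment):
    """Round addr down to a power-of-two alignment with one AND."""
    return addr & ~(alignment - 1)

def is_power_of_two(n):
    """True iff n has exactly one bit set: clearing the lowest set
    bit (n & (n - 1)) leaves zero only for powers of two."""
    return n > 0 and n & (n - 1) == 0
```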
Nick said: |>
|> Pointers being opaque is a property of a language. I think we are
|> discussing the properties of future machines here.
Not on a capability machine!
|> Regardless of what
|> the programmer sees, pointers must be incremented, decremented, and
|> indexed relative to. Do you expect the arithmetic that will do that to be
|> binary or decimal? Will memory addresses be binary?
Eh? All of those properties are independent of the representation, so
much so that they are equivalent even if the representation doesn't use
a base! The binary/decimal issue is TOTALLY irrelevant to them.
|> I used a mainframe that did floating point in decimal (Spectra 70?).
|> When a switch was made to a machine that did floating point in binary,
|> an important program stopped working. Rather than take any chances with
|> future changes, the program (including all the trig and arbitrary
|> scaling) was rewritten in integer.
Without making any attempt to find out why it stopped working? Aw,
gee. Look, I have been writing, using and porting top-quality numerical
software that is expected to work, source unchanged, on floating-point
of any base from 2 to 256 (decimal included) since about 1970. It isn't
hard to do - IF you know what you are doing.