Maker Pro

Unusual Floating-Point Format Remembered?

Quadibloc

Jan 1, 1970
krw said:
I forgot the '91. Ok, so I was a year off. 1MB was a *lot* of
doughnuts in '64.

It would have been exactly 9,437,184 doughnuts in *any* year (the
System/360 adding an extra bit to each byte for either parity or, in a
few cases, ECC).

John Savard
 
Quadibloc

Jan 1, 1970
Jerry said:
We did figure that out. The year was 1964; our mainframe, an RCA clone
of some IBM version, had less than a megabyte of core.

In the year 1965, RCA came out with their Spectra 70, a clone of the
IBM 360.

Before that, the RCA 601 computer competed with the 7090, and it had
both binary and decimal floating-point - but neither it, nor the RCA
501 or the RCA 301 were clones of any IBM machine, even though they
had similarities to what they competed with.

John Savard
 
Jerry Avins

Jan 1, 1970
Quadibloc said:
In the year 1965, RCA came out with their Spectra 70, a clone of the
IBM 360.

Before that, the RCA 601 computer competed with the 7090, and it had
both binary and decimal floating-point - but neither it, nor the RCA
501 or the RCA 301 were clones of any IBM machine, even though they
had similarities to what they competed with.

The program (named AdAPT, by the way) was originally written to use
decimal floating point. From your chronology, it was the Spectra 70 that
killed it.

We learned a lot from writing that program. It's easy enough to draw a
polygon by going from point to point and closing. Filling it is not so
simple. The finite (and selectable) size of the "pen" is a minor
complication. Determining which is the inside is a bit tricky. Our first
method picked an arbitrary point near one of the lines, then calculated
the net angle that a line from it to each vertex in turn rotated
through. If zero, the point is outside. If 2pi, inside. Calculated with
a dot product, if I remember.
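Jerry's inside test is the classic winding-number method. A minimal Python sketch (the function and variable names are mine): each edge's signed angle comes from atan2 of the cross and dot products of successive vertex vectors, so it does use the dot product he remembers, plus the cross product to get the sign.

```python
import math

def winding_number(px, py, poly):
    """Sum the signed angles subtended at (px, py) by each polygon edge.
    For a simple polygon: ~0 means outside, ~+/-2*pi means inside."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i][0] - px, poly[i][1] - py
        x2, y2 = poly[(i + 1) % n][0] - px, poly[(i + 1) % n][1] - py
        # cross product gives the sine of the turn, dot product the cosine
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return total

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
inside = abs(winding_number(2, 2, square)) > math.pi    # True: winds 2*pi
outside = abs(winding_number(9, 9, square)) > math.pi   # False: winds ~0
```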

Patterns were made by projecting apertures onto film as the head moved.
Line width depended on exposure, so tangential speed had to remain
substantially constant during all motions.

Gerber learned a lot too, some of which we taught them. That's another tale.

Jerry
 
The program (named AdAPT, by the way) was originally written to use
decimal floating point. From your chronology, it was the Spectra 70 that
killed it.

We learned a lot from writing that program. It's easy enough to draw a
polygon by going from point to point and closing. Filling it is not so
simple. The finite (and selectable) size of the "pen" is a minor
complication. Determining which is the inside is a bit tricky. Our first
method picked an arbitrary point near one of the lines, then calculated
the net angle that a line from it to each vertex in turn rotated
through. If zero, the point is outside. If 2pi, inside. Calculated with
a dot product, if I remember.

Patterns were made by projecting apertures onto film as the head moved.
Line width depended on exposure, so tangential speed had to remain
substantially constant during all motions.

Gerber learned a lot too, some of which we taught them. That's another tale.

How did you make the pen stop in time?

/BAH
 
Jerry Avins

Jan 1, 1970
[email protected] wrote:

...
How did you make the pen stop in time?

Gerber's projection head had a rotary variable-density filter in the
light path. It was turned open-loop with a stepping motor to a position
that depended on the current fraction of the ramp cycle. The relative
speeds of the two axes were determined by binary rate multipliers
depending on the specified slope. It was all discrete logic (and a lot
of fun).
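For readers unfamiliar with binary rate multipliers: over a full cycle of 2^n input clocks, an n-bit BRM passes exactly `rate` of them through, so two BRMs fed from one clock, with rate words proportional to dx and dy, step the two axes at the commanded slope. A Python sketch of the stage logic (modelled loosely on the classic 7497-style BRM; the names are mine):

```python
def brm_output(rate, nbits):
    """Binary rate multiplier: over 2**nbits input clocks, pass exactly
    `rate` of them through.  Input clock k yields an output pulse when
    the rate word has a 1 at the counter stage that toggles on clock k
    (the MSB of the rate word gates every second clock, and so on)."""
    pulses = []
    for k in range(1, (1 << nbits) + 1):
        stage = (k & -k).bit_length() - 1        # which counter bit toggles
        fires = stage < nbits and bool(rate & (1 << (nbits - 1 - stage)))
        pulses.append(fires)
    return pulses

# Two BRMs driven by the same clock step the axes in the ratio dx:dy,
# which is how the plotter held the specified slope.
dx, dy = 5, 3
x_steps = sum(brm_output(dx, 3))    # 5 pulses per 8 clocks
y_steps = sum(brm_output(dy, 3))    # 3 pulses per 8 clocks
```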

The rotary filter in the projection head as delivered was geared, and
eventually the mechanism hammered itself apart. We replaced the gears
with a toothed belt. Not only did that setup last the life of the plotter,
but it made the room a lot quieter. (I didn't bother to reverse the
stepper motor. I simply turned the filter over.)

Jerry
 
No, but SIX years later the MAXIMUM was 1MB. Did you miss that? I
doubt you had 32KB in '64.


The bigger 360s could be configured with 512KB pretty much from their
introduction. The 50, 60 and 70 were announced (April '64) with 512KB
capability, but of course the 60 and 70 never shipped, and IIRC, the
65, 67 and 75 which replaced those two models were all announced (a year
later) with 1MB. The 50 and larger models could also have a
considerable amount of external memory expansion (up to 8MB total on
the 50, some of the other models were 4MB or 6MB), but I don't
remember when that became available.

I remember working on a 360/50 (this was in the late seventies), with
512KB of internal core and a separate, third party, semiconductor
(!!), 512KB expansion chassis (for a whopping 1MB total).
 
[email protected] wrote:

...


Gerber's projection head had a rotary variable-density filter in the
light path. It was turned open-loop with a stepping motor to a position
that depended on the current fraction of the ramp cycle. The relative
speeds of the two axes were determined by binary rate multipliers
depending on the specified slope. It was all discrete logic (and a lot
of fun).

The rotary filter in the projection head as delivered was geared, and
eventually the mechanism hammered itself apart. We replaced the gears
with a toothed belt. Not only did that setup last the life of the plotter,
but it made the room a lot quieter. (I didn't bother to reverse the
stepper motor. I simply turned the filter over.)

Thank you. I'm not a hardware type and do not know enough to
read between your lines. But I do have a glimmer of what
you described.

How did you keep the pen nib from leaking? Remember when pens
would acquire that ink blob mixed with paper dust?
When not in use, was the pen always off the paper or on the paper?

/BAH
 
Jerry Avins

Jan 1, 1970
Thank you. I'm not a hardware type and do not know enough to
read between your lines. But I do have a glimmer of what
you described.

How did you keep the pen nib from leaking? Remember when pens
would acquire that ink blob mixed with paper dust?
When not in use, was the pen always off the paper or on the paper?

We didn't use pens. A rarely used option was scribing coated film, but
most of the work was done by projecting light through apertures of
various shapes onto photographic film with the projection head I
described above. The head contained a wheel with 24? apertures that
could be selected on command like the tools in an NC machine. We had an
inventory of many more apertures than would fit the wheel at one time.
An aperture could be removed and remounted so that its image was
repeatably relocated within a step. Gerber's apertures were etched in
metal and not intended for frequent replacement. They had two holes to
match pins in the wheel. Mine were photographic images on thin Plexiglas
drilled with one hole to fit a pin tightly and another larger hole to
serve as a rotation stop. A tapered wedge in the larger hole held the pin
against the stop. Eventually, we built a mechanical slit whose length
could be varied and which could be rotated to be normal to any direction
of motion. I believe that Gerber adapted that approach in some of its
plotters.

Do you want more detail? This is getting a bit afield, but I can understand
your interest.

Jerry
 
MooseFET

Jan 1, 1970
Jerry Avins wrote:

(snip)


I suppose I do think it is better.

There is a discussion on another newsgroup about the usefulness of
64 bit processing, that the only need for it is for the large address
space, which is needed relatively rarely. I added that there may
be some problems where 64 bit fixed point is important enough.

Consider the case where you have 64 bit arithmetic in hardware,
including the ability to multiply a 64 bit value by a 64 bit
fraction. (That is, the high half of the 128 bit product of
two 64 bit integers.)


There are situations where the 32 bit value isn't enough before the
FFT is done. The one that springs to mind is moving a SQUID around in
the earth's magnetic field and looking for a small variation in the
magnetic field.

The noise level of a low temperature SQUID is more than 32 bits down
from the strength of the earth's field.
 
glen herrmannsfeldt

Jan 1, 1970
Jerry Avins wrote:

(snip)
Don't frown. The dynamic range encountered in an FFT is predictable only
within wide limits. Consider two 64K FFTs with input quantities in the
same range. In one, the energy is spread over the entire spectrum, while
with the other it is concentrated at a single frequency. Do you suggest
providing 16 bits of headroom? It might be reasonable in some
circumstances, but is it better than floating point?

I suppose I do think it is better.

There is a discussion on another newsgroup about the usefulness of
64 bit processing, that the only need for it is for the large address
space, which is needed relatively rarely. I added that there may
be some problems where 64 bit fixed point is important enough.

Consider the case where you have 64 bit arithmetic in hardware,
including the ability to multiply a 64 bit value by a 64 bit
fraction. (That is, the high half of the 128 bit product of
two 64 bit integers.)

-- glen
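Taking "the high half of the 128-bit product" gives a multiply-by-fraction with no floating point at all: treat the second operand as a binary fraction with the radix point above its top bit. A sketch in Python (whose unbounded integers let the shift stand in for the hardware's high-half result; the names are mine):

```python
def mulhi_u64(a, frac):
    """Return the high 64 bits of the 128-bit product a * frac,
    i.e. a scaled by the binary fraction frac / 2**64."""
    assert 0 <= a < 1 << 64 and 0 <= frac < 1 << 64
    return (a * frac) >> 64

half = 1 << 63                 # the fraction 1/2
quarter = 1 << 62              # the fraction 1/4
mulhi_u64(300, half)           # 150
mulhi_u64(10**18, quarter)     # 250000000000000000
```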
 
Jerry Avins

Jan 1, 1970
glen said:
Jerry Avins wrote:

(snip)


I suppose I do think it is better.

There is a discussion on another newsgroup about the usefulness of
64 bit processing, that the only need for it is for the large address
space, which is needed relatively rarely. I added that there may
be some problems where 64 bit fixed point is important enough.

Consider the case where you have 64 bit arithmetic in hardware,
including the ability to multiply a 64 bit value by a 64 bit
fraction. (That is, the high half of the 128 bit product of
two 64 bit integers.)

Glen,

Thanks. I'll stop bending over backwards and fighting my inclinations.

I'm by nature disposed to agree that integer arithmetic is safer
whenever it can be managed (see my discussion of the AdAPT program for
our Gerber); so much so that I usually have to resist the impulse to do
everything that way. (I like to program in assembler, too. Psyching out
a compiler is a real pain for me, but so is scheduling a pipeline.) I'm
also unaccustomed to wide data paths, having been involved with data
acquisition and factory machine control with 8-bit processors. I stuck
with 8-bitters long after 16 bits was common in order to have systems in
which I knew all of the code. They were (and are) fast enough to manage
two conveyor lines with all interlocks and picking robots, schedule
maintenance, log operations, run two consoles, and run a word processor
for commenting the logs. (4 MHz Z-80) The real world moves slowly.

Jerry
 
Nick Maclaren

Jan 1, 1970
|> (snip regarding fixed point FFT)
|>
|> But is 64 bits enough? I would guess it so, but to do it you
|> need (if you want it fast) a processor to generate the high
|> 64 bits of the 128 bit product. 64 bit processors should be
|> able to do that.

Essentially, the only computational problem with FFTs is the memory
access pattern; in all other respects, it is as civilised an
algorithm as you could hope to find. The number of bits needed
is the required number of bits plus the number of bits needed to
index the array. It isn't even very sensitive to whether the
arithmetic is rounded or not!

Until you get into quite ridiculous array sizes, 64 bits is ample.
Well, so is 52/53. 32 sometimes isn't, and 23/24 often isn't.
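Nick's rule (data bits plus index bits) comes from the worst case where all the energy lands in one bin: a DC input of amplitude A into an N-point unscaled DFT produces N*A in bin 0, i.e. log2(N) bits of growth. A quick check in Python, using Jerry's 64K example (the variable names are mine):

```python
# Worst case for fixed-point FFT growth: a DC input of amplitude A puts
# all the energy in bin 0, whose value is N * A -- log2(N) bits of
# growth, hence "data bits plus index bits".
N = 1 << 16                      # 64K points, 16 index bits
A = 32767                        # 16-bit signed full scale
peak = N * A                     # unscaled DC bin
bits = peak.bit_length() + 1     # +1 for the sign bit
# 16 data bits + 16 index bits = 32: comfortable in 64, tight in 32
```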


Regards,
Nick Maclaren.
 
glen herrmannsfeldt

Jan 1, 1970
MooseFET wrote:
(snip regarding fixed point FFT)
There are situations where the 32 bit value isn't enough before the
FFT is done. The one that springs to mind is moving a SQUID around in
the earth's magnetic field and looking for a small variation in the
magnetic field.
The noise level of a low temperature SQUID is more than 32 bits down
from the strength of the earth's field.

But is 64 bits enough? I would guess it so, but to do it you
need (if you want it fast) a processor to generate the high
64 bits of the 128 bit product. 64 bit processors should be
able to do that.

-- glen
 
64 bits provides a range of +/- 9,223,372,036,854,775,807. What is the
ratio of the forces of Brownian motion in room temperature air and of a
major earthquake?

You should ask an economist/banker type. They deal with funny
money and are not constrained by physical laws and the number
of atoms in existence. There was a guy in my other newsgroup
who talked about not having enough, IIRC, but I cannot remember
details. Do you want me to introduce you?

<snip>

/BAH
 
Jerry Avins

Jan 1, 1970
glen said:
MooseFET wrote:
(snip regarding fixed point FFT)




But is 64 bits enough?

64 bits provides a range of +/- 9,223,372,036,854,775,807. What is the
ratio of the forces of Brownian motion in room temperature air and of a
major earthquake?
I would guess it so, but to do it you
need (if you want it fast) a processor to generate the high
64 bits of the 128 bit product. 64 bit processors should be
able to do that.

Jerry
 
Nick Maclaren

Jan 1, 1970
|> > MooseFET wrote:
|> > (snip regarding fixed point FFT)
|> >
|> > But is 64 bits enough?
|>
|> 64 bits provides a range of +/- 9,223,372,036,854,775,807. What is the
|> ratio of the forces of Brownian motion in room temperature air and of a
|> major earthquake?

And, if you have 10^12 data points, the effective range drops to
9 million to 1.


Regards,
Nick Maclaren.
 
MooseFET

Jan 1, 1970
MooseFET wrote:

(snip regarding fixed point FFT)


But is 64 bits enough?

In the specific case I was thinking of, 64 would be enough. I can
imagine a case where it wouldn't be but not a "real life" one. The
case I gave can also be solved by applying a high pass filter before
the FFT and then adjusting the results.

I would guess it so, but to do it you
need (if you want it fast) a processor to generate the high
64 bits of the 128 bit product. 64 bit processors should be
able to do that.

Yes, a Y = X + A*B/(2^N) instruction would be a nice feature, even if N
was restricted to only certain values.
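MooseFET's proposed Y = X + A*B/(2^N) is a fixed-point multiply-accumulate with a programmable post-shift. A sketch of plausible unsigned 64-bit semantics in Python (wraparound modelled with a mask, as a 64-bit register would behave; the name is mine):

```python
MASK64 = (1 << 64) - 1

def fmac_shift(x, a, b, n):
    """Y = X + A*B / 2**N: form the full 128-bit product, shift right
    by N, then add with 64-bit wraparound (modelled with a mask)."""
    return (x + ((a * b) >> n)) & MASK64

fmac_shift(100, 3, 1 << 62, 62)    # 100 + 3 = 103
fmac_shift(0, 300, 1 << 63, 64)    # 300 * 0.5 = 150
```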
 
MooseFET

Jan 1, 1970
|> MooseFET wrote:

|> (snip regarding fixed point FFT)
|>
|> But is 64 bits enough? I would guess it so, but to do it you
|> need (if you want it fast) a processor to generate the high
|> 64 bits of the 128 bit product. 64 bit processors should be
|> able to do that.

Essentially, the only computational problem with FFTs is the memory
access pattern;

If you are doing special hardware for an FFT, the addressing is fairly
easy to implement.
The operations involved are all "add like" in operation.
 
Nick Maclaren

Jan 1, 1970
|>
|> If you are doing special hardware for an FFT, the addressing is fairly
|> easy to implement.
|> The operations involved are all "add like" in operation.

That's not the problem.

The problem is that all current memory technologies rely on the data
being accessed in contiguous 'blocks'; it requires a LOT more money
and watts to make true random access efficient. And there is no way
to assign arrays to blocks that doesn't cause some passes of the FFTs
to access the data in a very inefficient pattern.
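The stride problem Nick describes shows up directly in the index pattern of an in-place radix-2 FFT: the butterfly stride doubles every pass, so the final passes pair elements half the array apart. A small Python illustration (decimation-in-time ordering assumed; the helper name is mine):

```python
def butterfly_pairs(n, stage):
    """Index pairs touched in one stage of an in-place radix-2 DIT FFT.
    The pair stride doubles each stage, so the last stages pair
    elements half the array apart."""
    half = 1 << stage
    step = half << 1
    return [(base + j, base + j + half)
            for base in range(0, n, step)
            for j in range(half)]

butterfly_pairs(8, 0)   # [(0, 1), (2, 3), (4, 5), (6, 7)]
butterfly_pairs(8, 2)   # [(0, 4), (1, 5), (2, 6), (3, 7)]
```

For a 2^20-point transform, the final stage pairs indices 2^19 apart: one useful element per memory block, which is exactly the pattern block-oriented memories handle worst.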


Regards,
Nick Maclaren.
 
Jerry Avins

Jan 1, 1970
You should ask an economist/banker type. They deal with funny
money and are not constrained by physical laws and the number
of atoms in existence. There was a guy in my other newsgroup
who talked about not having enough, IIRC, but I cannot remember
details. Do you want me to introduce you?

No thanks. There are always situations that call for more MORE *MORE*.
One of my first delivered assembly-language programs added using triple
precision. Of course, that was on an 8-bit machine :) (So why doesn't
the DEC Alpha have a carry flag?)
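Jerry's triple-precision add means chaining the carry through three 8-bit limbs by hand; on a machine without a carry flag (his Alpha jab), the carry must instead be recovered with an unsigned compare. A minimal Python sketch of the limb loop (the names are mine):

```python
def add_triple(a, b):
    """Add two 24-bit numbers held as three 8-bit limbs each,
    least-significant limb first, propagating the carry manually."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & 0xFF)     # what the 8-bit ADD leaves in the register
        carry = s >> 8           # what the carry flag would hold
    return out, carry

add_triple([0xFF, 0xFF, 0x00], [0x01, 0x00, 0x00])   # ([0, 0, 1], 0)
```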

Jerry
 