Maker Pro

Since when is not having a bug patentable?

Spehro Pefhany

Error!!! 4095 would have overflowed the data destined for our 16-bit
DAC.

or 4097

Yes, 4097 is safe, just as 4096.001 would be. Do try to be careful
about the realities.

My favorite '>>' issue with C is that (on an x86, but probably not on
an ARM)

for:-

unsigned int x;

x >> 33; is the same as x >> 1

(iow it only looks at the 5 LSBs of the shift count - very 8048ish
behavior)

On an ARM I'd expect the intuitive result of 0 (32-bit int) for any x
if the number of shifts is >= 32.

This is covered by the standard (6.5.7), which says the behaviour is
undefined if you try to right shift by as many or more bits as are in
the operand (or if the shift count is negative) - so don't do that.

Fortunately there are a limited number of these gotchas, so you run
through them sooner or later.
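
A quick sketch of how this one bites (my own toy example for a typical
x86 target, nothing from the project):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t x = 0x80000000u;
    volatile unsigned n = 33;   /* runtime shift count so the compiler emits a real shift */

    /* Shifting a 32-bit value by 33 is undefined behaviour in C. The x86
     * variable-count shift instruction masks the count to 5 bits, so this
     * typically prints the same as x >> 1; another target, or an optimiser
     * that can see the constant, may give 0 or something else entirely. */
    printf("x >> 33        = 0x%08X\n", (unsigned)(x >> n));
    printf("x >> (33 & 31) = 0x%08X\n", (unsigned)(x >> (n & 31u)));
    return 0;
}

(On an ARM, a register-specified shift uses the bottom byte of the count
register, so a count of 33 really does give 0 there - which is why the
two targets disagree.)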
 
Jasen Betts

Just found a corner-case bug in my pulse generator firmware. My guy
did

DAC_CODE = TIME_VAL * CAL_FCTR / 4096

(in C, presumably in lowercase)

where all the variables are declared to be signed 32-bit integers. But
the compiler implemented the divide as an unsigned right-shift. He had
to examine the assembly code to see what was going on.

Sounds like that compiler is broken; all the operands are signed, even
the 4096 is signed, so there's no reason for it to generate an unsigned
shift
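
For what it's worth, C99 and later require a signed divide to truncate
toward zero, which a bare arithmetic shift does not do for negative
values, so a shift on its own can never stand in for that divide. A
little sketch with a made-up value (not the real TIME_VAL/CAL_FCTR data):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t product = -5000;             /* stand-in for TIME_VAL * CAL_FCTR */

    int32_t by_divide = product / 4096;  /* C truncates toward zero: -1 */
    int32_t by_shift  = product >> 12;   /* arithmetic shift floors: -2 (and right-shifting
                                            a negative value is implementation-defined anyway) */

    printf("product / 4096 = %ld\n", (long)by_divide);
    printf("product >> 12  = %ld\n", (long)by_shift);
    return 0;
}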
 
Frank Buss

Jasen said:
Sounds like that compiler is broken; all the operands are signed, even
the 4096 is signed, so there's no reason for it to generate an unsigned
shift

I think this is a general problem with many compilers and language
standards: there is an informal description of the language, like the C
language standard, and then the compiler vendors try to implement it.
You can't prove that the implementation and the generated code are
right. To be 100% sure you have to verify the assembler output and
compare it with the intent of the source code, or at least you have to
write many tests.
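
For this particular divide, "many tests" can even be a brute-force sweep
that checks the compiled expression against a reference built only from
unsigned division (a rough sketch of the idea; dac_scale() is just a
hypothetical stand-in for the firmware expression):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the expression under test. */
static int32_t dac_scale(int32_t product)
{
    return product / 4096;
}

int main(void)
{
    long failures = 0;

    for (int32_t p = -1000000; p <= 1000000; p++)
    {
        /* Independent reference: truncation toward zero built from
         * unsigned division, which involves no signed right shift. */
        int32_t expect = (p < 0) ? -(int32_t)((uint32_t)-p / 4096u)
                                 : (int32_t)((uint32_t)p / 4096u);

        if (dac_scale(p) != expect)
            failures++;
    }

    printf("%ld mismatches\n", failures);
    return 0;
}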

There are better ideas, like in Haskell, which has a few simple core
language constructs and then a "prelude" that defines the more advanced
concepts in terms of the basic language. Or Fortress (
http://en.wikipedia.org/wiki/Fortress_(programming_language) ),
which already looks like mathematics. But I don't know a language where
you can prove, down to the assembly level, that the compiler, the code
it generates and the runtime system (if needed) are correct, and where
the language definition itself is based on mathematics, like axioms and
provable theorems. Maybe for non-mathematicians it would be more
difficult to write programs in it, but I'm sure it would lead to fewer
bugs.
 
Jasen Betts

Of course GCC for ARM is defective.

it works fine here.


#include <stdint.h>

int32_t blah(int32_t a, int32_t b)
{
    return a * b / 4096;
}

compiles with optimisation level 2 as

blah:
@ Function supports interworking.
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
@ link register save eliminated.
mul r0, r1, r0          @ r0 = a * b
add r3, r0, #4080       @ r3 = product + 4080 ...
cmp r0, #0              @ is the product negative?
add r3, r3, #15         @ ... + 15, i.e. product + 4095 (bias split to suit ARM immediates)
movlt r0, r3            @ if the product was negative, use the biased value
mov r0, r0, asr #12     @ arithmetic shift right by 12 = divide by 4096, rounding toward zero
bx lr                   @ return
.size blah, .-blah

I don't speak "arm", but it looks like it's doing an arithmetic shift
right after conditionally adding a bias of 4095 to negative values to
get the standard rounding towards zero, and when tested it appears to
do exactly that, both under QEMU and on real hardware.
 
Jasen Betts

Really? Let's see. DEADBEEF arithmetically right shifted 12 will result
in FFFDEADB. Looks a lot like rounding toward zero to me. As for not
being identical to an actual divide, I don't think so. I have had to
chase that at the gate and transistor level.

No, rounding it up to FFFDEADC would be rounding towards zero:
0xDEADBEEF is -559038737, and -559038737 / 4096 truncated toward zero
is -136484 = 0xFFFDEADC, whereas the arithmetic shift gives the floor,
-136485 = 0xFFFDEADB.
 
josephkk

It is a fence post error when you compare dividing by 0x1000 signed
with the corresponding bit shift on two's complement negative numbers.
The only case where the two methods agree on a negative operand is when
the division is exact, i.e. the remainder is zero.


I think you know what I mean here, but to avoid further ambiguity,
here is a concrete example with a divisor large enough to show both
effects.

If we compare divide by 4 and >> 2 on two's complement integers:

 int   /4   >>2   8-bit
 -10   -2   -3    0xF6
  -9   -2   -3    0xF7
  -8   -2   -2    0xF8
  -7   -1   -2    0xF9
  -6   -1   -2    0xFA
  -5   -1   -2    0xFB
  -4   -1   -1    0xFC
  -3    0   -1    0xFD
  -2    0   -1    0xFE
  -1    0   -1    0xFF
   0    0    0    0x00
   1    0    0    0x01
   2    0    0    0x02
   3    0    0    0x03
   4    1    1    0x04
   5    1    1    0x05
   6    1    1    0x06
   7    1    1    0x07
   8    2    2    0x08

Note that /4 rounds towards zero, creating a block of seven inputs that
map to zero, whereas the arithmetic shift gives exactly four inputs for
every output value. (Leading FFFFFF digits removed from the hex column
for clarity.)

And the C snippet to generate it (yes, I know printf is deprecated):

#include <stdio.h>

int main(void)
{
    int i, j, k;
    for (i = -10; i <= 10; i++)
    {
        j = i / 4;
        k = i >> 2;
        printf("%3i %3i %3i %X\n", i, j, k, i);
    }
    return 0;
}


Putting it another way, -1 is invariant under arithmetic right shift
(and so is zero).

Very clear. Thank you.

?-)
 
Martin Brown

it works fine here.


int32_t blah(int32_t a,int32_t b)
{
return a * b / 4096;
}

compiles with optimisation level 2 as

blah:
@ Function supports interworking.
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
@ link register save eliminated.
mul r0, r1, r0
add r3, r0, #4080
cmp r0, #0
add r3, r3, #15
movlt r0, r3
mov r0, r0, asr #12
bx lr
.size blah, .-blah

I don't speak "arm", but it looks like it's doing an arithmetic shift
right after conditionally adding a bias of 4095 to negative values to
get the standard rounding towards zero, and when tested it appears to
do exactly that, both under QEMU and on real hardware.

Thank you Jasen. Looks increasingly like user error.
Why am I not at all surprised?
 
Martin Brown

So you *are* calling John a liar. OK, your cards are on the table.

They are now. I was prepared to give him the benefit of the doubt until
his latest ridiculous claims. The C language has been well defined for
the handling of mixed signed and unsigned integer arithmetic as far back
as K&R first edition (see page 184, para. "6.5 Unsigned"). You may not
like the definition (or even the idea of allowing implicit type
conversions at all), but the original C standard does define the correct
behaviour here.
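
As a reminder of what those conversion rules actually do (a minimal
example of my own, not anything from John's code): when a signed int
meets an unsigned int, the signed operand is converted to unsigned,
which is well defined but regularly surprises people.

#include <stdio.h>

int main(void)
{
    int      a = -1;
    unsigned b = 1u;

    /* Usual arithmetic conversions: a is converted to unsigned, so -1
     * becomes UINT_MAX and the comparison is false. Well defined by the
     * standard, just not what intuition expects. */
    if (a < b)
        printf("-1 < 1u is true\n");
    else
        printf("-1 < 1u is false (a was converted to unsigned)\n");

    return 0;
}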

It is an interesting philosophical question as to whether someone who
clearly does not know what they are talking about can be lying. The
latter term implies that they know the truth but are hiding it.

I think ignorant and arrogant is a far better description. John made a
bogus claim and then later asserted that he "looked it up" - at that
point he was lying, but until then he was simply misinformed.

There is no C reference book that I know of that would support his
position, not even "C for Dummies". Actually I was a bit unfair to that
book - it does not make such crass mistakes.
You're a clueless asshole.

<snipped unread crap>

You must have been looking in the mirror. You had a choice but you have
conclusively proven that you know even less about the subject than John.

You are utterly clueless.
 
Martin Brown

Anyone with a clue would rescale the numerator "scale factor" to
compensate. And for a 16-bit DAC or less you could choose the scale
factor so that the high word of the 32-bit product is the required
result, with no shift needed.
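
A rough sketch of that trick (my own illustration; the widths, names and
calibration value are invented, not the actual firmware): hold the
calibration factor as a Q16 fixed-point constant so the DAC code is
simply the upper half of a 32-bit product.

#include <stdint.h>

/* Hypothetical calibration constant scaled by 2^16 (Q16 fixed point). */
#define CAL_Q16  ((uint32_t)(0.3217 * 65536.0 + 0.5))   /* example value only */

static uint16_t dac_code(uint16_t time_val)
{
    uint32_t product = (uint32_t)time_val * CAL_Q16;    /* 16 x 16 -> 32-bit multiply */
    return (uint16_t)(product >> 16);                   /* the high half-word is the DAC code */
}

On a part with a widening multiply the ">> 16" is just a matter of
reading the upper half of the result register, which is presumably what
"no shift needed" means.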

The purpose here is to establish whether the fault is in the expression
being compiled or at the code generation peephole optimiser stage.
My favorite '>>' issue with C is that (on an x86, but probably not on
an ARM)

for:-

unsigned int x;

x >> 33; is the same as x >> 1

(iow it only looks at the 5 LSBs of the shift count - very 8048ish
behavior)

On an ARM I'd expect the intuitive result of 0 (32-bit int) for any x
if the number of shifts is >= 32.

The behaviour for shifts that are negative or >= bit_width is decidedly
compiler dependent and cannot be relied upon. Some smarter compilers
will spot such expressions at compile time and fold them to zero, but
most, fed x >> y, will generate code for a shift by (y modulo bit_width).
This is covered by the standard (6.5.7), which says the behaviour is
undefined if you try to right shift by as many or more bits as are in
the operand (or if the shift count is negative) - so don't do that.

Fortunately there are a limited number of these gotchas, so you run
through them sooner or later.

The secure coding website run by Carnegie Mellon deals with defensive
strategies for this under INT34-C and many other potential C gotchas.

https://www.securecoding.cert.org/c...f+bits+or+more+bits+than+exist+in+the+operand
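
In that spirit, a minimal defensive wrapper in the INT34-C style (my own
sketch, not code from the CERT page) simply validates the count before
shifting:

#include <stdint.h>

/* Right-shift a 32-bit value, returning 0 for any count that would
 * otherwise be undefined behaviour (count >= width). */
static uint32_t safe_shr32(uint32_t value, unsigned count)
{
    if (count >= 32u)
        return 0u;          /* out-of-range count: pick a defined result instead of UB */
    return value >> count;
}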

And as far as unexpected promotions of integer types are concerned:

https://www.securecoding.cert.org/c.../INT02-C.+Understand+integer+conversion+rules

Pity that John didn't know where to look it up.
 
Tom Del Rosso

John said:
I was discussing a problem with a C compiler, and expressing my
general feeling that C is an obsolete language full of hazards.

This was revealed in 1993.

====================== Begin =======================
CREATORS ADMIT UNIX, C HOAX

In an announcement that has stunned the computer industry, Ken Thompson,
Dennis Ritchie and Brian Kernighan admitted that the Unix operating
system and C programming language created by them is an elaborate April
Fools prank kept alive for over 20 years. Speaking at the recent
UnixWorld Software Development Forum, Thompson revealed the following:

"In 1969, AT&T had just terminated their work with the GE/Honeywell/AT&T
Multics project. Brian and I had just started working with an early
release of Pascal from Professor Niklaus Wirth's ETH labs in
Switzerland and we were impressed with its elegant simplicity and power.
Dennis had just finished reading 'Bored of the Rings', a hilarious
National Lampoon parody of the great Tolkien 'Lord of the Rings'
trilogy. As a lark, we decided to do parodies of the Multics
environment and Pascal. Dennis and I were responsible for the operating
environment. We looked at Multics and designed the new system to be as
complex and cryptic as possible to maximize casual users' frustration
levels, calling it Unix as a parody of Multics, as well as other more
risque allusions. Then Dennis and Brian worked on a truly warped
version of Pascal, called 'A'. When we found others were actually
trying to create real programs with A, we quickly added additional
cryptic features and evolved into B, BCPL and finally C. We stopped
when we got a clean compile on the following syntax:

for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("| "+(*u/4)%2);

To think that modern programmers would try to use a language that
allowed such a statement was beyond our comprehension! We actually
thought of selling this to the Soviets to set their computer science
progress back 20 or more years. Imagine our surprise when AT&T and
other US corporations actually began trying to use Unix and C! It has
taken them 20 years to develop enough expertise to generate even
marginally useful applications using this 1960's technological parody,
but we are impressed with the tenacity (if not common sense) of the
general Unix and C programmer. In any event, Brian, Dennis and I have
been working exclusively in Pascal on the Apple Macintosh for the past
few years and feel really guilty about the chaos, confusion and truly
bad programming that have resulted from our silly prank so long ago."

Major Unix and C vendors and customers, including AT&T, Microsoft,
Hewlett-Packard, GTE, NCR, and DEC have refused comment at this time.
Borland International, a leading vendor of Pascal and C tools, including
the popular Turbo Pascal, Turbo C and Turbo C++, stated they had
suspected this for a number of years and would continue to enhance their
Pascal products and halt further efforts to develop C. An IBM spokesman
broke into uncontrolled laughter and had to postpone a hastily convened
news conference concerning the fate of the RS-6000, merely stating 'VM
will be available Real Soon Now'. In a cryptic statement, Professor
Wirth of the ETH institute and father of the Pascal, Modula 2 and Oberon
structured languages, merely stated that P. T. Barnum was correct.

In a related late-breaking story, usually reliable sources are stating
that a similar confession may be forthcoming from William Gates
concerning the MS-DOS and Windows operating environments. And IBM
spokesman have begun denying that the Virtual Machine (VM) product is an
internal prank gone awry.
{COMPUTERWORLD 1 April}
{contributed by Bernard L. Hayes}
======================= End ========================
 
josephkk

John said:
[1] The food truck thing started in LA, I think, and it's a big deal
here now. Mexican, seafood, rotisserie chicken, Indian, and one
improbable truck that offers Irish and Eritrean food. I stood in a
long line and had a hot dog and a soda for $10 last week. I read that
food trucks are getting popular in Paris.


We had them in 1972 at Ft. Rucker, Alabama that baked & sold pizza in
front of your barracks

Mobile food vendors have been around for a long time. Working many port
towns and other transportation hubs since at least the 1700s (fish and
chips in London, minimum), maybe since Rome or ancient Greece.

?-)
 
josephkk

John Larkin wrote:


[1] The food truck thing started in LA, I think, and it's a big deal
here now. Mexican, seafood, rotisserie chicken, Indian, and one
improbable truck that offers Irish and Eritrean food. I stood in a
long line and had a hot dog and a soda for $10 last week. I read that
food trucks are getting popular in Paris.


We had them in 1972 at Ft. Rucker, Alabama that baked & sold pizza in
front of your barracks

Mobile food vendors have been around for a long time. Working many port
towns and other transportation hubs since at least the 1700s (fish and
chips in London, minimum), maybe since Rome or ancient Greece.


Well, you must be a lot older than me! ;-)

Shhhh!

?-)))
 