

> it seems that a MAX macro for LUA_UNSIGNED is not #defined in luaconf.h
> so that i had to use ULLONG_MAX from <limits.h> directly instead.
you could also use
lua_Unsigned umax = ~ 0 ;
to determine this upper maximum.
this has the advantage that you do not need to know what C integer type
is actually used as Lua integer type.


Jim, please reply to messages instead of creating a new one. This
keeps threads together. Thanks.


> it seems that a MAX macro for LUA_UNSIGNED is not #defined in luaconf.h
> so that i had to use ULLONG_MAX from <limits.h> directly instead.
you could also use
lua_Unsigned umax = ~ 0 ;
to determine this upper maximum.
this has the advantage that you do not need to know what C integer type
is actually used as Lua integer type.
Doesn't that need to be:
lua_Unsigned umax = ~(lua_Unsigned)0;
to cover all of the cases that the C standard supports? It's been a while since I've had my head that deep in this, but I know that 0 is of type signed int. What I don't remember is if sign extension applies BEFORE or AFTER converting to unsigned; that expression might end up being 0x00000000FFFFFFFF if it converts to unsigned before doing sign extension.
/s/ Adam


In C, 0 is an expression of type int; writing ~0 creates a unary expression taking an int: the ~ operator only applies the standard integer promotion (to int) and no other longer type (if you pass a short, that short will be promoted to an int), then it computes the bit-inversion (i.e. it is a simple arithmetic addition: subtract the value from the maximum of its input type and add the minimum of that type); as this input type is an int, it returns an int equal to (INTMAX - 0 + INTMIN). This gives the value you use in the initializer of your declared variable: it's only at that time that there will be a conversion from int to lua_Unsigned! This conversion is possibly lossy if the target type cannot contain the full range of your initializer value, and because you did not provide an explicit typecast of the result, the compiler should warn you. As well, converting a signed int to an unsigned number will use sign propagation before truncating the extra high bits: this is not lossy if all discarded high bits are equal to the highest bit kept in the target type (this preserves the value); otherwise you have an overflow and the compiler will also warn you (this happens when the implicit conversion of the initializer value yields a value whose range or precision is reduced). Using a static typecast, however, will silently remove that warning. So
lua_Unsigned umax = ~0;
is almost equivalent to
lua_Unsigned umax = (lua_Unsigned)(~0);
except that you will not be warned by the compiler in case of loss of range or precision (static typecasts force the compiler to remain silent and discard all bits in excess, even if this changes the numeric value).
So yes, you need to place a typecast like you said: lua_Unsigned umax = ~(lua_Unsigned)0;
This static typecast is always safe because 0 and 1 are the only two numeric constants that can be typecast without any loss of range or precision. Now the ~ operator will correctly work on a long unsigned int (lua_Unsigned) as expected and will return a value of that type, which can safely be used to initialize a variable of exactly the same type.


On 15.05.19 at 04:02, Coda Highland wrote:
> On Tue, May 14, 2019 at 8:47 PM Jim < [hidden email]> wrote:
>
>>> it seems that a MAX macro for LUA_UNSIGNED is not #defined in luaconf.h
>>> so that i had to use ULLONG_MAX from <limits.h> directly instead.
>>
>> you could also use
>>
>> lua_Unsigned umax = ~ 0 ;
>>
>> to determine this upper maximum.
>> this has the advantage that you do not need to know what C integer type
>> is actually used as Lua integer type.
>>
>
> Doesn't that need to be:
>
> lua_Unsigned umax = ~(lua_Unsigned)0;
>
> to cover all of the cases that the C standard supports? It's been a while
> since I've had my head that deep in this, but I know that 0 is of type
> signed int. What I don't remember is if sign extension applies BEFORE or
> AFTER converting to unsigned; that expression might end up being
> 0x00000000FFFFFFFF if it converts to unsigned before doing sign extension.
lua_Unsigned umax = ~(lua_Unsigned)0;
lua_Unsigned umax = -1;
are both fine.
lua_Unsigned umax = ~0;
works for two's complement machines because it is equivalent to the `-1`
case. In the worst case it might be undefined behavior (on one's
complement machines which cannot represent negative zeros).
The issue you mentioned above applies to
lua_Unsigned umax = ~0u;
>
> /s/ Adam
>
Philipp


Philippe:
On Wed, May 15, 2019 at 4:33 AM Philippe Verdy < [hidden email]> wrote:
> In C, 0 is an expression of type int; writing ~0 creates an unary expression taking an int: the ~ operator only uses the standard type promotion (to int) by default and no other longer type (if you pass a short, that short will be promoted to an int), then it computes the bit-inversion (i.e. its a simple arithmetic addition: substract the value from the maximum of its input type and adds the minimum of that type); as this input type is an int, it returns an int equal to (INTMAX - 0 + INTMIN). This gives the value you use in the initializer of your declared variable: It's only at that time that there will be a conversion from int to lua_Unsigned ! This conversion being possibly lossy if the target type cannot contain the full range for you initializer value (and because you did not provide an explicit typecast to the result, the compiler should warn you.
You've got a weird concept of simple. Why go through this arithmetic
addition stuff ( which needs, among other things, a second operand ),
when the programmer requested a bitwise negation and many instruction
sets have a perfectly good unary negation instruction? ( and others
have similar logic operations which are easier for logic stuff ).
Francisco Olarte.


On Wed, May 15, 2019 at 12:53 AM Philipp Janda < [hidden email]> wrote:
lua_Unsigned umax = ~(lua_Unsigned)0;
lua_Unsigned umax = -1;
are both fine.
lua_Unsigned umax = ~0;
works for two's complement machines because it is equivalent to the `-1`
case. In the worst case it might be undefined behavior (on one's
complement machines which cannot represent negative zeros).
The issue you mentioned above applies to
lua_Unsigned umax = ~0u;
Okay, so it does the widening / sign extension before it does the conversion to unsigned. This makes sense and it's probably the
To be honest, I should have known this. I had to build a self-hosting C++ compiler from first principles as a class assignment in 2013-2014. But it's been YEARS and I don't remember everything still. (I also never finished it because the class instructors vanished around the time that we started working on the actual code generation part.)
/s/ Adam


On Wed, May 15, 2019 at 4:40 AM Francisco Olarte < [hidden email]> wrote: Philippe:
On Wed, May 15, 2019 at 4:33 AM Philippe Verdy <[hidden email]> wrote:
> In C, 0 is an expression of type int; writing ~0 creates an unary expression taking an int: the ~ operator only uses the standard type promotion (to int) by default and no other longer type (if you pass a short, that short will be promoted to an int), then it computes the bit-inversion (i.e. its a simple arithmetic addition: substract the value from the maximum of its input type and adds the minimum of that type); as this input type is an int, it returns an int equal to (INTMAX - 0 + INTMIN). This gives the value you use in the initializer of your declared variable: It's only at that time that there will be a conversion from int to lua_Unsigned ! This conversion being possibly lossy if the target type cannot contain the full range for you initializer value (and because you did not provide an explicit typecast to the result, the compiler should warn you.
You've got a weird concept of simple. Why go through this arithmetic
addition stuff ( which needs, among other things, a second operand ),
when the programmer requested a bitwise negation and many instruction
sets have a perfectly good unary negation instruction? ( and others
have similar logic operations which are easier for logic stuff ).
Francisco Olarte.
This is talking about the standard behavior, not about the implementation details. He's got things a little bit mixed up, which ended up making him wrong, and that's making the communication noisy, but I went and looked up the C language specification to be sure.
It's not the ~ operator that has that behavior. The ~ operator is always bitwise negation. ~0 on a 1's complement machine would be a negative zero. ~0 on a sign-magnitude machine would be INT_MIN (which is equal to -INT_MAX on those platforms instead of -INT_MAX - 1).
However, the way the spec is written, the conversion of a negative signed number into an unsigned integer is done by repeatedly adding or subtracting one more than the maximum value of the destination type until the value is in range. This means that if you start with -1 (which is out of range for uint64_t) you have to add ULONG_MAX+1 repeatedly until the value DOES fit. This is independent of the underlying bit pattern of the number, whether it's 2's complement, 1's complement, or sign-magnitude. This means that regardless of how the compiler/CPU implements it, (uint64_t)-1 MUST be equal to ULONG_MAX.
On a 2's complement machine, this is just a sign extension: copy the most significant bit into all of the new bits of the larger type. The same operation works when casting int32_t to int64_t or to uint64_t.
On a 1's complement machine or a sign-magnitude machine, casting to int64_t and uint64_t are different operations. Casting to int64_t on 1's complement is a sign extension, and on a sign-magnitude machine you have to copy the old sign bit into the new sign bit and set the old one to zero. Either way, the resulting value is -1LL, as the spec demands. Casting -1 to uint64_t is NOT just casting to int64_t and then reinterpreting the bit pattern like it is on a 2's complement machine. Because the spec demands that it must be equivalent to adding ULONG_MAX+1, you have to do a sign extension and then add 1 if you're using a 1's complement machine, and if you're using a sign-magnitude machine it's a bitwise negation followed by setting the sign bit (back) to 1.
/s/ Adam


I'm wrong? You've yourself made a (more complex and intricate) demonstration that this was an additive operation (but using an incremental loop, which is obviously never done like this when these operations can be associated and the number of incremental steps is known from the start).
Being an additive operation does not mean it is not "bitwise". But in fact the term "bitwise" you use is the incorrect term: on BCD machines (generally using sign-magnitude) it would be wrong. It's additive, yes, but using modular arithmetic (which does not require any loop). All the trick is knowing the (positive) modulus, which is tied to the signed/unsigned integer "sizes" (i.e. the number of digits stored, and the base of these digits, which is 2 on all modern machines or 10 on some legacy ones). This "size" is not a number of "char"s in C, as returned by the sizeof operator, but is more like the "precision" of floating-point types. It may even be possible that machines don't use integers at all but always floating-point numbers, in base 2 or 10, with sign and magnitude, or a sign and base-complement representation, and still some exponent field in a non-negative range for integers and a rounding operation for the extra precision: in all these possible representations operations do not occur "bitwise", but base-2 operations are emulated to implement "~", "|", "&", or "^".


On Wed, May 15, 2019 at 9:17 AM Philippe Verdy < [hidden email]> wrote: I'm wrong? you've yourself made a (more complex and intricated) demonstration that this was an additive operation (but using an incremental loop, which is obviously never done like this when these operations can be associated and the number of incremental loops is known from the start).
The only thing ACTUALLY wrong was when you said (INT_MAX - 0 + INT_MIN), which doesn't correlate to anything in the spec or in any implementation. There were also some places where your communication was unclear; it seemed to suggest that you were attributing behaviors to operations that didn't apply.
The CORE of your point was correct, though, and my post was meant not to contradict you there but to try to highlight the specifics of what you were pointing out.
Being an additive operation does not mean it is not "bitwise". But in fact the term "bitwise" you use is the incorrect term: on BCD machines (generally using sign-magnitude) it would be wrong. It's additive, yes, but using modular arithmetic (which does not require any loop). All the trick is knowing the (positive) modulus, which is tied to the signed/unsigned integer "sizes" (i.e. the number of digits stored, and the base of these digits, which is 2 on all modern machines or 10 on some legacy ones). This "size" is not a number of "char"s in C, as returned by the sizeof operator, but is more like the "precision" of floating-point types. It may even be possible that machines don't use integers at all but always floating-point numbers, in base 2 or 10, with sign and magnitude, or a sign and base-complement representation, and still some exponent field in a non-negative range for integers and a rounding operation for the extra precision: in all these possible representations operations do not occur "bitwise", but base-2 operations are emulated to implement "~", "|", "&", or "^".
No, I was right. You CAN'T make a C-specification-compliant compiler on a BCD architecture or one that exclusively uses IEEE 754 floating point values. (You COULD make one that uses a different internal floating point representation, but the way IEEE 754 works precludes it.) The specification demands certain constraints on what constitutes a valid integer type, and it explicitly calls out that it must be able to represent all 2^n distinct values using n value bits. You can't do this at all in BCD, and IEEE 754 floating point omits the leading 1, so you don't have distinct representations of 0 and 1 using the value bits alone. It also requires that ~E == max - E if E is of an unsigned type. This means the point is moot: either your C compiler isn't standards-compliant on those systems, or you have to emulate an acceptable type.
/s/ Adam


Coda:
On Wed, May 15, 2019 at 3:41 PM Coda Highland < [hidden email]> wrote:
> On Wed, May 15, 2019 at 4:40 AM Francisco Olarte < [hidden email]> wrote:
...
>> You've got a weird concept of simple. Why go through this arithmetic
...
> This is talking about the standard behavior, not about the implementation details. He's got things a little bit mixed up, which ended up making him wrong, and that's making the communication noisy, but I went and looked up the C language specification to be sure.
Oh, I inferred what he was trying to point out ( although I think he did
it the other way round; I did not read Philippe's post in detail, I
think he has got all the programming-language concepts mixed from his
personal world view, I just like to yank his chain a bit occasionally
).
> It's not the ~ operator that has that behavior. The ~ operator is always bitwise negation. ~0 on a 1's complement machine would be a negative zero. ~0 on a sign-magnitude machine would be INT_MIN (which is equal to -INT_MAX on those platforms instead of -INT_MAX - 1).
Yeah, that is what I was pointing at. I'm nearly sure C has the not
operator because the pdp7/11 had(s) NOT or XOR R,imm in the
instruction set.
> However, the way the spec is written,
...
I've read the spec, but, when he said "then it computes the
bit-inversion (i.e. its a simple arithmetic addition: substract the
value from the maximum of its input type and adds the minimum of that
type); as this input type is an int, it returns an int equal to
(INTMAX - 0 + INTMIN)." it seemed ( and still seems to me ) he was
trying to redefine not arithmetically.
FOS.


On Wed, May 15, 2019 at 12:56 PM Francisco Olarte < [hidden email]> wrote: > However, the way the spec is written,
...
I've read the spec, but, when he said "then it computes the
bit-inversion (i.e. its a simple arithmetic addition: substract the
value from the maximum of its input type and adds the minimum of that
type); as this input type is an int, it returns an int equal to
(INTMAX - 0 + INTMIN)." it seemed ( and still seems to me ) he was
trying to redefine not arithmetically.
Ah, okay, that makes more sense. That's what I meant about the way it was phrased -- I thought he meant that's what the type coercion was doing (it's not).
As a description of the arithmetic effect of the ~ operator... *crunches some numbers* It works for 1's complement and 2's complement for both signed and unsigned, but it only works for unsigned on a sign-magnitude system. (And INTMIN for an unsigned type is, obviously, 0.) This is fine according to the C spec because the spec only carries demands on the arithmetic behavior of bitwise operators for unsigned operations, but it does mean you can't quite say that expression is universally true across all conformant C implementations.
/s/ Adam


Adam? ( due to sig vs source-email-address discrepancies I do not know
the proper way to start mails to you )...
mmm..., reading this....
On Wed, May 15, 2019 at 8:31 PM Coda Highland < [hidden email]> wrote:
> On Wed, May 15, 2019 at 12:56 PM Francisco Olarte < [hidden email]> wrote:
....
>> (INTMAX - 0 + INTMIN)." it seemed ( and still seems to me ) he was
>> trying to redefine not arithmetically.
I should have quoted not, or used ~, but it seems I made myself understood.
> Ah, okay, that makes more sense. That's what I meant about the way it was phrased -- I thought he meant that's what the type coercion was doing (it's not).
He may have. I find his thought process puzzling, at least.
> As a description of the arithmetic effect of the ~ operator... *crunches some numbers* It works for 1's complement and 2's complement for both signed and unsigned, but it only works for unsigned on a sign-magnitude system. (And INTMIN for an unsigned type is, obviously, 0.) This is fine according to the C spec because the spec only carries demands on the arithmetic behavior of bitwise operators for unsigned operations, but it does mean you can't quite say that expression is universally true across all conformant C implementations.
Yep. That's why I normally try to find ways to leave this to compiler
implementers and avoid mixing logic and arithmetic behaviour; they
work hard at it and generally get things right. And I also act as if
"undefined behaviour" means "we will insert a chunk of code to brick
your hard disk" and not "we'll choose one of the two apparently normal
results" which some people use; it saves a lot of trouble in the not so
long run. It doesn't usually bite, but when it does, it does it hard.
Francisco Olarte.


"(INTMAX - 0 + INTMIN)" was incorrect, yes; I should have reread it because there were sign errors.


On 5/15/19, Coda Highland < [hidden email]> wrote:
> However, the way the spec is written, the conversion of a negative signed
> number into an unsigned integer is done by repeatedly adding or subtracting
> one more than the maximum value of the destination type until the value is
> in range. This means that if you start with -1 (which is out of range for
> uint64_t) you have to add ULONG_MAX+1 repeatedly until the value DOES fit.
> This is independent of the underlying bit pattern of the number, whether
> it's 2's complement, 1's complement, or sign-magnitude. This means that
> regardless of how the compiler/CPU implements it, (uint64_t)-1 MUST be
> equal to ULONG_MAX.
> On a 2's complement machine, this is just a sign extension: copy the most
> significant bit into all of the new bits of the larger type. The same
> operation works when casting int32_t to int64_t or to uint64_t.
> On a 1's complement machine or a sign-magnitude machine, casting to int64_t
> and uint64_t are different operations. Casting to int64_t on 1's complement
> is a sign extension, and on a sign-magnitude machine you have to copy the
> old sign bit into the new sign bit and set the old one to zero. Either way,
> the resulting value is -1LL, as the spec demands. Casting -1 to uint64_t is
> NOT just casting to int64_t and then reinterpreting the bit pattern like it
> is on a 2's complement machine. Because the spec demands that it must be
> equivalent to adding ULONG_MAX+1, you have to do a sign extension and then
> add 1 if you're using a 1's complement machine, and if you're using a
> sign-magnitude machine it's a bitwise negation followed by setting the sign
> bit (back) to 1.
thanks for clarifying, i did not see why
lua_Unsigned umax = -1 ;
is fine here and hence used the
lua_Unsigned umax = ~ (lua_Unsigned) 0 ;
solution since it was easier to understand how it works.
but both are correct.
a LUA_UNSIGNED_MAX preprocessor macro in the Lua headers
could not hurt, though.


On Thu, May 16, 2019 at 3:13 AM Francisco Olarte < [hidden email]> wrote: Adam? ( due to sig vs source-email-address discrepancies I do not know
the proper way to start mails to you )...
I answer to either name. My name is Adam, my handle is Coda, and I go by either of them both online or IRL.
On Wed, May 15, 2019 at 8:31 PM Coda Highland <[hidden email]> wrote:
> On Wed, May 15, 2019 at 12:56 PM Francisco Olarte <[hidden email]> wrote:
....
>> (INTMAX  0 + INTMIN)." it seemed ( and still seems to me ) he was
>> trying to redefine not arithmetically.
I should have quoted not, or used ~, but it seems I made myself understood.
In the end, yeah. Threw me off for a while but it made sense in the end.
Yep. Thats why I normally try to find ways to leave this to compiler
implementers and avoid mixing logic and arithmetic behaviour, they
work hard at it and generally get things right. And I also act as if
"undefined behaviour" means "we will insert a chunk of code to brick
your hard disk" and not "we'll choose one of the two apparently normal
results" which some people use, saves a lot of trouble in the not so
long run, it doesn't usually bite, but when it does it does it hard.
Francisco Olarte.
In this case, it's not undefined behavior. The spec dictates that it must be implementation-defined behavior. You can't count on it being consistent across compilers and platforms, but each compiler is expected to pick a behavior and stick with it. It has to be consistent and sensible and not break anything too badly. (Throwing an error is actually considered not breaking anything too badly, because it means the code doesn't proceed with an unpredictable state.)
/s/ Adam

