Setting Float Precision in Lua.c

28 messages
Setting Float Precision in Lua.c

Albert Chan
Some operating systems, say Windows, use EXTENDED PRECISION by default
(64-bit long double instead of 53-bit double).

Example, from my Win7 laptop:

Python returns the correctly rounded sum
(thus *all* Python modules also get 53-bit float rounding):

>>> 1e16 + 2.9999           # good way to test float precision setting
10000000000000002.0  # Python got the rounding right

Lua 5.4.0 (work1)  Copyright (C) 1994-2018 Lua.org, PUC-Rio
> string.format('%.17g\n', 1e16 + 2.9999)
10000000000000004    -- wrong, rounding in long double setting

For 53-bit doubles, to get the same math results across platforms,
maybe add this to Lua.c? (even if not needed)

fesetenv (FE_PC53_ENV);






Re: Setting Float Precision in Lua.c

KHMan
On 6/5/2018 8:33 AM, Albert Chan wrote:

> Some operating systems, say Windows, use EXTENDED PRECISION by default
> (64-bit long double instead of 53-bit double).
>
> Example, from my Win7 laptop:
>
> Python returns the correctly rounded sum
> (thus *all* Python modules also get 53-bit float rounding):
>
>>>> 1e16 + 2.9999           # good way to test float precision setting
> 10000000000000002.0  # Python got the rounding right
>
> Lua 5.4.0 (work1)  Copyright (C) 1994-2018 Lua.org, PUC-Rio
>> string.format('%.17g\n', 1e16 + 2.9999)
> 10000000000000004    -- wrong, rounding in long double setting
>
> For 53 bits double, to make same math result across platform,
> maybe add this to Lua.c ? (even if not needed)
>
> fesetenv (FE_PC53_ENV);

Wrong; Lua coders should instead strive for a modicum of
competency in floating-point arithmetic and floating-point
implementations.

If someone insists on _PERFECTION_ for their floating-point output,
then IMHO the onus is on that someone to do (and maintain) the
necessary tweaks (and test regime) for that purpose.

If a calculation such as "1e16 + 2.9999" is _IMPORTANT_ for a
program, then the program should do checks to ensure that it has
been compiled per requirements on problematic platforms.*

I really do not think suppliers of programming language
implementations should be hand-holding or babysitting users who
want perfection down to the very last ULP. Those users are
cordially invited to do it on their own time and dime.


*Well, it's probably a defective program anyway, in that it is very
sensitive to digits in the ULP -- more chaotic output than reliable
output. It reminds me of the story of the aeronautical grad who
found that his sim gave wildly different results on different chip
platforms. This made his project results more or less unusable.

--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia



Re: Setting Float Precision in Lua.c

KHMan
In reply to this post by Albert Chan
On 6/5/2018 8:33 AM, Albert Chan wrote:
> Some operating systems, say Windows, use EXTENDED PRECISION by default
> (64-bit long double instead of 53-bit double).
>
> Example, from my Win7 laptop:
>
> Python returns the correctly rounded sum
> (thus *all* Python modules also get 53-bit float rounding):
>
>>>> 1e16 + 2.9999           # good way to test float precision setting
      ^^^^^^^^^^^^^

What "1e16 + 2.9999" really means is that the user wants to
utilize all 53 bits of double precision, PLUS having rounding done
just the way the user likes it.

If the program uses all 53 bits and the ULPs are very important,
that is an impossible thing to ask for, because all your rounding
due to arithmetic ops is going to hammer your ULPs. What is a
correct rounding? You would have to tread extremely carefully and
do extensive testing to see rounding effects on error accumulation
in your program and decide whether you want to trust those ULPs.
But we want perfect ULPs now? How? Why?

So what is the point of "1e16 + 2.9999"? It's only useful for
perfectionists who wish to see digit- or bit-perfect output across
all languages and platforms. Please do not see floating-point as
something that is mathematically beautiful or perfect; it is more
of a mass of engineering compromises -- when you push its
capabilities to the limits, you always have to manage the error
thing. This is a world that is very, very far away from perfect
ULP digits or bits.

Now, if you still want to change people's minds, please explain
why the idea is one that is really important to have for Lua, in
terms of how it affects normal apps and programs.


--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia



Re: Setting Float Precision in Lua.c

Albert Chan
In reply to this post by KHMan

>> On Jun 4, 2018, at 9:40 PM, KHMan <[hidden email]> wrote:
>>
>> On 6/5/2018 8:33 AM, Albert Chan wrote:
>>
>> For 53 bits double, to make same math result across platform,
>> maybe add this to Lua.c ? (even if not needed)
>> fesetenv (FE_PC53_ENV);
>
> Wrong, Lua coders should instead strive to have a modicum level of competency in floating-point arithmetic and floating-point implementations.
>
> If someone insist on _PERFECTION_ for their floating-point output, then IMHO the onus is on that someone to do (and maintain) the necessary tweaks (and test regime) for that purpose.
>
> If a calculation such as "1e16 + 2.9999" is _IMPORTANT_ for a program, then the program should do checks to ensure that it has been compiled per requirements on problematic platforms.*

The problem is that Lua code cannot access the float precision setting.
It has to be done on the C side.

The example above shows the problem of *double rounding*:
the calculation is rounded (up) to 64 bits, then again (up) to a 53-bit double.

What is the reason for using the machine's "default" setting?
It produces inconsistent results across platforms.

> I really do not think suppliers of programming language implementations should be hand-holding or babysitting users who want perfection down to the very last ULP. Those users are cordially invited to do it on their own time and dime.
>
> *Well, it's probably a defective program anyway in that it is very sensitive to digits in the ULP. More chaotic output than reliable output. Reminded me of the story of the aeronautical grad who found that his sim gave wildly different results on different chip platforms. This made his project results more or less

Some algorithms *require* getting the last ULP right.

Example: https://github.com/achan001/fsum

fsum.lua uses Shewchuk's algorithm to get the *exact* sum of numbers.
If asked for the total, it returns the correctly rounded 53-bit double.
It simply will not work with extended-precision rounding.

I had to patch Lua.c to make it work.








Re: Setting Float Precision in Lua.c

Dirk Laurie-2
2018-06-05 6:34 GMT+02:00 Albert Chan <[hidden email]>:

>
>>> On Jun 4, 2018, at 9:40 PM, KHMan <[hidden email]> wrote:
>>>
>>> On 6/5/2018 8:33 AM, Albert Chan wrote:
>>>
>>> For 53 bits double, to make same math result across platform,
>>> maybe add this to Lua.c ? (even if not needed)
>>> fesetenv (FE_PC53_ENV);
>>
>> Wrong, Lua coders should instead strive to have a modicum level of competency in floating-point arithmetic and floating-point implementations.
>>
>> If someone insist on _PERFECTION_ for their floating-point output, then IMHO the onus is on that someone to do (and maintain) the necessary tweaks (and test regime) for that purpose.
...
>> I really do not think suppliers of programming language implementations should be hand-holding or babysitting users who want perfection down to the very last ULP. Those users are cordially invited to do it on their own time and dime.
>>

I agree 100% with KHMan. Floating-point really is not, and never has
been, a topic easily handled by non-specialists. You can't expect the
designer of a subroutine library written in C (even if disguised as a
scripting language) to fix whatever the authors of the C compiler did
not.

Nicholas Higham's book "Accuracy and Stability of Numerical
Algorithms" https://epubs.siam.org/doi/book/10.1137/1.9780898718027
devotes 663 pages to the topic.

> Some algorithms *require* getting the last ULP right.
>
> Example: https://github.com/achan001/fsum
>
> fsum.lua uses Shewchuk's algorithm to get the *exact* sum of numbers.
> If asked for the total, it returns the correctly rounded 53-bit double.
> It simply will not work with extended-precision rounding.
>
> I had to patch Lua.c to make it work.

That sort of thing is way above what you can expect from Lua. Why,
they even dropped the inverse hyperbolic functions from Lua 5.3 so
that the world can see that Lua is not specialized mathematical
software.

The fact that you know Shewchuk's Algorithm, and can make that patch,
proves that you are an expert. Don't be a bumptious expert.


Re: Setting Float Precision in Lua.c

Viacheslav Usov
In reply to this post by Albert Chan
On Tue, Jun 5, 2018 at 2:33 AM, Albert Chan <[hidden email]> wrote:

For 53 bits double, to make same math result across platform, 
maybe add this to Lua.c ? (even if not needed)

fesetenv (FE_PC53_ENV);

I suspect the result of that will not be truly cross-platform. Most platforms will fail to compile that [1], but I guess there is one that you happen to use (which one is it?) where it works. To make this truly cross-platform, you should rather say in its stead:

#error I cannot set floating-point precision in a cross-platform way [2]

Cheers,
V.



Re: Setting Float Precision in Lua.c

Albert Chan
On Jun 5, 2018, at 5:59 AM, Viacheslav Usov <[hidden email]> wrote:

On Tue, Jun 5, 2018 at 2:33 AM, Albert Chan <[hidden email]> wrote:

For 53 bits double, to make same math result across platform, 
maybe add this to Lua.c ? (even if not needed)

fesetenv (FE_PC53_ENV);

I suspect the result of that will not be truly cross-platform. Most platforms will fail to compile that [1], but I guess there is one that you happen to use (which one is it?) where that works. To make this truly cross-platform, you should rather say in its stead: #error I cannot set floating-point precision in a cross-platform way [2]

Cheers,
V.



You are right.

fesetround works with C99 or higher (I use gcc 4.7.1).

I wanted the float math results to be platform independent,
but the patch itself (fesetround) is not platform independent.

Looking at Python's pyport.h, it is complicated.

Luckily, Python provides binaries to download.




Re: Setting Float Precision in Lua.c

Roberto Ierusalimschy
In reply to this post by KHMan
> If someone insist on _PERFECTION_ for their floating-point output,
> then IMHO the onus is on that someone to do (and maintain) the
> necessary tweaks (and test regime) for that purpose.

If someone insists on _PERFECTION_ for their floating-point output,
they should use the hexadecimal format. Lua has full support for
hexadecimal floating-point numerals.

-- Roberto


Re: Setting Float Precision in Lua.c

Albert Chan

> If someone insists on _PERFECTION_ for their floating-point output,
> they should use the hexadecimal format. Lua has full support for
> hexadecimal floating-point numerals.
>
> -- Roberto

Hexadecimal format won't help.

It is not that the output is off; it is the float's internal bits.
Double rounding causes the value to be off by +/- 1 ULP.

It would be OK if Lua did this consistently throughout its distribution:
bad rounding is compensated by higher internal float precision.

I prefer rounding to 53 bits. 64-bit rounding has issues:
https://www.vinc17.net/research/extended.en.html

Just pick one.



Re: Setting Float Precision in Lua.c

KHMan
On 6/6/2018 1:15 AM, Albert Chan wrote:
>
>> If someone insists on _PERFECTION_ for their floating-point output,
>> they should use the hexadecimal format. Lua has full support for
>> hexadecimal floating-point numerals.
>>
>> -- Roberto
>
> Hexadecimal format won't help

Yeah, he seems to have misunderstood me there.

> It is not the output is off, it is the float internal bits.
> Double rounding cause the value to be off by +/- 1 ULP
>
> It is OK if Lua did this consistently throughout its distribution.
> Bad rounding is compensated by higher internal float precision.
>
> I preferred rounding in 53-bits. 64-bits rounding had issues.
> https://www.vinc17.net/research/extended.en.html

Why don't you compile your binaries for SSE2 only? Even easier,
just compile 64-bit binaries. It's surprising you mention that Windows
uses extended precision by default when there is x64 on every
64-bit-capable Intel/AMD/other chip... and it has been so for many,
many years already.

Avoid the 8087 datapath, it's not hard to avoid that legacy thing.
Then everything will look like a modern 64-bit FPU datapath like
all the other chip archs, more or less, and let's hope the ULPs
are consistent between chip archs and the chips too, ha ha.

It's a banana skin, just walk around the banana skin. The rest of
us, we don't need to be walking there.

--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia




Re: Setting Float Precision in Lua.c

Albert Chan

> Why don't you compile your binaries for SSE2 only? Even easier, just compile to 64-bit binaries? Surprising you mentioned Windows uses extended precision by default when there is x64 on every 64-bit capable Intel/AMD/other chip... and has been so for many, many years already.

I already picked 53-bit rounding.
I used my own laptop's behavior just as an example, same with fsum.lua.

David Gay's dtoa.c strtod may be a better example.
With 53-bit rounding, it optimizes away common cases [1]:

strtod("123456789e-20", NULL)
= 123456789 / 1e20   -- both numbers exactly represented in double
= 1.23456789e-012    -- division guaranteed correctly rounded

> Avoid the 8087 datapath, it's not hard to avoid that legacy thing.

seems we are in agreement here.

But, if you believe in this, why not add it to Lua ?
This will automatically let everyone avoid it too.

> It's a banana skin, just walk around the banana skin. The rest of us, we don't need to be walking there.

You are already walking there, you just don't know it.
It is hidden behind Lua's default of showing 8 significant digits.

Python 3 defaults to showing just enough digits to round-trip [2],
so even a 1 ULP difference will show up in the output.

Should Lua put the banana skin in the trash ?

--

[1] https://www.exploringbinary.com/fast-path-decimal-to-floating-point-conversion/
[2] https://www.exploringbinary.com/the-shortest-decimal-string-that-round-trips-examples/


Re: Setting Float Precision in Lua.c

KHMan
On 6/6/2018 8:53 PM, Albert Chan wrote:

>
>> Why don't you compile your binaries for SSE2 only? Even easier, just compile to 64-bit binaries? Surprising you mentioned Windows uses extended precision by default when there is x64 on every 64-bit capable Intel/AMD/other chip... and has been so for many, many years already.
>
> I already picked 53-bits roundings.
> I use my own laptop behavior just as a example, same with fsum.lua
>
> David Gay's dtoa.c strtod maybe a better example.
> With 53-bits roundings, it optimized away common cases [1]
>
> strtod("123456789e-20", NULL)
> = 123456789 / 1e20   -- both numbers exactly represented in double
> = 1.23456789e-012    -- division guaranteed correct rounding

Say 123456789 is correct to 9 digits. Divide it by anything, and it
will be useful to fewer than 9 digits. Whatever perfect result to
16 digits is produced does not matter. The latter things only
matter to mathematicians looking for some kind of perfection.


>> Avoid the 8087 datapath, it's not hard to avoid that legacy thing.
>
> seems we are in agreement here.
>
> But, if you believe in this, why not add it to Lua ?
> This will automatically let everyone avoid it too.
>
>> It's a banana skin, just walk around the banana skin. The rest of us, we don't need to be walking there.
>
> You already are walking there, just dont know it.
> It is hidden behind Lua default of showing 8 significant digits.
>
> Python3 default to show just enough digits to round-trip [2]
> So, even 1 ULP difference will shows up in the output.
>
> Should Lua put the banana skin in the trash ?
>
> --
>
> [1] https://www.exploringbinary.com/fast-path-decimal-to-floating-point-conversion/
> [2] https://www.exploringbinary.com/the-shortest-decimal-string-that-round-trips-examples/

Oh no, you've drunk too much of the kool-aid. Oh, I'm aware of the
site above. Long discussion elsewhere last year. Here, read more
to mess with your mind:

http://www.exploringbinary.com/17-digits-gets-you-there-once-youve-found-your-way/

https://www.exploringbinary.com/number-of-decimal-digits-in-a-binary-fraction/

https://www.exploringbinary.com/maximum-number-of-decimal-digits-in-binary-floating-point-numbers/

You've laser-focused onto some kind of path.

I don't think I will bother to change your mind any further. You
work it out on your own.

Good luck in your request to Roberto & co.

--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia



Re: Setting Float Precision in Lua.c

KHMan
In reply to this post by Albert Chan
On 6/6/2018 8:53 PM, Albert Chan wrote:

>
>> Why don't you compile your binaries for SSE2 only? Even easier, just compile to 64-bit binaries? Surprising you mentioned Windows uses extended precision by default when there is x64 on every 64-bit capable Intel/AMD/other chip... and has been so for many, many years already.
>
> I already picked 53-bits roundings.
> I use my own laptop behavior just as a example, same with fsum.lua
>
> David Gay's dtoa.c strtod maybe a better example.
> With 53-bits roundings, it optimized away common cases [1]
>
> strtod("123456789e-20", NULL)
> = 123456789 / 1e20   -- both numbers exactly represented in double
> = 1.23456789e-012    -- division guaranteed correct rounding

Here is a different approach (the long story approach):
=======================================================
(It was bubbling in my brain so I had to type it out. If you don't
understand this, then I really cannot help any further.)

Say, all values are on a line.

A double actually represents a number that lies anywhere on
a segment of that line. It may be exactly the value of the
representation, but it can also be a little more, or a little
less. All the values in a segment need to be shoehorned into one
binary representation. It's a single binary representation, yet
the values can all be different. It's an approximation.

The examples you keep offering imply exact numbers, that is, they
are points on the line. Then in the examples, the arithmetic
operation is performed, and the FPU should round and hit another
point on the line. There is an expectation of mathematical
perfection or mathematical elegance.

When we work with actual numbers instead of ideal examples, we
always understand that when operations are performed, the result
values hardly ever hit the exact points on the line that equal a
binary representation. Instead, the result value is close, within
the segment which has that binary representation. So there is
error, and error usually accumulates.

Since a binary representation really means a segment of possible
values on the value line, when we do arithmetic with two segments,
we end up with a bigger segment. We can have many combinations of
operands and result within those segments and they are all valid
for the binary representation. But how correct are those values?
Normally we know the quality of our inputs and they are much less
than 16 digits of precision, so we often successfully manage
errors in calculations.

But some people are of the notion that when arithmetic is done on
two points on the value line, the result should hit an exact point
when such a situation arises. It appears that some people have the
first mental model (segments), others have the second mental model
(points). But if we keep thinking about all those exact points on
the line, then the problem is that values next to those points
cannot be shoehorned into beautifully exact and artificial
mathematical examples.

If we want exact calculations all the time, just use floats as
integers. We can assume the integers are exact, as points on the
value line. We also need to do things that don't mess up this
model. But once the result has a fraction, for example when a
division is done, that value is most likely no longer exactly
representable. It's an approximation.

For non-mathematicians, we work with regular numbers or data all
the time and they get processed and the end value is approximated
by the resulting binary representation. Those values do not hit
the points on the value line that are exactly the value of the
binary representations. But we have 16 decimal digits to work
with, so we format the result properly for user consumption by
rounding to much less than 16 digits of precision. This is why I
mentioned the concepts of engineering compromises versus
mathematical perfection.

So it's no problem for most of us. But if mathematicians keep
thinking about ideal situations and keep trying to hit exact
points on the value line, then they should keep on doing so and
not bother the rest of us about it.

> [snip snip snip]


--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia




Re: Setting Float Precision in Lua.c

Dirk Laurie-2
2018-06-07 4:04 GMT+02:00 KHMan <[hidden email]>:
> On 6/6/2018 8:53 PM, Albert Chan wrote:
>> strtod("123456789e-20", NULL)
>> = 123456789 / 1e20   -- both numbers exactly represented in double
>> = 1.23456789e-012    -- division guaranteed correct rounding
> Here is a different approach (the long story approach):
> =======================================================
...
> So it's no problem for most of us. But if mathematicians keep thinking about
> ideal situations and keep trying to hit exact points on the value line, then
> they should keep on doing so and not bother the rest of us about it.

Well, you two have really been talking at cross purposes. All of what
you say is true, but you think Albert does not know that. He thinks he
does.

Albert's point (see his second post) applies to the situation where
floating-point to a certain precision is all you have. In that
situation, there are algorithms that deliver additional precision by
doing clever things — but those algorithms rely on knowing what kind
of rounding the processor does, all the time, every time. Now if you
explicitly set rounding mode, you know that. If you don't set it, you
don't know. It's like seeding a pseudo-random number generator [1].

Where my point of view differs from Albert's is that he thinks it
strengthens his case to point out that Windows is an example of a
system that gives undesired results. I think that it triggers the
reaction: yet another case where Windows is sloppy — so what?

But the issue is just this: a failure to initialize something gives
undesired results.

I think it is a valid point with a very simple, cost-free workaround.

-- Dirk

[1] I use this comparison with trepidation, recalling some previous
threads on this list.


Re: Setting Float Precision in Lua.c

KHMan
On 6/7/2018 1:30 PM, Dirk Laurie wrote:

> 2018-06-07 4:04 GMT+02:00 KHMan wrote:
>> On 6/6/2018 8:53 PM, Albert Chan wrote:
>>> strtod("123456789e-20", NULL)
>>> = 123456789 / 1e20   -- both numbers exactly represented in double
>>> = 1.23456789e-012    -- division guaranteed correct rounding
>> Here is a different approach (the long story approach):
>> =======================================================
> ...
>> So it's no problem for most of us. But if mathematicians keep thinking about
>> ideal situations and keep trying to hit exact points on the value line, then
>> they should keep on doing so and not bother the rest of us about it.
>
> Well, you two have really been talking at cross purposes. All of what
> you say is true, but you think Albert does not know that. He thinks he
> does.
>
> Albert's point (see his second post) applies to the situation where
> floating-point to a certain precision is all you have. In that
> situation, there are algorithms that deliver additional precision by
> doing clever things — but those algorithms rely on knowing what kind
> of rounding the processor does, all the time, every time. Now if you
> explicitly set rounding mode, you know that. If you don't set it, you
> don't know. It's like seeding a pseudo-random number generator [1].
>
> Where my point of view differs from Albert's that he thinks it
> strengthens his case by pointing out that Windows is an example of a
> system that gives undesired results. I think that it triggers the
> reaction: yet another case where Windows is sloppy — so what?

Sloppy? Another mathematician might say that using extended
precision is better, only you guys are not using it correctly.
Opinions, everyone has a few of them.

Who's to say that one camp is the purveyor of all settings that
are correct and proper for IA32 floating point?

> But the issue is just this: a failure to initialize something gives
> undesired results.
>
> I think it is a valid point with a very simple, cost-free workaround.

Yeah, until he pops up with the next thing that needs to be
perfect. I can see it already: perfect round-tripping atod/dtoa.
Then another thing. And another thing. Have you forgotten his
recent efforts at helping Lua get the bestest, most perfect, most
awesome PRNG? The bestest only lasts until it is knocked down by a
new research paper. Is Lua in the math academia business now?

All these things need auditing. B Dawson etc have even found
glitches with MSVC's number to ASCII or vice versa conversions,
for an older library I think. So to make Lua perfect for number
games, you need the auditing, the hunting down of glitches, and on
and on and on.

More likely it is up to binary release devs who should decide
whether to take this up. Whose real-world apps must really have
perfect output to 16 decimal digits? After 1000 float ops do you
still harp on a few roundings that are not the same for a
platform? It's the end of the world? Don't scientific people
already know how to manage errors for their scientific data? This
is really something that is more useful for math-oriented
academics to parade around with.

So Albert name-drops Vincent Lefèvre in his reply. The latter is
on the gcc mailing list, and in the last few years that I noticed
him posting library announcements and such, I don't recall him ever
pushing for 'proper' default compiler settings for gcc on IA32.

So Albert still needs to persuade Roberto & co. I'm just shooting
the breeze here. Hey, good luck there. ;-)


--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia




Re: Setting Float Precision in Lua.c

Albert Chan
> So Albert name-drops Vincent Lefèvre on his reply. The latter is on the gcc mailing list and in the last few years that I noticed him posting library announcements and such I don't recall him ever pushing for 'proper' default compiler settings for gcc on IA32.

Vincent, if you are reading this, I am sorry.

Someone mentioned your new book, and you happened to have written
an article about issues with extended-precision rounding.

Good luck with your new book.




Re: Setting Float Precision in Lua.c

Dirk Laurie-2
In reply to this post by KHMan
2018-06-08 3:40 GMT+02:00 KHMan <[hidden email]>:

> Yeah, until he pops up with the next thing that needs to be perfect.
...
> So Albert name-drops ...
...
> So Albert still needs to
...

This kind of ad-hominem tirade does not become the author of many
erudite and thorough posts on Lua-L, including a groundbreaking
writeup of the Lua VM and bytecode.

One treats every proposal on merit, no matter who posted it and how
many nitpicking, annoying proposals they have made in the past.

-- Dirk


Re: Setting Float Precision in Lua.c

Roberto Ierusalimschy
In reply to this post by Viacheslav Usov
> On Tue, Jun 5, 2018 at 2:33 AM, Albert Chan <[hidden email]> wrote:
>
> For 53 bits double, to make same math result across platform,
> maybe add this to Lua.c ? (even if not needed)
>
> fesetenv (FE_PC53_ENV);

'fesetenv' is part of C99, but where does 'FE_PC53_ENV' come from?

-- Roberto


Re: Setting Float Precision in Lua.c

Albert Chan

>>> maybe add this to Lua.c ? (even if not needed)
>>>
>>> fesetenv (FE_PC53_ENV);
>
> 'fesetenv' is part of C99, but where does 'FE_PC53_ENV' come from?
>
> -- Roberto

#include <fenv.h>



Re: Setting Float Precision in Lua.c

Ką Mykolas
...and the definition itself looks implementation-defined. I could not find it in any standard, only a note that other FE_* definitions are left for implementation-specific things.

On Fri, 8 Jun 2018, 16:51 Albert Chan, <[hidden email]> wrote:

>>> maybe add this to Lua.c ? (even if not needed)
>>>
>>> fesetenv (FE_PC53_ENV);
>
> 'fesetenv' is part of C99, but where does 'FE_PC53_ENV' come from?
>
> -- Roberto

#include <fenv.h>

