double vs float vs int


double vs float vs int

John Paquin
I know that one of the main features of Lua is that it has a single variable type for numbers. There has been a lot written about the use of doubles as replacements for ints, and also about why doubles are not appropriate for many platforms (PS2, PC games, etc.).

I also know that you can change this type through the configuration (I use floats on the PC and ints on Pocket PC).

However, it would be really, really nice to have both a floating-point type and an int type. I say this because much C code is written using bit flags, and the float format can exactly represent only about 24 sequential bit flags (the size of its 24-bit significand), while most C code (that I write, anyway) uses up to 32 flags.

So the question I'm asking is: how hard would it be to make a separate "int" data type?
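
To make the problem concrete, here is a minimal sketch, assuming a Lua interpreter compiled with lua_Number defined as float (with the stock double build, both comparisons print false):

    local FLAG24 = 2^24
    print(FLAG24 + 1 == FLAG24)   -- true under a float build: bit 0 is lost
    local FLAG31 = 2^31
    print(FLAG31 + 1 == FLAG31)   -- true as well: bits 0..7 are gone

Past 2^24 a 32-bit float can no longer represent consecutive integers, so OR-ing a low flag into a high one becomes a silent no-op.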



Re: double vs float vs int

Klaus Ripke
Hi

On Thursday 09 December 2004 16:41, John Paquin wrote:
> So the question I'm asking is: how hard would it be to make a separate
> "int" data type?

Basically, creating new numeric types like complex numbers, vectors,
fixed point, or arbitrary-precision bigints as userdata with
metatable-based operator overloading is quite straightforward
-- at the price of some performance penalty, obviously.

Probably you'd choose int as the builtin and make all the others
nice "classes" (with all the math becoming methods of double).
That way you would be able to choose your number type for
the real calculations by plugging in the appropriate lib, err, class.

At least that's the way I am going to do it ...


cheers
Klaus
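
Here is a minimal sketch of the metatable approach Klaus describes, written as plain Lua (5.1-style syntax). Int32 is a hypothetical wrapper; a serious version would keep the value in userdata and do the arithmetic in C:

    Int32 = {}
    Int32.__index = Int32

    local function new(v)
      return setmetatable({ v = v % 2^32 }, Int32)  -- wrap to 32 bits
    end

    -- for brevity, these assume both operands are Int32 values
    function Int32.__add(a, b) return new(a.v + b.v) end
    function Int32.__sub(a, b) return new(a.v - b.v) end
    function Int32.__eq(a, b)  return a.v == b.v end

    -- bit flags via arithmetic, since classic Lua has no bit operators
    function Int32:has(bit)
      return math.floor(self.v / 2^bit) % 2 == 1
    end
    function Int32:set(bit)
      if not self:has(bit) then self.v = self.v + 2^bit end
      return self
    end

    local flags = new(0):set(0):set(31)
    print(flags:has(31), flags:has(1))  --> true  false

All 32 flag positions stay exact because the values live in doubles, which a float-configured build could not guarantee.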



Re: double vs float vs int

Asko Kauppi
In reply to this post by John Paquin

While I actually agree with you about the usefulness of a dedicated 'int' type (especially if bit operations for it were built in), I doubt the Lua authors would go this way; if they were going to, wouldn't they have done it already?

I would see such a type not as a split of the existing 'number' type, but as an addon specifically for integer operations, as you mentioned. Could this be implemented efficiently via userdata, using the pointer itself as the integer value? That would also allow binops to be added with ease.

I think I'll need to do this once LuaX goes PocketPC. I'm using 'float' there, and that won't be very int32-friendly... Want to exchange some code on this? ;)

-ak





Re: double vs float vs int

Jay Carlson
In reply to this post by Klaus Ripke
Klaus Ripke wrote:
> Probably you'd choose int as the builtin and make all the others
> nice "classes" (with all the math becoming methods of double).

Agreed. If and when I return to the world of FPU-less PDAs, I'm going to make Lua numbers be 32-bit integers and create a double library.

Well, after appropriate benchmarking, of course. Many of us spent years programming Microsoft BASIC implementations such as Commodore BASIC and Applesoft without significant heartache over all the performance left on the table by using (5-byte) floats for everything.

An interesting thought experiment I indulge in frequently is:

Knowing what we know now about language design, user interfaces, and extensibility, what kind of programming environment would we create for the 8-bit microcomputers?

Scaling down is important. Let's say we have a luxurious 16k of 6502 ROM, and have to do *something* useful in 16k of RAM. (I'm picking 16k because the VIC-20 was sorta hopeless, and you needed about 12k to use Applesoft.) Oh, and secondary storage may be slow and far away---don't count on software virtual memory, as the user may only have a tape drive.

Before you ask, FORTH is not acceptable, as it's too easy to blow up the machine.

Jay





Re: double vs float vs int

Paul Du Bois
In reply to this post by Asko Kauppi
IMO, some of the things that make the Lua 4 implementation simple make
life for (some) programmers not so simple. The biggest issues in our
project were:
- the single numeric type
- the inability to tell the difference between "table[k] contains
false" and "table[k] does not exist" (causing us to use 0 and 1 for
false and true, which in turn caused bugs because 0 in a boolean
context is true; see the sketch after this list)
- the way tables and arrays are conflated
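
To sketch that second pain point: Lua 4 had no boolean type, so nil was the only false value, and assigning nil to a key removes it. (The snippet runs in Lua 4 or 5.)

    t = {}
    t.visible = nil   -- meant "visible = false", but this is the same
                      -- as never having set the key at all
    -- the 0/1 workaround, and why it bites: in a boolean context
    -- every number is true, including 0
    t.visible = 0
    if t.visible then
      print("runs, even though 0 was meant to mean false")
    end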

A lot of these design decisions make sense given Lua's history as a
configuration language, but as the language moves to support larger
projects (e.g., with a full-fledged package system), perhaps the
motivations can be re-examined? I do believe that the language and
implementation could support a slightly richer set of builtin types
while still remaining small and simple.

p


[OT] 8-bit machines (was: double vs float vs int)

David Given
In reply to this post by Jay Carlson
Jay Carlson wrote:
> [...]
> Knowing what we know now about language design, user interfaces, and extensibility, what kind of programming environment would we create for the 8-bit microcomputers?

Way off topic, but this is something I've often thought about. I actually own an NC200, a Z80-based laptop with 128kB of RAM, a 1MB solid-state disc and a 720kB floppy drive; it runs a proprietary OS, but I keep wondering about writing my own modern OS for it. It would make an ideal test-bed, and would be perfect for teaching embedded systems programming. Alas, I doubt Lua would run on it.

Incidentally, there will *always* be a market for 8-bit micros. They just get smaller and cheaper as time passes. You tend to find them in places where you wouldn't expect to find computers: digital watches, clocks, remote controls, anything that might need to watch inputs and control an output. These days, rather than designing a logic circuit to do the job, it's easier to stamp out an 8-bitter and do it in software. It won't be long until nanotechnology makes them *really* small.

(Has the 4004 finally died?)

> [...]
> Scaling down is important. Let's say we have a luxurious 16k of 6502 ROM, and have to do *something* useful in 16k of RAM. (I'm picking 16k because the VIC-20 was sorta hopeless, and you needed about 12k to use Applesoft.) Oh, and secondary storage may be slow and far away---don't count on software virtual memory, as the user may only have a tape drive.

The main thing to do is to determine when speed is more important than size. Check this out:

http://www.6502.org/source/interpreters/sweet16.htm

It's a 16-bit VM implemented in 300 bytes by Woz himself for the Apple II, designed to allow easy interleaving of Sweet16 code with native 6502 instructions. It allows you to take the non-speed-critical parts of your program and shrink them vastly. Not only does this give you more code and data space, Sweet16 is far easier to write. It's got sixteen 16-bit registers and most opcodes are one byte.
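
For flavor, here is a toy dispatch loop in Lua in the same spirit: sixteen 16-bit registers, one-byte opcodes with the register number in the low nibble. The SET/LD/ST numbering below happens to follow Woz's listing, but this is a sketch of the idea, not a faithful Sweet16:

    local R = {}                         -- sixteen registers, R0 = accumulator
    for i = 0, 15 do R[i] = 0 end

    local ops = {
      [0x1] = function(r, pc, mem)       -- SET Rn: 2-byte constant, low byte first
        R[r] = mem[pc] + 256 * mem[pc + 1]
        return pc + 2
      end,
      [0x2] = function(r, pc) R[0] = R[r]; return pc end,  -- LD Rn
      [0x3] = function(r, pc) R[r] = R[0]; return pc end,  -- ST Rn
    }

    local function run(mem)
      local pc = 1
      while mem[pc] ~= 0 do              -- 0x00 = RTN, back to native code
        local byte = mem[pc]; pc = pc + 1
        local op, r = math.floor(byte / 16), byte % 16
        pc = ops[op](r, pc, mem)
      end
    end

    run { 0x11, 0x22, 0x12,   -- SET R1, #0x1222
          0x21,               -- LD  R1   (R0 := R1)
          0x35,               -- ST  R5   (R5 := R0)
          0x00 }              -- RTN
    print(R[5])               --> 4642  (0x1222)

The real Sweet16 packs branches and indirect loads into the same one-byte scheme, which is where the code-density win comes from.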

> Before you ask, FORTH is not acceptable, as it's too easy to blow up the machine.

But on an 8-bit machine, it's *always* easy to blow up the machine. Forth is ideal for these devices: it's small (compiled Forth programs are typically *smaller* than native machine code versions), it's fast, it's portable, it's scalable, it's easily integrated with native code, and it's easy to write. (If you have the right mindset, that is. I don't; personally, I can't stand it. I tried to teach myself Forth at one point and found myself spending all my time fighting the stack rather than working with it.)

Alas, modern programming languages don't fit these machines well. I have a C compiler for my Z80 machine which produces foul code. (The Z80 has no stack-relative addressing mode, and C uses the stack heavily... sigh.) 6502 C compilers tend to be even worse. What fast, compiled languages are there for these machines?

--
[insert interesting .sig here]



Re: [OT] 8-bit machines (was: double vs float vs int)

Asko Kauppi

Did you notice this (via OSNews)?

http://www.windowsfordevices.com/news/NS4666205829.html






Re: [OT] 8-bit machines (was: double vs float vs int)

Enrico Colombini
In reply to this post by David Given
On Saturday 11 December 2004 17:36, David Given wrote:
> http://www.6502.org/source/interpreters/sweet16.htm

<OT><sniff> Apple ][ nostalgia </sniff></OT>

  Enrico




Re: [OT] 8-bit machines (was: double vs float vs int)

Mike Pall
Hi,

Enrico Colombini wrote:
> On Saturday 11 December 2004 17:36, David Given wrote:
> > http://www.6502.org/source/interpreters/sweet16.htm
> 
> <OT><sniff> Apple ][ nostalgia </sniff></OT>

More nostalgia ... I almost dug my C64 out for this one:

http://www.sics.se/~adam/software.html

Truly amazing. It looks like C is a viable option for 8-bit programming.

Bye,
     Mike


Re: [OT] 8-bit machines (was: double vs float vs int)

Asko Kauppi

C yes, but the CBM64 no, sorry about that... Modern 8-bit processors such as the Atmel AVR (are there any others? ;O) have indeed been designed with C in mind, and their register set etc. adapts very well to it. The CBM64... can one call that a register 'set'? Never mind...

Unfortunately, they still fall short of Lua's memory requirements, though that will eventually be overcome.

-ak

On 12 Dec 2004, at 14:24, Mike Pall wrote:

> Truly amazing. It looks like C is a viable option for 8-bit programming.



Re: [OT] 8-bit machines (was: double vs float vs int)

Enrico Colombini
On Sunday 12 December 2004 22:12, Asko Kauppi wrote:
> The CBM64.. can one call that a register 'set'?  

Actually, the 6502 had a very smart RISC-like design, ahead of its time: it
could use the first 256-byte memory 'page' as a register bank (the stack
occupied the second page).
(By the way, I learned C with a Manx compiler on an Apple II, complete with
a Unix-like environment.)

The 16-bit 65816 was also an interesting CPU. Unfortunately, marketing counts
for lots more than technical prowess (the "Betamax effect") and, alas, this
also applies to languages.

  Enrico




Re: [OT] 8-bit machines (was: double vs float vs int)

Joseph Stewart
This makes me wonder whether anyone has ever built a 6502-alike with the
first page as on-chip SRAM?

-joe


On Mon, 13 Dec 2004 14:43:58 +0100, Enrico Colombini <[hidden email]> wrote:
> Actually, the 6502 had a very smart RISC-like design, ahead of its time: it
> could use the first 256-byte memory 'page' as a register bank (the stack
> occupied the second page).


-- 
Person who say it cannot be done should not interrupt person doing it.
 -- Chinese Proverb


Re: [OT] 8-bit machines (was: double vs float vs int)

Javier Guerra Giraldez
On Monday 13 December 2004 9:48 am, Joseph Stewart wrote:
> This makes me wonder if anyone has ever build a 6502-alike with the
> first page as on-chip SRAM?

Not exactly 6502-like, but the PIC chips are a bit like that.

-- 
Javier
