os.clock conversion to milliseconds under MacOS X

os.clock conversion to milliseconds under MacOS X

Marco Bambini
Hello,
what is the correct way to convert a difference between two os.clock values in milliseconds on MacOS X?

Thanks a lot.
--
Marco Bambini
http://www.sqlabs.com
http://twitter.com/sqlabs
http://instagram.com/sqlabs





Re: os.clock conversion to milliseconds under MacOS X

Gary V. Vaughan
On Dec 17, 2014, at 9:50 AM, Marco Bambini <[hidden email]> wrote:
> what is the correct way to convert a difference between two os.clock values in milliseconds on MacOS X?

os.clock() only has 1-second resolution, but if the nearest 1000ms is enough for you then:

    $ lua
    Lua 5.3.0  Copyright (C) 1994-2014 Lua.org, PUC-Rio
    > began = os.clock ()
    > local elapsed = os.difftime (os.clock (), began)
    > io.write (string.format ("%dms elapsed\n", 1000 * elapsed))
    1000ms elapsed

otherwise, you might find the luaposix[1] time functions, with nanoseconds resolution, more useful:

    $ lua
    Lua 5.3.0  Copyright (C) 1994-2014 Lua.org, PUC-Rio
    > posix = require 'posix'
    > timersub, gettimeofday = posix.timersub, posix.gettimeofday
    > began = gettimeofday ()
    > elapsed = timersub (gettimeofday (), began)
    > io.write (string.format ("%.0fms elapsed\n", elapsed.sec * 1000 + elapsed.usec / 1000))
    1758ms elapsed

HTH,
--
Gary V. Vaughan (gary AT vaughan DOT pe)

Re: os.clock conversion to milliseconds under MacOS X

Gary V. Vaughan

> On Dec 17, 2014, at 11:09 AM, Gary V. Vaughan <[hidden email]> wrote:
>
> On Dec 17, 2014, at 9:50 AM, Marco Bambini <[hidden email]> wrote:
>> what is the correct way to convert a difference between two os.clock values in milliseconds on MacOS X?
>
> os.clock() only has 1-second resolution, but if the nearest 1000ms is enough for you then:
>
>    $ lua
>    Lua 5.3.0  Copyright (C) 1994-2014 Lua.org, PUC-Rio
>> began = os.clock ()
>> local elapsed = os.difftime (os.clock (), began)
>> io.write (string.format ("%dms elapsed\n", 1000 * elapsed))
>    1000ms elapsed
>
> otherwise, you might find the luaposix[1] time functions, with nanoseconds resolution, more useful:

I mean, *micro*second resolution...

>    $ lua
>    Lua 5.3.0  Copyright (C) 1994-2014 Lua.org, PUC-Rio
>> posix = require 'posix'
>> timersub, gettimeofday = posix.timersub, posix.gettimeofday
>> began = gettimeofday ()
>> elapsed = timersub (gettimeofday (), began)
>> io.write (string.format ("%.0fms elapsed\n", elapsed.sec * 1000 + elapsed.usec / 1000))
>    1758ms elapsed
>
> HTH,
> --
> Gary V. Vaughan (gary AT vaughan DOT pe)



Re: os.clock conversion to milliseconds under MacOS X

Luiz Henrique de Figueiredo
In reply to this post by Gary V. Vaughan
>     > began = os.clock ()
>     > local elapsed = os.difftime (os.clock (), began)

os.difftime is meant to be used with numbers returned by os.time, not os.clock.
In POSIX systems, you can simply subtract the numbers, in both cases.

Perhaps the manual should say so.
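
That direct subtraction is a one-liner; a minimal sketch using only the stock `os` library (the busy loop is just a stand-in for work being timed):

```lua
-- Subtract os.clock () values directly: both are CPU-time readings in
-- seconds, so the difference only needs scaling by 1000 for milliseconds.
local began = os.clock ()
for i = 1, 1000000 do end -- some work to time
local elapsed_ms = (os.clock () - began) * 1000
io.write (string.format ("%.3fms elapsed\n", elapsed_ms))
```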


Re: os.clock conversion to milliseconds under MacOS X

Gary V. Vaughan
Hi Luiz,

> On Dec 17, 2014, at 11:39 AM, Luiz Henrique de Figueiredo <[hidden email]> wrote:
>
>>> began = os.clock ()
>>> local elapsed = os.difftime (os.clock (), began)
>
> os.difftime is meant to be used with numbers returned by os.time, not os.clock.
> In POSIX systems, you can simply subtract the numbers, in both cases.
>
> Perhaps the manual should say so.

Yikes!  Yes please. I've been abusing it in luaposix *and* specl for quite some
time, not having known any better :-/

Cheers,
--
Gary V. Vaughan (gary AT vaughan DOT pe)



Re: os.clock conversion to milliseconds under MacOS X

Roberto Ierusalimschy
In reply to this post by Luiz Henrique de Figueiredo
> >     > began = os.clock ()
> >     > local elapsed = os.difftime (os.clock (), began)
>
> os.difftime is meant to be used with numbers returned by os.time, not os.clock.
> In POSIX systems, you can simply subtract the numbers, in both cases.
>
> Perhaps the manual should say so.

It already does:

  os.clock ()

  Returns an approximation of the amount in seconds of CPU time
  used by the program.

Note the units ("in seconds"). You can always subtract seconds from
seconds. (That is valid in any system, not only POSIX.)

-- Roberto


Re: os.clock conversion to milliseconds under MacOS X

Gary V. Vaughan
Hi Roberto,

> On Dec 17, 2014, at 12:49 PM, Roberto Ierusalimschy <[hidden email]> wrote:
>
>>>> began = os.clock ()
>>>> local elapsed = os.difftime (os.clock (), began)
>>
>> os.difftime is meant to be used with numbers returned by os.time, not os.clock.
>> In POSIX systems, you can simply subtract the numbers, in both cases.
>>
>> Perhaps the manual should say so.
>
> It already does:
>
>  os.clock ()
>
>  Returns an approximation of the amount in seconds of CPU time
>  used by the program.
>
> Note the units ("in seconds"). You can always subtract seconds from
> seconds. (That is valid in any system, not only POSIX.)

Sure, but since the docs for os.difftime() imply that it is a more portable
version of POSIX `t2 - t1`, I have been using it on `os.clock ()` results too.

While the manual entry for `os.time ()` indicates that it should be used only
as an argument to `os.date ()` or `os.difftime ()`, and the manual entry for
`os.date ()` clearly states that if the `time` parameter is used, it should
be a value from `os.time ()`... the entry for `os.difftime ()` seems to imply
that I can pass any seconds-like value.  And indeed, on POSIX, that has been
working admirably for me.

Now that it's been pointed out, and I've given it some thought, it seems
perfectly sensible that `os.difftime ()` only takes `os.time ()` return
values, but it would likely save other folks from mistakenly abusing the
function in the way I have for the last few years if the manual were instead
worded along these lines, analogously to `os.date ()`:

  os.difftime (t2, t1)

  Returns the number of seconds from time t1 to time t2 (see the os.time
  function for a description of these values). In POSIX, Windows, and some
  other systems, this value is exactly t2-t1.

Cheers,
--
Gary V. Vaughan (gary AT vaughan DOT pe)

Re: os.clock conversion to milliseconds under MacOS X

Isaac Dupree-2
On 12/17/2014 12:05 PM, Gary V. Vaughan wrote:
>   os.difftime (t2, t1)
>
>   Returns the number of seconds from time t1 to time t2 (see the os.time
>   function for a description of these values). In POSIX, Windows, and some
>   other systems, this value is exactly t2-t1.

That documentation cannot be correct on Unix-time systems if there is a
leap second between t1 and t2.

(For example, in UTC Unix time, 1341144000 - 1341057600 = 86400 but
there were actually 86401 seconds from June 30, 2012 12:00:00 UTC to
July 1, 2012 12:00:00 UTC.  That said, even Python's documentation
doesn't clearly tell me whether its time deltas treat leap seconds as
being zero seconds long; we seem to be resigned to IERS-initiated
software bugs every few years.)
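
The arithmetic above can be checked directly from Lua; the two constants are the Unix-time values quoted in the example:

```lua
local t1 = 1341057600 -- June 30, 2012 12:00:00 UTC as a Unix timestamp
local t2 = 1341144000 -- July  1, 2012 12:00:00 UTC as a Unix timestamp
-- POSIX time defines every day as exactly 86400 seconds, so the
-- difference takes no account of the leap second inserted that night.
assert (t2 - t1 == 86400)
```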



Re: os.clock conversion to milliseconds under MacOS X

Tangent 128
On 12/17/2014 02:52 PM, Isaac Dupree wrote:

>
> That documentation cannot be correct on Unix-time systems if there is a
> leap second between t1 and t2.
>
> (For example, in UTC Unix time, 1341144000 - 1341057600 = 86400 but
> there were actually 86401 seconds from June 30, 2012 12:00:00 UTC to
> July 1, 2012 12:00:00 UTC.  That said, even Python's documentation
> doesn't clearly tell me whether its time deltas treat leap seconds as
> being zero seconds long; we seem to be resigned to IERS-initiated
> software bugs every few years.)
>

That depends on whether you are using UTC Unix time (traditional) or TAI Unix
time (the "right/" tzdata files). ;)


Re: os.clock conversion to milliseconds under MacOS X

Tim Hill

> On Dec 17, 2014, at 4:17 PM, Joseph Wallace <[hidden email]> wrote:
>
> On 12/17/2014 02:52 PM, Isaac Dupree wrote:
>>
>> That documentation cannot be correct on Unix-time systems if there is a
>> leap second between t1 and t2.
>>
>> (For example, in UTC Unix time, 1341144000 - 1341057600 = 86400 but
>> there were actually 86401 seconds from June 30, 2012 12:00:00 UTC to
>> July 1, 2012 12:00:00 UTC.  That said, even Python's documentation
>> doesn't clearly tell me whether its time deltas treat leap seconds as
>> being zero seconds long; we seem to be resigned to IERS-initiated
>> software bugs every few years.)
>>
>
> Depends if you are using UTC Unix time (traditional) or TAI Unix time
> (the "right/" tzdata files). ;)
>

Leap seconds or not, the real point is that “wall time” measures of time are never monotonic; after all, a clock may be “wrong” and need to be corrected at any time (including from user input). If durations need to be measured by subtracting two time points, then a monotonic time source, such as time since last boot (e.g. uptime), should be used.
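
A sketch of that approach with luaposix, assuming a build that exposes the `posix.time` module with `clock_gettime` and the `CLOCK_MONOTONIC` constant (check your installed version; the module layout has changed across releases):

```lua
-- CLOCK_MONOTONIC counts from an arbitrary fixed point (often boot) and is
-- never stepped backwards by clock corrections, so differences between two
-- readings are safe for measuring durations.
local tm = require 'posix.time' -- assumes a recent luaposix is installed

local function now_ms ()
  local ts = tm.clock_gettime (tm.CLOCK_MONOTONIC)
  return ts.tv_sec * 1000 + ts.tv_nsec / 1e6
end

local began = now_ms ()
-- ... work to measure ...
io.write (string.format ("%.3fms elapsed\n", now_ms () - began))
```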

—Tim



Re: os.clock conversion to milliseconds under MacOS X

William Ahern
In reply to this post by Tangent 128
On Wed, Dec 17, 2014 at 07:17:22PM -0500, Joseph Wallace wrote:

> On 12/17/2014 02:52 PM, Isaac Dupree wrote:
> >
> > That documentation cannot be correct on Unix-time systems if there is a
> > leap second between t1 and t2.
> >
> > (For example, in UTC Unix time, 1341144000 - 1341057600 = 86400 but
> > there were actually 86401 seconds from June 30, 2012 12:00:00 UTC to
> > July 1, 2012 12:00:00 UTC.  That said, even Python's documentation
> > doesn't clearly tell me whether its time deltas treat leap seconds as
> > being zero seconds long; we seem to be resigned to IERS-initiated
> > software bugs every few years.)
> >
>
> Depends if you are using UTC Unix time (traditional) or TAI Unix time
> (the "right/" tzdata files). ;)

It's better to treat Unix time (aka POSIX time) as unrelated to UTC or TAI,
regardless of your system settings. POSIX time is literally defined as 86400
seconds per day. It has no concept of leap seconds, unlike UTC or TAI. It
follows that the definition of a "second" in POSIX is not the same as the
metric unit, "second", used by UTC and TAI. Same spelling, same
pronunciation, and even a passing similarity, but two entirely different
units of time.

You cannot consistently convert a POSIX timestamp to UTC or TAI with an
accuracy of 1 second. It's fundamentally a lossy conversion.

The benefit of POSIX time is that calculation of civil date-times is
incredibly easy, both past and future. It's trivial to calculate the
accurate minute, hour, day, day of week, month, and year, just not the
accurate UTC or TAI second or sub-second.

On the other hand, with UTC and TAI it's not possible to calculate all
future dates accurately, and for calculating past dates you need a table of
leap seconds. Very ugly.

Personally, I prefer the POSIX definition of time. It's an elegant hack.
When converting to UTC it's rare that you would need second accuracy,
anyhow. In the domain of problems that rely on civil time, I'd like to hear
an argument about how something would go wrong if a timestamp was converted
to :60 rather than :61. It's a distinction without a difference.

Normally when you need accurate second or sub-second resolution, civil time
is not your immediate concern. No need to conflate the two
concepts--physical constant "second" and the "second" as a unit of
delineation of the civil day.

If I care about measuring metric seconds (e.g. in a more rigorously
scientific context), UTC is much too convoluted. TAI is tempting, but TAI is
also a lie: relativity tells us that there is no synchronized passage of
time, even between two points on the face of the earth. And in fact, this
limits the resolution of any shared timescale for earth. Which means there's
no avoiding thinking about what you actually need--TAI will usually be
sufficient, but you should also be able to justify it, rather than just
defaulting to TAI without consideration in the naive belief that it solves
all problems because of its tight coupling with the metric second.

Ultimately, just avoid converting between the systems. Use the same unit and
system consistently, and don't convert until the last possible moment, when
the loss in accuracy becomes least costly. Just like with floating point,
bignum arithmetic, etc.