Are-we-fast-yet benchmark suite applied to different Lua VM versions + JS V8

Are-we-fast-yet benchmark suite applied to different Lua VM versions + JS V8

rochus.keller@bluewin.ch
In case you are interested: https://github.com/smarr/are-we-fast-yet is a benchmark suite I have already used with my SOM Smalltalk implementations (see https://github.com/rochus-keller/smalltalk/ and https://github.com/rochus-keller/som/). The benchmark suite is described here: https://stefan-marr.de/papers/dls-marr-et-al-cross-language-compiler-benchmarking-are-we-fast-yet/.

Applied to different Lua VM versions, I get these results: http://software.rochus-keller.ch/are-we-fast-yet_lua_results_2020-10-12.pdf

I am currently trying to run the (plain Lua) benchmarks with Pallene and am considering a similar approach for my SOM implementation, which heavily depends on closures and therefore does not profit from a speed-up on LuaJIT. Any thoughts?
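
For illustration, here is a minimal sketch of the kind of closure-heavy code I have in mind; it is not one of the actual are-we-fast-yet benchmarks, just a toy workload one can time under the different VMs:

```lua
-- Minimal sketch (not part of the are-we-fast-yet suite): a closure-heavy
-- workload timed with os.clock(). Running the same file under e.g. lua5.1,
-- lua5.3 and luajit gives a rough feel for how much of LuaJIT's speed-up
-- survives when every call goes through a freshly allocated closure.
local function makeCounter()
  local n = 0
  return function()            -- new closure, captures n as an upvalue
    n = n + 1
    return n
  end
end

local function run(iterations)
  local sum = 0
  for _ = 1, iterations do
    local inc = makeCounter()  -- allocate a fresh closure every iteration
    sum = sum + inc() + inc()
  end
  return sum
end

local start = os.clock()
local result = run(1000000)
print(string.format("result=%d time=%.3fs", result, os.clock() - start))
```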

Best
R.
Re: Are-we-fast-yet benchmark suite applied to different Lua VM versions + JS V8

Paul Baker
Have you tried running the Node.js benchmarks using V8's `--jitless` flag, for a fairer comparison to the Lua interpreter and `LuaJIT -joff`?
Re: Re: Are-we-fast-yet benchmark suite applied to different Lua VM versions + JS V8

rochus.keller@bluewin.ch
@ Paul Baker

>> Have you tried running the Node.js benchmarks using V8's `--jitless` flag, for a fairer comparison to the Lua interpreter and `LuaJIT -joff`?
No; I was just interested in how much faster the V8 JIT is compared to LuaJIT 2.0. The -joff run is only interesting for seeing how effective the JIT is (i.e. whether there is a speed-up at all compared to the interpreter).
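
As a side note for anyone reproducing such runs: under LuaJIT a script can check (and disable) the trace compiler itself, which is a quick way to confirm whether a given measurement really ran with the JIT on. A small sketch:

```lua
-- Sketch: report from inside the script whether LuaJIT's trace compiler is active.
-- Under plain Lua the 'jit' module does not exist, hence the pcall guard.
local ok, jit = pcall(require, "jit")
if ok and jit then
  print(jit.version, "JIT enabled:", (jit.status()))  -- jit.status() returns an on/off flag first
  -- jit.off() would disable compilation, comparable to starting with 'luajit -joff'
else
  print("plain Lua interpreter, no JIT module")
end
```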


Let me also mention the following:
Meanwhile I was able to install Pallene and experiment with it. The paper had raised my expectations, but I am now somewhat disappointed. Pallene is described in the paper as a subset of Lua; this is obviously not the case; it would be more accurate to say that there is a (small) intersection between Lua and Pallene. As far as I understand, it is possible to call Pallene functions from Lua, but apparently not the other way around, and most of the features supported by Lua cannot be used in Pallene. So you can't just take a Lua program and add type annotations to it; you have to redesign and reimplement a considerable part of it because of Pallene's limited expressiveness. The "advantage" of Pallene over LuaJIT is therefore too expensive. Rewriting all the are-we-fast-yet benchmarks in Pallene is an effort I cannot and will not afford at the moment.
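
To illustrate the one-way interop mentioned above: as far as I understand, pallenec compiles a Pallene source file to a shared library that plain Lua loads like any C module, so the Lua side would look roughly as follows (module and function names are made up for the example); the reverse direction is what seems to be missing.

```lua
-- Hypothetical example: assumes arith.pln was compiled with pallenec to arith.so
-- (found via package.cpath) and exports a typed function sum(xs: {float}): float.
local arith = require("arith")

local xs = {}
for i = 1, 1000 do xs[i] = i * 0.5 end

print(arith.sum(xs))   -- typed Pallene function called from untyped Lua code
```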