I am currently trying to run the benchmarks (plain Lua) with Pallene, and I am considering a similar approach for my SOM implementation (which depends heavily on closures and therefore does not profit from a speed-up on LuaJIT). Any thoughts?
>> Have you tried running the Node.js benchmarks using V8's `--jitless` flag, for a fairer comparison to the Lua interpreter and `luajit -joff`?
No; I was just interested in how much faster the V8 JIT is compared to LuaJIT 2.0. The `-joff` run is only interesting for seeing how effective the JIT is (i.e. whether there is any speed-up at all compared to the interpreter).
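For reference, the three configurations under discussion would be invoked roughly like this (the benchmark file names are placeholders, not from the original thread):

```shell
# V8/Node.js with all JIT compilation disabled (bytecode interpreter only)
node --jitless benchmark.js

# LuaJIT with its trace compiler switched off (falls back to its interpreter)
luajit -joff benchmark.lua

# Plain reference Lua interpreter, for the baseline comparison
lua benchmark.lua
```

Comparing `node --jitless` against the plain Lua interpreter, and full `node` against full `luajit`, gives the interpreter-vs-interpreter and JIT-vs-JIT pairings the quoted question is asking about.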
Let me also mention the following:
Meanwhile I was able to install Pallene and run some experiments with it. The paper had raised my expectations, but I am now somewhat disappointed. Pallene is described in the paper as a subset of Lua; this is clearly not accurate; it would be better to say that there is a (small) intersection between Lua and Pallene. As far as I understand, Pallene functions can be called from Lua, but apparently not the other way around, and most of the features supported by Lua cannot be used in Pallene. So you cannot simply take a Lua program and add type annotations to it; instead you have to redesign and reimplement a considerable part of it because of Pallene's limited expressiveness. The "advantage" of Pallene over LuaJIT therefore comes at too high a price. Rewriting all the are-we-fast-yet benchmarks in Pallene is an effort I cannot and will not afford at the moment.
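To illustrate the one-directional interop described above, here is a minimal sketch (the file and module names are hypothetical; the workflow follows Pallene's documented model, where a `.pln` module is compiled ahead of time with `pallenec` into a shared library that plain Lua loads via `require`):

```lua
-- sum.pln (Pallene side): statically typed, compiled with `pallenec sum.pln`
local m = {}

function m.sum(xs: {float}): float
    local acc: float = 0.0
    for i = 1, #xs do
        acc = acc + xs[i]
    end
    return acc
end

return m

-- main.lua (Lua side): plain Lua can call into the compiled Pallene module...
-- local sum = require "sum"
-- print(sum.sum({1.0, 2.0, 3.0}))
-- ...but the Pallene side cannot, for instance, accept an arbitrary Lua
-- closure as a callback, which is exactly the limitation that matters for
-- closure-heavy code like the SOM benchmarks mentioned above.
```

The Lua-to-Pallene direction works because the compiled module presents ordinary Lua functions; the reverse direction fails because Pallene's type system has no place for untyped Lua values such as closures.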