large allocation with 'io.read' function

large allocation with 'io.read' function

op7ic \x00
version: lua 5.3.4

Howdy,

I'm not sure whether this is really a bug or not, but it is probably
something worth looking at (?). Basically it looks like asking for a
very large allocation of memory (59GB in the case below) will happily
be accepted by Lua. This leads to pretty much a system halt, because
eventually the system will run out of memory.

If you run this code in the Lua interpreter, it will attempt to
allocate 59GB of memory:

io.read(59e+9)--


The actual code attempts to allocate the buffer before it checks the
size of available memory, and thus results in quite a long system
hang. In particular, the 'newsize' parameter is taken straight from
the io.read argument and is never really sanitized:

static void *resizebox (lua_State *L, int idx, size_t newsize) {  /* lauxlib.c */
  void *ud;
  lua_Alloc allocf = lua_getallocf(L, &ud);
  UBox *box = (UBox *)lua_touserdata(L, idx);
  void *temp = allocf(ud, box->box, box->bsize, newsize);
  if (temp == NULL && newsize > 0) {  /* allocation error? */
    resizebox(L, idx, 0);  /* free buffer */
    luaL_error(L, "not enough memory for buffer allocation");
  }
  box->box = temp;
  box->bsize = newsize;
  return temp;
}

Lua doesn't crash, it just hangs as a result.


Re: large allocation with 'io.read' function

Dirk Laurie-2
2017-03-03 13:03 GMT+02:00 op7ic \x00 <[hidden email]>:

> version: lua 5.3.4
>
> I'm not sure this is really a bug

Yes, we have been told to use the word sparingly. Maybe call it
an "unintended side-effect".


Re: large allocation with 'io.read' function

op7ic \x00
Fair enough. In that case it's an "unintended side-effect" ;) I'm not
entirely sure what the best solution would be, as in its current state
running the code above would just DoS the system.

On Fri, Mar 3, 2017 at 11:17 AM, Dirk Laurie <[hidden email]> wrote:
> 2017-03-03 13:03 GMT+02:00 op7ic \x00 <[hidden email]>:
>
>> version: lua 5.3.4
>>
>> I'm not sure this is really a bug
>
> Yes, we have been told to use the word sparingly. Maybe call it
> an "unintended side-effect".
>


Re: large allocation with 'io.read' function

Ahmed Charles
> Fair enough. In that case it's an "unintended side-effect" ;) I'm not
> entirely sure what the best solution would be, as in its current state
> running the code above would just DoS the system.

None of the values passed to io.read() should be taken from user input, especially not directly, so I fail to see how this is any different from calling malloc() with a user-supplied value. The issue isn't that malloc() allocates as much memory as you ask for; it's that you let someone else decide how much to ask for.


In that sense, this has as much potential to DoS the system as malloc does.


Re: large allocation with 'io.read' function

Lorenzo Donati-3
In reply to this post by op7ic \x00
On 03/03/2017 12:53, op7ic \x00 wrote:

> Fair enough. In that case it's an "unintended side-effect" ;) I'm not
> entirely sure what the best solution would be, as in its current state
> running the code above would just DoS the system.
>
> On Fri, Mar 3, 2017 at 11:17 AM, Dirk Laurie <[hidden email]> wrote:
>> 2017-03-03 13:03 GMT+02:00 op7ic \x00 <[hidden email]>:
>>
>>> version: lua 5.3.4
>>>
>>> I'm not sure this is really a bug
>>
>> Yes, we have been told to use the word sparingly. Maybe call it
>> an "unintended side-effect".
>>
>
>
In the usual Lua philosophy, the language provides mechanisms, not
policies. Therefore the Lua team avoids putting arbitrary limits where
there is no clear, reasonable choice.

What could that limit be? For an MCU, for example, 16 bytes could be a
reasonable limit. On a huge server machine, 1GiB could be reasonable.

I agree that 59GB sounds fairly unreasonable by today's standards, but
the point is, there is no hard and fast value good for every situation.

Anyway, the solution is easy: just enclose io.read in a wrapper
function that checks the size against a limit reasonable for _your_
setup/system.

You might also want to replace the function globally, so that any use
of the standard io.read is checked. This could be useful for enforcing
policies across all parts of a bigger app.

Of course these measures are good for preventing programming mistakes,
not as a robust security measure against DoS attacks. For the latter
you'd need to design the whole system against those threats (not easy
stuff, and probably impossible on the Lua side alone without also
setting up the OS environment properly).


Cheers!

-- Lorenzo

Re: large allocation with 'io.read' function

Roberto Ierusalimschy
In reply to this post by Dirk Laurie-2
> 2017-03-03 13:03 GMT+02:00 op7ic \x00 <[hidden email]>:
>
> > version: lua 5.3.4
> >
> > I'm not sure this is really a bug
>
> Yes, we have been told to use the word sparingly. Maybe call it
> an "unintended side-effect".

Why is 'io.read(59e+9)' different from any other code that consumes
large amounts of memory (e.g., 'local a = {}; for i = 1, 1e50 do
a[i] = i end')? Is that previous loop an "unintended side-effect" too?

-- Roberto


Re: large allocation with 'io.read' function

Enrico Colombini
In reply to this post by Lorenzo Donati-3
On 03-Mar-17 13:19, Lorenzo Donati wrote:
> Anyway the solution is easy, just enclose the io.read function in a
> wrapper function that does size checking against a limit reasonable for
> _your_ setup/system.

A custom allocator limiting total allocation would protect against any
memory-based DoS, as far as Lua is concerned.

--
   Enrico


Re: large allocation with 'io.read' function

William Ahern
On Fri, Mar 03, 2017 at 05:00:42PM +0100, Enrico Colombini wrote:
> On 03-Mar-17 13:19, Lorenzo Donati wrote:
> >Anyway the solution is easy, just enclose the io.read function in a
> >wrapper function that does size checking against a limit reasonable for
> >_your_ setup/system.
>
> A custom allocator limiting total allocation would protect against any
> memory-based DoS, as far as Lua is concerned.

For most Unix-like systems, the process memory limit can be set from the
shell environment using:

  $ ulimit -d [SIZE]

Afterwards, the limit will be inherited by any newly invoked processes.

The ulimit command just uses the POSIX-defined interface:

  struct rlimit rlim;
  getrlimit(RLIMIT_DATA, &rlim); // get current values
  rlim.rlim_cur = [SIZE]; // change soft limit, keep hard limit
  setrlimit(RLIMIT_DATA, &rlim);

When a process reaches the soft limit, malloc will fail, even on Linux with
aggressive overcommit enabled. The soft limit can be adjusted up and down,
but cannot be more than the hard limit. The hard limit can only be adjusted
downward.

Most Linux distributions set a data limit of infinity. Other systems, like
OpenBSD, set much stricter defaults.