Multithreading and locking...

Multithreading and locking...

Francisco Olarte
Hello everyone, I have a problem with multithreading...
For a program I'm doing I need to use a single interpreter from
multiple threads. I do not have problems in the normal case, just
lock/unlock around usage, but sometimes I need to "reenter".

I NEVER need to interrupt running Lua code, but occasionally the
following will occur:

( I will use "thread" for OS threads and "coroutine" for Lua ones )

1.- OS thread A enters the Lua interpreter ( I normally do this on a
fresh coroutine ).

2.- ostA calls a C function I provide it. This function calls into the
C core and in some cases sends a message to os thread B, waiting on
its reply.

3.- ostB needs to call into the interpreter, get some data, do some
more work and reply to ostA.

4.- ostA gets the reply and returns to Lua.
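The four steps above can be sketched with plain C++ threads, the Lua state reduced to a mutex-guarded integer ( all names here are hypothetical; the real code would hold a lua_State where this holds `shared_data` ):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch: one mutex guards the (single) interpreter; ostA releases it
// around a synchronous send so ostB can enter, answer, and reply.
std::mutex interp;                       // guards the shared Lua state
std::mutex msg_m;
std::condition_variable msg_cv;
bool request_pending = false, reply_ready = false;
int shared_data = 0, reply = 0;

void ostA() {
    interp.lock();                       // step 1: enter the interpreter
    shared_data = 42;                    // pretend Lua code produced some state
    interp.unlock();                     // step 2: release before blocking send
    {
        std::unique_lock<std::mutex> lk(msg_m);
        request_pending = true;
        msg_cv.notify_all();
        msg_cv.wait(lk, [] { return reply_ready; });  // block until B replies
    }
    interp.lock();                       // step 4: relock and "return to Lua"
    interp.unlock();
}

void ostB() {
    {
        std::unique_lock<std::mutex> lk(msg_m);
        msg_cv.wait(lk, [] { return request_pending; });
    }
    interp.lock();                       // step 3: A is not holding the lock
    reply = shared_data;                 // read some data "from the interpreter"
    interp.unlock();
    {
        std::lock_guard<std::mutex> lk(msg_m);
        reply_ready = true;
        msg_cv.notify_all();
    }
}

int run_scenario() {
    std::thread b(ostB), a(ostA);
    a.join(); b.join();
    return reply;
}
```

The key point the sketch shows is that the interpreter mutex is free for the whole time A is blocked in the send, so B's entry needs no extra machinery.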


I was looking at lua_lock / lua_unlock, but they seem to happen a
lot, and besides I do not want two threads working inside the
interpreter at the same time ( to avoid data races ). So I was
thinking of:

a.- Whenever an OS thread wants access to the interpreter, I lock it on
entry and unlock on exit.

b.- Whenever one of my C functions needs to call into the core ( to
something which may potentially need the interpreter ), I unlock the
interpreter before entering the core and relock it before returning.

From what I've read (in the source, e.g. luaD_precall in ldo.c), when
Lua calls a C function it surrounds the call with lua_unlock(), lua_lock(),
so my locking will be a much stricter version of the normal scheme.
I'd like to know if this is "safe".

( I have an alternate solution. As I always enter the interpreter in a
coroutine, I can lock/unlock around that and, whenever I need to call a
C function which may need to reenter, yield a special token ( an LUD,
i.e. light userdata, a fixed one for "magic yield" ) plus the call data
in another LUD ( really a pointer to a runnable C++ object ), catch
LUA_YIELD plus the magic token, unlock the interpreter, call the core C
function ( invoking the runnable with the interpreter as a parameter )
and lua_resume() after relocking. But that seems like overengineering if
the simpler unlocking approach works. )
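A minimal sketch of schemes (a) and (b), with the Lua calls stubbed out as comments; `InterpLock` and `TemporaryUnlock` are hypothetical names, not Lua API:

```cpp
#include <mutex>

std::mutex interp_mutex;

struct InterpLock {                  // (a): lock on entering, unlock on exit
    InterpLock()  { interp_mutex.lock(); }
    ~InterpLock() { interp_mutex.unlock(); }
};

struct TemporaryUnlock {             // (b): release around a call into the core
    TemporaryUnlock()  { interp_mutex.unlock(); }
    ~TemporaryUnlock() { interp_mutex.lock(); }
};

int core_calls = 0;

void core_function() {               // stands in for the C core; may reenter
    InterpLock guard;                // another thread could be here instead
    ++core_calls;
}

void c_function_called_from_lua() {  // a lua_CFunction body, Lua parts stubbed
    // ...grab arguments, lua_settop(L, 0) so the stack is clean...
    TemporaryUnlock unlocked;        // the state is now free for other threads
    core_function();
    // ...relocked here; push results and return into Lua...
}

void script_entry() {                // the host's single entry point into Lua
    InterpLock guard;
    c_function_called_from_lua();
}
```

Using RAII guards for both directions makes it hard to forget a relock on an early return or exception path, which is the main practical risk of the unlock-around-core discipline.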

Any suggestions?

Francisco Olarte.


Re: Multithreading and locking...

Victor Bombi
Personally I prefer using Lanes, running a different lua_State in every OS thread (called a lane) and getting communication between lanes with the Lanes API.
You say that

> For a program I'm doing I need to use a single interpreter from
> multiple threads.

Are you sure you need a single interpreter?



Re: Multithreading and locking...

Francisco Olarte
On Tue, May 21, 2019 at 12:07 PM Victor Bombi <[hidden email]> wrote:
> Personally I prefer using lanes for running a different Lua_State in every OS thread (called a lane) and getting comunication between lanes with the lanes API.

My main problem is that I do not have control of who calls the interpreter;
it heavily depends on runtime state. The app uses thread pools to do
its work and sometimes calls back into my core logic in the
interpreter.

I've read about Lanes before; I reread it now and saw the first paragraph:
"Lua Lanes is a Lua extension library providing the possibility to run
multiple Lua states in parallel. It is intended to be used for
optimizing performance on multicore CPU's and to study ways to make
Lua programs naturally parallel to begin with." I do not need, and
explicitly do not want, multiple states, and I have no performance
problem; I'm going to use less than 1% of a single core in Lua. Lua
code is going to control C code which runs in its own thread pools,
just making the interesting high-level executive decisions, not doing
the low-level stuff.

> You say that
> > For a program I'm doing I need to use a single interpreter from
> > multiple threads.
> Are you sure you need a single interpreter?

I do not strictly need it. I could just code the core logic in C++ and
use zero interpreters, reloading a DSO for reconfiguration, or use a
shared data structure, shuttle data around and store interpreters
in thread locals, or use a Lua interpreter for each thing. But the
problem I am trying to solve greatly benefits from a shared interpreter.
The Lua code becomes easier. It normally just turns events into coroutine
dispatches for handling some things, like IVR call flows, and
manipulates some global state for others, like a routing decision.
Having all this in a single interpreter means nothing moves while we
are inside Lua, and the lack of syncing while inside means I get in and
out very fast: simple code, small bug surface.

But in some corner cases I MAY have to reenter. The typical case is
the one I more or less described: I call a C function which sends a message
which ends up in another thread, which wants to, say, read some data (
calling a function ) in the interpreter. Concurrency is not a problem. I
would like to do that by temporarily unlocking, but I think I can
manage it with yields; after all, I'm going to have a dedicated C++
module for it, and I can encapsulate potentially reentering functions in
objects having before-unlock, unlocked and after-unlock virtual
methods. I would like to just call them, but I think with very little
overhead I can code the interpreter-entering code as a small loop
which handles LUA_YIELD and dispatches it, using lua_yield() from a
stub, transparently on the C++ side.
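That small resume loop can be sketched with the Lua side mocked ( no real lua_resume here; `MockCoroutine` and `Runnable` are illustrative names ): the loop resumes until the script finishes, and every "magic yield" hands the host a runnable to execute with the interpreter unlocked.

```cpp
#include <queue>

enum Status { DONE, YIELD_RUNNABLE };

struct Runnable {                    // the "runnable C++ object" in the LUD
    virtual void run() = 0;
    virtual ~Runnable() = default;
};

struct MockCoroutine {               // stands in for lua_resume on a state
    std::queue<Runnable*> pending;   // work the script "yields" to the host
    Status resume() {
        return pending.empty() ? DONE : YIELD_RUNNABLE;
    }
    Runnable* yielded() {
        Runnable* r = pending.front();
        pending.pop();
        return r;
    }
};

struct CoreCall : Runnable {         // example reentering core call
    int done = 0;
    void run() override { ++done; }
};

int unlocked_calls = 0;

void enter_interpreter(MockCoroutine& co) {
    // lock_interpreter();
    for (;;) {
        Status s = co.resume();      // run Lua until it returns or yields
        if (s == DONE) break;
        Runnable* r = co.yielded();  // the LUD carried in the magic yield
        // unlock_interpreter();     // core call happens without the lock
        r->run();
        ++unlocked_calls;
        // lock_interpreter();       // relock, then resume with the result
    }
    // unlock_interpreter();
}
```

The advantage over raw unlock/relock inside the C function is that the lock is only ever taken and released in one place, the entry loop.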

Francisco Olarte.


Re: Multithreading and locking...

Victor Bombi


Even without using Lanes, you can start a different interpreter from each OS thread. The only problem will be thread communication, but you can implement that in several ways, with luasocket or ZeroMQ for example.



Re: Multithreading and locking...

Viacheslav Usov
In reply to this post by Francisco Olarte
On Tue, May 21, 2019 at 11:47 AM Francisco Olarte <[hidden email]> wrote:

> For a program I'm doing I need to use a single interpreter from multiple threads. I do not have problems in the normal case, just lock/unlock around usage, but some times I need to "reenter".

I would stay away from that.

If you can ensure that ostA, while waiting for a message, is not using Lua (except holding some memory for its Lua API stack) and ostB is not using Lua _in_any_way_ when and after it sends a message to ostA, then you have already achieved a fully synchronised execution of those threads with respect to Lua and no further locks can make it any safer. Those locks will probably only give you an inflated sense of security.

The emphasis on "in any way" is supposed to mean that all of the calls into Lua have returned in this thread and none will be made (till another cycle begins).

Your approach a/b, at first glance, looks like it might work and it is indeed similar to the paradigm used by Lua itself. The question is how generally robust that paradigm is. Imagine thread A does this:

lock(); push_value(); unlock(); do_something_not_touching_Lua(); lock(); pop_value(); unlock(); 

(where values are pushed and popped on the Lua API stack)

And thread B does this:

lock(); push_another_value(); unlock(); do_something_not_touching_Lua_2(); lock(); pop_value(); unlock(); 

What value is each thread going to pop? Do you even want to have to answer such questions?
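The hazard can be made concrete. Below, the shared API stack is mocked as a `std::vector`, and the interleaving is forced with a step counter so the bad outcome is reproducible: thread A pops the value thread B pushed.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

std::vector<int> stack_;             // mock of the shared Lua API stack
std::mutex m_;
std::condition_variable cv_;
int step_ = 0;

// Run an action at a fixed point in the interleaving (lock held around it).
void at_step(int s, const std::function<void()>& action) {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [s] { return step_ == s; });
    action();
    ++step_;
    cv_.notify_all();
}

int popped_a = 0, popped_b = 0;

void thread_a() {
    at_step(0, [] { stack_.push_back(1); });   // A: lock, push 1, unlock
    at_step(2, [] { popped_a = stack_.back(); stack_.pop_back(); });
}

void thread_b() {
    at_step(1, [] { stack_.push_back(2); });   // B: lock, push 2, unlock
    at_step(3, [] { popped_b = stack_.back(); stack_.pop_back(); });
}

void run_demo() {
    std::thread a(thread_a), b(thread_b);
    a.join(); b.join();
}
```

Every individual access is locked, yet A pops B's value and vice versa: the lock serializes the operations without making the push/pop pairs atomic, which is exactly the question posed above.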

 
Cheers,
V.


Re: Multithreading and locking...

Francisco Olarte
In reply to this post by Francisco Olarte
Victor:

On Tue, May 21, 2019 at 1:07 PM Victor Bombi <[hidden email]> wrote:
> Even without using lanes, you can start a different interpreter from each OS thread. The only problem will be thread communication but you can implement it in several ways with luasocket or ZeroMQ for example.

My problem is NOT thread communication. I have a C++ host; I can just
have shared & synced data, or do synced RPC to a singleton
interpreter as I do in other projects. The problem is that the raison
d'être of Lua here is coordinating the working threads; it needs to make
decisions and maintain and manipulate global state. In C++ it's easy:
some worker needs a decision and calls in; I just sync on a global state
object, and if I need a potentially reentering callback I leave it in a
consistent state, unlock, call back, relock, reexamine potentially
changed interesting content ( which due to the design is generally
nothing ), move along, finish the work, unlock.

Doing it in Lua means I can change the control logic easily. Using
coroutines means I can linearize event handling; I've done it in other
projects, and it works quite well, as their threading model means the
interpreter is not reentered from another thread. But this one has
message queues and workers all around, and sometimes I have this problem.

Francisco Olarte.


Re: Multithreading and locking...

Victor Bombi
So that's even better: just start a different interpreter in each OS thread (as is recommended in so many places) and the concurrency problems will go away.



Re: Multithreading and locking...

Francisco Olarte
In reply to this post by Viacheslav Usov
Viacheslav:

On Tue, May 21, 2019 at 1:32 PM Viacheslav Usov <[hidden email]> wrote:
> On Tue, May 21, 2019 at 11:47 AM Francisco Olarte <[hidden email]> wrote:
> > For a program I'm doing I need to use a single interpreter from multiple threads. I do not have problems in the normal case, just lock/unlock around usage, but some times I need to "reenter".
> I would stay away from that.

I really tried.

> If you can ensure that ostA, while waiting for a message, is not using Lua (except holding some memory for its Lua API stack)

Waiting for the reply to the message is synchronous. ostA, after
unlocking, calls a C function, and relocks on return.

> and ostB is not using Lua _in_any_way_ when and after it sends a message to ostA,

ostB is, in this case, invoked by the function called by A sending a
message to its queue. Sending blocks A until B replies; it's a
synchronous send.

> then you have already achieved a fully synchronised execution of those threads with respect to Lua and no further locks can make it any safer. Those locks will probably only give you an inflated sense of security.

I need a lock on the interpreter because I have more than one thread
wanting to use it. Let's say A1 and A2: I do not want A2 entering while
A1 is in, but I do not mind it entering/exiting while A1 is waiting for B.

> The emphasis on "in any way" is supposed to mean that all of the calls into Lua have returned in this thread and none will be made (till another cycle begins).

If ostA calls into ostB indirectly, it is blocked waiting for a
response. It has called a synchronous sendmessage and will do nothing
until B is out of the interpreter and replies.

> Your approach a/b, at first glance, looks like it might work and it is indeed similar to the paradigm used by Lua itself. The question is how generally robust that paradigm is. Imagine thread A does this:
> lock(); push_value(); unlock(); do_something_not_touching_Lua(); lock(); pop_value(); unlock();
> (where values are pushed and popped on the Lua API stack)
> And thread B does this:
> lock(); push_another_value(); unlock(); do_something_not_touching_Lua_2(); lock(); pop_value(); unlock();

1st, the "do something not touching Lua" is always "wait"; the thread
is not doing anything fancy.

When ostA decides it wants something done in Lua, it locks the state,
pushes args, calls a function, pops the return value, unlocks.

Whenever I need to call the core, I unlock/relock and I leave the state
"clean", I mean, with an empty stack. ostA, inside Lua, has called a C
function. The C function then gets its args from the state and, if it
needs to call some core function which may reenter, clears the stack,
unlocks, calls the core function, relocks, pushes return values, and
returns into Lua. Every time the state is unlocked, the stack is empty.
If I need to do a sequence of operations manipulating the stack, ALL of
it is done locked.

> What value each thread is going to pop? Do you even want to have to answer such questions?

No, that's why I clear the stack before unlocking. Just as I ensure my
global state objects are in a consistent, quiescent state when I do
similar things in C++. I'm not going to be doing this a lot, and I've
designed the system in a way in which the normal sending of messages
does not involve syncing ( I yield a message as a table from Lua, and
resume with the reply when needed ), but sometimes a C function in the
core MAY send me a message from another thread, from deep inside.

Francisco Olarte.


Re: Multithreading and locking...

Francisco Olarte
In reply to this post by Victor Bombi
Victor:

On Tue, May 21, 2019 at 1:43 PM Victor Bombi <[hidden email]> wrote:
> So thats even better: Just start a different interpreter in each OS thread (as is recommended in so many places) and concurrency problems will be gone away.

Your top-quoting is making it difficult for me to follow your thoughts. I
DO NOT WANT an interpreter per thread. I want a single interpreter
with a couple of hundred, maybe a thousand, coroutines inside, doing
things in a simple way because they know they are the only one
running. But sometimes it may be reentered from another thread.

Regards.
   Francisco Olarte.


Re: Multithreading and locking...

Thijs Schreijer
In reply to this post by Francisco Olarte



I haven't touched the code in years, but maybe something like DSS might be of help? It is designed to work with a single Lua state and OS threads, with syncing calls.
See: https://github.com/Tieske/DarkSideSync


Regards
Thijs

Re: Multithreading and locking...

Kaj Eijlers
In reply to this post by Francisco Olarte



I am confused about what the gain is of running them on other threads if you have to lock the main object (the Lua state). Wouldn't you end up with 99% of the critical path inside mutexes, and thus gain nothing from threading, since each task will be blocking on the next? (And if the answer is "no, because the tasks do more than the Lua access", wouldn't a message queue to the main thread suffice?)
If not, can the critical data be stored in a blackboard-like structure or transactional memory?

Re: Multithreading and locking...

Viacheslav Usov
On Wed, May 29, 2019 at 8:34 AM Kaj Eijlers <[hidden email]> wrote:

> Wouldn't you end up with 99% of critical path inside mutexes and thus gain nothing from threading it since each task will be blocking on the next? (and if the answer is 'no, because the tasks do more than the lua access - wouldn't a message queue to the main thread suffice?).

This reminds me of a message I wanted to post in this thread but somehow never did.

I, too, think that Francisco's problem could be dealt with using a single Lua state that runs in a dedicated thread ("supervisor"), that reads and dispatches messages from a thread-safe queue, and all the other threads send messages to this queue when they need to interact with the supervisor, rather than using Lua API directly.
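That supervisor pattern can be sketched with standard C++ ( all names hypothetical; the task closure is where the supervisor thread would enter the Lua state ):

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

// One thread owns the (mock) Lua state; every other thread posts a closure
// and blocks on a future instead of touching the state directly.
class Supervisor {
public:
    Supervisor() : worker_([this] { loop(); }) {}
    ~Supervisor() {
        post([this] { stop_ = true; });
        worker_.join();
    }
    // Runs f on the supervisor thread; the caller blocks for the result.
    int call(const std::function<int()>& f) {
        std::promise<int> p;
        auto fut = p.get_future();
        post([&] { p.set_value(f()); });
        return fut.get();
    }
private:
    void post(const std::function<void()>& task) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(task); }
        cv_.notify_one();
    }
    void loop() {
        while (!stop_) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !q_.empty(); });
                task = q_.front();
                q_.pop();
            }
            task();              // only this thread ever "enters Lua"
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool stop_ = false;
    std::thread worker_;
};
```

Because only the supervisor thread ever touches the state, no interpreter lock is needed at all; the queue mutex is the only synchronization point.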

Cheers,
V.

Re: Multithreading and locking...

Francisco Olarte
In reply to this post by Kaj Eijlers
Kaj:

On Wed, May 29, 2019 at 8:34 AM Kaj Eijlers <[hidden email]> wrote:
> I am confused what the gain is of running them on other threads if you have to lock the main object (the Lua state). Wouldn't you end up with 99% of critical path inside mutexes and thus gain nothing from threading it since each task will be blocking on the next? (and if the answer is 'no, because the tasks do more than the lua access - wouldn't a message queue to the main thread suffice?).

I do not want the other threads, but the engine I'm extending has
them. It has messages, which can be queued ( fire & forget; ignore the
reply; the sending thread goes on ) or dispatched ( the sending thread is
blocked waiting for the reply ). Messages are handled by different
thread pools, typically because they are queued messages which
benefit from having their own thread pool, but sometimes a message
needs to be sent ( and waited for ) from one thread to one of these
thread pools, and the sending thread needs to block waiting for the reply.

So, my problem is: thread A enters Lua state S1, does some Lua
things and calls a C function which calls the C core, which in turn
sends-and-waits-for a message M1 which is handled by a queue served by (
amongst others ) thread B, which, to handle it, due to
configuration, needs to enter state S1 ( because some API stuff is
hooked to be served by Lua code ).

If, instead of Lua, I use C++, the thing is easy: I just ensure
the data structures are coherent and unlocked before letting A call
the core.

In lua I can do several things:

I can rearchitect all the code so each message is handled by a
different state and use a thread-safe data structure for all global
data. This leads to complex Lua code; I've made my estimations and
it's easier to just use C++ for the extensions.

I can use a dedicated thread with a dedicated message queue for S1, so
A does not enter S1 but sends it a message, and S1 does not call the API
but sends a message and waits for a posted reply. This is complex too;
same comment as above.

What I want to do is try a simple thing: lock the state on A before
first entering and, in the C function which calls the API, grab the
parameters, lua_settop(L, 0), unlock, call the API, relock, push results,
return; then do some more Lua things, exit, unlock. If B enters, it
finds the state unlocked and can lock it, and the core, which is
written in standard C, should not be able to tell it is being
called from a different thread ( it should be the same as if thread A
had called into it again, which is what would have happened if the
message M1 were not configured to be handled by a dedicated thread
pool ). I leave the state with a coherent view, and the internal data
structures are correctly set up, and it works nicely when the called API
does not need to be served by another thread pool.

From my examination of the sources, Lua is supposed to be stable if I
just define lua_lock/lua_unlock, but this is a mess, as I do not want
threads concurrently entering at any time, and I would need another
high-level lock. But I've seen that Lua calls C functions doing
unlock(), ($f)(..), lock(), so I think my approach is "safer" ( i.e.,
every code chunk which is locked by the lock/unlock approach is locked
by mine ).

> If not, can critical data be stored in a blackboard-like structure or transactional memory?

Yes, it can, but the added complexity is not worth it.

Francisco Olarte.


Re: Multithreading and locking...

Francisco Olarte
In reply to this post by Thijs Schreijer
Thijs:

On Tue, May 28, 2019 at 9:39 PM Thijs Schreijer <[hidden email]> wrote:
> Haven’t touched the code in years, but maybe something like DSS might be of help? Designed to work with a single Lua state and have os threads, with syncing calls.
> see: https://github.com/Tieske/DarkSideSync

Thanks, but no: from what I read it solves a completely different
problem. I have async work in the core, and similar things, but I have
that problem solved and integrated with a coroutine system; the core I
use helps turn async work into queued events, which I easily hand to
several coroutines.

Regards.
Francisco Olarte.


Re: Multithreading and locking...

Francisco Olarte
In reply to this post by Viacheslav Usov
Viacheslav:

On Wed, May 29, 2019 at 12:29 PM Viacheslav Usov <[hidden email]> wrote:
> On Wed, May 29, 2019 at 8:34 AM Kaj Eijlers <[hidden email]> wrote:
> > Wouldn't you end up with 99% of critical path inside mutexes and thus gain nothing from threading it since each task will be blocking on the next? (and if the answer is 'no, because the tasks do more than the lua access - wouldn't a message queue to the main thread suffice?).
> This reminds of a message I wanted to post in this thread but somehow never did.
> I, too, think that Francisco's problem could be dealt with using a single Lua state that runs in a dedicated thread ("supervisor"), that reads and dispatches messages from a thread-safe queue, and all the other threads send messages to this queue when they need to interact with the supervisor, rather than using Lua API directly.

It can be done this way, but it is not that simple. Lua handles callbacks
which need to return data, so I need to send a message to Lua and
block the calling thread while it works. Then I need to use a
reentrant message queue, so that if I call the API using an asynchronous
message to another thread I can process intermediate requests. I mean,
the API wants to send MA to Lua and get RA. To generate RA, Lua needs to
call the API with MB and get RB, and to generate RB the API needs to send
MC and get RC from Lua ( all from the same state ). It can be done,
just a couple of queues and some changes in the architecture: not
complex, but not easy. As I said before, it's much easier to just ditch
Lua then and use C++ ( which I need to use anyway, to glue Lua to the
core; just change the interpreter instantiation and script loading to
some dlopen and calls ).

Francisco Olarte.