My general idea is that a single-threaded component (the Lua interpreter) will always drag down the performance of a multi-threaded application that depends on it (a generic C++ application).
To work around this I'm considering an asynchronous approach on the interpreter side while keeping the C++ application multi-threaded: the Lua interpreter would push the entire script/file into a scheduler asynchronously (without waiting for the result), and it would be up to a well designed multi-threaded C++ messaging system to keep everything sequential.
The usual relationship is C/C++ function <-> Lua (with a sequential approach); I would like to have something like C++ messaging system <-> entire Lua script.
I'm also open to any kind of approach that can solve this and genuinely helps mixing Lua with a C++ application designed for multi-threading.
Is there some existing piece of software that makes this approach possible?
EDIT
I need something "user-proof", and I need to implement this behaviour right in the C++/Lua API design.
One option is to implement the communication with Lua as a coroutine. Messages are sent to C++ via coroutine.yield(messagedata), and results are sent back via lua_resume. (See also: lua_newthread.) You could even wrap your functions to provide a nicer event API.
function doThing(thing, other, data)
return coroutine.yield("doThing", thing, other, data)
end
You can still only have one thread running the Lua interpreter at any given time (you will have to do locking), but you can have multiple such coroutines running concurrently.
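On the C++ side, the messages produced by coroutine.yield would typically land in a thread-safe queue that the messaging system drains. Here is a minimal sketch of such a queue, with all the Lua glue (the lua_resume dispatch back into the coroutine) omitted; MessageQueue and Message are made-up names, not any real API:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// What a yielded message might carry; the real fields depend on your API.
struct Message {
    std::string name;    // e.g. "doThing"
    int coroutineId;     // which coroutine to lua_resume with the result
};

// Thread-safe queue the C++ messaging system drains; producers are the
// threads that run lua_resume and collect what coroutine.yield passed out.
class MessageQueue {
public:
    void push(Message m) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(m));
        }
        cv_.notify_one();
    }
    Message pop() {   // blocks until a message is available
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        Message m = std::move(queue_.front());
        queue_.pop();
        return m;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<Message> queue_;
};
```

The lock only guards the queue itself; the Lua state still needs its own lock as noted above.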
Concurrency in Lua is a topic that has many, many solutions. Here is a resource:
http://lua-users.org/wiki/MultiTasking
You can actually make this easy for yourself, since you do not have to run Lua itself multi-threaded, which would raise a number of additional issues.
The obvious solution is to run Lua in a separate thread while providing only a thin API to Lua, in which every single API call immediately either forks a new thread/process, uses some sort of message passing for asynchronous data transfer, or uses short-duration semaphores to read/write some values. This solution requires some sort of idle loop or event listeners unless you want to do busy waiting...
Another option that I think is still quite easy to implement with a new API is the approach node.js takes:
Run Lua in a separate thread
Make your whole API out of functions that only take callbacks. These callbacks are queued and can be scheduled by your C++ application.
You can even, but do not have to, provide callback wrappers for the standard Lua API.
Example:
local version;
Application.requestVersionNumber(function(val) version = val; end)
Of course this example is ridiculously trivial, but you get the idea.
One thing you should know, though, is that with the callback approach scripts quickly become deeply nested if you are not careful. While that's not bad for performance, it can make them hard to read.
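The host side of this callback approach can be sketched as a queue of completion callbacks that the C++ application drains when it chooses. requestVersionNumber here is the hypothetical API function from the example above, with the Lua binding omitted and the version string made up:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <string>

// Completion callbacks are queued here by async API calls; the C++
// application decides when to drain them (e.g. once per frame/tick).
class CallbackQueue {
public:
    void post(std::function<void()> cb) { pending_.push(std::move(cb)); }
    void drain() {   // called by the scheduler; real code re-enters Lua here
        while (!pending_.empty()) {
            auto cb = std::move(pending_.front());
            pending_.pop();
            cb();
        }
    }
private:
    std::queue<std::function<void()>> pending_;
};

// The hypothetical API call from the example: instead of returning the
// version, it schedules the user-supplied callback with the result.
void requestVersionNumber(CallbackQueue& q,
                          std::function<void(std::string)> cb) {
    q.post([cb] { cb("1.2.3"); });   // "1.2.3" is a made-up value
}
```

The point of the shape is that nothing runs inside the Lua thread until the application explicitly drains the queue, which is what makes the scheduling controllable.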
Related
I'm designing the threading architecture for my game engine, and I have reached a point where I am stumped.
The engine is partially inspired by Grimrock's engine, where they put as much as they could into LuaJIT, with some things, including low level systems, written in C++.
This seemed like a good plan, given that LuaJIT is easy to use, and I can continue to add API functions in C++ and expand it further. Faster iteration is nice, the ability to have a custom IDE attached to the game and edit the code while it runs is an interesting option, and serializing from Lua is also easy.
But I am stumped on how to go about adding threading. I know Lua has coroutines, but that is not true threading; it is basically a way to keep Lua from stalling while it waits for code that takes too long.
I originally had in mind to have the main thread running in Lua and calling C++ functions which are dispatched to the scheduler, but I can't find enough information on how Lua handles this. I do know that when Lua calls a C++ function, that function runs outside of the Lua state, so in theory it should be possible.
I also don't know whether Lua will block on such a call until it returns, even when the call is not supposed to return anything.
And I'm not sure whether the task scheduler should run in the main thread, or whether it should simply be a set of worker threads pulling data from a queue - basically meaning that, instead of everything running at once, it waits for the game state update before doing anything.
Does anyone have any ideas, or suggestions for threading?
In general, a single lua_State * is not thread safe. It's written in pure C and meant to go very fast. It's not safe to let exceptions propagate through it either. There are no locks in there and no way for it to protect itself.
If you want to run multiple Lua scripts simultaneously in separate threads, the most straightforward way is to call luaL_newstate() separately in each thread, initialize each state, and load and run the scripts in each of them. They can talk to the C++ side safely as long as your callbacks use locks when necessary. At least, that's how I would try to do it.
There are various things you could do to speed it up. For instance, if you are loading copies of a single script in each of the threads, you could compile it to Lua bytecode before you launch any of the threads, put the buffer into shared memory, and have each state load the shared bytecode without modifying it. That's most likely an unnecessary optimization though, depending on your application.
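A minimal sketch of that shared-bytecode idea, with the actual Lua calls stubbed out: the real code would compile with lua_dump (or luac) and load in each thread with luaL_loadbuffer; compileOnce and runInThread are made-up names, and the "bytecode" here is just the source bytes, since only the sharing pattern is being shown:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <thread>
#include <vector>

using Bytecode = std::vector<char>;

// Stand-in for compiling once (lua_dump / luac); done before any threads
// are launched, so no synchronization is needed on the buffer afterwards.
std::shared_ptr<const Bytecode> compileOnce(const std::string& script) {
    return std::make_shared<const Bytecode>(script.begin(), script.end());
}

// Stand-in for luaL_newstate + luaL_loadbuffer + lua_pcall in one thread:
// each thread gets its own state but only ever reads the shared buffer.
size_t runInThread(const std::shared_ptr<const Bytecode>& code) {
    return code->size();
}
```

Because the buffer is const and fully built before the threads start, the threads can read it concurrently without locks.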
I have implemented a WebSocket handler in C++ and I need to send ping messages once in a while. However, I don't want to start one thread per socket, or one global poll thread which only calls the ping function, but instead use some OS functionality to call my timer function. On Windows there is SetTimer, but that requires a working message loop (which I don't have). On Linux there is timer_create, which looks better.
Is there some portable, low-overhead method to get a function called periodically, ideally with some custom context? I.e. something like settimer (const int millisecond, const void* context, void (*callback)(const void*))?
[Edit] Just to make this a bit clearer: I don't want to have to manage additional threads. On Windows, I guess using CreateThreadpoolTimer on the system thread pool will do the trick, but I'm curious to hear if there is a simpler solution and how to port this over to Linux.
If you are intending to go cross-platform, I would suggest you use a cross platform event library like libevent.
libev is newer, but it currently has weak Win32 support.
If you use sockets, you can use select() to wait for socket events with a timeout, and in that loop track the time and call your callback at the appropriate moment.
If you are looking for a timer that will not require an additional thread, let you do your work transparently and then call the timer function at the appropriate time in the same thread by pre-emptively interrupting your application, then there is no such portable thing.
The first reason is that it's downright dangerous. That's like writing a multi-threaded application with absolutely no synchronization. The second reason is that it is extremely difficult to have good semantics in multi-threaded applications. Which thread should execute the timer callback?
If you're writing a web-socket handler, you are probably already writing a select()-based loop. If so, then you can just use select() with a short timeout and check the different connections for which you need to ping each peer.
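With no descriptors to watch, select() degenerates into a portable short sleep, which is enough to drive the ping check; real code would pass its socket fd_sets and use the timeout only as an upper bound. A rough sketch, with the actual ping send and per-peer bookkeeping left out (pingDue and waitMs are illustrative helpers, not any real API):

```cpp
#include <sys/select.h>   // POSIX; Winsock also provides select()
#include <cassert>
#include <chrono>

// Sleep for roughly 'ms' milliseconds using select() as a portable timer.
inline void waitMs(long ms) {
    timeval tv;
    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, nullptr, nullptr, nullptr, &tv);
}

// Illustrative helper: is a ping due for a peer last pinged at 'last'?
bool pingDue(std::chrono::steady_clock::time_point last,
             std::chrono::milliseconds interval) {
    return std::chrono::steady_clock::now() - last >= interval;
}
```

The loop would then be: select() with a short timeout, handle any readable sockets, and ping every peer for which pingDue() is true.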
Whenever you have asynchronous events, you should have an event loop. This doesn't need to be some system default one, like Windows' message loop. You can create your own. But you should be using it.
The whole point of event-based programming is that you are decoupling your code into handlers that deal with well-defined functional fragments based on these asynchronous events. Without an event loop, you are condemning yourself to interleaving code that gets input and produces output based on poorly defined "states" that are just fragments of procedural code.
Without a well-defined separation of states using an event-based design, code quickly becomes unmanageable. Because code pauses inside procedures to do input tasks, you have lifetimes of objects that will not span entire procedure scopes, and you will begin to write if (nullptr == xx) in various places that access objects created or destroyed based on events. Dispatch becomes combinatorially complex because you have different events expected at each input point and no abstraction.
However, simply by using an event loop and dispatching to state machines, you've decreased handling complexity to basic management of handlers (O(n) handlers versus O(mn) branch statements, with n types of events and m states). You decouple handling but still allow functionality to change depending on state. But now these states are well-defined using state classes, and new states can be added if the requirements of the product change.
I'm just saying, stop trying to avoid an event loop. It's a software pattern for very important reasons, all of which have to do with producing professional, reusable, scalable code. Use Boost.ASIO or some other framework for cross platform capabilities. Don't get in the habit of doing it wrong just because you think it will be less of an effort. In the end, even if it's not a professional project that needs maintenance long term, you want to practice making your code professional so you can do something with your skills down the line.
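To make the complexity claim concrete, here is a toy event loop with per-event-type handlers; each state class would install its own handler set, keeping the count at O(n) handlers per state rather than O(mn) interleaved branches. This is only a shape sketch, not a substitute for Boost.ASIO, and EventLoop/Event are made-up names:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <queue>
#include <string>

struct Event { std::string type; };

// Events are queued as they arrive; run() drains them, dispatching each
// to the handler registered for its type (unknown types are ignored).
class EventLoop {
public:
    using Handler = std::function<void(const Event&)>;
    void on(const std::string& type, Handler h) {
        handlers_[type] = std::move(h);
    }
    void post(Event e) { queue_.push(std::move(e)); }
    void run() {   // drain the queue once
        while (!queue_.empty()) {
            Event e = std::move(queue_.front());
            queue_.pop();
            auto it = handlers_.find(e.type);
            if (it != handlers_.end()) it->second(e);
        }
    }
private:
    std::map<std::string, Handler> handlers_;
    std::queue<Event> queue_;
};
```

Swapping the installed handler set is what changing "state" means in this design.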
I am thinking of writing a server application - along the lines of mySQL or Apache.
The main requirements are:
Clients will communicate with the server via TCP/IP (sockets)
The server will spawn a new child process to handle requests (à la Apache)
Ideally, I would like to use the Boost libraries rather than attempt to reinvent my own. There must be code somewhere that does most of what I am trying to do, so I can use it (or at least part of it) as my starting point - can anyone point me to a useful link?
In the (hopefully unlikely) event that there is no code I can use as a starting point, can someone point out the most appropriate Boost libraries to use, and give a general guideline on how to proceed?
My main worry is how to know when one of the children has crashed. AFAIK, there are two ways of doing this:
Using heartbeats between the parent and children (this quickly becomes messy, and introduces more things that could go wrong)
Somehow wrap the spawning of the process with a timeout parameter - but this is a dumb approach, because if a child is carrying out time intensive work, the parent may incorrectly think that the child has died
What are the best practices for making the parent aware that a child has died?
[Edit]
BTW, I am developing/running/deploying on Linux
On what platform (Windows/Linux/both)? Processes on Windows are considered more heavy-weight than on Linux, so you may indeed consider threads.
Also, I think it is better (like Apache does) not to spawn a process for each request but to have a process pool, so you save the cost of creating a process, especially on Windows.
If you are on Linux, waitpid() may be useful for you. You can use it in non-blocking mode to check periodically whether one of the child processes has terminated.
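A minimal POSIX sketch of that non-blocking check (childExited is a made-up helper name): with WNOHANG, waitpid() returns 0 while the child is still running and returns the pid once it has terminated, so the parent can poll it from its main loop at whatever interval suits it.

```cpp
#include <sys/wait.h>   // POSIX only
#include <unistd.h>
#include <cassert>

// Returns true once the child has terminated, filling in its status.
// With WNOHANG the call never blocks, so it is safe inside a main loop.
bool childExited(pid_t pid, int& status) {
    return waitpid(pid, &status, WNOHANG) == pid;
}
```

On termination, WIFEXITED/WEXITSTATUS (or WIFSIGNALED for crashes) tell you how the child died.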
I can say for sure that Pion is your only stable option.
I have never used it but I intend to, and the API looks very clean.
As for the Boost libraries you would need:
Boost.Asio
Boost.Thread
Boost.Spirit (or something similar to parse the HTTP protocol)
Boost.Interprocess
What about using threads (which are supported by Boost) rather than forking the process? This would allow you to make queries about the state of a child and, imho, threads are simpler to handle than forking.
Generally, Boost.Asio is a good place to begin with.
But several points to be aware of:
Boost.Asio is a very good library, but it is not very fork-aware, so don't try to share an Asio event loop between several forked processes - this will not work (i.e. if a boost::asio::io_service was created before fork(), don't use it in more than one process afterwards).
It also does not allow you to release the file handle from a boost::asio::XX::socket, so the only way is to call dup() and pass the duplicate to the child process.
But to be honest, I don't think you'll find any network event loop library that is fork-aware (with the possible exception of CppCMS's booster.aio, which I wrote to be fork-aware myself).
Waiting for children is quite simple: you can install a signal handler with sigaction() for the SIGCHLD signal, which is sent when a child crashes or exits. So all you need to do is handle this signal, and in the main loop call waitpid() when such a signal has been received.
With Asio you can use the "self-pipe" trick to wake the loop from within the signal handler.
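Sketched below, assuming POSIX: the SIGCHLD handler does only the async-signal-safe thing it needs to (write one byte to a pipe), and the loop's select() wakes on that pipe before calling waitpid(). This is a bare illustration of the trick, not Asio integration; with Asio you would hand the pipe's read end to the io_service instead of calling select() yourself.

```cpp
#include <sys/select.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cassert>
#include <csignal>

static int g_pipe[2];   // [0] watched by the loop, [1] used by the handler

// Async-signal-safe handler: just write one byte to wake the loop.
extern "C" void onSigchld(int) {
    char b = 1;
    ssize_t r = write(g_pipe[1], &b, 1);
    (void)r;
}

void installHandler() {
    if (pipe(g_pipe) != 0) return;   // error handling elided
    struct sigaction sa = {};
    sa.sa_handler = onSigchld;
    sigaction(SIGCHLD, &sa, nullptr);
}

// One loop iteration: sleep in select() until the pipe wakes us, then reap.
pid_t waitForChild() {
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(g_pipe[0], &fds);
    select(g_pipe[0] + 1, &fds, nullptr, nullptr, nullptr);
    char b;
    ssize_t r = read(g_pipe[0], &b, 1);
    (void)r;
    int status;
    return waitpid(-1, &status, WNOHANG);
}
```

Even if the signal interrupts select() with EINTR, the byte has already been written by then, so the following read() does not block.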
First, take a look at CPPCMS. It might already fit your needs.
Now, as pointed out by others, boost::asio is a good starting point, but it really only covers the basics of the task.
Maybe you'll be more interested in the work being done on server code based on boost::asio: cpp-netlib (which is intended to be submitted to Boost once done). See also the author's blog.
I've made a FOSS library for creating C++ applications in a modular way. It's hosted at
https://github.com/chilabot/chila
here's my blog: http://chilatools.blogspot.com/view/sidebar
It's especially suited to generic server creation (that was my motivation for building it), but I think it can be used for any kind of application.
The part that has to be deployed with the final binary is LGPL, so it can be used with commercial applications.
I'm developing a project and I have to make a wrapper to some hardware functions.
We have to write and read data to and from a non-volatile memory. I have a library with the read and write functions from the seller company. The problem is that these functions should be called with a delay between each call due to hardware characteristics.
So my solution is to start a thread, create a queue, and write my own read and write functions. Every time my functions are called, the data will be stored in the queue, and the thread's loop will then actually read from or write to the memory. My functions will use a mutex to synchronize access to the queue. My wrapper is going to be in a DLL. The main module will call my DLL's init function once to start the thread, and then it will call my read/write functions many times from different threads.
My question is: is it safe to do this? The original functions are non-reentrant, and I don't know if this is going to be a problem. Is there a better way to do this?
Any help will be appreciated.
Sorry I forgot something:
- The language to be used is C++
- The main program will call my wrapper DLL, but it will also call other modules (DLLs) that in turn call the wrapper DLL.
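For what it's worth, the plan described above can be sketched roughly like this, assuming C++11; HardwareWrapper is a made-up name, and the vendor's non-reentrant read/write calls are represented by plain callables. Because only the single worker thread ever touches the vendor functions, their non-reentrancy stops being a problem, and the sleep between calls enforces the hardware delay:

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class HardwareWrapper {
public:
    explicit HardwareWrapper(std::chrono::milliseconds delay)
        : delay_(delay), worker_([this] { run(); }) {}
    ~HardwareWrapper() {   // drains remaining work, then stops the thread
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    void post(std::function<void()> op) {   // callable from any thread
        { std::lock_guard<std::mutex> lk(m_); queue_.push(std::move(op)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> op;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stop_ || !queue_.empty(); });
                if (stop_ && queue_.empty()) return;
                op = std::move(queue_.front());
                queue_.pop();
            }
            op();                                  // the vendor call
            std::this_thread::sleep_for(delay_);   // hardware pacing
        }
    }
    std::chrono::milliseconds delay_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool stop_ = false;
    std::thread worker_;   // declared last so it starts fully initialized
};
```

Results would come back through callbacks or futures captured in the posted callables; that part is omitted here.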
Adding a mediator in this context is a pretty typical solution so you aren't out in the weeds here. I would say you would need to implement this because the original functions are not reentrant. Assuming, of course, that you own the access to the hardware. (i.e. You are the driver.) If other people can get access to the same piece of hardware, then you're going to have to come up with some higher level contract. Your thread then provides the ordered access to the driver. You'll find that the mediator will also allow you to throttle.
The hard part it seems is knowing when it is okay to make the next call to the device. Does it have some sort of flag to let you know it is ready for reads and writes? Some other questions: How do you plan to communicate state to your clients? Since you are providing an async interface, you'll need to have some sort of error callback registration, etc. Take a look at a normal async driver interface for ideas.
But overall, sounds like a good strategy to start with. As another poster mentioned, more specifics would be nice.
I am looking for a cross-platform C++ master/worker library or work queue library. The general idea is that my application would create some sort of Task or Work objects, pass them to the work master or work queue, which would in turn execute the work in separate threads or processes. To provide a bit of context, the application is a CD ripper, and the tasks that I want to parallelize are things like "rip track", "encode WAV to MP3", etc.
My basic requirements are:
Must support a configurable number of concurrent tasks.
Must support dependencies between tasks, such that tasks are not executed until all tasks that they depend on have completed.
Must allow for cancellation of tasks (or at least not prevent me from coding cancellation into my own tasks).
Must allow for reporting of status and progress information back to the main application thread.
Must work on Windows, Mac OS X, and Linux
Must be open source.
It would be especially nice if this library also:
Integrated with Qt's signal/slot mechanism.
Supported the use of threads or processes for executing tasks.
By way of analogy, I'm looking for something similar to Java's ExecutorService or some other similar thread pooling library, but in cross-platform C++. Does anyone know of such a beast?
Thanks!
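For comparison while evaluating libraries: the requested shape (a configurable number of workers plus dependency ordering) can be sketched in portable C++ like this. It is deliberately minimal, skips cancellation and progress reporting, and TaskGraph is a made-up name, not any real library:

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Tasks are registered with integer ids and optional dependencies; run()
// starts a fixed pool of workers, and finishing a task releases any
// dependents whose remaining dependency count drops to zero.
class TaskGraph {
public:
    explicit TaskGraph(unsigned workers) : workers_(workers) {}

    void add(int id, std::function<void()> fn, std::vector<int> deps = {}) {
        remaining_[id] = static_cast<int>(deps.size());
        for (int d : deps) dependents_[d].push_back(id);
        tasks_[id] = std::move(fn);
    }

    void run() {
        for (auto& [id, count] : remaining_)
            if (count == 0) ready_.push(id);
        pending_ = tasks_.size();
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < workers_; ++i)
            pool.emplace_back([this] { work(); });
        for (auto& t : pool) t.join();
    }

private:
    void work() {
        std::unique_lock<std::mutex> lk(m_);
        while (pending_ > 0) {
            if (ready_.empty()) { cv_.wait(lk); continue; }
            int id = ready_.front();
            ready_.pop();
            std::function<void()> fn = std::move(tasks_[id]);
            lk.unlock();
            fn();                     // run the task outside the lock
            lk.lock();
            --pending_;
            for (int dep : dependents_[id])
                if (--remaining_[dep] == 0) ready_.push(dep);
            cv_.notify_all();         // new work may be ready, or we're done
        }
        cv_.notify_all();
    }

    unsigned workers_;
    std::map<int, std::function<void()>> tasks_;
    std::map<int, std::vector<int>> dependents_;
    std::map<int, int> remaining_;
    std::queue<int> ready_;
    size_t pending_ = 0;
    std::mutex m_;
    std::condition_variable cv_;
};
```

In the CD ripper, "rip track N" would have no dependencies and "encode track N" would depend on it.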
I haven't used it in long enough that I'm not positive whether it exactly meets your needs, but check out the Adaptive Communication Environment (ACE). This library allows you to construct "active objects" which have work queues and execute their main body in their own threads, as well as thread pools that can be shared among objects. You can then pass queued work objects to active objects for them to process. Objects can be chained in various ways. The library is fairly heavy and has a lot to learn, but there have been a couple of books written about it and there's a fair amount of tutorial information available online as well. It should be able to do everything you want plus more; my only concern is whether it possesses the interfaces you are looking for out of the box, or if you'd need to build on top of it to get exactly what you are looking for.
I think this calls for intel's Threading Building Blocks, which pretty much does what you want.
Check out Intel's Threading Building Blocks library.
Sounds like you require some kind of "time sharing system". There are some good open source ones out there, but I don't know if they have built-in Qt slot support.
This is probably huge overkill for what you need, but still worth mentioning: BOINC is a distributed framework for such tasks. There's a main server that hands out tasks to perform and a cloud of workers that do its bidding. It is the framework behind projects like SETI@home and many others.
See this post for creating threads using the boost library in C++:
Simple example of threading in C++
(it is a C++ thread even though the title says C)
Basically, create your own "master" object that takes a "runnable" object and starts it running in a new thread.
Then you can create new classes that implement "runnable" and throw them over to your master runner any time you want.
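That master/runnable shape is only a few lines. Here it is sketched with std::thread, which has the same interface idea as boost::thread; Master and Runnable are illustrative names, and the caller must keep each runnable alive until joinAll() returns:

```cpp
#include <cassert>
#include <thread>
#include <vector>

// Anything the master can run; concrete work implements run().
struct Runnable {
    virtual void run() = 0;
    virtual ~Runnable() = default;
};

// The "master" starts each runnable in its own thread and joins them later.
class Master {
public:
    void start(Runnable& r) { threads_.emplace_back([&r] { r.run(); }); }
    void joinAll() {
        for (auto& t : threads_) t.join();
        threads_.clear();
    }
private:
    std::vector<std::thread> threads_;
};
```

Each new kind of work is just another subclass of Runnable handed to start().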