A wrapper for hardware functions - C++

I'm developing a project in which I have to write a wrapper for some hardware functions.
We have to write and read data to and from a non-volatile memory. I have a library with the read and write functions from the vendor. The problem is that these functions must be called with a delay between calls because of hardware characteristics.
So my solution is to start a thread, create a queue, and expose my own read and write functions. Every time my functions are called, the data is stored in the queue, and the thread's loop then performs the actual reads and writes on the memory. My functions use a mutex to synchronize access to the queue. My wrapper is going to live in a DLL. The main module will call my DLL's init function once to start the thread, and then it will call my read/write functions many times from different threads.
My question is: is it safe to do this? The original functions are non-reentrant, and I don't know if this is going to be a problem. Is there a better way to do this?
Any help will be appreciated.
Sorry, I forgot something:
-The language to be used is C++.
-The main program will call my wrapper DLL, but it will also call other modules (DLLs) that in turn call the wrapper DLL.

Adding a mediator in this context is a pretty typical solution, so you aren't out in the weeds here. I would say you need it precisely because the original functions are not reentrant. Assuming, of course, that you own the access to the hardware (i.e., you are the driver). If others can get access to the same piece of hardware, then you're going to have to come up with some higher-level contract. Your thread then provides the ordered access to the driver. You'll find that the mediator also lets you throttle.
The hard part, it seems, is knowing when it is okay to make the next call to the device. Does it have some sort of flag to let you know it is ready for reads and writes? Some other questions: how do you plan to communicate state to your clients? Since you are providing an async interface, you'll need some sort of error callback registration, etc. Take a look at a normal async driver interface for ideas.
But overall, it sounds like a good strategy to start with. As another poster mentioned, more specifics would be nice.
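To make that concrete, here's a minimal sketch of such a mediator: one worker thread drains a mutex-protected queue and spaces out the vendor calls. The name hw_write and the 10 ms gap are placeholders, not the real library or timing.

#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Placeholder for the vendor's non-reentrant write function.
void hw_write(std::uint32_t address, const std::uint8_t* data, std::size_t size);

struct Request { std::uint32_t address; std::vector<std::uint8_t> data; };

class HwMediator {
public:
    HwMediator() : worker_([this] { Run(); }) {}
    ~HwMediator() {
        { std::lock_guard<std::mutex> lk(mu_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    // Called from any client thread; just enqueues and returns.
    void QueueWrite(std::uint32_t address, std::vector<std::uint8_t> data) {
        { std::lock_guard<std::mutex> lk(mu_); queue_.push({address, std::move(data)}); }
        cv_.notify_one();
    }
private:
    void Run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(mu_);
            cv_.wait(lk, [this] { return stop_ || !queue_.empty(); });
            if (queue_.empty()) return;            // stop requested and nothing left to do
            Request req = std::move(queue_.front());
            queue_.pop();
            lk.unlock();                           // never hold the lock during the hardware call
            hw_write(req.address, req.data.data(), req.data.size());
            std::this_thread::sleep_for(std::chrono::milliseconds(10));   // required gap between calls
        }
    }
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<Request> queue_;
    bool stop_ = false;
    std::thread worker_;                           // declared last so the members above exist first
};

Reads would work the same way, except that you also need a way to hand the result back (a callback or a future), which ties into the async-interface questions above.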

Is it safe to change the reactor's state using the async API without manual synchronization?

Hey
I'm using gRPC with the async API. That requires building reactors on classes like ClientBidiReactor or ServerBidiReactor.
If I understand correctly, gRPC works like this: it takes threads from some thread pool and uses those threads to execute the methods of the reactors that are in use.
The problem
Now, the problem arises when the reactors become stateful. I know that the methods of a single reactor will most probably be executed sequentially, but they may be run from different threads; is this correct? If so, is it possible that we run into the kind of problem described, for instance, here?
Long story short: if we have unsynchronized state in such circumstances, is it possible that one thread updates the state, and then the next reactor method runs on a different thread and sees the stale value because the new value has not yet been made visible to that thread?
Honestly, I'm a little confused about this. In the gRPC examples here and here this doesn't seem to be addressed (the mutex there is used for a different purpose, and the values are not atomic).
I used/linked examples for the bidi reactors, but the question applies to all types of reactors.
Conclusion / questions
There are basically a few questions from me at this point:
Are my concerns valid, and do I understand everything properly, or did I miss something? Does the problem exist?
Do we need to manually synchronize the reactors' state, or is it handled by the library somehow (I mean, is visibility/flushing to main memory handled)?
Are the library authors aware of this? Did they keep it in mind while writing the examples I linked?
Thank you in advance for any help, all the best!
You're right that the examples don't showcase this very well; there's some room for improvement. The operation-completion reaction methods (OnReadInitialMetadataDone, OnReadDone, OnWriteDone, ...) can be called concurrently from different threads owned by the gRPC library, so if your code accesses any shared state, you'll want to coordinate that yourself (via synchronization, lock-free types, etc.). In practice, I'm not sure how often it happens, or which callbacks are most likely to overlap.
The original callback API proposal says a bit more about this under a "Thread safety" clause: L67: C++ callback-based asynchronous API. The same point is reiterated in a few places in the callback implementation code itself, client_callback.h#L234-236 for example.
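For illustration, here's a minimal sketch of a client bidi reactor that guards its own state with a mutex. EchoRequest/EchoResponse stand in for whatever your generated proto types are, and the counters are just example state:

#include <grpcpp/grpcpp.h>
#include <grpcpp/support/client_callback.h>
#include <mutex>

// EchoRequest / EchoResponse are placeholders for generated proto messages.
class MyReactor : public grpc::ClientBidiReactor<EchoRequest, EchoResponse> {
public:
    void OnReadDone(bool ok) override {
        if (!ok) return;
        {
            std::lock_guard<std::mutex> lock(mu_);   // shared state guarded explicitly
            ++reads_done_;
        }
        StartRead(&response_);                       // keep the read stream going
    }
    void OnWriteDone(bool ok) override {
        std::lock_guard<std::mutex> lock(mu_);       // may run on a different library thread
        ++writes_done_;
    }
    void OnDone(const grpc::Status& status) override {
        // signal completion to the owner here (e.g. a promise or condition variable)
    }
private:
    std::mutex mu_;
    int reads_done_ = 0;
    int writes_done_ = 0;
    EchoResponse response_;
};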

Game Engine Multithreading with Lua

I'm designing the threading architecture for my game engine, and I have reached a point where I am stumped.
The engine is partially inspired by Grimrock's engine, where they put as much as they could into LuaJIT, with some things, including low level systems, written in C++.
This seemed like a good plan, given that LuaJIT is easy to use, and I can continue to add API functions in C++ and expand it further. Faster iteration is nice, the ability to have a custom IDE attached to the game and edit the code while it runs is an interesting option, and serializing from Lua is also easy.
But I am stumped on how to go about adding threading. I know Lua has coroutines, but that is not true threading; it's cooperative scheduling within a single thread, basically a way to keep Lua from stalling while it waits on code that takes too long.
I originally had in mind to have the main thread running in Lua and calling C++ functions that get dispatched to the scheduler, but I can't find enough information on how Lua behaves here. I do know that when Lua calls a C++ function, that function runs outside of the Lua state, so theoretically it may be possible.
I also don't know whether, if Lua makes such a call that is not supposed to return anything, it will still block on the function until it's done.
And I'm not sure whether the task scheduler should run in the main thread, or whether it is simply all worker threads pulling data from a queue, basically meaning that, instead of everything running at once, it waits for the game state update before doing anything.
Does anyone have any ideas, or suggestions for threading?
In general, a single lua_State * is not thread-safe. It's written in pure C and meant to go very fast. It's not safe to let exceptions propagate through it either. There are no locks in there and no way for it to protect itself.
If you want to run multiple Lua scripts simultaneously in separate threads, the most straightforward way is to call luaL_newstate() separately in each thread, initialize each state, and load and run the scripts in each of them. They can talk to the C++ side safely as long as your callbacks use locks where necessary. At least, that's how I would try to do it.
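A minimal sketch of that approach, one independent state per std::thread ("script.lua" is just a placeholder name):

#include <lua.hpp>
#include <thread>
#include <vector>

void run_script(const char* path) {
    lua_State* L = luaL_newstate();      // each thread owns its own interpreter state
    luaL_openlibs(L);                    // open the standard libraries for this state
    if (luaL_dofile(L, path) != 0) {
        // the error message is on top of the stack: lua_tostring(L, -1)
    }
    lua_close(L);
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(run_script, "script.lua");
    for (auto& t : workers) t.join();    // the states never touch each other
}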
There are various things you could do to speed it up. For instance, if you are loading copies of a single script in each of the threads, you could compile it to Lua bytecode before you launch any of the threads, put the resulting buffer into shared memory, and have each state load the shared bytecode without recompiling. That's most likely an unnecessary optimization though, depending on your application.
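If you did want to try it, the outline would look something like this, assuming a Lua 5.1 / LuaJIT-style lua_dump (later Lua versions add a strip argument):

#include <lua.hpp>
#include <string>

// Writer callback for lua_dump: appends each chunk of bytecode to a std::string.
static int string_writer(lua_State*, const void* p, size_t sz, void* ud) {
    static_cast<std::string*>(ud)->append(static_cast<const char*>(p), sz);
    return 0;                                       // 0 means "no error"
}

std::string compile_to_bytecode(const char* path) {
    lua_State* L = luaL_newstate();
    std::string bytecode;
    if (luaL_loadfile(L, path) == 0)                // compile only, don't run
        lua_dump(L, string_writer, &bytecode);
    lua_close(L);
    return bytecode;                                // share this buffer with every thread
}

// In each thread's own state:
//   luaL_loadbuffer(L, bytecode.data(), bytecode.size(), "script");
//   lua_pcall(L, 0, LUA_MULTRET, 0);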

An async interpreter for Lua to solve a multithreaded approach?

My general idea is that a single-threaded component (the Lua interpreter) will always degrade the performance of a multi-threaded application that depends on it (a generic C++ application).
To circumvent this problem I'm thinking about an asynchronous approach to the interpreter while keeping the C++ application multi-threaded. Basically, the Lua interpreter should somehow push the entire script/file into a scheduler asynchronously (without waiting for the result), and it's up to a well-designed, multi-threaded C++ messaging system to keep everything sequential.
The usual relationship is C/C++ function <-> Lua (with a sequential approach); I would like to have something like C++ messaging system <-> entire Lua script.
I'm also open to any other kind of approach that can solve this and really help the mix between Lua and a C++ application designed for multi-threading.
Is this approach made possible by some existing piece of software?
EDIT
I need something "user-proof", and I need to implement this behaviour right in the C++/Lua API design.
One option is to implement the communication with Lua as a coroutine. Messages are sent to C++ via coroutine.yield(messagedata), and C++ sends results back via lua_resume (see also: lua_newthread). You could even wrap your functions to provide a nicer event API.
function doThing(thing, other, data)
return coroutine.yield("doThing", thing, other, data)
end
You can still only have one thread running the Lua interpreter at any given time (you will have to do locking), but you can have multiple such coroutines in flight at once.
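To sketch the C++ side of that (assuming the Lua 5.1 / LuaJIT lua_resume signature; later versions take extra arguments):

#include <lua.hpp>

// Drives one script-coroutine: resume it, hand whatever it yielded to the
// messaging system (not shown here), then resume it again with the result.
void pump(lua_State* main_state, const char* path) {
    lua_State* co = lua_newthread(main_state);   // coroutine sharing main_state's globals
    luaL_loadfile(co, path);                     // the chunk to run inside the coroutine

    int status = lua_resume(co, 0);              // runs until the first coroutine.yield(...)
    while (status == LUA_YIELD) {
        // The values passed to coroutine.yield ("doThing", thing, ...) are now
        // on co's stack; dispatch them to your C++ messaging system here.
        const char* msg = lua_tostring(co, 1);
        (void)msg;

        lua_settop(co, 0);                       // clear the yielded values
        lua_pushstring(co, "result");            // value that coroutine.yield will return
        status = lua_resume(co, 1);
    }
    // status == 0 means the script finished; anything else is an error.
}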
Concurrency in Lua is a topic with many, many solutions. Here is a resource:
http://lua-users.org/wiki/MultiTasking
You can actually make this easy for yourself, since you do not have to run Lua itself multi-threaded, which would bring a number of additional issues.
The obvious solution is to run Lua in a separate thread but provide only a thin API for Lua in which every single API call immediately either forks a new thread/process, uses some sort of message passing for asynchronous data transfer, or even uses short-duration semaphores to read/write some values. This solution requires some sort of idle loop or event listeners unless you want to do busy waiting...
Another option, which I think is still quite easy to implement with a new API, is essentially the approach of node.js:
Run Lua in a separate thread
Make your whole API out of functions that only take callbacks. These callbacks are queued and can be scheduled by your C++ application.
You can even, but do not have to, provide callback wrappers for the standard Lua API.
Example:
local version;
Application.requestVersionNumber(function(val) version = val; end)
Of course this example is ridiculously trivial, but you get the idea.
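For reference, the C++ side of such a function could look roughly like this; Application.requestVersionNumber and schedule_version_request are placeholders for your own API and scheduler, not anything that already exists:

#include <lua.hpp>

// Hypothetical hand-off to the C++ scheduler; it stores (L, ref) and later
// arranges for deliver_version() to run on the thread that owns L.
void schedule_version_request(lua_State* L, int callback_ref);

// Bound to Application.requestVersionNumber: stores the callback and returns immediately.
static int request_version_number(lua_State* L) {
    luaL_checktype(L, 1, LUA_TFUNCTION);
    int ref = luaL_ref(L, LUA_REGISTRYINDEX);     // pops the function, keeps it alive
    schedule_version_request(L, ref);
    return 0;                                     // no results; the script continues right away
}

// Called later, only on the thread that is allowed to touch this lua_State.
void deliver_version(lua_State* L, int callback_ref, int version) {
    lua_rawgeti(L, LUA_REGISTRYINDEX, callback_ref);   // push the stored callback
    lua_pushinteger(L, version);
    lua_pcall(L, 1, 0, 0);                             // callback(version)
    luaL_unref(L, LUA_REGISTRYINDEX, callback_ref);    // release the registry slot
}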
One thing you should know, though, is that with the callback approach the scripts quickly become deeply nested if you are not careful. While that's not bad for performance, they can get hard to read.

Scenario: Global variables in DLL which is used by Multi-threaded Application

A few months back, I came across this interesting scenario asked by a guy (on Orkut). I've come up with a "non-portable" solution to this problem (tested with a small piece of code), but I would still like to know what you guys have to say and suggest.
Suppose I created a DLL in C++ that exports some functionality for a single-threaded client. This DLL declares lots of global variables, some of them const (read-only) and others modifiable.
Anyway, later things changed and now I want the same DLL to work with a multi-threaded application (without modifying the DLL); that means several threads access the DLL's functions and global variables and modify them, and so on. All of this may cause the global variables to hold inconsistent values.
So the question is,
Can we do something in the client code to prevent simultaneous access to the DLL, while at the same time ensuring that each thread runs in its own context (meaning that when a thread gets access to the DLL, the DLL's global values are the same as they were before)?
Sure, you can always create a wrapper layer that handles the multi-threading-specific tasks such as locking. You could even do so in a second DLL that links with the original one, and then have the final project link with that new DLL.
Be aware that, no matter how you implement it, this won't be an easy task. You have to know exactly which thread is able to modify which value at what time, who is able to read what and when, etc., unless you want to run into problems like deadlocks or race conditions.
If your solution allows it, it's often best to assign a single thread to modify any data and have all others just read and never write, as concurrent reading access is always easier to implement than concurrent writing access (Boost provides all the basic functionality to do so, for example shared_mutex).
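As a small illustration of that split, here's a sketch using std::shared_mutex (the standard-library counterpart of Boost's shared_mutex); the string value is just placeholder data:

#include <mutex>
#include <shared_mutex>
#include <string>

// One designated writer thread calls set(); all other threads only call get().
class SharedValue {
public:
    std::string get() const {
        std::shared_lock<std::shared_mutex> lock(mu_);   // many readers at once
        return value_;
    }
    void set(std::string v) {
        std::unique_lock<std::shared_mutex> lock(mu_);   // exclusive writer
        value_ = std::move(v);
    }
private:
    mutable std::shared_mutex mu_;
    std::string value_;
};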
Can we do something in the client code to prevent simultaneous access to the DLL, while at the same time ensuring that each thread runs in its own context (meaning that when a thread gets access to the DLL, the DLL's global values are the same as they were before)?
This is the hard part. I think the only way to do this would be to create a wrapper around the existing DLL. When it is called, it would restore the state (global variables) for the current thread, and save them when the call to the DLL returns. You would need to know all of the state variables in the DLL and be able to read/write them.
If performance is not an issue, a single lock for the entire DLL would suffice and would be the easiest to implement correctly. That would ensure that only one thread is accessing (reading or writing) the DLL at a time.
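A sketch of that single-lock variant, with legacy_read/legacy_write standing in for whatever the original DLL actually exports:

#include <cstdint>
#include <mutex>

// Exported by the original, non-thread-safe DLL (placeholder declarations).
int legacy_read(std::uint32_t address);
void legacy_write(std::uint32_t address, int value);

namespace wrapper {
    std::mutex dll_mutex;   // one global lock serializes every call into the DLL

    int read(std::uint32_t address) {
        std::lock_guard<std::mutex> lock(dll_mutex);
        return legacy_read(address);
    }
    void write(std::uint32_t address, int value) {
        std::lock_guard<std::mutex> lock(dll_mutex);
        legacy_write(address, value);
    }
}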

Is checking current thread inside a function ok?

Is it ok to check the current thread inside a function?
For example, if some non-thread-safe data structure is only altered by one thread, and there is a function which is called by multiple threads, it would be useful to have separate code paths depending on the current thread. If the current thread is the one that alters the data structure, it is OK to alter the data structure directly in the function. However, if the current thread is some other thread, the actual altering would have to be deferred so that it is performed when it is safe to do so.
Or would it be better to use a boolean passed as a parameter to the function to separate the different code paths?
Or do something totally different?
What do you think?
You are not making all that much sense. You said the non-thread-safe data structure is only ever altered by one thread, but in the next sentence you talk about delaying changes made to that data structure by other threads. Make up your mind.
In general, I'd suggest wrapping access to the data structure in a critical section or mutex.
It's possible to use such animals as reader/writer locks to differentiate between readers and writers of data structures, but for typical cases the performance advantage usually won't merit the additional complexity associated with their use.
From the way your question is stated, I'm guessing you're fairly new to multithreaded development. I highly suggest sticking with the simplest and most commonly used approaches for ensuring data integrity (most books/articles you read on the issue will mention the same uses for mutexes/critical sections). Multithreaded development is extremely easy to get wrong and can be difficult to debug. Also, what seems like the "optimal" solution very often doesn't buy you the huge performance benefit you might expect. It's usually best to implement the simplest approach that works, then worry about optimizing it after the fact.
There is a trick that could work in the case where, as you said, the other threads only make changes once in a while, although it is still rather hackish:
-make sure your "master" thread can't be interrupted by the other ones (higher priority, non-fair scheduling)
-check the current thread
-if it is the "master", just make the change
-if it is another thread, disable scheduling (by disabling interrupts if needed), make the change, then re-enable scheduling
-really test to make sure there are no issues in your setup
As you can see, if requirements change a little bit, this could turn out worse than using normal locks.
As mentioned, the simplest solution when two threads need access to the same data is to use some synchronization mechanism (i.e., a critical section or mutex).
If you already have synchronization in your design, try to reuse it (if possible) instead of adding more. For example, if the main thread receives its work from a synchronized queue, you might be able to have thread 2 queue the data-structure update; the main thread will pick up the request and can apply it without additional synchronization.
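Here's roughly what that could look like (the names are illustrative): the owning thread mutates the structure directly, any other thread queues the update, and the owner drains the queue on its next pass.

#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class OwnedStructure {
public:
    explicit OwnedStructure(std::thread::id owner) : owner_(owner) {}

    void add(int value) {
        if (std::this_thread::get_id() == owner_) {
            data_.push_back(value);                        // safe: we are the owning thread
        } else {
            std::lock_guard<std::mutex> lock(mu_);         // defer: queue the update instead
            pending_.push([this, value] { data_.push_back(value); });
        }
    }

    // Called only by the owner thread, e.g. once per game-state update.
    void apply_pending() {
        std::lock_guard<std::mutex> lock(mu_);
        while (!pending_.empty()) {
            pending_.front()();
            pending_.pop();
        }
    }
private:
    std::thread::id owner_;
    std::vector<int> data_;               // the non-thread-safe structure itself
    std::mutex mu_;
    std::queue<std::function<void()>> pending_;
};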
The queuing concept can be hidden from the rest of the design through the Active Object pattern. The active object may also be able to publish the data structure changes to other interested threads through the Observer pattern.