I am having a problem implementing the following scenario. My problem statement goes like this:
I have three threads: ThreadCamera for grabbing frames from a camera, ThreadProcess for processing the grabbed frame (doing some image processing on it with OpenCV), and the main GUI thread for displaying the image.
I don't know how much time ThreadProcess will take to process an image. So I want to pass the image from ThreadCamera to ThreadProcess, do some image processing on it, and pass it to the main GUI thread for display.
While ThreadProcess is processing the image, ThreadCamera should sleep, i.e. it should not grab further frames from the camera. When ThreadProcess finishes the image-processing task, it should pass the image and some information to the main GUI thread. Only after this should ThreadCamera wake up and grab the next frame/image from the camera running in that (ThreadCamera) thread.
Thanks guys. After some comments suggesting that I put the camera and image-processing jobs in a single thread, I would like to ask about another point, which is:
What if I don't want to put the camera to sleep while the processing is going on? It does not matter to me if I lose some of the frames grabbed by CameraThread (which I am losing anyway, whether or not I sleep the camera).
I am using a QObject for each job (camera capture and image processing) and moveToThread to make each run in a particular thread.
Any insight into the implementation and the signal/slot design would be helpful.
What you're looking for is a simple "publish/subscribe" pattern. In this type of 'distribution' pattern, all messages are sent unconditionally and simply dropped by the client when it's not in a state to receive images.
I would implement it as follows in your application:
Keep all the separate threads (camera, processing, GUI) as you already have them.
Have the CameraThread periodically (through a QTimer signal, perhaps, if you want to keep it simple) capture an image and send it over a signal/slot connection to the processing thread.
When the processing thread is processing an image, it sets a state flag (this can just be a member variable; a bool would work) to say that it is currently processing an image. When it is done processing, it clears the flag.
In the processing thread's slot, which receives the images from the CameraThread, first check whether you are currently processing an image. If you are, do nothing with the signal's data and just return. If you are not, store the signal's data and call the processing function.
The trick to making this work is to call QCoreApplication::processEvents() in the processing thread's main loop inside the processing function. This allows the processing thread to handle any signals it receives WHILE it is doing something useful.
The state-flag check lets you 'drop' any new images sent to you while you are processing the current one, instead of letting them queue up.
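Here is a minimal sketch of that worker, assuming a hypothetical ProcessingWorker QObject that has been moved to its own QThread; the slot, signal, and loop body are illustrative, not from the original post:

#include <QObject>
#include <QImage>
#include <QCoreApplication>

class ProcessingWorker : public QObject {
    Q_OBJECT
signals:
    void frameReady(const QImage &result);      // picked up by the GUI thread
public slots:
    void onFrameCaptured(const QImage &frame) {
        if (m_processing)                       // already busy: drop this frame
            return;
        m_processing = true;
        emit frameReady(process(frame));
        m_processing = false;
    }
private:
    QImage process(QImage frame) {
        for (int pass = 0; pass < 10; ++pass) {
            // ... heavy per-pass OpenCV work on 'frame' ...
            // Deliver queued signals mid-loop so onFrameCaptured() runs now
            // and drops incoming frames instead of letting them pile up.
            QCoreApplication::processEvents();
        }
        return frame;
    }
    bool m_processing = false;
};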
I know you can't do any rendering from separate threads using SDL2, but what about events, like left button down, etc.?
I don't see how I can continuously draw at, let's say, 60 fps while having the event polling in the same thread, so I'm wondering what's the best structure to use for my game.
Current idea (one thread per comma-separated item):
Main thread with drawing + physics, event polling (inputs)
I am getting started with wxThread.
I have shown the video in the GUI thread with wxWidgets, and now I want to process each frame with OpenCV, for example drawing a circle in each frame. At the beginning, I did the processing in the GUI thread. But when I run the program, I find that the image in the GUI is not shown continuously and some images do not include the circle.
I have realized that the image-processing part should not be put in the GUI thread; I should create a new thread to do the image processing.
But I don't know how to synchronize the GUI thread and the worker thread, i.e. how to keep the GUI thread non-blocking while the worker thread processes the frames with OpenCV. Although I know the wxThread tutorial, I could not work out how to share data between the two wxThreads.
Could anyone offer some ideas or references? Thanks.
There is a pretty extensive section on thread synchronization in the wx-wiki
https://wiki.wxwidgets.org/Inter-Thread_and_Inter-Process_communication
For background tasks I like to use the "post event to main thread" approach.
This might also work for you: send an event whenever your worker thread is done processing a single image, and display the image in the event-handler method.
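A minimal sketch of that approach, assuming a detached wxThread worker; the event type, placeholder frame, and handler names are illustrative, not from the original post:

#include <wx/wx.h>
#include <wx/thread.h>

wxDEFINE_EVENT(EVT_FRAME_PROCESSED, wxThreadEvent);

class ProcessThread : public wxThread {
public:
    explicit ProcessThread(wxEvtHandler *sink)
        : wxThread(wxTHREAD_DETACHED), m_sink(sink) {}
protected:
    ExitCode Entry() override {
        while (!TestDestroy()) {
            wxImage frame(640, 480);    // placeholder for grab + OpenCV work
            // Hand the result to the GUI thread: wxQueueEvent takes ownership
            // of the event and delivers it on the main event loop.
            wxThreadEvent *evt = new wxThreadEvent(EVT_FRAME_PROCESSED);
            evt->SetPayload(frame);
            wxQueueEvent(m_sink, evt);
        }
        return (ExitCode)0;
    }
private:
    wxEvtHandler *m_sink;
};

On the GUI side you would Bind(EVT_FRAME_PROCESSED, &MyFrame::OnFrame, this) and, in the handler, read event.GetPayload<wxImage>(), convert it to a wxBitmap, and call Refresh().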
I am currently developing a 3D engine from scratch (again) as I wanted to use more modern techniques (and frankly, my previous design was crap). Now I am in the process of implementing my input thread.
Now that I am more experienced, I know that if I write to the same variable from my input thread and my rendering/main thread I will get data races, so I decided to use mutexes to lock data that could be written to from different threads. But this is causing an unacceptable bug: mouse input isn't smooth any more. :/
I did kind of expect that, though; I just thought my thinking might be off.
Now I am stuck at a crossroads because I don't know how to go about fixing this issue!
The variables that I am writing to from both threads are x_rel and y_rel, which hold the mouse position relative to the last position at which I received an event.
The input thread sets the variables, and the rendering/main thread resets them to 0.0 when it is finished with them. This works, but as I said, it gives me very rigid mouse motion.
My question here is, what can I do to get smooth input while still being race safe across threads?
Here is my mutex definition (it is global):
std::mutex mouse_mutex;
Here is the code that I use to get the mouse events:
void input_thread_func(application &app, const bool &running, double &x_rel, double &y_rel){
    while(running){
        application::event ev = app.get_input();
        switch(ev.type){
            case events::mouse_motion: {
                if(mouse_mutex.try_lock()){
                    x_rel = ev.xrel;
                    y_rel = ev.yrel;
                    mouse_mutex.unlock();
                }
                break;
            }
            default:
                break;
        }
    }
}
And here is my main function:
int main(int argc, char *argv[]){
    /* all my init stuff */
    application app;
    bool running = true;
    double x_rel = 0.0, y_rel = 0.0;
    std::thread input_thread(
        input_thread_func,
        std::ref(app), std::cref(running),
        std::ref(x_rel), std::ref(y_rel)
    );
    double multiplier = /* whatever I like */;
    while(running){
        /* check input data */
        if(mouse_mutex.try_lock()){
            update_camera(x_rel * multiplier, y_rel * multiplier);
            app.set_mouse_pos(0, 0);
            x_rel = 0.0; y_rel = 0.0;
            mouse_mutex.unlock();
        }
        /* do rendering stuff */
    }
    input_thread.join();    // the thread must be joined before it is destroyed
}
This is my understanding of your problem:
Basically, you have two threads, one dealing with mouse events coming from the system, and one taking care of rendering duties. As far as I understand, the latter needs the most recent mouse position in order to compute an accurate set of camera matrices. However, because of the nature of the input device, the event thread is flooded with mouse position events: the system polls the device fast enough to get many updates per rendering frame, and these updates get pushed to the event thread. Since that thread has only that task to do, it will constantly lock/unlock the mouse mutex while processing these events, and the odds are high that the lock is held by the event thread exactly when the rendering thread wants to get hold of it.
There are two possible situations to consider in your current setup:
either the rendering thread needs all the mouse updates, in which case you need an event queue between that thread and the event thread to keep track of all of them; you would essentially be filtering mouse events and dispatching them to rendering:
set up a mouse event queue between those threads, and simply push mouse events from the input queue to that new queue;
have rendering check the mouse queue at each frame and do as many camera updates as necessary, or better yet, as suggested in the comments, combine all these events into one single camera update, if possible. You should try putting that computation in the event thread, in order to reduce the load on rendering;
or rendering only needs the latest event (as it seems to), in which case you only need to update the mutexed data structure with the latest one (thus reducing contention on that mutex).
To recap: depending on how the camera update function works with respect to mouse events, you will probably have to change it to work with one set of coordinates built cumulatively from all the relative events (maybe you can get them directly from the mouse event structure?), and only update the mutexed data once per event stream.
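A minimal sketch of that cumulative handoff, reusing the globals from the question (the two function names are illustrative): the key change is that the input thread adds to x_rel/y_rel instead of overwriting them, and the render thread swaps the totals out once per frame under the same short-lived lock:

#include <mutex>

std::mutex mouse_mutex;
double x_rel = 0.0, y_rel = 0.0;        // both protected by mouse_mutex

// Input thread: fold every motion event into the accumulators.
void on_mouse_motion(double dx, double dy){
    std::lock_guard<std::mutex> lock(mouse_mutex);
    x_rel += dx;
    y_rel += dy;
}

// Render thread: take the whole accumulated motion once per frame.
void take_mouse_motion(double &out_dx, double &out_dy){
    std::lock_guard<std::mutex> lock(mouse_mutex);
    out_dx = x_rel;  x_rel = 0.0;
    out_dy = y_rel;  y_rel = 0.0;
}

Because each critical section is only a couple of assignments, neither thread holds the mutex long enough to stall the other, and no motion is lost even when a try_lock would previously have skipped an update.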
Note:
You may also check how other engines do it: the IdTech2 (Quake I/II) series of engines are single-threaded, but they're still a good source of inspiration. Each frame, these engines deal with all mouse input in one go. During a frame render, a first routine is called (In_Move, HandleEvents, or some other function, depending on the backend; see the sys_* files) to check all the events and update all the related structures. By the time the rendering code (R_RenderFrame) is invoked, there is no contention on these structures any more. You probably want to "emulate" the same behaviour, by making sure that rendering isn't held back by one or more mutexes. A possible solution has been described above for mouse input, and it can certainly be extended to handle other types of input device.
Here is the situation:
You have one long-running calculation running in a background thread.
This calculation is sending out a signal to, for example, refresh a GUI element, every 100 msec.
Let's say it sends out 100 such signals.
The widget being redrawn takes more than 100 msec to redraw; let's say 1 second.
What happens in the event loop? Do the signal calls "pile up" until they are all executed (i.e. 100 seconds)? Is there any mechanism for "dropping" events?
User events are never discarded. If you queue emitted signal events faster than you can process them, your event queue will grow until you run out of memory and your program will crash. It's worth noting, though, that QTimer will skip timeout events if the system is under heavy load. To some extent, that may help regulate your throughput.
You could also consider sending feedback from one thread to the other (an acknowledgement, perhaps), and manually adjust your timing in the producer thread based on how far behind the consumer thread is. Or, you could use a metaphorical sledgehammer and switch to a blocking queued connection.
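For illustration, the blocking variant is a one-line change at connect time (Worker, Widget, and their members are hypothetical names, not from the question):

QObject::connect(worker, &Worker::progressChanged,
                 widget, &Widget::redraw,
                 Qt::BlockingQueuedConnection);

With this connection type the emitting thread blocks until the receiver's slot has finished, so signals cannot pile up; the caveat is that it must never be used when sender and receiver live in the same thread, as that would deadlock.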
In your example, you could measure the drawing time in the widget. If the drawing takes, for example, 240 ms, then you could let the next two signals pass quickly without drawing anything at all. That way the signals wouldn't pile up.
Edit:
Actually, there is a slight problem with my solution: the last signal should always cause a redraw, otherwise the widget would show wrong data once the calculation is finished.
When a signal is skipped, a single-shot timer could be started, for example with a 150 ms interval. When a redraw happens because of a signal, this timer would be stopped. So after the last redraw signal, this single-shot timer would trigger the drawing of the final state. I guess this would work, but it would be quite complicated.
Starting a simple timer that drives the redrawing when the calculation starts would quite probably be a better approach. If drawing the widget takes a lot of time, the timer interval could be adjusted dynamically according to the measured draw time.
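A minimal sketch of that timer-driven variant: the calculation's signal only marks the widget dirty, and a periodic QTimer performs the actual repaint, so redraws can never queue up faster than they complete (class and member names are illustrative):

#include <QWidget>
#include <QTimer>

class ResultView : public QWidget {
    Q_OBJECT
public:
    explicit ResultView(QWidget *parent = nullptr) : QWidget(parent) {
        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, [this]{
            if (m_dirty) { m_dirty = false; update(); } // schedule one repaint
        });
        timer->start(150);  // keep the interval above the worst-case draw time
    }
public slots:
    void onPartialResult() { m_dirty = true; }  // cheap, safe to call often
private:
    bool m_dirty = false;
};

This also takes care of the "last signal" problem: the dirty flag set by the final signal survives until the next timer tick, so the final state is always drawn.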
I have developed an application in wxWidgets in which I use a bitmap for drawing. When my application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network; these packets contain x/y coordinate information that also has to be drawn on the screen, so when a packet is received I redraw the bitmap image and display it. I also need to refresh the bitmap on mouse-move events, because on mouse move there is some new drawing that I have to put on the screen.
All this increases the operational cost and slows down my GUI, so kindly suggest some alternative drawing approach that might be more efficient in this situation.
I have searched on Google and found OpenGL as an option, but due to time constraints I don't want to use OpenGL, because I have no experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially when running on multicore machines, but it is a challenge to keep the two threads synchronized and to prevent contention between them unless you have significant experience with multithreaded design. Option 2 is much simpler, though you still have to be careful that user interaction doesn't start another drawing process before the first one has finished.
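As a rough illustration of option 2, a guard flag plus periodic yields keeps the GUI alive during a long redraw (the loop body, member names, and batch size are illustrative assumptions, not from the question):

#include <wx/wx.h>
#include <vector>

void MyCanvas::RedrawBitmap(wxDC &dc){
    if (m_busy) return;     // don't let user input start a second redraw
    m_busy = true;
    for (size_t i = 0; i < m_lines.size(); ++i){
        dc.DrawLine(m_lines[i].first, m_lines[i].second);
        if (i % 500 == 0)
            wxYield();      // let pending GUI events run mid-draw
    }
    m_busy = false;
}

Here m_lines would be a std::vector<std::pair<wxPoint, wxPoint>> member and m_busy a bool member of the hypothetical MyCanvas class.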
Save off the data to draw instead of always refreshing the bitmap, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside, of course, is that reactivity will be lower (i.e. when data arrives, it won't be seen on screen for another 20 milliseconds or so, instead of right away).
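A minimal sketch of this idea in wxWidgets terms: the UDP and mouse handlers only append to a buffer, and a wxTimer triggers the actual redraw at a fixed rate (class and member names are illustrative):

#include <wx/wx.h>
#include <vector>

class Canvas : public wxPanel {
public:
    explicit Canvas(wxWindow *parent) : wxPanel(parent), m_timer(this) {
        Bind(wxEVT_TIMER, [this](wxTimerEvent &){ Refresh(); });
        m_timer.Start(20);  // repaint at most every 20 ms
    }
    // Called from the UDP and mouse-move handlers: just record the data.
    void AddPoint(const wxPoint &p) { m_points.push_back(p); }
private:
    wxTimer m_timer;
    std::vector<wxPoint> m_points;  // drawn in the wxEVT_PAINT handler
};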