Capturing a video from multiple threads with OpenCV in C++

I am a learner in C++ and OpenCV. I am trying to access the same video from multiple threads, and while doing so I am getting a deadlock, which is pretty much expected.
I am creating n threads and trying to process the video by dividing it into n parts and processing them simultaneously in different threads. This is my void function. I found a Python solution for this, but wasn't able to understand it.
void *finddensity(void *videoinfo)
{
    VideoCapture cap("video.mp4"); // the path must be a quoted string
    // do some processing on each frame
}
and then I am creating the threads using pthread_create.
Is there any way to access the video that avoids deadlock? Also, is there a struct for videoinfo?
Thank you

Since the task was just a course assignment, what I did was load all of the video frames into memory (which is not good practice) and then use a mutex lock to access the frames from every thread. The video was small (174 MB), so I was able to store it at 5 FPS in memory and complete the task.
But if there is another, more general or better solution (which there should be), please respond here. Thanks BiOS for formatting the code :-).

Related

Assigning and Managing Specific Threads under Constraints

Forgive me, for I am not an expert in multi-threading by any means, and I need some assistance. Here is some background before I get to my question:
Pre-Knowledge
Developing C++ code on the Jetson TK1
Jetson has 4 CPU cores (quad-core CPU ARMv7 CPU)
From what I have researched, each core can utilize one thread ( 4 cores -> 4 threads)
I am running a computer vision application which uses OpenCV
Capturing frames from a webcam as well as grabbing frames from a video file
Pseudo-Code
I am trying to optimize my multi-threaded code so that I can get the maximum performance out of my application. Currently this is the basic layout of my code:
int HALT = 0;  // note: shared between threads, so it should really be std::atomic<int>
// Both func1 and func2 can run in parallel for a short period of time,
// but both must finish before moving on to the next captured webcam frame.
void func1(*STUFF){
    // Processes some stuff
}
void func2(*STUFF){
    // Processes similar stuff
}
void displayVideo(*STUFF){
    while(PLAYBACK != DONE){
        *reads video from file and uses imshow to display the video*
        *delay to match framerate*
    }
    HALT = 1;
}
int main(){
    // To open these I am using OpenCV's VideoCapture class
    *OPEN VIDEO FILE*
    *OPEN WEBCAM STREAM*
    thread play(displayVideo, &STUFF);
    play.detach();
    while(HALT != 1){
        *Grab frame from webcam*
        // Process frame
        thread A(func1, &STUFF);
        thread B(func2, &STUFF);
        A.join();
        *Initialize some variables and do some other stuff*
        B.join();
        *Do some processing... more than what is between A.join and B.join*
        *Possibly display webcam frame using imshow*
        *Wait for user input to watch for terminating character*
    }
    // This while loop runs for about a minute or two, so thread A and
    // thread B are constructed many times.
}
Question(s)
So what I would like to know is if there is a way to specify which core/thread I will use when I construct a new thread. I fear that when I am creating threads A and B over and over again, they are jump around to different threads and hampering the speed of my system and/or the reading of the video. Although this fear is not well justified, I see very bizarre behavior on the four cores when running the code. Typically I will always see one core running around 40-60% which I would assume is either the main thread or the play thread. But as for the other cores, the computational load is very jumpy. Also throughout the application playing, I see two cores go from around 60% all the way to 100% but these two cores don't remain constant. It could be the first, second, third, or even fourth core and then they will greatly decline usually to about 20->40%. Occasionally I will see only 1 core drop to 0% and remain that way for what appears to be another cycle through the while loop(i.e. grab frame, process, thread A, thread B, repeat). Then I will see all four of them active again which is the more expected behavior.
I hope I have not been too vague in this post. I just see slightly unexpected behavior, and I would like to understand what I might be doing incorrectly or not accounting for. Thank you to whoever can help or point me in the right direction.

C++ Qt fast timing of asynchronous processes advice

I'm currently dealing with a Qt GUI I have to set up for a measurement device. The device works with a frame grabber card which grabs images from a line camera very fast. My image processing, which is not that complex, takes 0.2 ms to complete, and it takes about 40 ms to display the signal and the processing result with QCustomPlot, which is totally okay.
Besides the GUI output, the processed signal will also be put out as an analog signal by an NI DAQ device.
My problem is that I have to update the analog signal at a constant frequency and still update the GUI from time to time.
My current approach or idea was to create a data-pool thread and two worker threads. One worker thread receives the data from the frame grabber, processes it, and updates the data pool. The second worker thread updates the analog channel of the NI DAQ at a certain frequency of about 2-5 kHz, given by a clock in the NI DAQ device.
The GUI thread would read the data pool from time to time to update the signal display at a rate of about 20-30 Hz.
I wanted to use Qt's thread management and the signal-and-slot mechanism because of its "simplicity", and because I have already worked with threads in combination with Qt and its thread classes.
Is there maybe a better way? Does somebody have an idea or any suggestion? Is it possible that I will get problems with the timing of the threads?
Furthermore, is it possible to assign one thread to a single CPU core on a multi-core CPU, so that this core only processes this single thread?
Is there maybe a better way? Does somebody have an idea or any suggestion? Is it possible that I will get problems with the timing of the threads?
The signal/slot mechanism is fine: try it, and if you run into performance issues you can still look for another approach. I used the signal/slot mechanism for real-time video processing with QAbstractVideoSurface and a media player. It worked for me.
Furthermore, is it possible to assign one thread to a single CPU core on a multi-core CPU, so that this core only processes this single thread?
Why would you do that? The operating system or threading library has a scheduler which takes care of such things. As long as you have no good reason to do this yourself, you should just use the existing mechanism.
I would try it with three threads: 1) UI thread, 2) grab-and-process thread, 3) analogue output thread.
The trick is to use a triple buffer to connect the output of grab-and-process to the input of analogue output.
Say, at moment t, thread (2) finishes processing frame[(t+0)%3], changes its output destination to frame[(t+1)%3] immediately, and notifies thread (3), which is looping through the data in frame[(t+2)%3], to switch to frame[(t+0)%3] when appropriate.
I used this technique on an image processing project that had a 10 fps processing frame rate and a 60 fps NTSC output frame rate. To eliminate tearing, a circular buffer of three buffers is the minimum.

Multithreading an OpenCV Program

Thanks for reading my post.
I have a problem with multithreading an OpenCV application that I was hoping you could help me out with.
My aim is to save 400 frames (as JPEGs) from the middle of a video sequence for further examination.
I have the code running fine single-threaded, but the multithreading is causing quite a lot of issues, so I'm wondering if I have the philosophy all wrong.
In terms of a schematic of what I should do, would I be best to:
Option 1: somehow simultaneously access the single video file (or make copies?), then have individual threads cycle through the video frame by frame, saving each frame when it falls between predetermined limits. E.g. thread 1 saves frames 50 to 100, thread 2 saves frames 101 to 150, etc.
Option 2: open the file once, cycle through it frame by frame, then pass each individual frame to one of a series of threads to carry out the saving operation. E.g. frame 1 passed to thread 1 for saving, frame 2 to thread 2, frame 3 to thread 1, frame 4 to thread 2, etc.
Option 3: some other buffer/thread arrangement which is a better idea than the above!
I'm using Visual C++ with the standard libraries.
Many thanks for your help on this,
Cheers, Kay
Option 1 is what I have tried to do thus far, but because of the errors I was wondering if it was even possible! Can threads usually access the same file? How do I find out how many threads I can have?
Certainly, different threads can access the same file, but it's really a question of whether the supporting libraries support it. For reading a video stream you can use either OpenCV or ffmpeg (you can use both in the same app: ffmpeg for reading and OpenCV for processing, for example). I haven't looked at the docs, so I'm guessing here: either library should allow multiple readers on the same file.
To find out the number of cores:
    SYSTEM_INFO sysinfo;
    GetSystemInfo(&sysinfo);
    int numCPU = sysinfo.dwNumberOfProcessors;
from this post. You would create one thread per core as a starting point, then adjust the number based on your performance needs and actual testing.

Threads sharing resources C++

I currently have two threads running in my program:
Main thread - grabs an image from a webcam, stores it in a CVD image, and does processing on this image.
Server thread - sends the full image data stored in the above CVD image to its clients using named pipes.
When I run my program it works for a very short while before crashing with the following exception:
0xC0000005: Access violation reading location 0x00000000
which I assume is because my server thread is attempting to access the image at the same time as the main thread.
I haven't done any concurrent programming before (this is my first time), but I have a vague idea about how to solve it.
My plan is to have some sort of lock that prevents access to the image from the main thread while the server is preparing to send it to the client. However, I realised there might be a problem where the server thread constantly holds the resource because the client is constantly requesting new frames. So I am thinking of only responding to the client whenever a new frame has been grabbed from the webcam, to avoid the blocking issue above.
To sum this up:
Main thread:
1. If image is available
   then - lock image, copy over new data from webcam, release image
   else - goto 1
2. Do processing

Server:
1. Receive request for new frame from client
2. If (haven't sent the current frame yet)
   then - lock CVD image access, send over frame, release image
   else - wait until new image available
3. goto 1
My question is: would this be a suitable solution, and what do I need in order to implement it? I.e., how do I stop execution of certain parts of my code whilst another thread is executing a part of its own code?
Some more info:
I am using VS2010 C++
The client is in C# and there is only 1 client.
I am accessing the image data from the CVD image using image[x][y], which returns a byte value representing the intensity of the image.
There is a copyTo() function available for the CVD image. It seems to do a memory copy of the image, creating a new object with the same data. Would this be useful?
I cannot run the program in debug mode because I am working off an existing codebase with no debug configuration set up.
I would use a circular buffer so I could be reading one frame while writing a different one to clients, assuming you don't want to drop frames.
Look at http://msdn.microsoft.com/en-us/library/windows/desktop/ms682530(v=vs.85).aspx for info on Windows Critical Sections.
Finally, if you have the existing code, why can't you turn debug info on and rebuild? Otherwise you're shooting in the dark trying to find the cause of this crash.
how do I stop execution of certain parts of my code whilst another thread is executing a part of its own code
Synchronization will be done by the pipe itself: if you call ReadFile()¹ in your client, it will pause its execution until some data comes through the pipe.
There are sample implementations of pipe server and client on MSDN. It might help.
¹ I mean not overlapped call

Display image in second thread, OpenCV?

I have a loop that takes in images from a high-speed frame grabber at 250 fps.
/** Loop processes 250 video frames per second **/
while (1) {
    AcquireFrame();
    DoProcessing();
    TakeAction();
}
At the same time, I would like the user to be able to monitor what is going on. The user only needs to see images at around 30 fps (or less). How do I set up a second thread that displays the current frame every so often?
Thread() {
    cvShowImage();
    Wait(30); /** Wait for 30 ms **/
}
I am on Windows on a quad-core Intel machine using MinGW, gcc, and OpenCV 1.1. The main criterion is that the display thread must take as little time away from my main processing loop as possible. Every millisecond counts.
I have tried using CreateThread() to create a new thread that calls cvShowImage() and cvWaitKey(), but apparently those functions are not thread-safe.
I am considering using OpenMP, but some people report problems with OpenMP and OpenCV. I am also considering trying DirectX DirectDraw because it is apparently very fast, but it looks complicated, and evidently there are problems using Windows DLLs with MinGW.
Which of these avenues would be the best place to start?
OK, so embarrassingly my question is also its own answer. Using CreateThread(), cvShowImage() and cvWaitKey() as described in my question actually works, contrary to some postings on the web which suggest otherwise.
In any event, I implemented something like this:
/** Global variables **/
/* Note: flags shared between threads like these should really be volatile or atomic. */
bool DispThreadHasFinished;
bool MainThreadHasFinished;
IplImage* myImg;

/** Main loop that runs at >100 fps **/
main() {
    DispThreadHasFinished = FALSE;
    MainThreadHasFinished = FALSE;
    CreateThread(.., .., Thread, ..);
    while ( !IsTheUserDone() ) {
        myImg = AcquireFrame();
        DoProcessing();
        TakeAction();
    }
    MainThreadHasFinished = TRUE;
    while ( !DispThreadHasFinished ) {
        cvWaitKey(100);
    }
    return;
}

/** Thread that displays the image at ~30 fps **/
Thread() {
    while ( !MainThreadHasFinished ) {
        cvShowImage("display", myImg);
        cvWaitKey(30);
    }
    DispThreadHasFinished = TRUE;
    return;
}
When I originally posted this question, my code was failing for unrelated reasons. I hope this helps!
Since the frame grabbing doesn't need to use the UI, I'd set up a secondary thread to handle the frame grabbing and have the original thread, which handles the UI, display the sample frames. If you tried to display the frame currently being grabbed, you'd have to lock the data (which is generally fairly slow). To avoid that, I'd display a frame one (or possibly two) "behind" the one currently being grabbed, so there's no contention between grabbing and displaying the data. You'll still have to ensure that incrementing the current frame number is thread-safe, but that's pretty simple: use InterlockedIncrement in the capture thread.
I'm sorry I can't give you a better answer right now, but it seems that your question is not about the structure of your program but rather about the tool you should use to implement multithreading. For this I would recommend Qt. I have been using Qt for a while but I'm just now getting into multithreading.
It seems to me that your best bet might be a QReadWriteLock. This allows you to read from an image but the reader thread will give up its lock when the writer thread comes along. In this case you could keep a copy of the image you last displayed and display it if the image is locked for writing.
Sorry again that I can't be more detailed but, like I said, I'm just getting into this as well. I'm basically trying to do the same thing that you are, but not nearly as fast :). Good luck!
I'm not sure why this happens, but I added a cvWaitKey after every cvShowImage and the picture was displayed properly:
cvShowImage("window", myImage);
cvWaitKey(1);