How to use cv::VideoCapture::waitAny() in OpenCV (C++)

I've been trying to find a way to asynchronously check whether the next frame from the camera I read with VideoCapture is ready.
I came across waitAny(), which the documentation describes as "Wait for ready frames from VideoCapture.", but the OpenCV documentation has no useful info on how to use it or what its use cases are.
I've been searching the net for two days now, and the only thing I found is how the function's parameters are declared (I'm new to C++); I don't know what to pass for them or when to use them.
Here is the documentation: https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#ade1c7b8d276fea4d000bc0af0f1017b3

An approach that IMO is fairly common is to have a pair of threads: one that "produces" frames from VideoCapture (calling VideoCapture::read() in a loop), and another that actually uses these frames (a "consumer"). The producer pushes images onto a queue shared between the two threads, while the consumer pops them.
In this situation, checking whether a camera has produced an image amounts to checking whether the queue is empty.
By itself, VideoCapture does not provide such an async API.
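For illustration, here is a minimal sketch of that producer/consumer setup; the queue, the done flag, and the function names are mine, not anything OpenCV provides:
#include <opencv2/videoio.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<cv::Mat> frames;     // shared between the two threads
std::mutex m;
std::condition_variable ready;
bool done = false;

void producer(cv::VideoCapture& cap) {
    cv::Mat frame;
    while (cap.read(frame)) {
        std::lock_guard<std::mutex> lock(m);
        frames.push(frame.clone());  // clone: read() may reuse its buffer
        ready.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    ready.notify_one();
}

void consumer() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        ready.wait(lock, [] { return !frames.empty() || done; });
        if (frames.empty()) break;   // producer has finished
        cv::Mat frame = frames.front();
        frames.pop();
        lock.unlock();
        // ... process frame ...
    }
}
// Usage: std::thread t1(producer, std::ref(cap)), t2(consumer); t1.join(); t2.join();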
That said, if you want to use waitAny, and if you have a single camera, you could do something like this:
cv::VideoCapture cap = /* get the VideoCapture object from somewhere */;
constexpr int64 kTimeoutNs = 1000;  // 1000 ns == 1 microsecond
std::vector<int> ready_index;
cv::Mat image;
if (cv::VideoCapture::waitAny({cap}, ready_index, kTimeoutNs)) {
    // Camera was ready; decode and fetch the frame.
    cap.retrieve(image);
} else {
    // Camera was not ready; do something else.
}
Above, cv::VideoCapture::waitAny will wait up to the specified timeout (1000 ns, i.e. 1 microsecond) for the camera to produce a frame, and will return after this period.
If the camera is ready, it returns true and populates ready_index with the index of the ready camera; since you only have a single camera, the vector will either be empty or contain the single index 0.
That said, waitAny seems to be supported only by VideoCapture sources that use the V4L backend.
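For completeness, here is a sketch of what the multi-camera case might look like (untested, and subject to the same V4L caveat); waitAny reports every ready stream through ready_index:
std::vector<cv::VideoCapture> cams = /* open several cameras */;
std::vector<int> ready_index;
if (cv::VideoCapture::waitAny(cams, ready_index, kTimeoutNs)) {
    for (int i : ready_index) {
        cv::Mat image;
        cams[i].retrieve(image);  // waitAny already grabbed; retrieve() decodes
        // ... use the frame from camera i ...
    }
}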

Related

using OpenCV to capture images, not video

I'm using OpenCV 4 to read from a camera (similar to a webcam). It works great, and the code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
while (true)
{
    cv::Mat mat;
    // wait for some external event here so I know it is time to take a picture...
    cap >> mat;
    process_image(mat);
}
Problem is, this gives me many video frames, not a single image. This matters because in my case I don't want or need to process 30 FPS; I have specific physical events that trigger reading an image from the camera at certain times. Because OpenCV expects the caller to want video -- not surprising, considering the class is called cv::VideoCapture -- it has buffered many seconds of frames.
What I see in the image is therefore always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm considering is using V4L2 directly instead of OpenCV. Will that let me take individual pictures, or will it only stream video like OpenCV does?
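For what it's worth, a commonly suggested workaround is to shrink the capture buffer and/or drain stale frames just before taking the picture. Backend support for CAP_PROP_BUFFERSIZE varies, and the drain count below is a guess, so treat this as a sketch:
cv::Mat takePicture(cv::VideoCapture& cap) {
    // Ask the backend to keep at most one queued frame (honored only by
    // some backends, e.g. V4L).
    cap.set(cv::CAP_PROP_BUFFERSIZE, 1);
    // Drain whatever is already queued, then read a fresh frame. The
    // number of grab() calls needed depends on the driver's buffer depth.
    for (int i = 0; i < 5; ++i) cap.grab();
    cv::Mat image;
    cap.read(image);
    return image;
}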

Proper use of cv::VideoCapture

I've been having some issues capturing video from a live stream.
I open the stream with the open() function of cv::VideoCapture. However, I need to manually check when a frame is ready, with something like this:
while (true) {
    cv::Mat frame;
    if (videoCapture.read(frame)) {
        // Do stuff ...
    }
    else {
        // Video is done.
    }
}
The problem with this code is that it may process a single frame multiple times, the number of repeats depending on the camera's FPS, because according to the documentation read() only returns false once the camera is disconnected.
So my question is: how can I know whether a NEW frame is available, so that I'm not just getting the old one again?

Video camera stream does not capture continuously

I have a program that reads from a video camera and finds the maximum color intensity across all frames. I noticed a dashed-line effect when I streak a laser pointer across the camera's viewing area.
I think this is because the camera's FPS is lower than its shutter speed, or because the waitKey call prevents the camera from reading during that period.
Is there a way to remove this effect so that if I streak a laser pointer across the camera view it leaves a continuous line?
EDIT: I also need the results in real time, if that is possible.
Here is the simplified code:
while True:
    ret, view = vid.read()
    outputImg = processImage(view)
    cv2.waitKey(1)
    cv2.imshow('output', outputImg)
You should try to first save the video and then process it later.
Your code captures a frame, processes it with your processImage function, waits in cv2.waitKey(1), displays the processed frame, and only then reads the next frame. So there is a time lapse between the reads of two consecutive frames. You can instead save the raw frames first, like this:
while True:
    ret, frame = vid.read()
    out.write(frame)  # 'out' is assumed to be a pre-configured cv2.VideoWriter
While shiva's suggestion will probably improve your results, the behaviour will still be unpredictable and will depend greatly on the capabilities of the camera and your ability to control it. I would expect the gaps to remain.
Instead, I would make an attempt at correcting for the gaps (a sketch follows the list):
Find the endpoints of each streak in a frame.
Find the two adjacent endpoints in adjacent frames.
Draw an artificial laser line connecting those two points (perhaps by taking a sample from both ends and interpolating between them).
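A rough sketch of steps 1 and 3 in C++ (matching the rest of this document; the Python cv2 calls are analogous). The endpoint detection is deliberately naive -- it assumes an 8-bit grayscale frame, a roughly horizontal streak, and a guessed brightness threshold:
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <utility>
#include <vector>

std::pair<cv::Point, cv::Point> streakEndpoints(const cv::Mat& gray) {
    cv::Mat mask;
    cv::threshold(gray, mask, 200, 255, cv::THRESH_BINARY);  // 200 is a guess; tune it
    std::vector<cv::Point> pts;
    cv::findNonZero(mask, pts);
    if (pts.empty()) return {cv::Point(-1, -1), cv::Point(-1, -1)};  // no streak found
    auto byX = [](const cv::Point& a, const cv::Point& b) { return a.x < b.x; };
    auto lr = std::minmax_element(pts.begin(), pts.end(), byX);
    return {*lr.first, *lr.second};  // leftmost and rightmost bright pixels
}

// Step 3: bridge the gap between the previous frame's trailing endpoint and
// the current frame's leading endpoint on the accumulated output image:
// cv::line(outputImg, prevEnd, currStart, cv::Scalar(255), 2);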
If for some reason you can't interpolate, but still need to be able to do the processing in near-realtime, consider putting the image acquisition into a separate thread. Set this thread to high priority, to minimise the gaps between the frames.
This thread will repeatedly:
Acquire frame from camera.
Insert the frame into a synchronized queue.
Then, create a processing thread (this could be your main thread as well), which will take the frames from the synchronized queue, and do whatever processing is necessary.
To achieve the best results, you should use a camera that is able to run in "freerun" mode, where the camera automatically triggers itself, and sends a stream of data to the computer to process.
This might not be possible to do directly with OpenCV and a regular webcam. Some lower-end industrial cameras, either GigE Vision or USB3 models, might be more appropriate (AVT Manta, Basler Scout, etc.). You may be able to find good deals on eBay if new hardware is out of your price range.
You would most likely need to use a special API to control the camera and acquire the frames, but the rest of the processing would be done using OpenCV.

OpenCV: Should the write from VideoWriter run in an independent thread?

I'm writing a program that reads multiple webcams, stitches the pictures together and saves them into a video file.
Should I use threads to capture the images and to write the resulting large image into a file?
If yes, should I use C++11 or Boost?
Do you maybe have some simple example code that shows how to avoid race conditions?
I already found this thread, which uses a thread for the write, but it doesn't seem to avoid race conditions.
pseudocode of my program:
camVector                // vector with cameras
VideoWriter outputVideo  // outputs the video
while (true) {
    for (camera : camVector) {
        camera.grab()  // send signal to grab picture
    }
    picVector = getFrames(camVector)  // collect frames of all cameras into a vector
    Mat bigImg = stitchPicTogether(picVector)
    outputVideo.write(bigImg)
}
That's what I'd do:
camVector                // vector with cameras
VideoWriter outputVideo  // outputs the video
for (camera : camVector) camera.grab();  // request first frame
while (true) {
    parallel for (camera : camVector) {
        frame = getFrame(camera);
        pasteFrame(/* destination */ bigImg,
                   /* geometry    */ geometry[camera],
                   /* source      */ frame);
        camera.grab();  // request next frame, so the webcam driver can start
                        // working while you write to disk
    }
    outputVideo.write(bigImg)
}
This way stitching is done in parallel, and if your webcam drivers have different timing, you can begin stitching what you received from one webcam while you wait for the other webcam's frame.
About implementation: you could simply go with OpenMP, which is already used in OpenCV, using something like #pragma omp parallel for (see the sketch below). If you prefer something more C++-like, give Intel TBB a try; it's cool.
Or, as you said, you could go with native C++11 threads or Boost. Choosing between the two mainly depends on your work environment: if you intend to work with older compilers, Boost is safer.
In all cases, no locking is required, except for a join at the end to wait for all threads to finish their work.
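For example, a rough OpenMP rendering of the pseudocode above might look like this. Here getFrame/pasteFrame become retrieve() plus a copy into a tile, and roiFor is a hypothetical layout helper; the side-by-side geometry and the 640x480 frame size are assumptions made for the example:
#include <opencv2/videoio.hpp>
#include <vector>

// Hypothetical layout: camera i's tile in the stitched image, side by side.
cv::Rect roiFor(int i) { return cv::Rect(i * 640, 0, 640, 480); }

void captureLoop(std::vector<cv::VideoCapture>& cams, cv::VideoWriter& outputVideo) {
    cv::Mat bigImg(480, 640 * (int)cams.size(), CV_8UC3);
    for (auto& cam : cams) cam.grab();  // request the first frames
    for (;;) {
        #pragma omp parallel for
        for (int i = 0; i < (int)cams.size(); ++i) {
            cv::Mat frame;
            cams[i].retrieve(frame);          // fetch the frame grabbed earlier
            frame.copyTo(bigImg(roiFor(i)));  // paste into this camera's tile
            cams[i].grab();                   // request the next frame right away
        }
        // The parallel for joins here, so all tiles are pasted before the write.
        outputVideo.write(bigImg);
    }
}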
If your bottleneck is writing the output, the limiting factor is either disk I/O or video compression speed. For disk, you can try changing the codec and/or compressing more. For compression speed, have a look at GPU-accelerated codecs.

Image Stitching from a live Video Stream in OpenCv

I am trying to stitch an image from a live video camera (more like a panorama) using OpenCV. The stitching itself works fine. My problem is that I want the stitching to be done in real time, say at around 30 mph, but the processing of the stitching is slow.
I want to use threads to improve the speed, but in order to use them, do I need to store my live video stream, or is there a way to use threads directly on the live stream?
Here is a sample code:
SapAcqDevice *pAcq = new SapAcqDevice("Genie_HM1400_1", false);
SapBuffer *pBuffer = new SapBuffer(20, pAcq);
SapView *pView = new SapView(pBuffer, (HWND)-1);
SapAcqDeviceToBuf *pTransfer = new SapAcqDeviceToBuf(pAcq, pBuffer, XferCallback, pView);
pAcq->Create();
pBuffer->Create();
pView->Create();
pTransfer->Create();
pTransfer->Grab();
printf("Press any key to stop grab\n");
getch();
pTransfer->Freeze();
pTransfer->Wait(5000);
printf("Press any key to terminate\n");
getch();
The code above is used to capture the live stream. The XferCallback function does the per-frame processing; in it I call my stitch engine. Since the engine's processing is slow, I want to use threads.
Here is a sample of the callback function:
SapView *pView = (SapView *) pInfo->GetContext();
SapBuffer *pBuffer = pView->GetBuffer();
void *pData = NULL;
pBuffer->GetAddress(&pData);
int width  = pBuffer->GetWidth();
int height = pBuffer->GetHeight();
int depth  = pBuffer->GetPixelDepth();
IplImage *fram = cvCreateImage(cvSize(width, height), depth, 1);
cvSetImageData(fram, pData, width);
stitching(frame_num, fram);
cvWaitKey(1);
frame_num++;
I want many threads working on the stitch engine.
If you think you can get the stitching fast enough using threads, then go for it.
"do I need to store my live video stream or is there any way to directly use threads for the live stream?"
You might benefit from setting up a ring buffer with preallocated frames. You know the image size isn't going to change. So your Sapera acquisition callback simply pushes a frame into the buffer.
You then have another thread that sits there stitching as fast as it can and maintaining state information to help optimize the next stitch. You have not given much information about the stitching process, but presumably you can make it parallel with OpenMP. If that is fast enough to keep up with frame acquisition then you'll be fine. If not, then you will start dropping frames because your ring buffer is full.
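A minimal sketch of such a ring buffer; the class name and the drop-oldest policy are illustrative choices, not requirements:
#include <opencv2/core.hpp>
#include <mutex>
#include <vector>

class FrameRing {
public:
    FrameRing(size_t capacity, cv::Size size, int type) : frames_(capacity) {
        for (auto& f : frames_) f.create(size, type);  // preallocate every slot
    }
    // Producer (acquisition callback): copy new data into the next slot.
    void push(const cv::Mat& src) {
        std::lock_guard<std::mutex> lock(m_);
        src.copyTo(frames_[head_]);  // no allocation: slot is preallocated
        head_ = (head_ + 1) % frames_.size();
        if (count_ == frames_.size())
            tail_ = (tail_ + 1) % frames_.size();  // full: oldest frame dropped
        else
            ++count_;
    }
    // Consumer (stitching thread): returns false if nothing new to stitch.
    bool pop(cv::Mat& dst) {
        std::lock_guard<std::mutex> lock(m_);
        if (count_ == 0) return false;
        frames_[tail_].copyTo(dst);
        tail_ = (tail_ + 1) % frames_.size();
        --count_;
        return true;
    }
private:
    std::vector<cv::Mat> frames_;
    size_t head_ = 0, tail_ = 0, count_ = 0;
    std::mutex m_;
};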
As hinted above, you can probably predict where the stitching for the next frame ought to begin, on the basis that the movement between one frame and the next should be reasonably small and/or smooth. This way you narrow your search and greatly improve the speed.
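As an illustration, a predicted-offset search might look like this; cv::matchTemplate stands in for whatever registration step the stitch engine actually uses, and the margin and fallback are illustrative choices:
#include <opencv2/imgproc.hpp>

cv::Point refineOffset(const cv::Mat& pano, const cv::Mat& frame,
                       cv::Point predicted, int margin = 32) {
    // Only search a small window around the predicted position.
    cv::Rect window(predicted.x - margin, predicted.y - margin,
                    frame.cols + 2 * margin, frame.rows + 2 * margin);
    window &= cv::Rect(0, 0, pano.cols, pano.rows);  // clip to the panorama
    if (window.width < frame.cols || window.height < frame.rows)
        return predicted;  // window clipped at a border; caller should fall back
    cv::Mat response;
    cv::matchTemplate(pano(window), frame, response, cv::TM_CCOEFF_NORMED);
    cv::Point best;
    cv::minMaxLoc(response, nullptr, nullptr, nullptr, &best);  // best match
    return cv::Point(window.x + best.x, window.y + best.y);
}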