Video camera stream does not capture continuously - python-2.7

I have a program that reads from a video camera and finds the maximum color intensity across all frames. I noticed a dashed-line effect when I streak a laser pointer across the camera's viewing area.
I think this is because the frame rate of the camera is lower than its shutter speed, or because the waitKey call prevents the camera from reading during that period.
Is there a way to remove this effect so that if I streak a laser pointer across the camera view it leaves a continuous line?
EDIT: I also need the results in real time if that is possible.
Here is the simplified code:
while True:
    ret, view = vid.read()
    outputImg = processImage(view)
    cv2.imshow('output', outputImg)
    cv2.waitKey(1)

You should try to first save the video and then process it later.
while True:
    ret, view = vid.read()
    outputImg = processImage(view)
    cv2.imshow('output', outputImg)
    cv2.waitKey(1)
Your code captures a frame, processes it with your processImage function, waits in cv2.waitKey(1), displays the processed frame, and only then reads the next frame. So there is a time gap between reading two consecutive frames. You can save the raw video first, like this:
# out is a cv2.VideoWriter opened beforehand, e.g.
# out = cv2.VideoWriter('output.avi', cv2.cv.CV_FOURCC(*'XVID'), 30, (w, h))
while True:
    ret, frame = vid.read()
    out.write(frame)
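Once the capture loop is done, the saved file can be processed offline without worrying about dropped frames. A minimal sketch of that second pass (the file name and processImage are carried over from the code above as assumptions):

import cv2

cap = cv2.VideoCapture('output.avi')   # the file written by the VideoWriter above
while True:
    ret, frame = cap.read()
    if not ret:                        # end of file
        break
    outputImg = processImage(frame)    # your existing processing function
    cv2.imshow('output', outputImg)
    cv2.waitKey(1)
cap.release()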

While shiva's suggestion will probably improve your results, it will still be unpredictable, and the behaviour will greatly depend on the capabilities of the camera and your ability to control it. I would expect the gaps to be there.
Instead, I would make an attempt at correcting this:
Find the endpoints of each streak in a frame.
Find the two adjacent endpoints in adjacent frames.
Draw an artificial laser line connecting those two points, perhaps by sampling both ends and interpolating between them (see the sketch below).
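A minimal sketch of that idea, assuming the streak shows up as the brightest pixels in the frame (vid is carried over from the question; the threshold and the endpoint heuristic are placeholders, not a tested recipe):

import cv2
import numpy as np

def streak_endpoints(frame, thresh=200):
    # crude placeholder: treat the brightest pixels as the laser streak
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(gray > thresh)
    if len(xs) == 0:
        return None, None
    # naive heuristic: take the two extreme points along x as the endpoints
    i0, i1 = np.argmin(xs), np.argmax(xs)
    return (int(xs[i0]), int(ys[i0])), (int(xs[i1]), int(ys[i1]))

trace = None      # accumulated, continuous laser trace
prev_end = None
while True:
    ret, frame = vid.read()
    if not ret:
        break
    if trace is None:
        trace = np.zeros(frame.shape[:2], dtype=np.uint8)
    start, end = streak_endpoints(frame)
    if start is not None:
        cv2.line(trace, start, end, 255, 2)           # the streak in this frame
        if prev_end is not None:
            cv2.line(trace, prev_end, start, 255, 2)  # bridge the inter-frame gap
        prev_end = end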
If for some reason you can't interpolate, but still need to be able to do the processing in near-realtime, consider putting the image acquisition into a separate thread. Set this thread to high priority, to minimise the gaps between the frames.
This thread will repeatedly:
Acquire frame from camera.
Insert the frame into a synchronized queue.
Then, create a processing thread (this could be your main thread as well), which will take the frames from the synchronized queue, and do whatever processing is necessary.
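A minimal Python 2.7 sketch of that producer/consumer split (vid, processImage and the queue size are carried over from the question or assumed):

import threading
import Queue  # named 'queue' on Python 3
import cv2

frames = Queue.Queue(maxsize=100)   # synchronized queue

def acquire():
    # producer: grab frames as fast as the camera delivers them
    while True:
        ret, view = vid.read()
        if not ret:
            break
        frames.put(view)

grabber = threading.Thread(target=acquire)
grabber.daemon = True
grabber.start()

# consumer: this can be your main thread
while True:
    view = frames.get()             # blocks until a frame is available
    outputImg = processImage(view)
    cv2.imshow('output', outputImg)
    cv2.waitKey(1)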
To achieve the best results, you should use a camera that is able to run in "freerun" mode, where the camera automatically triggers itself, and sends a stream of data to the computer to process.
This might not be possible to do directly with OpenCV and a regular webcam. A lower-end industrial camera, either a GigE Vision or USB3 Vision model, might be more appropriate (AVT Manta, Basler Scout, etc.). If a new one is out of your price range, you may be able to find good deals on eBay.
You would most likely need to use a special API to control the camera and acquire the frames, but the rest of the processing would be done using OpenCV.

Related

How to detect camera frame loss using Windows media API like Media Foundation or DirectShow?

I am writing an application for Windows that runs a CUDA accelerated HDR algorithm. I've set up an external image signal processor device that presents as a UVC device, and delivers 60 frames per second to the Windows machine over USB 3.0.
Every "even" frame is a more underexposed frame, and every "odd" frame is a more overexposed frame, which allows my CUDA code perform a modified Mertens exposure fusion algorithm to generate a high quality, high dynamic range image.
Very abstract example of Mertens exposure fusion algorithm here
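For readers unfamiliar with the technique, the fusion step can be illustrated on the CPU with OpenCV's MergeMertens; this is only a sketch of the idea, not the asker's CUDA implementation:

import cv2

# 'under' and 'over' are the paired under-/overexposed frames (BGR NumPy arrays)
merge = cv2.createMergeMertens()
fused = merge.process([under, over])                 # float32 result, roughly in [0, 1]
fused8 = (fused * 255).clip(0, 255).astype('uint8')  # back to a displayable 8-bit image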
My only problem is that I don't know how to know when I'm missing frames, since the only camera API I have interfaced with on Windows (Media Foundation) doesn't make it obvious that a frame I grab with IMFSourceReader::ReadSample isn't the frame that was received after the last one I grabbed.
Is there any way that I can guarantee that I am not missing frames, or at least easily and reliably detect when I have, using a Windows available API like Media Foundation or DirectShow?
It wouldn't be such a big deal to miss a frame and then have to purposefully "skip" the next frame in order to grab the next over- or underexposed frame to pair with the last one we grabbed, but I would need to know how many frames were actually missed since the last grab.
Thanks!
There is the IAMDroppedFrames::GetNumDropped method in DirectShow, and chances are the same information can be retrieved through Media Foundation as well (never tried it; it is possibly obtainable with a method similar to this).
The GetNumDropped method retrieves the total number of frames that the filter has dropped since it started streaming.
However, I would question its reliability. With both of these APIs, the attribute that is more or less reliable is the time stamp of a frame. Capture devices can reduce their frame rate for several reasons, both external (such as low light) and internal (such as slow, blocking processing downstream in the pipeline). This makes it hard to distinguish odd from even frames, but the time stamp remains accurate, and you can apply frame-rate math to convert time stamps into frame indices.
In your scenario, however, I would rather detect large gaps in the frame times to identify possible frame loss and continuity breaks, and from there run an algorithm that compares the exposure of the next few consecutive frames to get back in sync with the under-/overexposure alternation. That sounds like a more reliable way out.
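A sketch of that frame-rate math in illustrative Python (the timestamp units, the pairing convention, and the way you read the time stamp from ReadSample are all assumptions):

NOMINAL_FPS = 60.0

prev_index = None
even_is_underexposed = True   # assumed pairing; flip if your ISP does the opposite

def classify_sample(timestamp_seconds):
    # convert the presentation time stamp to a nominal frame index
    global prev_index, even_is_underexposed
    index = int(round(timestamp_seconds * NOMINAL_FPS))
    if prev_index is not None:
        dropped = index - prev_index - 1
        if dropped > 0 and dropped % 2 == 1:
            # an odd number of frames was lost, so the under/over parity flipped
            even_is_underexposed = not even_is_underexposed
    prev_index = index
    is_underexposed = ((index % 2 == 0) == even_is_underexposed)
    return is_underexposed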
After all, this exposure pattern is likely to be quite specific to the hardware you are using.
Normally MFSampleExtension_Discontinuity is there for exactly this. When you use IMFSourceReader::ReadSample, check this attribute on the returned sample.

Detect bad frames in OpenCV 2.4.9

I know the title is a bit vague but I'm not sure how else to describe it.
CentOS with ffmpeg + OpenCV 2.4.9. I'm working on a simple motion detection system which uses a stream from an IP camera (h264).
Once in a while the stream hiccups and throws in a "bad frame" (see the pic-bad.png link below). The problem is that these frames differ greatly from the previous frames and cause a "motion" event to be triggered even though no actual motion occurred.
The pictures below will explain the problem.
Good frame (motion captured):
Bad frame (no motion, just a broken frame):
The bad frame gets caught randomly. I guess I can make a bad frame detector by analyzing (looping) through the pixels going down from a certain position to see if they are all the same, but I'm wondering if there is any other, more efficient, "by the book" approach to detecting these types of bad frames and just skipping over them.
Thank You!
EDIT UPDATE:
The frame is grabbed using a C++ motion detection program via cvQueryFrame(camera); so I do not directly interface with ffmpeg, OpenCV does it on the back end. I'm using the latest version of ffmpeg compiled from git source. All of the libraries are also up to date (h264, etc., all downloaded and compiled yesterday). The data is coming from an RTSP stream (ffserver). I've tested over multiple cameras (Dahua 1-3 MP models) and the frame glitch is fairly persistent across all of them, although it doesn't happen continuously, just once in a while (e.g. once every 10 minutes).
My first approach would be to measure the dissimilarity between an example of a valid frame and the frame being checked by counting the pixels that differ. Dividing that count by the frame area gives a percentage measuring the dissimilarity. Above roughly 0.5 we can say the tested frame is invalid, because it differs too much from the valid example.
This assumption is only appropriate if the camera is static (it does not move) and the objects that can move in front of it do not come too close (this depends on the focal length, but with a wide lens, for example, objects should not appear closer than about 30 cm to the camera, to avoid a situation where an object "jumps" into the frame from nowhere and covers more than 50% of the frame area).
Here is an OpenCV function that does what I described. You can raise the dissimilarity threshold if you expect more rapid motion. Note that the first parameter should be an example of a valid frame.
bool IsBadFrame(const cv::Mat &goodFrame, const cv::Mat &nextFrame) {
    CV_Assert(goodFrame.size() == nextFrame.size());
    cv::Mat g, g2;
    cv::cvtColor(goodFrame, g, CV_BGR2GRAY);
    cv::cvtColor(nextFrame, g2, CV_BGR2GRAY);
    cv::Mat diff = g2 != g;  // 255 where the pixels differ
    // fraction of pixels that differ from the known-good frame
    float dissimilarity = (float)cv::countNonZero(diff) /
                          (goodFrame.size().height * goodFrame.size().width);
    return dissimilarity > 0.5f;
}
You do not mention if you use ffmpeg command line or libraries, but in the latter case you can check the bad frame flag (I forgot its exact description) and simply ignore those frames.
Remove waitKey(50) or change it to waitKey(1). I think OpenCV does not spawn a new thread to perform the capture, so when there is a pause it confuses the buffer management routines, causing bad frames... maybe?
I have Dahua cameras and have observed that bad frames appear with a higher delay, and they go away completely with waitKey(1). The pause does not necessarily have to come from waitKey; any routine you call can cause such a pause and result in bad frames if it takes long enough.
This means there should be a minimal pause between consecutive frame grabs. The solution would be to use two threads, one for capture and one for processing.

How to average all the frames of a video file in which objects are not moving in OpenCV?

I have the individual frames of a video file. Observing each frame separately, I noticed there are many frames in which the objects have not moved. I need to average all of those frames into a single frame using OpenCV.
I am totally new to OpenCV, so it would be a great help if I could get code for frame averaging.
One simple technique...
Subtract the previous frame from the present frame using this OpenCV function. Take only those frames whose difference (positive or negative) is above a threshold: in frames where the object is almost static, the frame difference will be low, so skip those frames. Again, when only the object is moving and the rest of the background is more or less static, like a man walking in a park, you can just store the position of the man while the background gets duplicated from frame to frame.
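Putting the question and this suggestion together, a minimal sketch that averages only the low-motion frames (the file name and the threshold are placeholder assumptions):

import cv2
import numpy as np

cap = cv2.VideoCapture('input.avi')    # placeholder file name
acc = None
count = 0
prev_gray = None
THRESHOLD = 2.0                        # mean absolute difference; tune for your footage

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        motion = cv2.absdiff(gray, prev_gray).mean()
        if motion < THRESHOLD:         # "static" frame: include it in the average
            if acc is None:
                acc = np.zeros(frame.shape, dtype=np.float64)
            acc += frame
            count += 1
    prev_gray = gray

if count > 0:
    average = (acc / count).astype(np.uint8)
    cv2.imwrite('average.png', average)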

Image Stitching from a live Video Stream in OpenCv

I am trying to stitch an image from a live video camera (more like a panorama) using OpenCV. The stitching is working fine. My problem is that I want the stitching to be done in real time, at around 30 fps, but the stitching processing is slow.
I want to use threads to improve the speed, but in order to use them, do I need to store my live video stream, or is there any way to use threads directly on the live stream?
Here is a sample code:
SapAcqDevice *pAcq = new SapAcqDevice("Genie_HM1400_1", false);
SapBuffer *pBuffer = new SapBuffer(20, pAcq);
SapView *pView = new SapView(pBuffer, (HWND)-1);
SapAcqDeviceToBuf *pTransfer = new SapAcqDeviceToBuf(pAcq, pBuffer, XferCallback, pView);
pAcq->Create();
pBuffer->Create();
pView->Create();
pTransfer->Create();
pTransfer->Grab();
printf("Press any key to stop grab\n");
getch();
pTransfer->Freeze();
pTransfer->Wait(5000);
printf("Press any key to terminate\n");
getch();
The code above is used to capture the live stream. The XferCallback function does the processing of the frames; in this function I call my stitch engine. Since the engine's processing is slow, I want to use threads.
Here is a sample code of the callback function:
SapView *pView = (SapView *) pInfo->GetContext();
SapBuffer *pBuffer = pView->GetBuffer();
void *pData = NULL;
pBuffer->GetAddress(&pData);
int width = pBuffer->GetWidth();
int height = pBuffer->GetHeight();
int depth = pBuffer->GetPixelDepth();
// wrap the Sapera buffer in an IplImage header without copying the pixel data
IplImage *fram = cvCreateImageHeader(cvSize(width, height), depth, 1);
cvSetImageData(fram, pData, width); // step in bytes per row for an 8-bit, single-channel image
stitching(frame_num, fram);
cvWaitKey(1);
frame_num++;
I want many threads working on the stitch engine.
If you think you can get the stitching fast enough using threads, then go for it.
do I need to store my live video stream, or is there any way to use threads directly on the live stream?
You might benefit from setting up a ring buffer with preallocated frames. You know the image size isn't going to change. So your Sapera acquisition callback simply pushes a frame into the buffer.
You then have another thread that sits there stitching as fast as it can and maintaining state information to help optimize the next stitch. You have not given much information about the stitching process, but presumably you can make it parallel with OpenMP. If that is fast enough to keep up with frame acquisition then you'll be fine. If not, then you will start dropping frames because your ring buffer is full.
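A language-agnostic illustration of that ring buffer, sketched here in Python (the buffer depth, frame size, and the stitch call are placeholders; in the real application this logic would live in the Sapera callback and a C++ worker thread):

import threading
import numpy as np

N, H, W = 20, 1040, 1392                 # placeholder buffer depth and frame size
ring = [np.empty((H, W), dtype=np.uint8) for _ in range(N)]  # preallocated frames
lock = threading.Lock()
head, tail, dropped = 0, 0, 0

def on_frame(frame):
    # producer (acquisition callback): copy into the next preallocated slot
    global head, dropped
    with lock:
        nxt = (head + 1) % N
        if nxt == tail:                  # consumer has not caught up: drop this frame
            dropped += 1
            return
        np.copyto(ring[head], frame)
        head = nxt

def stitch_worker():
    # consumer: stitch as fast as possible, one slot behind the producer
    global tail
    while True:
        with lock:
            empty = (tail == head)
        if empty:
            continue                     # real code would wait on a condition variable
        stitch(ring[tail])               # placeholder for the stitching engine
        with lock:
            tail = (tail + 1) % N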
As hinted above, you can probably predict where the stitching for the next frame ought to begin. This is on the basis that movement between one frame and the next should be reasonably small and/or smooth. This way you narrow your search and greatly improve the speed.

Does cvQueryFrame have buffer for frames in advance?

If I do:
while(1) {
    // retrieve image from the camera
    webCamImage = cvQueryFrame(camera); // where 'camera' is cvCreateCameraCapture(0)
    // do some heavy processing on the image that may take around half a second
    funcA();
}
Now, across consecutive iterations, it seems that webCamImage lags!
Even if I move the camera, webCamImage takes a long time to update to the new field of view, and it keeps showing and processing frames from the previous field of view.
I am assuming that cvQueryFrame has some buffer into which frames are retrieved.
Can you please advise me on how to get the updated camera view each iteration ?
Many thanks
cvQueryFrame is just a wrapper that calls two other functions: cvGrabFrame, which gets the data from the camera very quickly, and cvRetrieveFrame, which decompresses that data and puts it into an IplImage. If you need frames captured immediately, just grab the frame and retrieve it for processing later.
See http://opencv.jp/opencv-1.0.0_org/docs/ref/opencvref_highgui.htm for more information.
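The same grab/decode split is exposed in the newer Python bindings as VideoCapture.grab() and VideoCapture.retrieve(); a minimal sketch of using it (the camera index is an assumption, and funcA stands in for the slow processing from the question):

import cv2

cap = cv2.VideoCapture(0)        # assumed camera index

while True:
    # grab() is cheap: it pulls the raw data off the camera without decoding it
    if not cap.grab():
        break
    # decode the grabbed data into an image only when you are ready to process it
    ret, frame = cap.retrieve()
    if not ret:
        break
    funcA(frame)                 # the heavy processing from the question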
Having said that, though, I use cvQueryFrame with a typical webcam and have no trouble getting dozens of frames per second. Any chance that the part that's lagging is actually your funcA() call? Edit: from the comment in your code, I see that funcA() is indeed the slow part. If it takes half a second to execute, you'll only get a new frame from cvQueryFrame every half second, just as you describe. Try either making funcA faster, or putting it in a separate thread.
As a friendly reminder, the IplImage returned by cvQueryFrame/cvRetrieveFrame should not be modified or released by the user; it's part of OpenCV's internal storage, and if you're doing anything interesting with it, you should make a copy (e.g. with cvCloneImage). I don't know if you're doing this already, but I certainly did it wrong when I started out.