I'm working on a project where the video I have was recorded at one resolution, but when I plug the camera into my computer to perform the calibration with OpenCV it runs at a lower resolution, so when I apply the correction to the video it isn't quite right.
I am using the OpenCV calibration code, and I'm hoping someone may have already done this and can tell me how (and ideally where) I can adjust the code to set the desired resolution for the calibration.
I have also discovered that when I adjust the resolution just to open the camera and display an image, it doesn't work unless I first reduce the frame rate; but if I then run the code again with the frame rate increased back to its original value, it works. Has anyone else come across this? It doesn't make sense to me.
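For reference, the way I am setting the resolution and frame rate in my simple display test is roughly the sketch below (cv::CAP_PROP_* here; the names are CV_CAP_PROP_* in OpenCV 2.x). I assume the same calls would need to go into the calibration sample before its capture loop, and I am not sure the driver honours every requested combination, which may be what causes the frame-rate behaviour:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);                      // same camera index the calibration code uses
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);      // resolution the video was recorded at
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
    cap.set(cv::CAP_PROP_FPS, 15);                // lowering this first is what makes the resolution change stick for me

    // Read back what was actually applied -- the driver can silently fall back to other values.
    std::cout << cap.get(cv::CAP_PROP_FRAME_WIDTH) << "x"
              << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << " @ "
              << cap.get(cv::CAP_PROP_FPS) << " fps" << std::endl;

    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27) break;          // Esc to quit
    }
    return 0;
}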
I have a custom USB camera with a custom driver on a custom board (an Nvidia Jetson TX2) that is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color-convert them and feed them into OpenPose on a per-picture basis; it works fine, but it is 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore features such as the tracking that is available for streams, since I'm trying to maximize the fps. I believe the stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence, which I would address later: one frame goes through pose estimation and the 3-4 subsequent frames just track the detections with decreasing confidence levels. I tried that with a plug-and-play camera and an OpenPose example, and the results were reasonably satisfactory.
The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to hand that VideoCapture (or the frames it yields) to OpenPose for processing; a rough sketch of what I am aiming for is below, after the list of things I already tried.
If there is a better way to do it, I am happy to try different things, but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described, or different ideas, are welcome.
Things I already tried:
Lowering the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
Using CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to pose estimation on the original image)
Since the camera view is static, checking for changes between frames, cropping the image down to the area that changed and running pose estimation on just that section (about a 10% speedup, with a high risk of missing something)
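What I am aiming for is roughly the sketch below. It assumes the OpenPose C++ Wrapper API in asynchronous mode (as in the tutorial_api_cpp examples) and an OpenCV build with GStreamer support; the pipeline string is just a placeholder for my custom source, and I am not sure the OP_CV2OPCONSTMAT / emplaceAndPop calls (OpenPose 1.7-style) are the right way to hand the frames over, which is exactly the part I am unsure about:

#include <opencv2/opencv.hpp>
#include <openpose/headers.hpp>

int main()
{
    // Placeholder pipeline: replace with the custom GStreamer source, ending in appsink.
    cv::VideoCapture cap("your-custom-src ! videoconvert ! appsink", cv::CAP_GSTREAMER);

    op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
    opWrapper.start();                                   // default body-pose configuration

    cv::Mat frame;
    while (cap.read(frame))
    {
        // OpenPose >= 1.7 expects an op::Matrix; OP_CV2OPCONSTMAT wraps a cv::Mat.
        const op::Matrix input = OP_CV2OPCONSTMAT(frame);
        auto processed = opWrapper.emplaceAndPop(input);
        if (processed != nullptr && !processed->empty())
        {
            const auto& keypoints = processed->at(0)->poseKeypoints;
            // ... tracking / downstream processing on keypoints ...
        }
    }
    return 0;
}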
First of all, I understand this question has been asked several times here:
FindChessboardCorners cannot detect chessboard on very large images by long focal length lens
Opencv corners detection for High resolution images
However, my situation is a little bit different.
My first experiment retrieves sequential images of 3264 x 2448 from a webcam that supports a resolution this high, and uses findChessboardCorners to detect the corners of a pattern I placed.
Gladly, it works! So I moved on to the next experiment. (See the success cases below; I cropped them.)
This time I try to project a pattern from a projector of my own onto a clean board and detect it. Sadly, I failed here. (Example below, 2592 x 1944.)
The two experiments retrieve similar images (I think so), but how come one succeeds and the other doesn't? Especially since the successful one has the higher resolution.
I also tried adjusting the size of the pattern the projector projects; it didn't work.
Adjusting the distance of the board didn't work.
Adjusting the camera settings, from lighter to darker, didn't work.
By the way, I suppose the resolution I choose affects the camera intrinsic parameters, so resizing the image shouldn't be a good idea, right? Since I require the parameters at the high resolution.
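One workaround I am considering, in case it is relevant (I am not sure it is sound): detect the corners on a downscaled copy, scale them back up, and refine them with cornerSubPix on the original image, so the calibration itself still runs at full resolution and the intrinsics stay valid for it. Roughly like this, where image is the full-resolution BGR capture and the 9x6 pattern size and 0.25 scale are just example values:

cv::Mat gray;                                        // full-resolution grayscale image
cv::cvtColor(image, gray, CV_BGR2GRAY);

const double scale = 0.25;                           // e.g. 2592 x 1944 -> 648 x 486
cv::Mat lowRes;
cv::resize(gray, lowRes, cv::Size(), scale, scale, cv::INTER_AREA);

std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(lowRes, cv::Size(9, 6), corners,
                 CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_NORMALIZE_IMAGE);
if (found)
{
    for (size_t i = 0; i < corners.size(); ++i)      // map corners back to full resolution
    {
        corners[i].x /= scale;
        corners[i].y /= scale;
    }
    cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
}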
I have C++ code that runs on the Parrot AR.Drone 2.0 to detect objects and then save images of the detected objects to the controller (a computer). As you may know, the AR.Drone has a 720p high-definition camera. However, the saved images are very blurry. I cannot find any OpenCV function that increases the resolution of the saved images; I believe OpenCV's default setting is 95/100. Does anyone know of a solution to this problem?
Any input or comment would be helpful.
I think you mean 95/100 JPEG quality. You can change the third parameter of cv::imwrite, as described in the OpenCV documentation; it is a list of (flag, value) pairs rather than an assignment:
std::vector<int> params = { CV_IMWRITE_JPEG_QUALITY, 100 }; // 100 instead of the default 95
cv::imwrite("name.jpg", image, params);
But this only increases the compression quality, not the resolution... and there shouldn't be much visible difference between 95 and 100%.
I am using a DirectShow filtergraph to grab frames from videos. The current implementation follows this graph:
SourceFilter->SampleGrabber->NullRenderer
This works most of the time to extract images frame by frame for further processing. However, I encountered issues with some videos that do not have a PAR (pixel aspect ratio) of 1:1: those images appear stretched in my processing steps.
The only fix I have found so far is to use a VMR9 renderer in windowless mode and call GetCurrentImage() to extract a bitmap with the correct aspect ratio. But this method is not very useful for continuously grabbing thousands of frames.
My question now is: what is the best way to fix this problem? Has anyone run into this issue as well?
The Sample Grabber gives you frames with the original pixels. It is not really a problem if an aspect ratio is attached and the pixels are not "square pixels": to convert to square pixels you simply need to stretch the image accordingly. It is easier to do this scaling step outside of the DirectShow pipeline, and you have all the data you need: the pixels and the original media type. You can calculate the corresponding resolution with square pixels and resample the picture.
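As a sketch of what I mean (assuming you take dwPictAspectRatioX/Y from the VIDEOINFOHEADER2 of the media type the Sample Grabber connected with, and that using OpenCV's resize for the resampling fits into your processing chain):

#include <windows.h>
#include <opencv2/opencv.hpp>

// Resample a frame with non-square pixels to square pixels, keeping the height.
// pictAspectX : pictAspectY is the picture (display) aspect ratio reported by
// VIDEOINFOHEADER2::dwPictAspectRatioX / dwPictAspectRatioY.
cv::Mat ToSquarePixels(const cv::Mat& frame, DWORD pictAspectX, DWORD pictAspectY)
{
    const int squareWidth = static_cast<int>(
        frame.rows * static_cast<double>(pictAspectX) / pictAspectY + 0.5);
    cv::Mat squared;
    cv::resize(frame, squared, cv::Size(squareWidth, frame.rows), 0, 0, cv::INTER_LINEAR);
    return squared;
}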
I know the title is a bit vague but I'm not sure how else to describe it.
CentOS with ffmpeg + OpenCV 2.4.9. I'm working on a simple motion detection system which uses a stream from an IP camera (h264).
Once in a while the stream hiccups and throws in a "bad frame" (see the pic-bad.png link below). The problem is that these frames differ greatly from the previous frames, which causes a "motion" event to be triggered even though no actual motion occurred.
The pictures below will explain the problem.
Good frame (motion captured):
Bad frame (no motion, just a broken frame):
The bad frame gets caught randomly. I guess I could make a bad-frame detector by looping over the pixels from a certain position downwards to see if they are all the same, but I'm wondering whether there is another, more efficient, "by the book" approach to detecting these kinds of bad frames and simply skipping over them.
Thank You!
EDIT UPDATE:
The frame is grabbed by a C++ motion-detection program via cvQueryFrame(camera), so I do not interface with ffmpeg directly; OpenCV does that on the backend. I'm using the latest version of ffmpeg compiled from git source. All of the libraries are also up to date (h264, etc., all downloaded and compiled yesterday). The data comes from an RTSP stream (ffserver). I've tested multiple cameras (Dahua 1-3 MP models) and the frame glitch is fairly persistent across all of them, although it doesn't happen continuously, just once in a while (e.g. once every 10 minutes).
My first idea is to check the dissimilarity between an example of a valid frame and the frame being tested by counting the pixels that are not the same. Dividing this count by the frame area gives a percentage that measures the dissimilarity. I would guess that above 0.5 we can say the tested frame is invalid, because it differs too much from the example of a valid one.
This assumption is only appropriate if you have a static camera (it does not move) and the objects that can move in front of it do not come too close (this depends on the focal length, but with e.g. a wide lens, objects should not appear closer than about 30 cm in front of the camera, to prevent a situation where an object "jumps" into the frame from nowhere and occupies more than 50% of the frame area).
Here is an OpenCV function which does what I described. You can raise the dissimilarity threshold if you expect the motion changes to be more rapid. Please note that the first parameter should be an example of a valid frame.
bool IsBadFrame(const cv::Mat &goodFrame, const cv::Mat &nextFrame) {
    CV_Assert(goodFrame.size() == nextFrame.size());
    cv::Mat g, g2;
    cv::cvtColor(goodFrame, g, CV_BGR2GRAY);
    cv::cvtColor(nextFrame, g2, CV_BGR2GRAY);
    // Mask of pixels that differ between the two grayscale frames.
    cv::Mat diff = g2 != g;
    // Fraction of differing pixels, i.e. the dissimilarity in [0, 1].
    float dissimilarity = (float)cv::countNonZero(diff) /
                          (goodFrame.size().height * goodFrame.size().width);
    return dissimilarity > 0.5f;
}
You do not mention whether you use the ffmpeg command line or the libraries, but in the latter case you can check the bad-frame flag (I forget its exact name) and simply ignore those frames.
Remove waitKey(50) or change it to waitKey(1). I think OpenCV does not spawn a new thread to perform the capture, so when there is a pause it confuses the buffer-management routines, causing bad frames... maybe?
I have Dahua cameras and have observed that with a higher delay more bad frames appear, and they go away completely with waitKey(1). The pause does not necessarily need to come from waitKey: other routines being called also cause such pauses, and they result in bad frames if they take long enough.
This means that the pause between consecutive frame grabs should be kept to a minimum. The solution would be to use two threads to perform capture and processing separately, as in the sketch below.
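A minimal sketch of that two-thread setup, assuming the C++ cv::VideoCapture API instead of cvQueryFrame and a placeholder RTSP URL; the capture thread only ever stores the newest frame, and the processing thread takes a copy whenever it is ready, so slow motion-detection code no longer delays the grabs:

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

int main()
{
    cv::VideoCapture cap("rtsp://user:pass@camera/stream"); // placeholder RTSP URL
    std::mutex mtx;
    cv::Mat latest;
    std::atomic<bool> running(true);

    // Capture thread: grab frames as fast as the stream delivers them, never pause.
    std::thread grabber([&]() {
        cv::Mat frame;
        while (running && cap.read(frame))
        {
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest);
        }
        running = false;
    });

    // Processing (main) thread: always work on a copy of the newest frame.
    cv::Mat work;
    while (running)
    {
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (latest.empty())
                continue;
            latest.copyTo(work);
        }
        // ... run the motion detection / IsBadFrame check on 'work' here ...
        cv::imshow("frame", work);
        if (cv::waitKey(1) == 27)      // Esc to quit
            running = false;
    }
    grabber.join();
    return 0;
}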