Blurry Saved Images of Detected Objects using OpenCV - C++

I have C++ code running on the Parrot AR.Drone version 2.0 that detects objects and saves images of them to the controller (computer). As you may know, the AR.Drone has a 720p high-definition camera. However, the saved images are very blurry. I cannot find any OpenCV function that increases the resolution of the saved images, though I believe the resolution is set to 95/100 by default in OpenCV. Does anyone know of a solution to this problem?
Any input or comment would be helpful.

I think you mean 95/100 JPEG quality. You can change the third parameter of cv::imwrite, as described in the OpenCV documentation:
cv::imwrite("name.jpg", image, {cv::IMWRITE_JPEG_QUALITY, 100}); // 100 instead of the default 95
But this method only increases the quality, not the resolution... and there shouldn't be much visible difference between 95 and 100% anyway.
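If your OpenCV version predates C++11 initializer-list support, the same call can be written with an explicit parameter vector. A minimal sketch (in OpenCV 2.x the constant is spelled CV_IMWRITE_JPEG_QUALITY):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat image = cv::imread("input.png");
    std::vector<int> params;
    params.push_back(cv::IMWRITE_JPEG_QUALITY);  // CV_IMWRITE_JPEG_QUALITY in OpenCV 2.x
    params.push_back(100);                       // JPEG quality 0-100; default is 95
    cv::imwrite("name.jpg", image, params);
    return 0;
}

Note that the JPEG quality setting only controls compression loss; to change the resolution itself you would need something like cv::resize, which cannot recover detail that was never captured.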

Related

Feed GStreamer sink into OpenPose

I have a custom USB camera with a custom driver on a custom Nvidia Jetson TX2 board that is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, colour-convert them, and feed them into OpenPose on a per-picture basis. It works fine, but 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore features such as tracking that are available for streams, since I'm trying to maximize the fps. I believe the stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence, which would be addressed later: one frame goes through pose estimation and the 3-4 subsequent frames just track the object with decreasing confidence levels. I tried that with a plug-and-play camera and the OpenPose example, and the results were somewhat satisfactory.
The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to hand the frames from VideoCapture to OpenPose for processing (a sketch of one possible approach follows at the end of this question).
If there is a better way to do it, I am happy to try different things, but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described, or different ideas, are welcome.
Things I already tried:
Lowering the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
Using CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to pose estimation on the original image)
Since the camera view is static, checking for changes between frames, cropping the image down to the area that changed, and running pose estimation on that section (10% speedup, high risk of missing something)
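For the VideoCapture-to-OpenPose handoff mentioned above, here is a rough sketch of one possible approach, assuming OpenCV was built with GStreamer support and using OpenPose's asynchronous C++ API. The pipeline string is a placeholder for the custom source, and OP_CV2OPCONSTMAT is OpenPose's macro for wrapping a cv::Mat:

#include <openpose/headers.hpp>
#include <opencv2/opencv.hpp>

int main() {
    // Placeholder pipeline; substitute the custom GStreamer source elements
    cv::VideoCapture cap("mysrc ! videoconvert ! video/x-raw,format=BGR ! appsink",
                         cv::CAP_GSTREAMER);
    op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
    opWrapper.start();  // default configuration; call configure() before start() as needed
    cv::Mat frame;
    while (cap.read(frame)) {
        // Wrap the frame and push it through the OpenPose pipeline
        auto datum = opWrapper.emplaceAndPop(OP_CV2OPCONSTMAT(frame));
        if (datum != nullptr && !datum->empty()) {
            const auto& keypoints = datum->at(0)->poseKeypoints;  // consume results here
        }
    }
    return 0;
}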

How do you save images of detected objects in OpenCV?

I am trying to add if/then logic to the OpenCV face detection code such that when a face is detected through a camera or webcam, an image of the detected face is saved to a predetermined file path on the controller or computer, such as C:\Users\Public\Desktop.
I have looked everywhere for examples of anything that could help, but I can't find anything.
If anyone knows of any code, research articles, websites, or people I could contact, that would be very helpful.
Thanks
The function call that detects the faces will most probably fill a parameter of type std::vector&lt;cv::Rect&gt; with bounding rectangles. Use the data in it to select the region of interest (ROI) when a face is detected. The selected ROI can then be saved with cv::imwrite.
These are the very basics of OpenCV and hence I am not including any code snippets along with my answer.
Based on the detected portion of the image (depending on whether you get a region or a point), define an ROI if needed. Then copy the ROI to a new image and save it to the path.
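A minimal sketch of the whole loop using the Haar cascade detector; the cascade file, camera index, and output path are placeholders to adapt:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    // The cascade file ships with OpenCV; adjust the path to your installation
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
    cv::VideoCapture cap(0);  // default webcam
    cv::Mat frame, gray;
    int saved = 0;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces);
        for (const cv::Rect& face : faces)  // crop each detected face and save it
            cv::imwrite("C:/Users/Public/Desktop/face_" + std::to_string(saved++) + ".png",
                        frame(face).clone());
    }
    return 0;
}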

Customize my own HOG Object Detection

I want to build my own object detection algorithm using HOG features. Since OpenCV has its own HOG framework for pedestrian detection, I think I can just modify some of its parameters to customize my own detector. But I have a few questions about it after reading this.
1. Preparing my own dataset:
Do I have to make all the positive and negative images the same size? Resizing an image can sometimes deform it and affect the HOG result. If not, then I have to change the HOG parameters to adapt to every image (for example, set the window size equal to the image size and generate a 3780-dimensional vector). Which one is better? (I prefer the second one.)
2. Training the SVM:
I think OpenCV uses SVMLight for training, which has been integrated into OpenCV. Can I use LIBSVM or another package (compatible with hog.setSVMDetector())?
3. The hog.detectMultiScale() function:
After doing all of the above, can I get the results (rectangles) just by calling this function?
Thanks!
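On question 3: once a detector vector has been set, detectMultiScale() does return the rectangles directly. A minimal sketch using the built-in pedestrian detector; a custom detector trained elsewhere would replace the getDefaultPeopleDetector() vector:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("input.png");
    cv::HOGDescriptor hog;  // default 64x128 detection window
    // A custom detector would be passed here as a std::vector<float> (weights + bias)
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
    std::vector<cv::Rect> found;
    hog.detectMultiScale(img, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);
    for (const cv::Rect& r : found)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);  // draw each detection
    cv::imwrite("detections.png", img);
    return 0;
}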

'creating' images effectively

I'll first tell you the problem and then I'll tell you my solution.
Problem: I have a blank white PNG image, approximately 900x900 pixels. Onto it I want to copy small 30x30-pixel images, each of which is essentially a circle in a different colour. There are 8 different circles, placed on the image depending on data values which I've created elsewhere.
Solution: I've used ImageMagick; it's supposed to be good for general-purpose image editing. I created a blank image:
Magick::Image outimage("900x900", "white");
I load all the other small 30x30-pixel images with the 'read' function.
I load the data and extract values.
I place the small circle images on the blank one using the composite command:
Magick::Image circle("some file.png");  // composite() takes an Image, not a filename string
outimage.composite(circle, pixelx, pixely, Magick::InCompositeOp);
This all works fine and the images come out the way I want them to. However, it is painfully SLOW: it takes 20 seconds to do one image, and I have 1000 of them. Surely there must be a better way to do this. I've seen other researchers simulate far more complex images far faster, so it's quite possible I took the wrong approach. Maybe I should be 'drawing' circles instead of 'pasting' them, or something. I'm quite baffled. Any input is appreciated.
I suspect that you just need a library that is capable of drawing circles on a bitmap and saving that bitmap as PNG.
For example, my Graphin library: http://code.google.com/p/graphin/
Or some such. With Graphin you can also draw one PNG on the surface of another, as in your case.
You did not give any information about the platform you are using (only "C++"), so if you are looking for a platform-independent solution, the CImg library might be worth a try:
http://cimg.sourceforge.net/
By the way, did you try drawing the circles using the ImageMagick C++ API (Magick++) instead of compositing them? I cannot believe that it is that slow.
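For reference, a sketch of the drawing approach in Magick++; it assumes the 30x30 circles have a radius of 15, and the colour and coordinates are placeholders:

#include <Magick++.h>

int main(int argc, char** argv) {
    Magick::InitializeMagick(*argv);
    Magick::Image canvas("900x900", "white");
    canvas.fillColor("red");    // placeholder colour; pick per data value
    canvas.strokeColor("red");
    // DrawableCircle takes the centre and one point on the perimeter
    const double cx = 450, cy = 450, radius = 15;
    canvas.draw(Magick::DrawableCircle(cx, cy, cx + radius, cy));
    canvas.write("out.png");
    return 0;
}

Drawing avoids decoding and blending a separate PNG file for every circle, which is likely where the compositing time goes.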

Assessing the quality of an image with respect to compression?

I have images that I am using for a computer vision task. The task is sensitive to image quality. I'd like to remove all images that are below a certain threshold, but I am unsure if there is any method/heuristic to automatically detect images that are heavily compressed via JPEG. Anyone have an idea?
Image quality assessment is a rapidly developing research field. As you don't mention being able to access the original (uncompressed) images, you are interested in no-reference image quality assessment. This is actually a pretty hard problem, but here are some points to get you started:
Since you mention JPEG, there are two major degradation features that manifest themselves in JPEG-compressed images: blocking and blurring
No-reference image quality assessment metrics typically look for those two features
Blocking is fairly easy to pick up, as it appears only on macroblock boundaries. Macroblocks are a fixed size -- 8x8 or 16x16 depending on what the image was encoded with
Blurring is a bit more difficult. It occurs because higher frequencies in the image have been attenuated (removed). You can break the image into blocks, apply the DCT (Discrete Cosine Transform) to each block, and look at the high-frequency components of the result. If the high-frequency components are lacking for a majority of blocks, then you are probably looking at a blurry image (a rough sketch of this check follows after these points)
Another approach to blur detection is to measure the average width of edges of the image. Perform Sobel edge detection on the image and then measure the distance between local minima/maxima on each side of the edge. Google for "A no-reference perceptual blur metric" by Marziliano -- it's a famous approach. "No Reference Block Based Blur Detection" by Debing is a more recent paper
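A rough sketch of the block-DCT check in OpenCV; the 8x8 block size matches JPEG, but the energy threshold below is an assumption you would calibrate on your own data:

#include <opencv2/opencv.hpp>

// Fraction of 8x8 blocks whose high-frequency DCT energy is negligible;
// values near 1.0 suggest a blurry (or heavily compressed) image.
double blurryBlockFraction(const cv::Mat& gray) {
    cv::Mat f;
    gray.convertTo(f, CV_32F);
    int blurry = 0, total = 0;
    for (int y = 0; y + 8 <= f.rows; y += 8)
        for (int x = 0; x + 8 <= f.cols; x += 8) {
            cv::Mat block = f(cv::Rect(x, y, 8, 8)).clone(), coeffs;
            cv::dct(block, coeffs);
            // Sum the magnitudes in the lower-right (high-frequency) quadrant
            double high = cv::sum(cv::abs(coeffs(cv::Rect(4, 4, 4, 4))))[0];
            ++total;
            if (high < 1.0) ++blurry;  // threshold is a guess; calibrate it
        }
    return total > 0 ? static_cast<double>(blurry) / total : 0.0;
}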
Regardless of what metric you use, think about how you will deal with false positives/negatives. As opposed to simple thresholding, I'd use the metric result to sort the images and then snip the end of the list that looks like it contains only blurry images.
Your task will be a lot simpler if your image set contains fairly similar content (e.g. faces only). This is because image quality assessment metrics can often be influenced by image content, unfortunately.
Google Scholar is truly your friend here. I wish I could give you a concrete solution, but I don't have one yet -- if I did, I'd be a very successful Masters student.
UPDATE:
Just thought of another idea: for each image, re-compress the image with JPEG and examine the change in file size before and after re-compression. If the file size after re-compression is significantly smaller than before, then it's likely the image is not heavily compressed, because it had some significant detail that was removed by re-compression. Otherwise (very little difference or file size after re-compression is greater) it is likely that the image was heavily compressed.
The quality setting used during re-compression will allow you to determine what exactly "heavily compressed" means.
If you're on Linux, this shouldn't be too hard to implement using bash and ImageMagick's convert utility.
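The same idea can also be done in-process with OpenCV rather than shelling out to convert. A sketch, where the function name and the quality level of 75 are assumptions:

#include <opencv2/opencv.hpp>
#include <fstream>
#include <string>
#include <vector>

// Ratio of re-compressed size to original size; values near (or above) 1.0
// suggest the image was already heavily compressed.
double recompressionRatio(const std::string& path, int quality = 75) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    const double originalSize = static_cast<double>(file.tellg());
    cv::Mat img = cv::imread(path);
    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf, {cv::IMWRITE_JPEG_QUALITY, quality});
    return static_cast<double>(buf.size()) / originalSize;
}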
You can try other variations of this approach:
Instead of JPEG compression, try another form of degradation, such as Gaussian blurring
Instead of merely comparing file sizes, try a full-reference metric such as SSIM; there's an OpenCV implementation freely available (see the sketch below). Other implementations (e.g. Matlab, C#) also exist, so look around.
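For example, with the quality module from opencv_contrib (an assumption: OpenCV 4.x built with that module), comparing an image against its own re-compressed version might look like this:

#include <opencv2/opencv.hpp>
#include <opencv2/quality.hpp>  // "quality" module from opencv_contrib
#include <iostream>
#include <vector>

int main() {
    cv::Mat img = cv::imread("candidate.jpg");
    // Re-compress in memory and decode again (quality 75 is an assumption)
    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf, {cv::IMWRITE_JPEG_QUALITY, 75});
    cv::Mat recompressed = cv::imdecode(buf, cv::IMREAD_COLOR);
    // Per-channel SSIM; values near 1.0 mean re-compression changed little,
    // i.e. the image probably had little high-frequency detail left to lose
    cv::Scalar ssim = cv::quality::QualitySSIM::compute(img, recompressed, cv::noArray());
    std::cout << "SSIM: " << ssim << std::endl;
    return 0;
}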
Let me know how you go.
I had many photos shot of an ancient book (so similar layout, two pages per image), but some were so blurred that the text could not be read. I searched for a ready-made batch script to find the most blurred ones but found nothing useful, so I took another script from the net for assessing the blur level of a single image (based on ImageMagick, but no longer working; I couldn't retrieve the author for the credits!), tweaked it, and automated it over a whole folder. I uploaded it here:
https://gist.github.com/888239
hoping it will be useful for someone else. It works on a Linux system and uses ImageMagick (and some commonly installed command-line tools, such as gawk, sort, grep, etc.).
One simple heuristic could be to check whether width * height * color depth &lt; sigma * file size. You would have to determine a good value for sigma, of course; sigma would depend on the expected entropy of the images you are looking at.
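A sketch of that heuristic; the function name is made up, and sigma must be tuned as noted:

#include <opencv2/opencv.hpp>
#include <fstream>
#include <string>

// Returns true if the image looks lightly compressed: its raw size
// (width * height * bytes per pixel) is below sigma times the file size.
bool passesSizeHeuristic(const std::string& path, double sigma) {
    cv::Mat img = cv::imread(path);
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    const double fileSize = static_cast<double>(file.tellg());
    const double rawSize = static_cast<double>(img.total() * img.elemSize());
    return rawSize < sigma * fileSize;
}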