OpenCV Histogram to Image Conversion - C++

I've tried out some tutorials on converting a grayscale image to a histogram and then comparing the histograms. From this I've obtained the value returned by the compare function as a double, like this.
My problem now is: how can I visualize the "non-match/error" detected between the images? Can I obtain the coordinates of those non-matching pixels and draw a rectangle or circle at those coordinates?
Or is there any algorithm you can suggest for this?

You can't do that from histogram comparison directly. As stated in the documentation,
Use the function compareHist to get a numerical parameter that express how well two histograms match with each other.
This measure is just a distance value which tells you how similar the two histograms are (or how similar the two images are in terms of color distribution).
However, you can use histogram backprojection to visualize how well each pixel in image A
fits the color distribution (histogram) of image B. Take a look at this OpenCV example.
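For illustration, a minimal sketch of that idea (file names, histogram size, and threshold value are placeholders, not from the original post): back-project image B's histogram onto image A and mark the regions of A whose grey values are rare in B with rectangles, which covers the "draw a rectangle at that coordinate" part of the question.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat a = cv::imread("imageA.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("imageB.png", cv::IMREAD_GRAYSCALE);

    // Histogram of image B over the full 8-bit intensity range.
    int histSize = 64;
    float range[] = {0, 256};
    const float* ranges[] = {range};
    int channels[] = {0};
    cv::Mat histB;
    cv::calcHist(&b, 1, channels, cv::Mat(), histB, 1, &histSize, ranges);
    cv::normalize(histB, histB, 0, 255, cv::NORM_MINMAX);

    // Back-project B's histogram onto A: low values mark pixels of A
    // whose intensity is uncommon in B.
    cv::Mat backProj;
    cv::calcBackProject(&a, 1, channels, histB, backProj, ranges);

    // Threshold the poorly matching pixels and draw rectangles around them.
    cv::Mat mismatch;
    cv::threshold(backProj, mismatch, 10, 255, cv::THRESH_BINARY_INV);
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mismatch, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); ++i)
        cv::rectangle(a, cv::boundingRect(contours[i]), cv::Scalar(255), 2);

    cv::imwrite("marked.png", a);
    return 0;
}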

Related

Is camera pose estimation with SOLVEPNP_EPNP sensitive to outliers and can this be rectified?

I have to do an assignment in which I should compare the function solvePnP() used with SOLVEPNP_EPNP against solvePnPRansac() used with SOLVEPNP_ITERATIVE. The goal is to calculate a warped image from an input image.
To do this, I get an RGB input image, the same image as a 16-bit depth image, the camera intrinsics, and a list of feature match points between the given image and the wanted resulting warped image (which is the same scene from a different perspective).
This is how I went about this task so far:
Calculate a list of 3D object points from the depth image and the intrinsics which correspond to the list of feature matches.
Use solvePnP() and solvePnPRansac() with the respective algorithms, where the calculated 3D object points and the feature match points of the resulting image are the inputs. As a result I get a rotation and a translation vector for both methods (a sketch of these two steps is shown after this list).
As a sanity check, I calculate the average reprojection error using projectPoints() for all feature match points and compare the resulting projected points to the feature match points of the resulting image.
Finally, I calculate 3D object points for each pixel of the input image and again project them using the rotation and translation vector from before. Each projected point gets the color from the corresponding pixel in the input image, resulting in the final warped image.
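As a rough, hedged sketch of steps 2 and 3 (function and variable names are hypothetical; objectPoints and imagePoints are assumed to be filled from the depth image and the match list):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

// objectPoints: 3D points computed from the depth image and the intrinsics.
// imagePoints: the matched 2D feature points in the target view.
void comparePnPMethods(const std::vector<cv::Point3f>& objectPoints,
                       const std::vector<cv::Point2f>& imagePoints,
                       const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    cv::Mat rvecEpnp, tvecEpnp, rvecRansac, tvecRansac;

    // Plain EPnP: every correspondence, including outliers, enters the estimate.
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                 rvecEpnp, tvecEpnp, false, cv::SOLVEPNP_EPNP);

    // RANSAC wrapper around the iterative solver: outliers are rejected.
    cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                       rvecRansac, tvecRansac, false, 100, 8.0f, 0.99,
                       cv::noArray(), cv::SOLVEPNP_ITERATIVE);

    // Sanity check: mean reprojection error of the RANSAC pose.
    std::vector<cv::Point2f> projected;
    cv::projectPoints(objectPoints, rvecRansac, tvecRansac, cameraMatrix,
                      distCoeffs, projected);
    double err = 0.0;
    for (size_t i = 0; i < projected.size(); ++i)
    {
        cv::Point2f d = imagePoints[i] - projected[i];
        err += std::sqrt(d.x * d.x + d.y * d.y);
    }
    std::cout << "mean reprojection error: " << err / projected.size() << std::endl;
}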
These are my inputs:
Using the steps described above, I get the following output with the RANSAC method:
This looks pretty much like the reference solution I have, so this should be mostly correct.
However, with the solvePnP() method using SOLVEPNP_EPNP, the resulting rotation and translation vectors look like this, which doesn't make sense at all:
================ solvePnP using SOVLEPNP_EPNP results: ===============
Rotation: [-4.3160208e+08; -4.3160208e+08; -4.3160208e+08]
Translation: [-4.3160208e+08; -4.3160208e+08; -4.3160208e+08]
The assignment sheet states that the list of feature matches contains some mismatches, so basically outliers. As far as I know, RANSAC handles outliers better, but can this be the reason for these weird results from the other method? I was expecting some anomalies, but this is completely wrong, and the resulting image is completely black since no points end up inside the image area.
Maybe someone can point me in the right direction.
OK, I was able to solve the issue. I had used float for all calculations (3D object points, matches, ...) beforehand; changing everything to double did the trick.
The warped perspective is still off and I get a rather high reprojection error, but this should be due to the nature of the algorithm itself, which doesn't handle outliers very well.
The weird thing about this is that the OpenCV documentation on solvePnP() states that vector<Point3f> and vector<Point2f> can be passed as arguments for the object points and image points, respectively.
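For illustration, the fix described above roughly amounts to switching the point containers to double precision (variable names are hypothetical):

#include <opencv2/opencv.hpp>
#include <vector>

void estimatePose(const std::vector<cv::Point3d>& objectPoints,   // was cv::Point3f
                  const std::vector<cv::Point2d>& imagePoints,    // was cv::Point2f
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                  cv::Mat& rvec, cv::Mat& tvec)
{
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                 rvec, tvec, false, cv::SOLVEPNP_EPNP);
}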

How to get the corresponding pixel color of disparity map coordinate

I'm currently working on an assignment, consisting of camera calibration, stereo-calibration and finally stereo matching. For this I am allowed to use the available OpenCV samples and tutorials and adapt them to our needs.
While the first two parts weren't much of a problem, I have a question about the stereo matching part:
We should create a colored point cloud in .ply format from the disparity map of two provided images.
I'm using this code as a template:
https://github.com/opencv/opencv/blob/master/samples/cpp/stereo_match.cpp
I get the intrinsic and extrinsic files from the first two parts of the assignment.
My question is, how to get the corresponding color for each 3D-point from the original images and the disparity map?
I'm guessing that each coordinate of the disparity map corresponds to a pixel that both input images share. But how do I get those pixel values?
EDIT: I know that the value of each element of the disparity map represents the disparity of the corresponding pixel between the left and right image. But how do I get the corresponding pixels from the coordinates of the disparity map?
Example:
My disparity value at coordinates (x, y) is 128, and 128 represents the depth. But how do I know which pixel in the original left or right image this corresponds to?
Additional Questions
I'm having further questions about StereoSGBM and which parameters make sense.
Here are my (downscaled for upload) input-images:
left:
right
Which give me these rectified images:
left
right
From this I get this disparity image:
For the disparity image: this is the best result I could achieve using blockSize=3 and numDisparities=512. However, I'm not at all sure whether those parameters make any sense. Are these values sensible?
My question is, how to get the corresponding color for each 3D-point from the original images and the disparity map?
A disparity map is nothing but the distance between matching pixels in the epipolar plane of the left and right images. This means you just need the pixel intensity to compute the disparity, which in turn implies you could do this computation on either the greyscale left/right images or any single channel of the left/right images.
I am pretty sure the disparity image you are computing operates on greyscale images obtained from the original RGB images. If you want to compute a color disparity image, you just need to extract the individual color channels of the left/right images and compute the corresponding disparity map channel. The outcome will then be a 3-channel disparity map.
Additional Questions I'm having further questions about StereoSGBM and which parameters make sense. Here are my (downscaled for upload) input-images:
There is never a good answer to this for the most general case. You need a parameter tuner for this. See https://github.com/guimeira/stereo-tuner as an example. You should be able to write your own in OpenCV pretty easily if you want.
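As a starting point, a minimal sketch of a typical StereoSGBM setup (the values are common defaults, not tuned for these particular images; the P1/P2 heuristic of 8 and 32 times channels*blockSize^2 follows the OpenCV stereo_match.cpp sample):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left = cv::imread("left_rectified.png", cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right_rectified.png", cv::IMREAD_GRAYSCALE);

    int blockSize = 5;            // odd, usually between 3 and 11
    int numDisparities = 256;     // multiple of 16, depends on baseline and resolution
    int channels = 1;             // greyscale input
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
        0, numDisparities, blockSize,
        8 * channels * blockSize * blockSize,    // P1: penalty for small disparity changes
        32 * channels * blockSize * blockSize,   // P2: penalty for large disparity changes
        1, 63, 10, 100, 32, cv::StereoSGBM::MODE_SGBM_3WAY);

    cv::Mat disparity, disp8;
    sgbm->compute(left, right, disparity);                          // CV_16S, 4 fractional bits
    disparity.convertTo(disp8, CV_8U, 255.0 / (numDisparities * 16.0));
    cv::imwrite("disparity.png", disp8);
    return 0;
}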
OK, the solution to this problem is to use the projectPoints() function from OpenCV.
Basically, calculate 3D points from the disparity image, project them onto the 2D image, and use the color you hit.
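Alternatively, since the disparity map is on the same grid as the left rectified image, the color of the 3D point computed at (x, y) is simply the left rectified pixel at (x, y). A hedged sketch of that variant, using reprojectImageTo3D instead of projectPoints (file names are placeholders; the Q matrix is read the same way as in the stereo_match.cpp sample):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main()
{
    cv::Mat leftColor  = cv::imread("left_rectified.png");
    cv::Mat rightColor = cv::imread("right_rectified.png");
    cv::Mat Q;
    cv::FileStorage fs("extrinsics.yml", cv::FileStorage::READ);
    fs["Q"] >> Q;                                            // 4x4 matrix from stereoRectify

    cv::Mat leftGray, rightGray;
    cv::cvtColor(leftColor, leftGray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(rightColor, rightGray, cv::COLOR_BGR2GRAY);

    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 256, 3);
    cv::Mat disparity16, disparity;
    sgbm->compute(leftGray, rightGray, disparity16);
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);    // SGBM output has 4 fractional bits

    // One 3D point per disparity pixel, on the same grid as the left rectified image.
    cv::Mat points3d;
    cv::reprojectImageTo3D(disparity, points3d, Q, true);

    std::vector<cv::Vec3f> cloud;
    std::vector<cv::Vec3b> colors;   // BGR; reorder to RGB when writing the .ply
    for (int y = 0; y < points3d.rows; ++y)
    {
        for (int x = 0; x < points3d.cols; ++x)
        {
            cv::Vec3f p = points3d.at<cv::Vec3f>(y, x);
            if (!std::isfinite(p[2]) || std::abs(p[2]) > 1e4f)
                continue;                                    // invalid or unknown disparity
            cloud.push_back(p);
            colors.push_back(leftColor.at<cv::Vec3b>(y, x));
        }
    }
    // cloud[i] / colors[i] are the vertex positions and colors to write into the .ply file.
    return 0;
}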

Built-in function for interpolating single pixels and small blobs

Problem
Is there a built-in function for interpolating single pixels?
Given a normal image as a Mat and a Point, e.g. a sensor anomaly or an outlier, is there some function to repair this point?
Furthermore, if I have more than one connected point (let's say a blob with an area smaller than 10x10), is there a way to fix them too?
Attempts, but not really solutions
It seems that interpolation is implemented for the geometric transformations, including resizing images, and for extrapolating pixels outside of the image with borderInterpolate, but I haven't found a way to do it for single pixels or small clusters of pixels.
A solution with medianBlur, as suggested here, does not seem appropriate, as it changes the whole image.
Alternative
If there isn't a built-in function, my idea would be to look at all 8-connected surrounding pixels which are not part of the blob and calculate their mean or weighted mean. Doing this iteratively, all missing or erroneous pixels should be filled. But this method would depend on the order in which the pixels are corrected. Are there other suggestions?
Update
Here is an image to illustrate the problem. The left side shows the original image with a contour marking the pixels to fix; the right side shows the fixed pixels. I hope to find a more sophisticated algorithm to fix the pixels.
The built-in OpenCV function inpaint does the desired interpolation of the chosen pixels. Simply create a mask marking all pixels to be repaired.
See the OpenCV 3.2 documentation for inpaint (description and function reference).
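A minimal sketch of the inpaint call (the mask construction is up to your detection step; here a single pixel and a small blob are marked by hand, and the file name, coordinates, and radius are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("input.png");

    // 8-bit single-channel mask: non-zero pixels are the ones to repair.
    cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
    mask.at<uchar>(120, 200) = 255;                                               // a single bad pixel
    cv::rectangle(mask, cv::Rect(300, 150, 8, 8), cv::Scalar(255), cv::FILLED);   // a small blob

    // Repair only the masked pixels; the radius is the neighborhood considered
    // around each point (INPAINT_NS and INPAINT_TELEA are the two available methods).
    cv::Mat repaired;
    cv::inpaint(image, mask, repaired, 3, cv::INPAINT_TELEA);
    cv::imwrite("repaired.png", repaired);
    return 0;
}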

How to find whether an image is more or less homogeneous w.r.t. color (hue)?

UPDATE:
I have segmented the image into different regions. For each region, I need to know whether it is more or less homogeneous in terms of color.
What could be the possible strategies to do so?
previous:
I want to check the color variance (preferably the hue variance) of an image to find the images made up of homogeneous colors (i.e. the images which have only one or two colors).
I understand that one strategy could be to create a hue histogram and then find the count of each color, but I have several images altogether, and creating a 180-bin hue histogram for each image would be computationally expensive for the whole code.
Is there any built-in OpenCV method, or another simpler method, to find out whether the image consists of a homogeneous color only or of several colors?
Something which can calculate the variance of the hue image would also be fine. I could not find something like variance(image);
PS: I am writing the code in C++.
The variance can be computed without a histogram, as the average of the squared values minus the square of the average value. It takes a single pass over the image, with two accumulators. Choose a data type that will not overflow.
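For illustration, a minimal sketch using cv::meanStdDev, which does this single-pass computation internally (the file name and the homogeneity threshold are placeholders; a mask can be passed to restrict the computation to one segmented region):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("region.png");
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);                         // channels[0] is hue, in 0..179

    cv::Scalar mean, stddev;
    cv::meanStdDev(channels[0], mean, stddev);        // optional 4th argument: region mask
    double variance = stddev[0] * stddev[0];

    // A small variance means the region is (roughly) a single hue. Note that hue is
    // circular (0 and 179 are neighbors), so reddish regions near the wrap-around
    // can show an inflated variance.
    std::cout << "hue variance: " << variance
              << (variance < 100.0 ? "  -> homogeneous" : "  -> mixed") << std::endl;
    return 0;
}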

Apply GaussianBlur to Individual Pixels

I am writing an application in C++ using OpenCV to apply a Gaussian filter to individual pixels in an image. For example, I loop through each pixel in the image, and if it matches a particular RGB value, I want to apply the Gaussian algorithm to only those pixels so that blurring only occurs around those parts of the image.
However, I'm having trouble finding a way to do this. The GaussianBlur() function offered by the OpenCV library only allows me to blur an entire image and not simply apply the algorithm and kernel to one pixel at a time. Does anyone have any ideas on how I could achieve this (e.g. is there another method I don't know about)? I'm hoping I don't have to write the entire algorithm out myself to apply it to just a single pixel.
A friend of mine came up with a good solution:
clone the original matrix
apply GaussianBlur() to the clone
compare each pixel to the RGB value
if match, replace the original's pixel with the clone's pixel
Can't believe how simple that was.
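A minimal sketch of those four steps (the target color and tolerance are placeholders; a mask plus copyTo replaces the explicit per-pixel comparison loop):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("input.png");

    // 1 + 2: clone the original and blur the clone.
    cv::Mat blurred;
    cv::GaussianBlur(image, blurred, cv::Size(7, 7), 0);

    // 3: mask of pixels matching the target color (note OpenCV stores BGR, not RGB).
    cv::Scalar target(0, 0, 255);                       // pure red, as an example
    cv::Mat mask;
    cv::inRange(image, target - cv::Scalar(10, 10, 10),
                       target + cv::Scalar(10, 10, 10), mask);

    // 4: copy the blurred pixels back into the original only where the mask is set.
    blurred.copyTo(image, mask);
    cv::imwrite("output.png", image);
    return 0;
}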
You can code the Gaussian blur yourself if you need to apply it to only a few pixels. It is much easier than you think and only takes a few lines. It is a simple stencil operation using a Gaussian function for its kernel. All you need are the coordinates of the pixel and its neighbors.
The rest is straightforward. Here is an example of a Gaussian matrix, which you can easily code by hand or generate using a Gaussian function:
In short, it is just a weighted average of the neighboring values.
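As a hedged illustration of that weighted average, here is the common 3x3 kernel (1 2 1; 2 4 2; 1 2 1)/16 applied at a single pixel of a greyscale image (the file name and coordinates are placeholders):

#include <opencv2/opencv.hpp>

// Weighted average of the 3x3 neighborhood around (x, y), with reflected borders.
uchar gaussianAt(const cv::Mat& gray, int x, int y)
{
    static const float k[3][3] = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += k[dy + 1][dx + 1] *
                   gray.at<uchar>(cv::borderInterpolate(y + dy, gray.rows, cv::BORDER_REFLECT),
                                  cv::borderInterpolate(x + dx, gray.cols, cv::BORDER_REFLECT));
    return static_cast<uchar>(sum / 16.0f + 0.5f);      // kernel weights sum to 16
}

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    gray.at<uchar>(50, 40) = gaussianAt(gray, 40, 50);  // blur only the pixel at x=40, y=50
    return 0;
}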