creating 3D histogram using 2D data (OpenCV?) - c++

I have two data sets, 1 and 2, both containing 2D data.
For example,
Data1 :
(x1, y1), (x2, y2), (x3, y3) .... (xn, yn)
Data2 : (x1', y1'), (x2', y2'), .... (xm', ym')
I'd like to compare them using histograms and, if possible, the Earth Mover's Distance (EMD).
Because I have 2D data, the data should be placed on a 2D map, and the height of the histogram over the 2D map should give the frequency of the data, so I guess it should be a 3D histogram. Even though I succeeded in creating an example that draws histograms and compares them using 1D data, I failed when trying to change it to 2D data. How does this work?
For example,
calcHist(&greyImg, 1, channel_numbers, Mat(), histogram1, 1, &number_bins, &channel_ranges);
This code turns the image's grayscale intensity (1D data) into a histogram, but I could not adapt it to 2D data.
My idea is this:
I create cv::Mat Data1Mat, Data2Mat; (the Mat size is set to the maximum values of x and y)
Then I push Data1's x values into Data1Mat's first channel and its y values into the second channel (same for Data2 and Data2Mat).
For example, for (x1, y1), set
Data1Mat.at<Vec2f>(x1, y1)[0] = x1; Data1Mat.at<Vec2f>(x1, y1)[1] = y1;
like this. Then I create histograms of them and compare. Is my thinking correct?

I think it is more correct to say: histogram of 1D data, or histogram of 2D data.
You need a histogram of 2D data.
A 1D histogram counts how many scalar values fall into each bin interval.
A 2D histogram divides the plane into regions and counts how many 2D points fall into each region.
Here is a computed H-S 2D histogram for an image: Calculate HSV histogram of a coloured image is it different from H-S histogram?
You have nearly the same problem; just use your x instead of H and your y instead of S. A sketch is below.
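For illustration, here is a minimal sketch of the two-channel packing plus calcHist, and of converting the result into the (weight, x, y) signature rows that cv::EMD expects. The 32x32 bins and the [0, maxX) x [0, maxY) ranges are assumptions you would adapt to your data.

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: build a 2D histogram from a list of 2D points.
cv::Mat hist2D(const std::vector<cv::Point2f>& pts, float maxX, float maxY)
{
    // Pack the points into a 2-channel float Mat: channel 0 = x, channel 1 = y.
    cv::Mat data((int)pts.size(), 1, CV_32FC2);
    for (int i = 0; i < (int)pts.size(); ++i)
        data.at<cv::Vec2f>(i, 0) = cv::Vec2f(pts[i].x, pts[i].y);

    int channels[] = { 0, 1 };
    int histSize[] = { 32, 32 };
    float rangeX[] = { 0.0f, maxX }, rangeY[] = { 0.0f, maxY };
    const float* ranges[] = { rangeX, rangeY };

    cv::Mat hist;
    cv::calcHist(&data, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    return hist; // 32x32 CV_32F, each entry = number of points in that bin
}

// Sketch: convert such a histogram into the signature format cv::EMD expects,
// one row per non-empty bin: (weight, binX, binY).
cv::Mat histToSignature(const cv::Mat& hist)
{
    std::vector<cv::Vec3f> rows;
    for (int i = 0; i < hist.rows; ++i)
        for (int j = 0; j < hist.cols; ++j)
            if (hist.at<float>(i, j) > 0)
                rows.push_back(cv::Vec3f(hist.at<float>(i, j), (float)i, (float)j));
    return cv::Mat(rows).reshape(1).clone(); // N x 3, CV_32F
}

// Usage: float d = cv::EMD(histToSignature(h1), histToSignature(h2), cv::DIST_L2);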

Related

Creation of a compression algorithm so I can access the data to interpolate later?

The following is a more elaborate description of what I wish to achieve, and how far I have got:
A 3D grid, about 30×30×30, or a 3D array, so I can define a function f: R^3 -> R, f(x, y, z) = v, where x, y, z ∈ [0, N] are float values. v is equal to the value stored in the array if x, y and z are integer values, or to the trilinear interpolation of the closest points in the array otherwise; more precisely, where N_i is the number of points minus 1 in dimension i of the array, x ∈ [0, N_x], y ∈ [0, N_y] and z ∈ [0, N_z]. So for f(0.5, 0.5, 0.5) the result would be the trilinear interpolation of the points (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0) and (1,1,1). Now imagine a 1D array: positions between the integer indices do not exist, but one can make up a value there by interpolating between the closest actual values, e.g. for position 0.3864 between positions 0 and 1. This extends to 2D, where you need the 4 closest points, e.g. the values at (0,0), (0,1), (1,0) and (1,1), combined by bilinear interpolation, and in the end to any number of dimensions: you will need exactly 2^n points, where n is the number of dimensions that have a non-integer coordinate.
Simplified:
I have a 3D grid of floats whose values I wish to access in parallel, by the thousands, at random positions. I want to turn this memory-bound process into a CPU-bound one: flatten the 3D array and approximate it with a finite Fourier expansion or something similar, then calculate the values at the required positions from that flattened representation and use them for the trilinear interpolation. The original code simply accesses the values by their array indices, one by one; since the values are accessed randomly and lie far apart in memory, I'm looking for a suitable strategy to access (or, if possible, calculate) the values based on an index.
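Not an answer to the Fourier part, but for reference, a minimal sketch of the trilinear interpolation itself over a flattened 3D array. The x-fastest memory layout, the clamping at the borders, and grid sizes of at least 2 per dimension are assumptions.

#include <vector>
#include <algorithm> // std::clamp (C++17), std::min

struct Grid3D {
    int nx, ny, nz;          // points per dimension, each assumed >= 2
    std::vector<float> data; // flattened, index = (z*ny + y)*nx + x

    float at(int x, int y, int z) const {
        return data[(static_cast<size_t>(z) * ny + y) * nx + x];
    }

    float sample(float x, float y, float z) const {
        // Clamp to the valid range [0, N_i].
        x = std::clamp(x, 0.0f, float(nx - 1));
        y = std::clamp(y, 0.0f, float(ny - 1));
        z = std::clamp(z, 0.0f, float(nz - 1));
        // Lower corner of the cell containing (x, y, z), and fractional parts.
        int x0 = std::min(int(x), nx - 2);
        int y0 = std::min(int(y), ny - 2);
        int z0 = std::min(int(z), nz - 2);
        float fx = x - x0, fy = y - y0, fz = z - z0;

        // Interpolate along x, then y, then z: 8 corner reads -> 1 value.
        auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
        float c00 = lerp(at(x0, y0, z0),         at(x0 + 1, y0, z0),         fx);
        float c10 = lerp(at(x0, y0 + 1, z0),     at(x0 + 1, y0 + 1, z0),     fx);
        float c01 = lerp(at(x0, y0, z0 + 1),     at(x0 + 1, y0, z0 + 1),     fx);
        float c11 = lerp(at(x0, y0 + 1, z0 + 1), at(x0 + 1, y0 + 1, z0 + 1), fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }
};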

8-neighbourhood in vector from nonZeroCoordinates

From an 8UC1 edge image obtained from the Canny operator, I want to go through all white pixels and find their 8-neighbourhoods.
As a first step, I apply
findNonZero(edgesFromCanny, nonZeroCoordinates);
to obtain only the white pixels and reduce computation time.
The coordinates of those pixels in nonZeroCoordinates are ordered in a row-wise manner, so p(x=100, y=1) can be far away from p(x=100, y=2) in the nonZeroCoordinates Mat (column-wise neighbours), while p(x=100, y=1) and p(x=101, y=1) are adjacent in nonZeroCoordinates (if they are edges).
How can I quickly retrieve the 8-neighbourhood of p(x=100, y=1), taking into account that it is an edge pixel, too?
I found a solution using kNN, but I am not sure whether it takes too much computation, or whether there is a simpler one:
vector<Point2f> edgesVec; // insert all 2D edge points into this vector
flann::KDTreeIndexParams indexParams;
flann::Index kdtree(Mat(edgesVec).reshape(1), indexParams);
vector<float> query;
query.push_back(i); // x coordinate of the point we need neighbours for
query.push_back(j); // y coordinate of the point we need neighbours for
vector<int> indices;
vector<float> dists;
kdtree.radiusSearch(query, indices, dists, 1.5, 8); // at most 8 results within the radius
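A possibly simpler alternative (my sketch, not from the question): since the points lie on a pixel grid, you can keep the Canny image itself and test the 8 neighbouring pixels directly, which is O(1) per pixel and needs no KD-tree.

#include <opencv2/opencv.hpp>
#include <vector>

// Return the edge pixels among the 8 neighbours of p in a binary edge image.
std::vector<cv::Point> edgeNeighbours8(const cv::Mat& edges, cv::Point p)
{
    std::vector<cv::Point> result;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue; // skip the pixel itself
            cv::Point q(p.x + dx, p.y + dy);
            if (q.x >= 0 && q.y >= 0 && q.x < edges.cols && q.y < edges.rows
                && edges.at<uchar>(q) != 0)   // neighbour is an edge pixel too
                result.push_back(q);
        }
    return result;
}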

Compare intensity pixel value Vec3b in OpenCV

I have a 3-channel Mat image; its type is CV_8UC3.
I want to compare, in a loop, the intensity value of a pixel with that of its neighbours and then set 0 or 1 depending on whether the neighbour is greater or not.
I can get the intensity calling Img.at<Vec3b>(x,y).
But my question is: how can I compare two Vec3b?
Should I compare the pixel values for every channel (B, G and R, i.e. Vec3b[0], Vec3b[1] and Vec3b[2]) and then merge the three per-channel results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values you need to project the 3-dimensional RGB space onto a plane or axis.
Of course, there are many possibilities to do this, but an easy way would be to use the HSV color space. The hue (H), however, is not appropriate as a linear order function because it is circular (i.e. the value 1.0 is identical with 0.0, so you cannot decide if 0.5 > 0.0 or 0.5 < 0.0). However, the saturation (S) or the value (V) are appropriate projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also any combination of S and V would be a valid projection function, e.g. S+V.
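A minimal sketch of this projection idea, assuming BGR input as is usual in OpenCV: convert the two pixels to HSV and compare the V channel (use index 1 instead of 2 to compare by S).

#include <opencv2/opencv.hpp>

// Sketch: order two BGR pixels by their HSV value (V) channel.
bool lessByValue(const cv::Vec3b& a, const cv::Vec3b& b)
{
    cv::Mat pix(1, 2, CV_8UC3);
    pix.at<cv::Vec3b>(0, 0) = a;
    pix.at<cv::Vec3b>(0, 1) = b;
    cv::Mat hsv;
    cv::cvtColor(pix, hsv, cv::COLOR_BGR2HSV);
    return hsv.at<cv::Vec3b>(0, 0)[2] < hsv.at<cv::Vec3b>(0, 1)[2]; // compare V
}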
As far as I understand, you want a measure of the distance/similarity between two Vec3b pixels. This reduces to the general problem of finding the distance between two vectors in an n-dimensional space.
One of the famous measures (and I think this is what you're asking for), is the Euclidean distance.
If you are using Opencv then you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, CV_L2);
You can refer to this for reading about cv::norm and its options.
Edit: If you are doing this to measure color similarity, it is recommended to use the LAB color space, since Euclidean distance in LAB space has been shown to be a good approximation of the human perception of color differences.
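For example (a sketch, assuming 8-bit BGR input), the LAB variant could look like this:

#include <opencv2/opencv.hpp>
#include <cmath>

// Sketch: Euclidean distance between two pixels in LAB space.
double labDistance(const cv::Vec3b& a, const cv::Vec3b& b)
{
    cv::Mat pix(1, 2, CV_8UC3);
    pix.at<cv::Vec3b>(0, 0) = a;
    pix.at<cv::Vec3b>(0, 1) = b;
    cv::Mat lab;
    cv::cvtColor(pix, lab, cv::COLOR_BGR2Lab);
    cv::Vec3b p = lab.at<cv::Vec3b>(0, 0), q = lab.at<cv::Vec3b>(0, 1);
    double dL = double(p[0]) - q[0], dA = double(p[1]) - q[1], dB = double(p[2]) - q[2];
    return std::sqrt(dL * dL + dA * dA + dB * dB);
}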
Edit 2: I see what you mean; for that you can get the magnitude of each vector and then compare the magnitudes, something like this:
double a_magnitude = cv::norm(a, CV_L2);
double b_magnitude = cv::norm(b, CV_L2);
if (a_magnitude > b_magnitude) {
    // do something
} else {
    // do something else
}

Zero out portion of multidim numpy array

I have a numpy array with dimensions (200, 200, 3). It is an RGB image.
I also have the (xmin,ymin,xmax,ymax) coordinates of a region of this image that I would like to set to zero. This region should be zero in all three channels.
I can of course solve this with a loop, but that would be wasteful.
Is there a simple way to mask the array using numpy?
Use array slicing. If xmin, xmax, ymin and ymax are the indices of the area of the array you want to set to zero, then:
a[xmin:xmax, ymin:ymax, :] = 0
Note that for an image the first axis is conventionally the row, i.e. y, so if your coordinates follow the usual image convention you would write a[ymin:ymax, xmin:xmax, :] = 0 instead.

How to detect image gradient or normal using OpenCV

I wanted to detect an ellipse in an image. Since I was learning Mathematica at that time, I asked a question here and got a satisfactory result from the answer below, which used the RANSAC algorithm to detect the ellipse.
However, recently I needed to port it to OpenCV, but some functions only exist in Mathematica. One of the key functions is "GradientOrientationFilter".
Since there are five parameters for a general ellipse, I need to sample five points to determine one. However, more sampling points means a lower chance of a good guess, which leads to a lower success rate in ellipse detection. Therefore, the Mathematica answer adds another condition: the gradient of the image must be parallel to the gradient of the ellipse equation. This way we only need three points to determine an ellipse using least squares, and the result is quite good.
However, when I try to find the image gradient using the Sobel or Scharr operator in OpenCV, it is not accurate enough, and it always leads to a bad result.
How can I calculate the gradient or the tangent of an image accurately? Thanks!
Result with gradient, three points
Result without gradient, five points
----------updated----------
I did some edge detection and a median blur beforehand and drew the result on the edge image. My original test image is like this:
In general, my final goal is to detect the ellipse in a scene or on an object. Something like this:
That's why I choose to use RANSAC to fit the ellipse from edge points.
As for your final goal, you may try findContours and fitEllipse in OpenCV.
The pseudo code will be:
1) some image processing
2) find all contours
3) fit each contour with fitEllipse
Here is part of the code I used before:
[... image process ....you get a bwimage ]
vector<vector<Point> > contours;
findContours(bwimage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for (size_t i = 0; i < contours.size(); i++)
{
    size_t count = contours[i].size();
    if (count < 6) // fitEllipse needs at least 5 points
        continue;
    Mat pointsf;
    Mat(contours[i]).convertTo(pointsf, CV_32F);
    RotatedRect box = fitEllipse(pointsf);
    /* You can put some limits on size and aspect ratio here */
    if (box.size.width > 20 && box.size.height > 20 &&
        box.size.width < 80 && box.size.height < 80)
    {
        // reject overly elongated boxes
        if (MAX(box.size.width, box.size.height) > MIN(box.size.width, box.size.height) * 30)
            continue;
        //drawContours(SrcImage, contours, (int)i, Scalar::all(255), 1, 8);
        ellipse(SrcImage, box, Scalar(0, 0, 255), 1, CV_AA);
        ellipse(SrcImage, box.center, box.size * 0.5f, box.angle, 0, 360, Scalar(200, 255, 255), 1, CV_AA);
    }
}
imshow("result", SrcImage);
If you focus on ellipses (no other shapes), you can treat the values of the pixels of the ellipse as the masses of points.
Then you can calculate the moments of inertia Ixx, Iyy, Ixy to find the angle theta which rotates a general ellipse back to the canonical form (X-Xc)^2/a^2 + (Y-Yc)^2/b^2 = 1.
Then you can find Xc and Yc from the center of mass.
Then you can find a and b from the extents (min and max) of X and Y.
--------------- update -----------
This method applies to filled ellipses too.
More than one ellipse in a single image will fail unless you segment them first.
Let me explain more.
I will use C to represent cos(theta) and S to represent sin(theta).
After rotating to the canonical form, the new coordinates are [eq0] X = x*C - y*S and Y = x*S + y*C, where x and y are the original positions.
The rotation angle is chosen so that IYY is minimal.
[eq1]
IYY = Sum(m*Y*Y) = Sum{m*(x*S + y*C)^2} = Sum{m*(x^2*S^2 + y^2*C^2 + 2*x*y*S*C)} = Ixx*S^2 + Iyy*C^2 + 2*Ixy*S*C
with Ixx = Sum(m*x^2), Iyy = Sum(m*y^2) and Ixy = Sum(m*x*y).
For minimal IYY, d(IYY)/d(theta) = 0, that is
2*Ixx*S*C - 2*Iyy*S*C + 2*Ixy*(C^2 - S^2) = 0
(Ixx - Iyy)/Ixy = (S^2 - C^2)/(S*C) = S/C - C/S = Z - 1/Z, where Z = S/C = tan(theta)
While programming, the LHS is just a number, let's call it N:
Z^2 - N*Z - 1 = 0
So there are two roots Z1 and Z2, hence two thetas; one minimizes IYY and the other maximizes it.
----------- pseudo code --------
Compute Ixx, Iyy, Ixy for the hollow or filled ellipse.
Solve Z^2 - N*Z - 1 = 0 and compute theta1 = atan(Z1) and theta2 = atan(Z2).
Put these two thetas into eq1 and keep the one that gives the smaller IYY; that is your theta.
Go back to the non-zero pixels and transform them to the new X and Y using the theta you found.
Find the center of mass Xc, Yc and the extents (min and max of X and Y), e.g. by sort().
-------------- by hand -----------
If you need the original equation of the ellipse, just substitute [eq0] into the canonical form.
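For what it's worth, here is a sketch of how the moment computation could be done with OpenCV's cv::moments (my substitution, not part of the original answer); the principal-axis formula from the central moments is equivalent to the IYY minimization above.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

// Sketch: centroid and orientation of a single (hollow or filled) ellipse
// in a greyscale image, treating pixel intensity as mass.
void ellipseOrientation(const cv::Mat& bw /* 8UC1, ellipse on black background */)
{
    cv::Moments m = cv::moments(bw, /*binaryImage=*/false);
    double xc = m.m10 / m.m00, yc = m.m01 / m.m00; // center of mass (Xc, Yc)
    // Principal-axis angle from the central moments mu20, mu02, mu11.
    double theta = 0.5 * std::atan2(2.0 * m.mu11, m.mu20 - m.mu02);
    std::cout << "center (" << xc << ", " << yc << "), theta " << theta << " rad\n";
    // a and b then follow from the extents of the points after rotating by -theta.
}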
You're using terms in an unusual way.
Normally for images, the term "gradient" is interpreted as if the image were a mathematical function f(x, y). This gives us a (df/dx, df/dy) vector at each point.
Yet you're looking at the image as if it were a function y = f(x), whose gradient would be df(x)/dx.
Now, if you look at your image, you'll see that the two interpretations are definitely related. Your ellipse is drawn as a set of contrasting pixels, and as a result there are two sharp gradients in the image, the inner and the outer. These of course correspond to the two normal vectors and therefore point in opposite directions.
Also note that your image has pixels, so the gradient is pixelated too. The way your ellipse is drawn, with a single-pixel width, means that the local gradient takes on only values that are multiples of 45 degrees:
▄▄ ▄▀ ▌ ▀▄
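For completeness, a sketch of the (df/dx, df/dy) interpretation with Sobel; cartToPolar gives the per-pixel gradient magnitude and orientation, and on a one-pixel-wide contour the orientations are dominated by the grid directions described above.

#include <opencv2/opencv.hpp>

// Sketch: per-pixel gradient vector (df/dx, df/dy) and its orientation.
void gradientField(const cv::Mat& grey, cv::Mat& mag, cv::Mat& angle)
{
    cv::Mat gx, gy;
    cv::Sobel(grey, gx, CV_32F, 1, 0, 3); // df/dx
    cv::Sobel(grey, gy, CV_32F, 0, 1, 3); // df/dy
    cv::cartToPolar(gx, gy, mag, angle, /*angleInDegrees=*/true);
}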