cvGetHistValue_1D is deprecated. What is to be used instead? - c++

According to the latest OpenCV documentation (OpenCV 2.4.5), cvGetHistValue_1D has been deprecated from the imgproc module and is now part of the legacy module.
I would like to know what should be used instead of cvGetHistValue_1D if I do not plan to use the legacy module.
My previous code is as follows; it needs to be rewritten without the use of cvGetHistValue_1D:
CvHistogram *hist = cvCreateHist(1, &numBins, CV_HIST_ARRAY, ranges, 1);
cvClearHist(hist);
cvCalcHist(&oDepth, hist);
cvNormalizeHist(hist, 1.0f);
float *cumHist = new float[numBins];
cumHist[0] = *cvGetHistValue_1D(hist, 0);
for (int i = 1; i < numBins; i++)
{
    cumHist[i] = cumHist[i-1] + *cvGetHistValue_1D(hist, i);
    if (cumHist[i] > 0.95)
    {
        oMaxDisp = i;
        break;
    }
}

I am assuming you are interested in using the C++ API. Matrix element access is easily accomplished using cv::Mat::at().
Your code might then look like this:
cv::Mat image; // already in memory
size_t oMaxDisp = 0;
cv::Mat hist;
// Set up the histogram parameters
const int channels = 0;
const int numBins = 256;
const float rangevals[2] = {0.f, 256.f};
const float* ranges = rangevals;
cv::calcHist(&image, 1, &channels, cv::noArray(), hist, 1, &numBins, &ranges);
cv::normalize(hist, hist, 1, 0, cv::NORM_L1);
float cumHist[numBins];
float sum = 0.f;
for (size_t i = 0; i < numBins; ++i)
{
    float val = hist.at<float>(i);
    sum += val;
    cumHist[i] = sum;
    if (cumHist[i] > 0.95)
    {
        oMaxDisp = i;
        break;
    }
}
As a side note, it's a good idea not to use new unless it is necessary.
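For instance, a std::vector gives you the same contiguous storage without manual memory management (a minimal sketch, assuming the same numBins as above):
std::vector<float> cumHist(numBins, 0.f); // freed automatically; no new/delete needed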

Related

What's the equivalent of pmdGet3DCoordinates for Intel RealSense 3D camera (SR300)?

I am trying to write code for an Intel RealSense 3D camera (SR300) similar to what I previously had for a PMD camera. However, I can't find a method that does the same thing in the RealSense SDK. Is there a workaround for this, or what do you suggest?
pmdGet3DCoordinates(PMDHandle hnd, float * data, size_t size);
an example usage is:
float * cartesianDist = new float[nRows*nCols*3];
res = pmdGet3DCoordinates(hnd, cartesianDist, nCols*nRows*3*sizeof(float));
I need this function in order to create an xyzmap as in the code below:
int res = pmdGet3DCoordinates(hnd, dists, 3 * numPixels * sizeof(float));
xyzMap = cv::Mat(xyzMap.size(), xyzMap.type(), dists);
I have written the following code so far; I don't know if it makes any sense in terms of the Intel RealSense SDK. Please feel free to comment.
void SR300Camera::fillInZCoords()
{
    //int res = pmdGet3DCoordinates(hnd, dists, 3 * numPixels * sizeof(float)); //store x,y,z coordinates dists (type: float*)
    ////float * zCoords = new float[1]; //store z-Coordinates of dists in zCoords
    //xyzMap = cv::Mat(xyzMap.size(), xyzMap.type(), dists);
    Projection *projection = pp->QueryCaptureManager()->QueryDevice()->CreateProjection();
    std::vector<Point3DF32> vertices;
    vertices.resize(bufferSize.width * bufferSize.height);
    projection->QueryVertices(sample->depth, &vertices[0]);
    xyzBuffer.clear();
    for (int i = 0; i < bufferSize.width * bufferSize.height; i++) {
        xyzBuffer.push_back(cv::Point3f(vertices[i].x, vertices[i].y, vertices[i].z));
    }
    xyzMap = cv::Mat(xyzBuffer);
    projection->Release();
}
sts = projection->QueryVertices(depthMap, &pos3D[0]); is an almost exact equivalent; it does the job of converting the depth UV-map to a real-world XYZ map.
Status sts = sm->AcquireFrame(true);
if (sts < STATUS_NO_ERROR) {
    if (sts == Status::STATUS_STREAM_CONFIG_CHANGED) {
        wprintf_s(L"Stream configuration was changed, re-initializing\n");
        sm->Close();
    }
}
sample = sm->QuerySample();
PXCImage *depthMap = sample->depth;
renderd.RenderFrame(sample->depth);
PXCImage::ImageData depthImage;
depthMap->AcquireAccess(PXCImage::ACCESS_READ, &depthImage);
PXCImage::ImageInfo imgInfo = depthMap->QueryInfo();
int depth_stride = depthImage.pitches[0] / sizeof(pxcU16);
PXCProjection * projection = device->CreateProjection();
pxcU16 *dpixels = (pxcU16*)depthImage.planes[0];
unsigned int dpitch = depthImage.pitches[0] / sizeof(pxcU16);
PXCPoint3DF32 *pos3D = new PXCPoint3DF32[num_pixels];
sts = projection->QueryVertices(depthMap, &pos3D[0]);
if (sts < Status::STATUS_NO_ERROR) {
    wprintf_s(L"Projection was unsuccessful! \n");
    sm->Close();
}
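If the goal is the same xyzMap as in the PMD code, one option is to wrap the vertex buffer directly in a cv::Mat. This is only a sketch and assumes PXCPoint3DF32 is laid out as three consecutive floats and that the depth image is imgInfo.width x imgInfo.height:
cv::Mat xyzWrap(imgInfo.height, imgInfo.width, CV_32FC3, pos3D); // no copy; shares the pos3D buffer
xyzMap = xyzWrap.clone();                                        // deep copy so pos3D can be freed safely
delete[] pos3D;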

How do I pass an OpenCV Mat into a C++ Tensorflow graph?

In Tensorflow C++ I can load an image file into the graph using
tensorflow::Node* file_reader = tensorflow::ops::ReadFile(tensorflow::ops::Const(IMAGE_FILE_NAME, b.opts()),b.opts().WithName(input_name));
tensorflow::Node* image_reader = tensorflow::ops::DecodePng(file_reader, b.opts().WithAttr("channels", 3).WithName("png_reader"));
tensorflow::Node* float_caster = tensorflow::ops::Cast(image_reader, tensorflow::DT_FLOAT, b.opts().WithName("float_caster"));
tensorflow::Node* dims_expander = tensorflow::ops::ExpandDims(float_caster, tensorflow::ops::Const(0, b.opts()), b.opts());
tensorflow::Node* resized = tensorflow::ops::ResizeBilinear(dims_expander, tensorflow::ops::Const({input_height, input_width},b.opts().WithName("size")),b.opts());
For an embedded application I would like to instead pass an OpenCV Mat into this graph.
How would I convert the Mat to a tensor that could be used as input to tensorflow::ops::Cast or tensorflow::ops::ExpandDims?
It's not directly from a CvMat, but you can see an example of how to initialize a Tensor from an in-memory array in the TensorFlow Android example:
https://github.com/tensorflow/tensorflow/blob/0.6.0/tensorflow/examples/android/jni/tensorflow_jni.cc#L173
You would start off by creating a new tensorflow::Tensor object, with something like this (all code untested):
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
tensorflow::TensorShape({1, height, width, depth}));
This creates a Tensor object with float values, a batch size of 1, a size of width x height, and depth channels. For example, a 128-wide by 64-high image with 3 channels would pass in a shape of {1, 64, 128, 3}. The batch size is only needed when you pass in multiple images in a single call; for simple uses you can leave it as 1.
Then you would get the underlying array behind the tensor using a line like this:
auto input_tensor_mapped = input_tensor.tensor<float, 4>();
The input_tensor_mapped object is an interface to the data in your newly-created tensor, and you can then copy your own data into it. Here I'm assuming you've set source_data as a pointer to your source data, for example:
const float* source_data = some_structure.imageData;
You can then loop through your data and copy it over:
for (int y = 0; y < height; ++y) {
    const float* source_row = source_data + (y * width * depth);
    for (int x = 0; x < width; ++x) {
        const float* source_pixel = source_row + (x * depth);
        for (int c = 0; c < depth; ++c) {
            const float* source_value = source_pixel + c;
            input_tensor_mapped(0, y, x, c) = *source_value;
        }
    }
}
There are obvious opportunities to optimize this naive approach, and I don't have sample code on hand to show how to deal with the OpenCV side of getting the source data, but hopefully this is helpful to get you started.
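One way to get source_data from a cv::Mat is to convert the image to a continuous float matrix first. This is an untested sketch, and the variable names (frame, floatFrame) are placeholders, not part of the answer above:
cv::Mat frame = cv::imread("input.png");                          // 8-bit BGR image
cv::Mat floatFrame;
frame.convertTo(floatFrame, CV_32FC3);                            // convert to float
if (!floatFrame.isContinuous()) floatFrame = floatFrame.clone();  // a flat pointer needs contiguous rows
const float* source_data = reinterpret_cast<const float*>(floatFrame.data);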
Here is a complete example that reads an image and feeds it:
Mat image;
image = imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
cv::resize(image, image, cv::Size(input_height, input_width), 0, 0, CV_INTER_CUBIC);
int depth = 3;
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
                                tensorflow::TensorShape({1, image.rows, image.cols, depth}));
auto input_tensor_mapped = input_tensor.tensor<float, 4>(); // view onto the tensor's data
for (int y = 0; y < image.rows; y++) {
    for (int x = 0; x < image.cols; x++) {
        Vec3b pixel = image.at<Vec3b>(y, x);
        input_tensor_mapped(0, y, x, 0) = pixel.val[2]; // R
        input_tensor_mapped(0, y, x, 1) = pixel.val[1]; // G
        input_tensor_mapped(0, y, x, 2) = pixel.val[0]; // B
    }
}
auto result = Sub(root.WithOpName("subtract_mean"), input_tensor, {input_mean});
ClientSession session(root);
TF_CHECK_OK(session.Run({result}, out_tensors));
I tried to run the Inception model on an OpenCV Mat, and the following code worked for me: https://gist.github.com/kyrs/9adf86366e9e4f04addb. There are some issues with the integration of OpenCV and TensorFlow, though: the code worked without any issue for .png files but failed to load .jpg and .jpeg. You can follow this for more info: https://github.com/tensorflow/tensorflow/issues/1924
Tensor convertMatToTensor(Mat &input)
{
    int height = input.rows;
    int width = input.cols;
    int depth = input.channels();
    Tensor imgTensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({height, width, depth}));
    float* p = imgTensor.flat<float>().data();
    Mat outputImg(height, width, CV_32FC3, p);
    input.convertTo(outputImg, CV_32FC3);
    return imgTensor;
}
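A minimal usage sketch for the function above (note that the returned tensor has shape {height, width, depth} with no batch dimension, so you may still need an ExpandDims before feeding a graph that expects one):
Mat img = imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
Tensor t = convertMatToTensor(img);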

clustering image segments in opencv

I am working on motion detection with non-static camera using opencv.
I am using a pretty basic background subtraction and thresholding approach to get a broad sense of all that's moving in a sample video. After thresholding, I extract all separable "patches" of white pixels, store them as independent components, and color them randomly with red, green, or blue. The image below shows this for a football video where all such components are visible.
I create rectangles over these detected components and I get this image:
So I can see the challenge here. I want to cluster all the "similar" and close-by components into a single entity so that the rectangles in the output image show a player moving as a whole (and not his independent limbs). I tried K-means clustering, but since I would ideally not know the number of moving entities, I could not make any progress.
Please guide me on how I can do this. Thanks
This problem can be solved almost perfectly by the DBSCAN clustering algorithm. Below, I provide the implementation and a result image. A gray blob means an outlier, or noise, according to DBSCAN. I simply used the boxes as input data. Initially, box centers were used in the distance function; however, for boxes that is insufficient to correctly characterize distance, so the current distance function uses the minimum distance over all 8 corners of the two boxes.
#include "opencv2/opencv.hpp"
using namespace cv;
#include <map>
#include <sstream>
template <class T>
inline std::string to_string (const T& t)
{
std::stringstream ss;
ss << t;
return ss.str();
}
class DbScan
{
public:
std::map<int, int> labels;
vector<Rect>& data;
int C;
double eps;
int mnpts;
double* dp;
//memoization table in case of complex dist functions
#define DP(i,j) dp[(data.size()*i)+j]
DbScan(vector<Rect>& _data,double _eps,int _mnpts):data(_data)
{
C=-1;
for(int i=0;i<data.size();i++)
{
labels[i]=-99;
}
eps=_eps;
mnpts=_mnpts;
}
void run()
{
dp = new double[data.size()*data.size()];
for(int i=0;i<data.size();i++)
{
for(int j=0;j<data.size();j++)
{
if(i==j)
DP(i,j)=0;
else
DP(i,j)=-1;
}
}
for(int i=0;i<data.size();i++)
{
if(!isVisited(i))
{
vector<int> neighbours = regionQuery(i);
if(neighbours.size()<mnpts)
{
labels[i]=-1;//noise
}else
{
C++;
expandCluster(i,neighbours);
}
}
}
delete [] dp;
}
void expandCluster(int p,vector<int> neighbours)
{
labels[p]=C;
for(int i=0;i<neighbours.size();i++)
{
if(!isVisited(neighbours[i]))
{
labels[neighbours[i]]=C;
vector<int> neighbours_p = regionQuery(neighbours[i]);
if (neighbours_p.size() >= mnpts)
{
expandCluster(neighbours[i],neighbours_p);
}
}
}
}
bool isVisited(int i)
{
return labels[i]!=-99;
}
vector<int> regionQuery(int p)
{
vector<int> res;
for(int i=0;i<data.size();i++)
{
if(distanceFunc(p,i)<=eps)
{
res.push_back(i);
}
}
return res;
}
double dist2d(Point2d a,Point2d b)
{
return sqrt(pow(a.x-b.x,2) + pow(a.y-b.y,2));
}
double distanceFunc(int ai,int bi)
{
if(DP(ai,bi)!=-1)
return DP(ai,bi);
Rect a = data[ai];
Rect b = data[bi];
/*
Point2d cena= Point2d(a.x+a.width/2,
a.y+a.height/2);
Point2d cenb = Point2d(b.x+b.width/2,
b.y+b.height/2);
double dist = sqrt(pow(cena.x-cenb.x,2) + pow(cena.y-cenb.y,2));
DP(ai,bi)=dist;
DP(bi,ai)=dist;*/
Point2d tla =Point2d(a.x,a.y);
Point2d tra =Point2d(a.x+a.width,a.y);
Point2d bla =Point2d(a.x,a.y+a.height);
Point2d bra =Point2d(a.x+a.width,a.y+a.height);
Point2d tlb =Point2d(b.x,b.y);
Point2d trb =Point2d(b.x+b.width,b.y);
Point2d blb =Point2d(b.x,b.y+b.height);
Point2d brb =Point2d(b.x+b.width,b.y+b.height);
double minDist = 9999999;
minDist = min(minDist,dist2d(tla,tlb));
minDist = min(minDist,dist2d(tla,trb));
minDist = min(minDist,dist2d(tla,blb));
minDist = min(minDist,dist2d(tla,brb));
minDist = min(minDist,dist2d(tra,tlb));
minDist = min(minDist,dist2d(tra,trb));
minDist = min(minDist,dist2d(tra,blb));
minDist = min(minDist,dist2d(tra,brb));
minDist = min(minDist,dist2d(bla,tlb));
minDist = min(minDist,dist2d(bla,trb));
minDist = min(minDist,dist2d(bla,blb));
minDist = min(minDist,dist2d(bla,brb));
minDist = min(minDist,dist2d(bra,tlb));
minDist = min(minDist,dist2d(bra,trb));
minDist = min(minDist,dist2d(bra,blb));
minDist = min(minDist,dist2d(bra,brb));
DP(ai,bi)=minDist;
DP(bi,ai)=minDist;
return DP(ai,bi);
}
vector<vector<Rect> > getGroups()
{
vector<vector<Rect> > ret;
for(int i=0;i<=C;i++)
{
ret.push_back(vector<Rect>());
for(int j=0;j<data.size();j++)
{
if(labels[j]==i)
{
ret[ret.size()-1].push_back(data[j]);
}
}
}
return ret;
}
};
cv::Scalar HSVtoRGBcvScalar(int H, int S, int V) {
int bH = H; // H component
int bS = S; // S component
int bV = V; // V component
double fH, fS, fV;
double fR, fG, fB;
const double double_TO_BYTE = 255.0f;
const double BYTE_TO_double = 1.0f / double_TO_BYTE;
// Convert from 8-bit integers to doubles
fH = (double)bH * BYTE_TO_double;
fS = (double)bS * BYTE_TO_double;
fV = (double)bV * BYTE_TO_double;
// Convert from HSV to RGB, using double ranges 0.0 to 1.0
int iI;
double fI, fF, p, q, t;
if( bS == 0 ) {
// achromatic (grey)
fR = fG = fB = fV;
}
else {
// If Hue == 1.0, then wrap it around the circle to 0.0
if (fH>= 1.0f)
fH = 0.0f;
fH *= 6.0; // sector 0 to 5
fI = floor( fH ); // integer part of h (0,1,2,3,4,5 or 6)
iI = (int) fH; // " " " "
fF = fH - fI; // fractional part of h (0 to 1)
p = fV * ( 1.0f - fS );
q = fV * ( 1.0f - fS * fF );
t = fV * ( 1.0f - fS * ( 1.0f - fF ) );
switch( iI ) {
case 0:
fR = fV;
fG = t;
fB = p;
break;
case 1:
fR = q;
fG = fV;
fB = p;
break;
case 2:
fR = p;
fG = fV;
fB = t;
break;
case 3:
fR = p;
fG = q;
fB = fV;
break;
case 4:
fR = t;
fG = p;
fB = fV;
break;
default: // case 5 (or 6):
fR = fV;
fG = p;
fB = q;
break;
}
}
// Convert from doubles to 8-bit integers
int bR = (int)(fR * double_TO_BYTE);
int bG = (int)(fG * double_TO_BYTE);
int bB = (int)(fB * double_TO_BYTE);
// Clip the values to make sure they fit within 8 bits.
if (bR > 255)
bR = 255;
if (bR < 0)
bR = 0;
if (bG >255)
bG = 255;
if (bG < 0)
bG = 0;
if (bB > 255)
bB = 255;
if (bB < 0)
bB = 0;
// Return the cv::Scalar in OpenCV's B, G, R channel order; use these values as you need
return cv::Scalar(bB,bG,bR);
}
int main(int argc,char** argv )
{
Mat im = imread("c:/data/football.png",0);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
findContours(im.clone(), contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Rect> boxes;
for(size_t i = 0; i < contours.size(); i++)
{
Rect r = boundingRect(contours[i]);
boxes.push_back(r);
}
DbScan dbscan(boxes,20,2);
dbscan.run();
//done, perform display
Mat grouped = Mat::zeros(im.size(),CV_8UC3);
vector<Scalar> colors;
RNG rng(3);
for(int i=0;i<=dbscan.C;i++)
{
colors.push_back(HSVtoRGBcvScalar(rng(255),255,255));
}
for(int i=0;i<dbscan.data.size();i++)
{
Scalar color;
if(dbscan.labels[i]==-1)
{
color=Scalar(128,128,128);
}else
{
int label=dbscan.labels[i];
color=colors[label];
}
putText(grouped,to_string(dbscan.labels[i]),dbscan.data[i].tl(), FONT_HERSHEY_COMPLEX,.5,color,1);
drawContours(grouped,contours,i,color,-1);
}
imshow("grouped",grouped);
imwrite("c:/data/grouped.jpg",grouped);
waitKey(0);
}
I agree with Sebastian Schmitz: you probably shouldn't be looking for clustering.
Don't expect an uninformed method such as k-means to work magic for you, particularly not one that is as crude a heuristic as k-means and that lives in an idealized mathematical world rather than in messy, real data.
You have a good understanding of what you want. Try to put this intuition into code. In your case, you seem to be looking for connected components.
Consider downsampling your image to a lower resolution, then rerunning the same process! Or running it on the lower resolution right away (to reduce compression artifacts, and improve performance). Or adding filters, such as blurring.
I'd expect best and fastest results by looking at connected components in the downsampled/filtered image.
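A rough sketch of the downsample-then-connected-components idea, assuming a binary foreground mask and OpenCV 3.x (which provides cv::connectedComponentsWithStats):
cv::Mat mask;                                    // binary foreground mask (CV_8U), e.g. the thresholded difference image
cv::Mat small, labels, stats, centroids;
cv::resize(mask, small, cv::Size(), 0.25, 0.25, cv::INTER_NEAREST); // downsample to 1/4 resolution
int n = cv::connectedComponentsWithStats(small, labels, stats, centroids);
for (int i = 1; i < n; i++) {                    // label 0 is the background
    cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT)   * 4,
                 stats.at<int>(i, cv::CC_STAT_TOP)    * 4,
                 stats.at<int>(i, cv::CC_STAT_WIDTH)  * 4,
                 stats.at<int>(i, cv::CC_STAT_HEIGHT) * 4); // map back to the original resolution
    // ...draw or collect box here
}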
I am not entirely sure if you are really looking for clustering (in the Data Mining sense).
Clustering is used to group similar objects according to a distance function. In your case the distance function would only use the spatial qualities. Besides, in k-means clustering you have to specify a k that you probably don't know beforehand.
It seems to me you just want to merge all rectangles whose borders are closer together than some predetermined threshold. So as a first idea, try to merge all rectangles that are touching or that are closer together than half a player's height (a rough sketch of such a merge follows below).
You probably want to include a size check to minimize the risk of merging two players into one.
Edit: If you really want to use a clustering algorithm, use one that estimates the number of clusters for you.
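A rough sketch of that merging idea (the helper below is not from the original answer): grow each rectangle by the threshold and merge any pair that then overlaps, repeating until nothing changes:
bool mergeOnce(std::vector<cv::Rect>& boxes, int pad)
{
    for (size_t i = 0; i < boxes.size(); i++)
        for (size_t j = i + 1; j < boxes.size(); j++) {
            // expand box i by pad on every side, then test for overlap with box j
            cv::Rect grown(boxes[i].x - pad, boxes[i].y - pad,
                           boxes[i].width + 2 * pad, boxes[i].height + 2 * pad);
            if ((grown & boxes[j]).area() > 0) {
                boxes[i] = boxes[i] | boxes[j];   // replace box i with the union rectangle
                boxes.erase(boxes.begin() + j);
                return true;                      // something was merged; the caller should repeat
            }
        }
    return false;
}
// usage: while (mergeOnce(boxes, playerHeight / 2)) {}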
I guess you can improve your original attempt by using morphological transformations. Take a look at http://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html#gsc.tab=0. After that you can probably deal with a single closed blob for each entity, especially for the separated players in your original image.
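For example, a morphological closing with a reasonably large kernel joins nearby white patches into a single blob. A minimal sketch on the thresholded mask:
cv::Mat mask;     // binary result of background subtraction + thresholding
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(9, 9));
cv::Mat closed;
cv::morphologyEx(mask, closed, cv::MORPH_CLOSE, kernel); // dilation followed by erosion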

Accessing certain pixel RGB value in openCV

I have searched internet and stackoverflow thoroughly, but I haven't found answer to my question:
How can I get and set the RGB value of a certain pixel (given by x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from a close friend of mine (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x); gives you the pixel as a vector of type cv::Vec3b (note that OpenCV stores color images in BGR order by default)
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In a BGR image (which is how OpenCV typically stores color images), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (counting from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the data is continuous, i.e. the stride equals the width times the number of channels.
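If the rows might be padded, the same access can use the Mat's step instead (a small sketch; for continuous data it is equivalent to the lines above):
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];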
A piece of code is easier for people who have this problem, so I'll share mine; you can use it directly. Please note that OpenCV stores pixels in BGR order.
cv::Mat vImage_; // assumed to be allocated as CV_32FC3 of size vHeight_ x vWidth_ beforehand
if (src_)
{
    cv::Vec3f vec_;
    for (int i = 0; i < vHeight_; i++)
        for (int j = 0; j < vWidth_; j++)
        {
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0); // note that OpenCV stores pixels as BGR
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;
            ++src_;
        }
}
if (!vImage_.data) // check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // show the image
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
uchar * value = img2.data; // pointer to the first pixel; the data is one contiguous array of channel values
int r = 2;
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
const double pi = boost::math::constants::pi<double>();
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    cv::Point pixel;
    float a,b,c,d;
    for (int x = 0; x < image.cols; x++)
    {
        for (int y = 0; y < image.rows; y++)
        {
            auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if (distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255; // set the green channel inside the ellipse
            }
        }
    }
    return image;
}

Is there an easy way/algorithm to match 2 clouds of 2D points?

I am wondering if there is an easy way to match (register) 2 clouds of 2d points.
Let's say I have an object represented by points, and a cluttered second image containing the object points plus noise (noise in the sense of points that are useless).
Basically, the object can be rotated in 2D as well as translated and scaled.
I know there is the ICP algorithm, but I think it is not a good approach due to the high noise.
I hope you understand what I mean. Please ask if anything is unclear (I'm sure something is).
Cheers
Here is a function that finds the translation and rotation. Generalization to scaling, weighted points, and RANSAC is straightforward. I used the OpenCV library for visualization and SVD. The function below combines data generation, a unit test, and the actual solution.
// rotation and translation in 2D from point correspondences
void rigidTransform2D(const int N) {
// Algorithm: http://igl.ethz.ch/projects/ARAP/svd_rot.pdf
const bool debug = false; // print more debug info
const bool add_noise = true; // add noise to input and output
srand(time(NULL)); // randomize each time
/*********************************
 * Create data with some noise
**********************************/
// Simulated transformation
Point2f T(1.0f, -2.0f);
float a = 30.0; // [-180, 180], see atan2(y, x)
float noise_level = 0.1f;
cout<<"True parameters: rot = "<<a<<"deg., T = "<<T<<
"; noise level = "<<noise_level<<endl;
// noise
vector<Point2f> noise_src(N), noise_dst(N);
for (int i=0; i<N; i++) {
noise_src[i] = Point2f(randf(noise_level), randf(noise_level));
noise_dst[i] = Point2f(randf(noise_level), randf(noise_level));
}
// create data with noise
vector<Point2f> src(N), dst(N);
float Rdata = 10.0f; // radius of data
float cosa = cos(a*DEG2RAD);
float sina = sin(a*DEG2RAD);
for (int i=0; i<N; i++) {
// src
float x1 = randf(Rdata);
float y1 = randf(Rdata);
src[i] = Point2f(x1,y1);
if (add_noise)
src[i] += noise_src[i];
// dst
float x2 = x1*cosa - y1*sina;
float y2 = x1*sina + y1*cosa;
dst[i] = Point2f(x2,y2) + T;
if (add_noise)
dst[i] += noise_dst[i];
if (debug)
cout<<i<<": "<<src[i]<<"---"<<dst[i]<<endl;
}
// Calculate data centroids
Scalar centroid_src = mean(src);
Scalar centroid_dst = mean(dst);
Point2f center_src(centroid_src[0], centroid_src[1]);
Point2f center_dst(centroid_dst[0], centroid_dst[1]);
if (debug)
cout<<"Centers: "<<center_src<<", "<<center_dst<<endl;
/*********************************
* Visualize data
**********************************/
// Visualization
namedWindow("data", 1);
float w = 400, h = 400;
Mat Mdata(w, h, CV_8UC3); Mdata = Scalar(0);
Point2f center_img(w/2, h/2);
float scl = 0.4*min(w/Rdata, h/Rdata); // compensate for noise
scl/=sqrt(2); // compensate for rotation effect
Point2f dT = (center_src+center_dst)*0.5; // compensate for translation
for (int i=0; i<N; i++) {
Point2f p1(scl*(src[i] - dT));
Point2f p2(scl*(dst[i] - dT));
// invert Y axis
p1.y = -p1.y; p2.y = -p2.y;
// add image center
p1+=center_img; p2+=center_img;
circle(Mdata, p1, 1, Scalar(0, 255, 0));
circle(Mdata, p2, 1, Scalar(0, 0, 255));
line(Mdata, p1, p2, Scalar(100, 100, 100));
}
/*********************************
* Get 2D rotation and translation
**********************************/
markTime();
// subtract centroids from data
for (int i=0; i<N; i++) {
src[i] -= center_src;
dst[i] -= center_dst;
}
// compute a covariance matrix
float Cxx = 0.0, Cxy = 0.0, Cyx = 0.0, Cyy = 0.0;
for (int i=0; i<N; i++) {
Cxx += src[i].x*dst[i].x;
Cxy += src[i].x*dst[i].y;
Cyx += src[i].y*dst[i].x;
Cyy += src[i].y*dst[i].y;
}
Mat Mcov = (Mat_<float>(2, 2)<<Cxx, Cxy, Cyx, Cyy);
if (debug)
cout<<"Covariance Matrix "<<Mcov<<endl;
// SVD
cv::SVD svd;
svd = SVD(Mcov, SVD::FULL_UV);
if (debug) {
cout<<"U = "<<svd.u<<endl;
cout<<"W = "<<svd.w<<endl;
cout<<"V transposed = "<<svd.vt<<endl;
}
// rotation = V*Ut
Mat V = svd.vt.t();
Mat Ut = svd.u.t();
float det_VUt = determinant(V*Ut);
Mat W = (Mat_<float>(2, 2)<<1.0, 0.0, 0.0, det_VUt);
float rot[4];
Mat R_est(2, 2, CV_32F, rot);
R_est = V*W*Ut;
if (debug)
cout<<"Rotation matrix: "<<R_est<<endl;
float cos_est = rot[0];
float sin_est = rot[2];
float ang = atan2(sin_est, cos_est);
// translation = mean_dst - R*mean_src
Point2f center_srcRot = Point2f(
cos_est*center_src.x - sin_est*center_src.y,
sin_est*center_src.x + cos_est*center_src.y);
Point2f T_est = center_dst - center_srcRot;
// RMSE
double RMSE = 0.0;
for (int i=0; i<N; i++) {
Point2f dst_est(
cos_est*src[i].x - sin_est*src[i].y,
sin_est*src[i].x + cos_est*src[i].y);
RMSE += SQR(dst[i].x - dst_est.x) + SQR(dst[i].y - dst_est.y);
}
if (N>0)
RMSE = sqrt(RMSE/N);
// Final estimate msg
cout<<"Estimate = "<<ang*RAD2DEG<<"deg., T = "<<T_est<<"; RMSE = "<<RMSE<<endl;
// show image
printTime(1);
imshow("data", Mdata);
waitKey(-1);
return;
} // rigidTransform2D()
// --------------------------- 3DOF
// calculates squared error from two point mapping; assumes rotation around Origin.
inline float sqErr_3Dof(Point2f p1, Point2f p2,
float cos_alpha, float sin_alpha, Point2f T) {
float x2_est = T.x + cos_alpha * p1.x - sin_alpha * p1.y;
float y2_est = T.y + sin_alpha * p1.x + cos_alpha * p1.y;
Point2f p2_est(x2_est, y2_est);
Point2f dp = p2_est-p2;
float sq_er = dp.dot(dp); // squared distance
//cout<<dp<<endl;
return sq_er;
}
// calculate RMSE for point-to-point metrics
float RMSE_3Dof(const vector<Point2f>& src, const vector<Point2f>& dst,
const float* param, const bool* inliers, const Point2f center) {
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
unsigned int n = src.size();
assert(n>0 && n==dst.size());
float ang_rad = param[0];
Point2f T(param[1], param[2]);
float cos_alpha = cos(ang_rad);
float sin_alpha = sin(ang_rad);
double RMSE = 0.0;
int ninliers = 0;
for (unsigned int i=0; i<n; i++) {
if (all_inliers || inliers[i]) {
RMSE += sqErr_3Dof(src[i]-center, dst[i]-center, cos_alpha, sin_alpha, T);
ninliers++;
}
}
//cout<<"RMSE = "<<RMSE<<endl;
if (ninliers>0)
return sqrt(RMSE/ninliers);
else
return LARGE_NUMBER;
}
// Sets inliers and returns their count
inline int setInliers3Dof(const vector<Point2f>& src, const vector <Point2f>& dst,
bool* inliers,
const float* param,
const float max_er,
const Point2f center) {
float ang_rad = param[0];
Point2f T(param[1], param[2]);
// set inliers
unsigned int ninliers = 0;
unsigned int n = src.size();
assert(n>0 && n==dst.size());
float cos_ang = cos(ang_rad);
float sin_ang = sin(ang_rad);
float max_sqErr = max_er*max_er; // comparing squared values
if (inliers==NULL) {
// just get the number of inliers (e.g. after QUADRATIC fit only)
for (unsigned int i=0; i<n; i++) {
float sqErr = sqErr_3Dof(src[i]-center, dst[i]-center, cos_ang, sin_ang, T);
if ( sqErr < max_sqErr)
ninliers++;
}
} else {
// get the number of inliers and set them (e.g. for RANSAC)
for (unsigned int i=0; i<n; i++) {
float sqErr = sqErr_3Dof(src[i]-center, dst[i]-center, cos_ang, sin_ang, T);
if ( sqErr < max_sqErr) {
inliers[i] = 1;
ninliers++;
} else {
inliers[i] = 0;
}
}
}
return ninliers;
}
// fits 3DOF (rotation and translation in 2D) with least squares.
float fit3DofQUADRATICold(const vector<Point2f>& src, const vector<Point2f>& dst,
float* param, const bool* inliers, const Point2f center) {
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
unsigned int n = src.size();
assert(dst.size() == n);
// count inliers
int ninliers;
if (all_inliers) {
ninliers = n;
} else {
ninliers = 0;
for (unsigned int i=0; i<n; i++){
if (inliers[i])
ninliers++;
}
}
// under-determined system
if (ninliers<2) {
// param[0] = 0.0f; // ?
// param[1] = 0.0f;
// param[2] = 0.0f;
return LARGE_NUMBER;
}
/*
 * x1*cos(a)-y1*sin(a) + Tx = X1
* x1*sin(a)+y1*cos(a) + Ty = Y1
*
* approximation for small angle a (radians) sin(a)=a, cos(a)=1;
*
* x1*1 - y1*a + Tx = X1
* x1*a + y1*1 + Ty = Y1
*
* in matrix form M1*h=M2
*
* 2n x 4 4 x 1 2n x 1
*
* -y1 1 0 x1 * a = X1
* x1 0 1 y1 Tx Y1
* Ty
* 1=Z
* ----------------------------
* src1 res src2
*/
// 4 x 1
float res_ar[4]; // alpha, Tx, Ty, 1
Mat res(4, 1, CV_32F, res_ar); // 4 x 1
// 2n x 4
Mat src1(2*ninliers, 4, CV_32F); // 2n x 4
// 2n x 1
Mat src2(2*ninliers, 1, CV_32F); // 2n x 1: [X1, Y1, X2, Y2, X3, Y3]'
for (unsigned int i=0, row_cnt = 0; i<n; i++) {
// use inliers only
if (all_inliers || inliers[i]) {
float x = src[i].x - center.x;
float y = src[i].y - center.y;
// first row
// src1
float* rowPtr = src1.ptr<float>(row_cnt);
rowPtr[0] = -y;
rowPtr[1] = 1.0f;
rowPtr[2] = 0.0f;
rowPtr[3] = x;
// src2
src2.at<float> (0, row_cnt) = dst[i].x - center.x;
// second row
row_cnt++;
// src1
rowPtr = src1.ptr<float>(row_cnt);
rowPtr[0] = x;
rowPtr[1] = 0.0f;
rowPtr[2] = 1.0f;
rowPtr[3] = y;
// src2
src2.at<float> (0, row_cnt) = dst[i].y - center.y;
}
}
cv::solve(src1, src2, res, DECOMP_SVD);
// estimators
float alpha_est;
Point2f T_est;
// original
alpha_est = res.at<float>(0, 0);
T_est = Point2f(res.at<float>(1, 0), res.at<float>(2, 0));
float Z = res.at<float>(3, 0);
if (abs(Z-1.0) > 0.1) {
//cout<<"Bad Z in fit3DOF(), Z should be close to 1.0 = "<<Z<<endl;
//return LARGE_NUMBER;
}
param[0] = alpha_est; // rad
param[1] = T_est.x;
param[2] = T_est.y;
// calculate RMSE
float RMSE = RMSE_3Dof(src, dst, param, inliers, center);
return RMSE;
} // fit3DofQUADRATICold()
// fits 3DOF (rotation and translation in 2D) with least squares.
float fit3DofQUADRATIC(const vector<Point2f>& src_, const vector<Point2f>& dst_,
float* param, const bool* inliers, const Point2f center) {
const bool debug = false; // print more debug info
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
assert(dst_.size() == src_.size());
int N = src_.size();
// collect inliers
vector<Point2f> src, dst;
int ninliers;
if (all_inliers) {
ninliers = N;
src = src_; // copy constructor
dst = dst_;
} else {
ninliers = 0;
for (int i=0; i<N; i++){
if (inliers[i]) {
ninliers++;
src.push_back(src_[i]);
dst.push_back(dst_[i]);
}
}
}
if (ninliers<2) {
param[0] = 0.0f; // default return when there is not enough points
param[1] = 0.0f;
param[2] = 0.0f;
return LARGE_NUMBER;
}
/* Algorithm: Least-Square Rigid Motion Using SVD by Olga Sorkine
* http://igl.ethz.ch/projects/ARAP/svd_rot.pdf
*
* Subtract centroids, calculate SVD(cov),
* R = V[1, det(VU')]'U', T = mean_q-R*mean_p
*/
// Calculate data centroids
Scalar centroid_src = mean(src);
Scalar centroid_dst = mean(dst);
Point2f center_src(centroid_src[0], centroid_src[1]);
Point2f center_dst(centroid_dst[0], centroid_dst[1]);
if (debug)
cout<<"Centers: "<<center_src<<", "<<center_dst<<endl;
// subtract centroids from data
for (int i=0; i<ninliers; i++) {
src[i] -= center_src;
dst[i] -= center_dst;
}
// compute a covariance matrix
float Cxx = 0.0, Cxy = 0.0, Cyx = 0.0, Cyy = 0.0;
for (int i=0; i<ninliers; i++) {
Cxx += src[i].x*dst[i].x;
Cxy += src[i].x*dst[i].y;
Cyx += src[i].y*dst[i].x;
Cyy += src[i].y*dst[i].y;
}
Mat Mcov = (Mat_<float>(2, 2)<<Cxx, Cxy, Cyx, Cyy);
Mcov /= (ninliers-1);
if (debug)
cout<<"Covariance-like Matrix "<<Mcov<<endl;
// SVD of covariance
cv::SVD svd;
svd = SVD(Mcov, SVD::FULL_UV);
if (debug) {
cout<<"U = "<<svd.u<<endl;
cout<<"W = "<<svd.w<<endl;
cout<<"V transposed = "<<svd.vt<<endl;
}
// rotation (V*Ut)
Mat V = svd.vt.t();
Mat Ut = svd.u.t();
float det_VUt = determinant(V*Ut);
Mat W = (Mat_<float>(2, 2)<<1.0, 0.0, 0.0, det_VUt);
float rot[4];
Mat R_est(2, 2, CV_32F, rot);
R_est = V*W*Ut;
if (debug)
cout<<"Rotation matrix: "<<R_est<<endl;
float cos_est = rot[0];
float sin_est = rot[2];
float ang = atan2(sin_est, cos_est);
// translation (mean_dst - R*mean_src)
Point2f center_srcRot = Point2f(
cos_est*center_src.x - sin_est*center_src.y,
sin_est*center_src.x + cos_est*center_src.y);
Point2f T_est = center_dst - center_srcRot;
// Final estimate msg
if (debug)
cout<<"Estimate = "<<ang*RAD2DEG<<"deg., T = "<<T_est<<endl;
param[0] = ang; // rad
param[1] = T_est.x;
param[2] = T_est.y;
// calculate RMSE
float RMSE = RMSE_3Dof(src_, dst_, param, inliers, center);
return RMSE;
} // fit3DofQUADRATIC()
// RANSAC fit in 3DOF: 1D rot and 2D translation (maximizes the number of inliers)
// NOTE: no data normalization is currently performed
float fit3DofRANSAC(const vector<Point2f>& src, const vector<Point2f>& dst,
float* best_param, bool* inliers,
const Point2f center ,
const float inlierMaxEr,
const int niter) {
const int ITERATION_TO_SETTLE = 2; // iterations to settle inliers and param
const float INLIERS_RATIO_OK = 0.95f; // stopping criterion
// size of data vector
unsigned int N = src.size();
assert(N==dst.size());
// unrealistic case
if(N<2) {
best_param[0] = 0.0f; // ?
best_param[1] = 0.0f;
best_param[2] = 0.0f;
return LARGE_NUMBER;
}
unsigned int ninliers; // current number of inliers
unsigned int best_ninliers = 0; // number of inliers
float best_rmse = LARGE_NUMBER; // error
float cur_rmse; // current distance error
float param[3]; // rad, Tx, Ty
vector <Point2f> src_2pt(2), dst_2pt(2);// min set of 2 points (1 correspondence generates 2 equations)
srand (time(NULL));
// iterations
for (int iter = 0; iter<niter; iter++) {
#ifdef DEBUG_RANSAC
cout<<"iteration "<<iter<<": ";
#endif
// 1. Select a random set of 2 points (not obligatory inliers but valid)
int i1, i2;
i1 = rand() % N; // [0, N[
i2 = i1;
while (i2==i1) {
i2 = rand() % N;
}
src_2pt[0] = src[i1]; // corresponding points
src_2pt[1] = src[i2];
dst_2pt[0] = dst[i1];
dst_2pt[1] = dst[i2];
bool two_inliers[] = {true, true};
// 2. Quadratic fit for 2 points
cur_rmse = fit3DofQUADRATIC(src_2pt, dst_2pt, param, two_inliers, center);
// 3. Recalculate to settle params and inliers using a larger set
for (int iter2=0; iter2<ITERATION_TO_SETTLE; iter2++) {
ninliers = setInliers3Dof(src, dst, inliers, param, inlierMaxEr, center); // changes inliers
cur_rmse = fit3DofQUADRATIC(src, dst, param, inliers, center); // changes cur_param
}
// potential ill-condition or large error
if (ninliers<2) {
#ifdef DEBUG_RANSAC
cout<<" !!! less than 2 inliers "<<endl;
#endif
continue;
} else {
#ifdef DEBUG_RANSAC
cout<<" "<<ninliers<<" inliers; ";
#endif
}
#ifdef DEBUG_RANSAC
cout<<"; recalculate: RMSE = "<<cur_rmse<<", "<<ninliers <<" inliers";
#endif
// 4. found a better solution?
if (ninliers > best_ninliers) {
best_ninliers = ninliers;
best_param[0] = param[0];
best_param[1] = param[1];
best_param[2] = param[2];
best_rmse = cur_rmse;
#ifdef DEBUG_RANSAC
cout<<" --- Solution improved: "<<
best_param[0]<<", "<<best_param[1]<<", "<<best_param[2]<<endl;
#endif
// exit condition
float inlier_ratio = (float)best_ninliers/N;
if (inlier_ratio > INLIERS_RATIO_OK) {
#ifdef DEBUG_RANSAC
cout<<"Breaking early after "<< iter+1<<
" iterations; inlier ratio = "<<inlier_ratio<<endl;
#endif
break;
}
} else {
#ifdef DEBUG_RANSAC
cout<<endl;
#endif
}
} // iterations
// 5. recreate inliers for the best parameters
ninliers = setInliers3Dof(src, dst, inliers, best_param, inlierMaxEr, center);
return best_rmse;
} // fit3DofRANSAC()
Let me first make sure I'm interpreting your question correctly. You have two sets of 2D points, one of which contains all "good" points corresponding to some object of interest, and one of which contains those points under an affine transformation with noisy points added. Right?
If that's correct, then there is a fairly reliable and efficient way to both reject noisy points and determine the transformation between your points of interest. The algorithm that is usually used to reject noisy points ("outliers") is known as RANSAC, and the algorithm used to determine the transformation can take several forms, but the most current state of the art is known as the five-point algorithm and can be found here -- a MATLAB implementation can be found here.
Unfortunately I don't know of a mature implementation of both of those combined; you'll probably have to do some work of your own to implement RANSAC and integrate it with the five point algorithm.
Edit:
Actually, OpenCV has an implementation that is overkill for your task (meaning it will work but will take more time than necessary) but is ready to work out of the box. The function of interest is called cv::findFundamentalMat.
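For the rotated/translated/scaled 2D case in the question, newer OpenCV versions (3.2+) also provide a ready-made RANSAC fit; this is not from the answer above, just a sketch assuming you already have putative point correspondences:
std::vector<cv::Point2f> objectPts, scenePts;   // corresponding points, same length
std::vector<uchar> inlierMask;
cv::Mat M = cv::estimateAffinePartial2D(objectPts, scenePts, inlierMask,
                                        cv::RANSAC, 3.0 /* reprojection threshold */);
// M is a 2x3 matrix encoding rotation, uniform scale and translation; inlierMask marks the points RANSAC kept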
I believe you are looking for something like David Lowe's SIFT (Scale Invariant Feature Transform). Another option is SURF (SIFT is patent protected). The OpenCV computer vision library provides a SURF implementation.
I would try to use distance geometry (http://en.wikipedia.org/wiki/Distance_geometry) for this.
Generate a scalar for each point by summing its distances to all neighbors within a certain radius. Though not perfect, this will be a good discriminator for each point.
Then put all the scalars in a map that allows a point (p) to be retrieved by its scalar (s) plus/minus some delta:
M(s+delta) = p (e.g. a k-d tree, http://en.wikipedia.org/wiki/Kd-tree)
Put all the reference set of 2D points in the map
On the other (test) set of 2D points:
foreach test scaling (especially if you have a good idea what typical scaling values are)
    scale each point by S
    recompute the scalars of the test set of points
    for each point P in the test set (or perhaps a sample, for a faster method)
        look up the point in the reference scalar map within some delta
        discard P if no mapping is found
        else, for each point P' found
            examine the neighbors of P and see if they have corresponding scalars in the reference map within some delta (i.e. the reference point has neighbors with approximately the same value)
            if all points tested have a mapping in the reference set, you have found a mapping of test point P onto reference point P' -> record the mapping of test point to reference point
    discard the scaling if no mappings were recorded
Note this is trivially parallelized in several different places
This is off the top of my head, drawing from research I did years ago. It lacks fine details, but the general idea is clear: find points in the noisy (test) graph whose distances to their closest neighbors are roughly the same as in the reference set. Noisy graphs will have to measure the distances with a larger allowed error than less noisy graphs.
The algorithm works perfectly for graphs with no noise.
Edit: there is a refinement of the algorithm that doesn't require looking at different scalings. When computing the scalar for each point, use a relative distance measure instead. This will be invariant to the transform.
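A minimal sketch of the per-point scalar described above (brute force, O(n^2); a k-d tree lookup would scale better):
#include <cmath>
#include <vector>
#include <opencv2/core/core.hpp>
std::vector<double> radiusScalars(const std::vector<cv::Point2f>& pts, float radius)
{
    std::vector<double> s(pts.size(), 0.0);
    for (size_t i = 0; i < pts.size(); ++i)
        for (size_t j = 0; j < pts.size(); ++j) {
            if (i == j) continue;
            double d = std::hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y); // Euclidean distance
            if (d <= radius) s[i] += d;  // sum distances to neighbours inside the radius
        }
    return s;
}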
From C++, you could use ITK to do the image registration. It includes many registration functions that will work in the presence of noise.
The KLT (Kanade-Lucas-Tomasi) feature tracker performs an affine consistency check on tracked features. The affine consistency check takes translation, rotation, and scaling into account. I don't know if it is of help to you, because you can't use the function (which calculates the affine transformation of a rectangular region) directly, but maybe you can learn from the documentation and source code how the affine transformation is calculated and adapt it to your problem (clouds of points instead of a rectangular region).
You may want the Denton-Beveridge point matching algorithm. Source code is at the bottom of the page linked below, and there is also a paper that explains the algorithm and why RANSAC is a bad choice for this problem.
http://jasondenton.me/pntmatch.html