OpenCascade BRepFeat_MakeCylindricalHole() function - C++

Currently I'm working on a project to drill cylindrical holes into a given shape.
To do that I used BRepFeat_MakeCylindricalHole() and its method Standard_EXPORT void PerformThruNext (const Standard_Real Radius, const Standard_Boolean WithControl = Standard_True);
because in this scenario I don't know the depth of the cylinder. Using PerformThruNext, the hole is drilled from the top face through to the bottom face.
The user supplies the start and end points, and holes are drilled using those.
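For reference, here is a minimal sketch of the call sequence I'm relying on (simplified from the full code below; shape and axis stand for the solid and the gp_Ax1 hole axis built there):
// Minimal sketch: drill a through-hole along `axis` into `shape`.
BRepFeat_MakeCylindricalHole hole;
hole.Init(shape, axis); // the solid to drill and the axis of the hole
hole.PerformThruNext(radius); // no depth needed: drills through to the next face
if (hole.Status() == BRepFeat_NoError)
shape = hole.Shape(); // retrieve the drilled result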
Here are the problems I'm seeing:
1. When the start and end points have a decimal fraction part (e.g. gp_Pnt StartPnt(328.547, -4.325, 97.412); gp_Pnt EndPnt(291.841, -7.586, 101.651);), it doesn't drill the face correctly.
2. When the diameter gets larger, the same thing happens.
Following is my code; any help would be highly appreciated. Thanks!
void COCCAPPDoc::OnBox2()
{
ofstream out;
out.open("C:/Users/Dell/Desktop/Blade/areaInit.txt");
CString str;
TopoDS_Shape bladeWithCore,DrilledShape,DrilledShape2;
/* IMPORT bladeWithCore.brep FILE */
Standard_CString strFile = "C:/Users/Dell/Desktop/Blade/bladeWithCore.brep";
BRep_Builder brepS;
BRepTools::Read(bladeWithCore,strFile,brepS);
TopoDS_Face Face_52;
Standard_CString strFace = "C:/Users/Dell/Desktop/Blade/Face_52.brep";
BRep_Builder brepF;
BRepTools::Read(Face_52,strFace,brepF);
BRepClass3d_SolidClassifier classify;
classify.Load(bladeWithCore);
classify.PerformInfinitePoint(1.0e-6);
TopAbs_State state = classify.State();
if(state==TopAbs_IN) bladeWithCore.Reverse();
DrilledShape = bladeWithCore;
gp_Vec v1(gp_Pnt( 330,11,108),gp_Pnt(291,4,109)); //FINDING VECTOR FROM START TO THE END POINT
gp_Pnt point(330,11,108);
gp_Pnt startpoint (330,11,108);
TopoDS_Vertex aV1 = BRepBuilderAPI_MakeVertex (startpoint);
BRepTools::Write(aV1,"C:/Users/Dell/Desktop/Blade/startpoint.brep");
/* gp_Vec v1(gp_Pnt( 330.208114,11.203318,108.480255),gp_Pnt(291.103558,4.265285,109.432663));
gp_Pnt point(330.208114,11.203318,108.480255);*/
gp_Dir d1(v1); //GETTING DIRECTION USING FOUND VECTOR
gp_Vec UnitVector(d1); //FINDING UNITVECTOR
gp_Dir dir_line(0,0,1);
gp_Vec UnitVector_line(dir_line);
gp_Pnt minPoint;
Standard_Real diameter = 0.1;
int noOfInstances =3;
int gap = -10;
for(int i = 0; i < noOfInstances; i++)
{
point.Translate(UnitVector);
gp_Pnt newPoint = point;
newPoint.Translate(UnitVector_line*10);
BRepBuilderAPI_MakeEdge LINE(newPoint, point);
BRepBuilderAPI_MakeWire lineWire(LINE);
TopoDS_Wire wire = TopoDS::Wire(lineWire);
TopoDS_Shape lineShape = TopoDS_Shape(wire);
BRepTools::Write(lineShape,"C:/Users/Dell/Desktop/Blade/linlineShapee.brep");
intersection( bladeWithCore,lineShape, point, minPoint);
TopoDS_Vertex checkPoint = BRepBuilderAPI_MakeVertex (minPoint);
BRepTools::Write(checkPoint,"C:/Users/Dell/Desktop/Blade/checkPoint.brep");
gp_Ax1 location =gp_Ax1(minPoint,gp_Dir(0,0,1));
BRepFeat_MakeCylindricalHole drilled;
drilled.Init(DrilledShape,location);
drilled.PerformThruNext(diameter/2.0);
BRepFeat_Status status = drilled.Status();
if(status==BRepFeat_NoError)
{
DrilledShape = drilled.Shape();
BRepTools::Write(DrilledShape,"C:/Users/Dell/Desktop/Blade/bladeWithCoreDrilled.brep");
}
}
Handle(AIS_InteractiveObject) obj3 = (new AIS_Shape(DrilledShape));
myAISContext->Display(obj3);
COCCAPPDoc::Fit();
}
void COCCAPPDoc::intersection(TopoDS_Shape bladeWithCore,TopoDS_Shape lineShape,gp_Pnt point,gp_Pnt& minPoint)
{
BRepExtrema_DistShapeShape interSection( bladeWithCore, lineShape, Extrema_ExtFlag_MIN, Extrema_ExtAlgo_Grad);
interSection.Perform();
Standard_Integer Solutions = interSection.NbSolution();
CArray<double, double> disArray;
int minindex = 1;
for(int i = 1; i <= Solutions; i++)
{
double distance = point.Distance((interSection.PointOnShape1(i)));
disArray.Add(distance);
}
for(int i = 1; i < Solutions; i++)
{
if(disArray[0] > disArray[i])
{
disArray[0] = disArray[i];
minindex = i + 1; // disArray is 0-based but PointOnShape1() solutions are 1-based
}
}
minPoint = interSection.PointOnShape1(minindex);
TopoDS_Vertex aV = BRepBuilderAPI_MakeVertex (minPoint);
BRepTools::Write(aV,"C:/Users/Dell/Desktop/Blade/minPoint.brep");
}

Related

pcl supervoxel clustering: unable to visualise voxels

I am trying to incorporate supervoxel clustering into my 3D point cloud application. However, the point clouds can't be visualised in RViz. When I tried debugging using gdb, I realised that my map object still has no data inside, even after the line
std::multimap<uint32_t, uint32_t> supervoxel_adjacency; super.getSupervoxelAdjacency(supervoxel_adjacency);
The tutorial I am referencing is this: https://pointclouds.org/documentation/tutorials/supervoxel_clustering.html
What is the problem? It should fill the map with data for me to iterate over, but it isn't.
void PointCloudProcessor::voxel_clustering(type_pclcloudptr& input, ros::Publisher& Publisher)
{
float voxel_resolution = 0.008f;
float seed_resolution = 0.1f;
float color_importance = 0.2f;
float spatial_importance = 0.4f;
float normal_importance = 1.0f;
pcl::SupervoxelClustering<pcl::PointXYZ> super(voxel_resolution, seed_resolution);
super.setUseSingleCameraTransform(false);
super.setInputCloud(input);
super.setColorImportance(color_importance);
super.setSpatialImportance(spatial_importance);
super.setNormalImportance(normal_importance);
std::map <uint32_t, pcl::Supervoxel<pcl::PointXYZ>::Ptr > supervoxel_clusters;
if ( !( input->points.empty() ) )
{
super.extract(supervoxel_clusters);
type_pclcloudptr voxel_centroid_cloud = super.getVoxelCentroidCloud();
pcl::PointCloud<pcl::PointXYZL>::Ptr labeled_voxel_cloud = super.getLabeledVoxelCloud();
std::multimap<uint32_t, uint32_t> supervoxel_adjacency;
super.getSupervoxelAdjacency(supervoxel_adjacency);
// To make a graph of the supervoxel adjacency, we need to iterate through the supervoxel adjacency multimap
std::multimap<uint32_t, uint32_t>::iterator label_itr = supervoxel_adjacency.begin();
int idx = 0;
//PointCloudProcessor::mMarkerArray.markers.resize(supervoxel_adjacency.size());
for (; label_itr != supervoxel_adjacency.end(); )
{
//First get the label
uint32_t supervoxel_label = label_itr->first;
Eigen::Vector4f centroid;
Eigen::Vector4f min;
Eigen::Vector4f max;
//Now get the supervoxel corresponding to the label
pcl::Supervoxel<pcl::PointXYZ>::Ptr supervoxel = supervoxel_clusters.at(supervoxel_label);
//Now we need to iterate through the adjacent supervoxels and make a point cloud of them
type_pclcloudptr adjacent_supervoxel_centers(new pcl::PointCloud<pcl::PointXYZ>); // must be allocated: pushing into a null Ptr would crash (assumes type_pclcloudptr is pcl::PointCloud<pcl::PointXYZ>::Ptr)
std::multimap<uint32_t, uint32_t>::iterator adjacent_itr = supervoxel_adjacency.equal_range(supervoxel_label).first;
for (; adjacent_itr != supervoxel_adjacency.equal_range(supervoxel_label).second; ++adjacent_itr)
{
pcl::Supervoxel<pcl::PointXYZ>::Ptr neighbor_supervoxel = supervoxel_clusters.at(adjacent_itr->second);
pcl::PointXYZ neighbour_centroid_xyz;
pcl::PointXYZRGBA neighbour_centroid_xyzrgba = neighbor_supervoxel->centroid_;
pcl::copyPoint(neighbour_centroid_xyzrgba, neighbour_centroid_xyz);
adjacent_supervoxel_centers->push_back(neighbour_centroid_xyz);
}
// pcl::compute3DCentroid(*adjacent_supervoxel_centers,centroid);
// pcl::getMinMax3D(*adjacent_supervoxel_centers, min, max);
// PointCloudProcessor::mark_cluster(centroid,min, max, idx);
adjacent_supervoxel_centers->width = adjacent_supervoxel_centers->size ();
adjacent_supervoxel_centers->height = 1;
adjacent_supervoxel_centers->is_dense = true;
adjacent_supervoxel_centers->header.frame_id = "rearAxle";
Publisher.publish(adjacent_supervoxel_centers);
//Move iterator forward to next label
label_itr = supervoxel_adjacency.upper_bound(supervoxel_label);
idx++;
}
}
}

gluPartialDisk in osg/osgEarth

I've been trying to create an OpenGL gluDisk-like object in osgEarth. So far I've attempted the following (edited, this is the correct answer):
void ViewDriver::drawCircZone(double lat, double lon, double innerRadius, double outerRadius, QColor color, double beginAngle, double endAngle){
GeometryFactory g;
osg::ref_ptr<osgEarth::Geometry> outerCircleGeom = g.createArc(osg::Vec3d(lat, lon, 0), convertFromMetersToMercDeg(outerRadius), beginAngle, endAngle);
osg::ref_ptr<osgEarth::Geometry> innerCircleGeom = g.createArc(osg::Vec3d(lat, lon, 0), convertFromMetersToMercDeg(innerRadius), beginAngle, endAngle);
osg::Vec3dArray* outerCircArray = outerCircleGeom->createVec3dArray();
osg::Vec3dArray* innerCircArray = innerCircleGeom->createVec3dArray();
Vec3dVector* diskVec = new Vec3dVector;
for(int i = 0; i < outerCircArray->size() - 1; i++){
diskVec->push_back((*outerCircArray)[i]);
}
//This is important for closing the shape and not giving it a Pac-Man-like mouth
diskVec->push_back((*outerCircArray)[0]);
//This is how you make a "hole", by iterating backwards
for(int i = innerCircArray->size() - 1; i >= 0; i--){
diskVec->push_back((*innerCircArray)[i]);
}
osg::ref_ptr<osgEarth::Symbology::Ring> diskRing = new Ring(diskVec);
diskRing->close();
osg::ref_ptr<Feature> circFeature = new Feature(diskRing, view->getMapViewer()->geoSRS);
Style circStyle;
circStyle.getOrCreate<PolygonSymbol>()->outline() = true;
circStyle.getOrCreate<PolygonSymbol>()->fill()->color() = Color(color.red()/255.0, color.green()/255.0, color.blue()/255.0, 1.0);
circStyle.getOrCreate<AltitudeSymbol>()->clamping() = AltitudeSymbol::CLAMP_RELATIVE_TO_TERRAIN;
circStyle.getOrCreate<AltitudeSymbol>()->technique() = AltitudeSymbol::TECHNIQUE_DRAPE;
osg::ref_ptr<FeatureNode> circNode = new FeatureNode(circFeature, circStyle);
circNode->setDynamic(true);
view->getMapNode()->addChild(circNode);
}
What stumped me originally was that I don't have a lot of graphics knowledge. Somewhere I read that when drawing outlines, you do it in a clockwise direction; outlines drawn in a counter-clockwise direction will "cut out" a hole when combined with clockwise-drawn points. When originally testing that method, I was filling the "hole" outline with the smaller circle's points in a clockwise direction, which is why it didn't work.

clustering image segments in opencv

I am working on motion detection with a non-static camera using OpenCV.
I am using a pretty basic background subtraction and thresholding approach to get a broad sense of all that's moving in a sample video. After thresholding, I list all separable "patches" of white pixels, store them as independent components, and color them randomly with red, green or blue. The image below shows this for a football video where all such components are visible.
I create rectangles over these detected components and I get this image:
So I can see the challenge here. I want to cluster all the "similar" and close-by components into a single entity so that the rectangles in the output image show a player moving as a whole (and not his independent limbs). I tried doing K-means clustering, but since ideally I would not know the number of moving entities, I could not make any progress.
Please guide me on how I can do this. Thanks!
This problem can be almost perfectly solved by the DBSCAN clustering algorithm. Below, I provide the implementation and a result image. A gray blob means an outlier, i.e. noise, according to DBSCAN. I simply used the boxes as input data. Initially, box centers were used for the distance function; however, for boxes this is insufficient to correctly characterize distance, so the current distance function uses the minimum distance over all 8 corners of the two boxes.
#include "opencv2/opencv.hpp"
using namespace cv;
#include <map>
#include <sstream>
template <class T>
inline std::string to_string (const T& t)
{
std::stringstream ss;
ss << t;
return ss.str();
}
class DbScan
{
public:
std::map<int, int> labels;
vector<Rect>& data;
int C;
double eps;
int mnpts;
double* dp;
//memoization table in case of complex dist functions
#define DP(i,j) dp[(data.size()*i)+j]
DbScan(vector<Rect>& _data,double _eps,int _mnpts):data(_data)
{
C=-1;
for(int i=0;i<data.size();i++)
{
labels[i]=-99;
}
eps=_eps;
mnpts=_mnpts;
}
void run()
{
dp = new double[data.size()*data.size()];
for(int i=0;i<data.size();i++)
{
for(int j=0;j<data.size();j++)
{
if(i==j)
DP(i,j)=0;
else
DP(i,j)=-1;
}
}
for(int i=0;i<data.size();i++)
{
if(!isVisited(i))
{
vector<int> neighbours = regionQuery(i);
if(neighbours.size()<mnpts)
{
labels[i]=-1;//noise
}else
{
C++;
expandCluster(i,neighbours);
}
}
}
delete [] dp;
}
void expandCluster(int p,vector<int> neighbours)
{
labels[p]=C;
for(int i=0;i<neighbours.size();i++)
{
if(!isVisited(neighbours[i]))
{
labels[neighbours[i]]=C;
vector<int> neighbours_p = regionQuery(neighbours[i]);
if (neighbours_p.size() >= mnpts)
{
expandCluster(neighbours[i],neighbours_p);
}
}
}
}
bool isVisited(int i)
{
return labels[i]!=-99;
}
vector<int> regionQuery(int p)
{
vector<int> res;
for(int i=0;i<data.size();i++)
{
if(distanceFunc(p,i)<=eps)
{
res.push_back(i);
}
}
return res;
}
double dist2d(Point2d a,Point2d b)
{
return sqrt(pow(a.x-b.x,2) + pow(a.y-b.y,2));
}
double distanceFunc(int ai,int bi)
{
if(DP(ai,bi)!=-1)
return DP(ai,bi);
Rect a = data[ai];
Rect b = data[bi];
/*
Point2d cena= Point2d(a.x+a.width/2,
a.y+a.height/2);
Point2d cenb = Point2d(b.x+b.width/2,
b.y+b.height/2);
double dist = sqrt(pow(cena.x-cenb.x,2) + pow(cena.y-cenb.y,2));
DP(ai,bi)=dist;
DP(bi,ai)=dist;*/
Point2d tla =Point2d(a.x,a.y);
Point2d tra =Point2d(a.x+a.width,a.y);
Point2d bla =Point2d(a.x,a.y+a.height);
Point2d bra =Point2d(a.x+a.width,a.y+a.height);
Point2d tlb =Point2d(b.x,b.y);
Point2d trb =Point2d(b.x+b.width,b.y);
Point2d blb =Point2d(b.x,b.y+b.height);
Point2d brb =Point2d(b.x+b.width,b.y+b.height);
double minDist = 9999999;
minDist = min(minDist,dist2d(tla,tlb));
minDist = min(minDist,dist2d(tla,trb));
minDist = min(minDist,dist2d(tla,blb));
minDist = min(minDist,dist2d(tla,brb));
minDist = min(minDist,dist2d(tra,tlb));
minDist = min(minDist,dist2d(tra,trb));
minDist = min(minDist,dist2d(tra,blb));
minDist = min(minDist,dist2d(tra,brb));
minDist = min(minDist,dist2d(bla,tlb));
minDist = min(minDist,dist2d(bla,trb));
minDist = min(minDist,dist2d(bla,blb));
minDist = min(minDist,dist2d(bla,brb));
minDist = min(minDist,dist2d(bra,tlb));
minDist = min(minDist,dist2d(bra,trb));
minDist = min(minDist,dist2d(bra,blb));
minDist = min(minDist,dist2d(bra,brb));
DP(ai,bi)=minDist;
DP(bi,ai)=minDist;
return DP(ai,bi);
}
vector<vector<Rect> > getGroups()
{
vector<vector<Rect> > ret;
for(int i=0;i<=C;i++)
{
ret.push_back(vector<Rect>());
for(int j=0;j<data.size();j++)
{
if(labels[j]==i)
{
ret[ret.size()-1].push_back(data[j]);
}
}
}
return ret;
}
};
cv::Scalar HSVtoRGBcvScalar(int H, int S, int V) {
int bH = H; // H component
int bS = S; // S component
int bV = V; // V component
double fH, fS, fV;
double fR, fG, fB;
const double double_TO_BYTE = 255.0f;
const double BYTE_TO_double = 1.0f / double_TO_BYTE;
// Convert from 8-bit integers to doubles
fH = (double)bH * BYTE_TO_double;
fS = (double)bS * BYTE_TO_double;
fV = (double)bV * BYTE_TO_double;
// Convert from HSV to RGB, using double ranges 0.0 to 1.0
int iI;
double fI, fF, p, q, t;
if( bS == 0 ) {
// achromatic (grey)
fR = fG = fB = fV;
}
else {
// If Hue == 1.0, then wrap it around the circle to 0.0
if (fH>= 1.0f)
fH = 0.0f;
fH *= 6.0; // sector 0 to 5
fI = floor( fH ); // integer part of h (0,1,2,3,4,5 or 6)
iI = (int) fH; // " " " "
fF = fH - fI; // fractional part of h (0 to 1)
p = fV * ( 1.0f - fS );
q = fV * ( 1.0f - fS * fF );
t = fV * ( 1.0f - fS * ( 1.0f - fF ) );
switch( iI ) {
case 0:
fR = fV;
fG = t;
fB = p;
break;
case 1:
fR = q;
fG = fV;
fB = p;
break;
case 2:
fR = p;
fG = fV;
fB = t;
break;
case 3:
fR = p;
fG = q;
fB = fV;
break;
case 4:
fR = t;
fG = p;
fB = fV;
break;
default: // case 5 (or 6):
fR = fV;
fG = p;
fB = q;
break;
}
}
// Convert from doubles to 8-bit integers
int bR = (int)(fR * double_TO_BYTE);
int bG = (int)(fG * double_TO_BYTE);
int bB = (int)(fB * double_TO_BYTE);
// Clip the values to make sure it fits within the 8bits.
if (bR > 255)
bR = 255;
if (bR < 0)
bR = 0;
if (bG >255)
bG = 255;
if (bG < 0)
bG = 0;
if (bB > 255)
bB = 255;
if (bB < 0)
bB = 0;
// Pack the result in OpenCV's B, G, R channel order
return cv::Scalar(bB,bG,bR);
}
int main(int argc,char** argv )
{
Mat im = imread("c:/data/football.png",0);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
findContours(im.clone(), contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Rect> boxes;
for(size_t i = 0; i < contours.size(); i++)
{
Rect r = boundingRect(contours[i]);
boxes.push_back(r);
}
DbScan dbscan(boxes,20,2);
dbscan.run();
//done, perform display
Mat grouped = Mat::zeros(im.size(),CV_8UC3);
vector<Scalar> colors;
RNG rng(3);
for(int i=0;i<=dbscan.C;i++)
{
colors.push_back(HSVtoRGBcvScalar(rng(255),255,255));
}
for(int i=0;i<dbscan.data.size();i++)
{
Scalar color;
if(dbscan.labels[i]==-1)
{
color=Scalar(128,128,128);
}else
{
int label=dbscan.labels[i];
color=colors[label];
}
putText(grouped,to_string(dbscan.labels[i]),dbscan.data[i].tl(), FONT_HERSHEY_COMPLEX,.5,color,1);
drawContours(grouped,contours,i,color,-1);
}
imshow("grouped",grouped);
imwrite("c:/data/grouped.jpg",grouped);
waitKey(0);
}
I agree with Sebastian Schmitz: you probably shouldn't be looking for clustering.
Don't expect an uninformed method such as k-means to work magic for you. In particular one that is as crude a heuristic as k-means, and which lives in an idealized mathematical world, not in messy, real data.
You have a good understanding of what you want. Try to put this intuition into code. In your case, you seem to be looking for connected components.
Consider downsampling your image to a lower resolution, then rerunning the same process! Or running it on the lower resolution right away (to reduce compression artifacts, and improve performance). Or adding filters, such as blurring.
I'd expect best and fastest results by looking at connected components in the downsampled/filtered image.
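For example, here is a minimal sketch of that connected-components idea with OpenCV (requires OpenCV 3+; `mask` stands for your thresholded binary image, and the 0.5 downsampling factor is just an assumption to tune):
// Sketch: label connected blobs on a downsampled binary mask.
cv::Mat small, labels, stats, centroids;
cv::resize(mask, small, cv::Size(), 0.5, 0.5, cv::INTER_NEAREST); // downsample
int n = cv::connectedComponentsWithStats(small, labels, stats, centroids);
for (int i = 1; i < n; i++) { // label 0 is the background
cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
stats.at<int>(i, cv::CC_STAT_TOP),
stats.at<int>(i, cv::CC_STAT_WIDTH),
stats.at<int>(i, cv::CC_STAT_HEIGHT)); // one box per blob; scale by 2 to map back
}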
I am not entirely sure if you are really looking for clustering (in the Data Mining sense).
Clustering is used to group similar objects according to a distance function. In your case the distance function would only use the spatial qualities. Besides, in k-means clustering you have to specify a k that you probably don't know beforehand.
It seems to me you just want to merge all rectangles whose borders are closer together than some predetermined threshold. So as a first idea, try to merge all rectangles that are touching or that are closer together than half a player's height.
You probably want to include a size check to minimize the risk of merging two players into one.
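As a rough sketch of that merging idea (assumptions: `boxes` is the vector<cv::Rect> of detected rectangles and `threshold` is the maximum allowed gap in pixels):
// Sketch: repeatedly merge rectangles whose gap is below the threshold.
bool merged = true;
while (merged) {
merged = false;
for (size_t i = 0; i < boxes.size() && !merged; i++) {
for (size_t j = i + 1; j < boxes.size() && !merged; j++) {
cv::Rect a = boxes[i]; // inflate a copy by the allowed gap
a.x -= threshold; a.y -= threshold;
a.width += 2 * threshold; a.height += 2 * threshold;
if ((a & boxes[j]).area() > 0) { // closer than the gap: merge
boxes[i] |= boxes[j]; // union of the original rectangles
boxes.erase(boxes.begin() + j);
merged = true;
}
}
}
}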
Edit: If you really want to use a clustering algorithm use one that estimates the number of clusters for you.
I guess you can improve your original attempt by using morphological transformations. Take a look at http://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html#gsc.tab=0. You can probably get a single closed region for each entity after that, especially for the separate players you got in your original image.
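For instance, a small sketch of that morphological step (the 9x9 kernel size is an assumption to tune; `mask` is the thresholded image from the background subtraction):
// Sketch: close small gaps so each player becomes one blob.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(9, 9));
cv::Mat closed;
cv::morphologyEx(mask, closed, cv::MORPH_CLOSE, kernel);
cv::dilate(closed, closed, kernel); // optionally fuse limbs into the torso blob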

3D picking lwjgl

I have written some code to perform 3D picking that for some reason doesn't work entirely correctly! (I'm using LWJGL, just so you know.)
This is what the code looks like:
if(Mouse.getEventButton() == 1) {
if (!Mouse.getEventButtonState()) {
Camera.get().generateViewMatrix();
float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio();
float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f));
float displacementRate = (float)Math.tan(Camera.get().getFovy()/2);
screenSpaceX *= displacementRate;
screenSpaceY *= displacementRate;
Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1);
Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1);
Matrix4f tmpView = new Matrix4f();
Camera.get().getViewMatrix().transpose(tmpView);
Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert();
Vector4f worldSpaceNear = new Vector4f();
Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear);
Vector4f worldSpaceFar = new Vector4f();
Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar);
Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
rayDirection.normalise();
Ray clickRay = new Ray(rayPosition, rayDirection);
Vector tMin = new Vector(), tMax = new Vector(), tempPoint;
float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f;
Drawable closestDrawableHit = null;
for(Drawable d : this.worldModel.getDrawableThings()) {
// Calculate AABB for each object... needs to be moved later...
firstVertex = true;
for(Surface surface : d.getSurfaces()) {
for(Vertex v : surface.getVertices()) {
worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x;
worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y;
worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z;
worldPosition = worldPosition.rotate(d.getRotation());
if (firstVertex) {
maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z;
minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z;
firstVertex = false;
} else {
if (worldPosition.x > maxX) {
maxX = worldPosition.x;
}
if (worldPosition.x < minX) {
minX = worldPosition.x;
}
if (worldPosition.y > maxY) {
maxY = worldPosition.y;
}
if (worldPosition.y < minY) {
minY = worldPosition.y;
}
if (worldPosition.z > maxZ) {
maxZ = worldPosition.z;
}
if (worldPosition.z < minZ) {
minZ = worldPosition.z;
}
}
}
}
// ray/slabs intersection test...
// clickRay.getOrigin().x + clickRay.getDirection().x * f = minX
// clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f
// clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f
// -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f
largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x;
temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y;
if(largestEnteringValue < temp) {
largestEnteringValue = temp;
}
temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z;
if(largestEnteringValue < temp) {
largestEnteringValue = temp;
}
smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x;
temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y;
if(smallestExitingValue > temp) {
smallestExitingValue = temp;
}
temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
if(smallestExitingValue > temp) { // keep the smallest exit value
smallestExitingValue = temp;
}
if(largestEnteringValue > smallestExitingValue) {
//System.out.println("Miss!");
} else {
if (largestEnteringValue < closestEnteringValue) {
closestEnteringValue = largestEnteringValue;
closestDrawableHit = d;
}
}
}
if(closestDrawableHit != null) {
System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z);
this.worldModel.removeDrawableThing(closestDrawableHit);
}
}
}
I just don't understand what's wrong: the ray is shooting, and I do hit stuff that gets removed, but the results are very strange. Sometimes it removes the thing I'm clicking at, sometimes it removes things that are not even close to what I'm clicking at, and sometimes it removes nothing at all.
Edit:
Okay, so I have continued searching for errors, and by debugging the ray (by painting small dots where it travels) I can now see that there is something obviously wrong with the ray I'm sending out... it has its origin near the world center and always shoots to the same position no matter where I direct my camera...
My initial thought is that there might be some error in the way I calculate my view matrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is)...
Edit2:
This is how I calculate it currently:
private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}};
public Vector getCameraDirectionVector() {
Vector actualEye = this.getActualEyePosition();
return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z);
}
public Vector getActualEyePosition() {
return eye.rotate(this.getRotation());
}
public void generateViewMatrix() {
Vector cameraDirectionVector = getCameraDirectionVector().normalize();
Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize();
Vector up = Vector.cross(side, cameraDirectionVector);
viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x;
viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y;
viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z;
/*
Vector actualEyePosition = this.getActualEyePosition();
Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize();
Vector xaxis = Vector.cross(upVector, zaxis).normalize();
Vector yaxis = Vector.cross(zaxis, xaxis);
viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x;
viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y;
viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z;
viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] =-Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition);
*/
viewMatrix = new Matrix4f();
viewMatrix.load(getViewMatrixAsFloatBuffer());
}
I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it...
I have read a lot of threads and documentation about this but I can't seem to wrap my head around it...
I just don't understand what's wrong: the ray is shooting, and I do hit stuff that gets removed, but things are not disappearing where I press on the screen.
OpenGL is not a scene graph, it's a drawing library. So after removing something from your internal representation you must redraw the scene. And your code is missing some call to a function that triggers a redraw.
Okay, so I finally solved it with the help of the guys at GameDev and a friend; here is a link to the answer where I have posted the code!

Is there an easy way/algorithm to match 2 clouds of 2D points?

I am wondering if there is an easy way to match (register) 2 clouds of 2d points.
Let's say I have an object represented by points and a cluttered second image with the object points and noise (noise in the sense of points that are useless).
Basically the object can be rotated in 2D as well as translated and scaled.
I know there is the ICP algorithm, but I think that this is not a good approach due to the high noise.
I hope that you understand what I mean. Please ask if anything is unclear (I'm sure it is).
Cheers
Here is a function that finds the translation and rotation. Generalization to scaling, weighted points, and RANSAC is straightforward. I used the OpenCV library for visualization and SVD. The function below combines data generation, a unit test, and the actual solution.
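The snippet assumes a few helpers and constants that are not shown in the post; here is a minimal sketch of plausible definitions (the names come from the code, the bodies are my guesses):
// Assumed helpers (not part of the original function):
#define DEG2RAD (CV_PI / 180.0)
#define RAD2DEG (180.0 / CV_PI)
#define SQR(x) ((x) * (x))
const float LARGE_NUMBER = 1e9f;
inline float randf(float range) { // uniform random value in [-range, range]
return range * (2.0f * rand() / (float)RAND_MAX - 1.0f);
}
// markTime()/printTime() are simple wall-clock timing utilities, omitted here.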
// rotation and translation in 2D from point correspondences
void rigidTransform2D(const int N) {
// Algorithm: http://igl.ethz.ch/projects/ARAP/svd_rot.pdf
const bool debug = false; // print more debug info
const bool add_noise = true; // add noise to input and output
srand(time(NULL)); // randomize each time
/*********************************
* Create data with some noise
**********************************/
// Simulated transformation
Point2f T(1.0f, -2.0f);
float a = 30.0; // [-180, 180], see atan2(y, x)
float noise_level = 0.1f;
cout<<"True parameters: rot = "<<a<<"deg., T = "<<T<<
"; noise level = "<<noise_level<<endl;
// noise
vector<Point2f> noise_src(N), noise_dst(N);
for (int i=0; i<N; i++) {
noise_src[i] = Point2f(randf(noise_level), randf(noise_level));
noise_dst[i] = Point2f(randf(noise_level), randf(noise_level));
}
// create data with noise
vector<Point2f> src(N), dst(N);
float Rdata = 10.0f; // radius of data
float cosa = cos(a*DEG2RAD);
float sina = sin(a*DEG2RAD);
for (int i=0; i<N; i++) {
// src
float x1 = randf(Rdata);
float y1 = randf(Rdata);
src[i] = Point2f(x1,y1);
if (add_noise)
src[i] += noise_src[i];
// dst
float x2 = x1*cosa - y1*sina;
float y2 = x1*sina + y1*cosa;
dst[i] = Point2f(x2,y2) + T;
if (add_noise)
dst[i] += noise_dst[i];
if (debug)
cout<<i<<": "<<src[i]<<"---"<<dst[i]<<endl;
}
// Calculate data centroids
Scalar centroid_src = mean(src);
Scalar centroid_dst = mean(dst);
Point2f center_src(centroid_src[0], centroid_src[1]);
Point2f center_dst(centroid_dst[0], centroid_dst[1]);
if (debug)
cout<<"Centers: "<<center_src<<", "<<center_dst<<endl;
/*********************************
* Visualize data
**********************************/
// Visualization
namedWindow("data", 1);
float w = 400, h = 400;
Mat Mdata(w, h, CV_8UC3); Mdata = Scalar(0);
Point2f center_img(w/2, h/2);
float scl = 0.4*min(w/Rdata, h/Rdata); // compensate for noise
scl/=sqrt(2); // compensate for rotation effect
Point2f dT = (center_src+center_dst)*0.5; // compensate for translation
for (int i=0; i<N; i++) {
Point2f p1(scl*(src[i] - dT));
Point2f p2(scl*(dst[i] - dT));
// invert Y axis
p1.y = -p1.y; p2.y = -p2.y;
// add image center
p1+=center_img; p2+=center_img;
circle(Mdata, p1, 1, Scalar(0, 255, 0));
circle(Mdata, p2, 1, Scalar(0, 0, 255));
line(Mdata, p1, p2, Scalar(100, 100, 100));
}
/*********************************
* Get 2D rotation and translation
**********************************/
markTime();
// subtract centroids from data
for (int i=0; i<N; i++) {
src[i] -= center_src;
dst[i] -= center_dst;
}
// compute a covariance matrix
float Cxx = 0.0, Cxy = 0.0, Cyx = 0.0, Cyy = 0.0;
for (int i=0; i<N; i++) {
Cxx += src[i].x*dst[i].x;
Cxy += src[i].x*dst[i].y;
Cyx += src[i].y*dst[i].x;
Cyy += src[i].y*dst[i].y;
}
Mat Mcov = (Mat_<float>(2, 2)<<Cxx, Cxy, Cyx, Cyy);
if (debug)
cout<<"Covariance Matrix "<<Mcov<<endl;
// SVD
cv::SVD svd;
svd = SVD(Mcov, SVD::FULL_UV);
if (debug) {
cout<<"U = "<<svd.u<<endl;
cout<<"W = "<<svd.w<<endl;
cout<<"V transposed = "<<svd.vt<<endl;
}
// rotation = V*Ut
Mat V = svd.vt.t();
Mat Ut = svd.u.t();
float det_VUt = determinant(V*Ut);
Mat W = (Mat_<float>(2, 2)<<1.0, 0.0, 0.0, det_VUt);
float rot[4];
Mat R_est(2, 2, CV_32F, rot);
R_est = V*W*Ut;
if (debug)
cout<<"Rotation matrix: "<<R_est<<endl;
float cos_est = rot[0];
float sin_est = rot[2];
float ang = atan2(sin_est, cos_est);
// translation = mean_dst - R*mean_src
Point2f center_srcRot = Point2f(
cos_est*center_src.x - sin_est*center_src.y,
sin_est*center_src.x + cos_est*center_src.y);
Point2f T_est = center_dst - center_srcRot;
// RMSE
double RMSE = 0.0;
for (int i=0; i<N; i++) {
Point2f dst_est(
cos_est*src[i].x - sin_est*src[i].y,
sin_est*src[i].x + cos_est*src[i].y);
RMSE += SQR(dst[i].x - dst_est.x) + SQR(dst[i].y - dst_est.y);
}
if (N>0)
RMSE = sqrt(RMSE/N);
// Final estimate msg
cout<<"Estimate = "<<ang*RAD2DEG<<"deg., T = "<<T_est<<"; RMSE = "<<RMSE<<endl;
// show image
printTime(1);
imshow("data", Mdata);
waitKey(-1);
return;
} // rigidTransform2D()
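A call like the following would exercise it (the argument is the number of simulated point correspondences):
rigidTransform2D(100); // generate 100 noisy pairs, solve, and visualize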
// --------------------------- 3DOF
// calculates squared error from two point mapping; assumes rotation around Origin.
inline float sqErr_3Dof(Point2f p1, Point2f p2,
float cos_alpha, float sin_alpha, Point2f T) {
float x2_est = T.x + cos_alpha * p1.x - sin_alpha * p1.y;
float y2_est = T.y + sin_alpha * p1.x + cos_alpha * p1.y;
Point2f p2_est(x2_est, y2_est);
Point2f dp = p2_est-p2;
float sq_er = dp.dot(dp); // squared distance
//cout<<dp<<endl;
return sq_er;
}
// calculate RMSE for point-to-point metrics
float RMSE_3Dof(const vector<Point2f>& src, const vector<Point2f>& dst,
const float* param, const bool* inliers, const Point2f center) {
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
unsigned int n = src.size();
assert(n>0 && n==dst.size());
float ang_rad = param[0];
Point2f T(param[1], param[2]);
float cos_alpha = cos(ang_rad);
float sin_alpha = sin(ang_rad);
double RMSE = 0.0;
int ninliers = 0;
for (unsigned int i=0; i<n; i++) {
if (all_inliers || inliers[i]) {
RMSE += sqErr_3Dof(src[i]-center, dst[i]-center, cos_alpha, sin_alpha, T);
ninliers++;
}
}
//cout<<"RMSE = "<<RMSE<<endl;
if (ninliers>0)
return sqrt(RMSE/ninliers);
else
return LARGE_NUMBER;
}
// Sets inliers and returns their count
inline int setInliers3Dof(const vector<Point2f>& src, const vector <Point2f>& dst,
bool* inliers,
const float* param,
const float max_er,
const Point2f center) {
float ang_rad = param[0];
Point2f T(param[1], param[2]);
// set inliers
unsigned int ninliers = 0;
unsigned int n = src.size();
assert(n>0 && n==dst.size());
float cos_ang = cos(ang_rad);
float sin_ang = sin(ang_rad);
float max_sqErr = max_er*max_er; // comparing squared values
if (inliers==NULL) {
// just get the number of inliers (e.g. after QUADRATIC fit only)
for (unsigned int i=0; i<n; i++) {
float sqErr = sqErr_3Dof(src[i]-center, dst[i]-center, cos_ang, sin_ang, T);
if ( sqErr < max_sqErr)
ninliers++;
}
} else {
// get the number of inliers and set them (e.g. for RANSAC)
for (unsigned int i=0; i<n; i++) {
float sqErr = sqErr_3Dof(src[i]-center, dst[i]-center, cos_ang, sin_ang, T);
if ( sqErr < max_sqErr) {
inliers[i] = 1;
ninliers++;
} else {
inliers[i] = 0;
}
}
}
return ninliers;
}
// fits 3DOF (rotation and translation in 2D) with least squares.
float fit3DofQUADRATICold(const vector<Point2f>& src, const vector<Point2f>& dst,
float* param, const bool* inliers, const Point2f center) {
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
unsigned int n = src.size();
assert(dst.size() == n);
// count inliers
int ninliers;
if (all_inliers) {
ninliers = n;
} else {
ninliers = 0;
for (unsigned int i=0; i<n; i++){
if (inliers[i])
ninliers++;
}
}
// under-determined system
if (ninliers<2) {
// param[0] = 0.0f; // ?
// param[1] = 0.0f;
// param[2] = 0.0f;
return LARGE_NUMBER;
}
/*
* x1*cos(a)-y1*sin(a) + Tx = X1
* x1*sin(a)+y1*cos(a) + Ty = Y1
*
* approximation for small angle a (radians) sin(a)=a, cos(a)=1;
*
* x1*1 - y1*a + Tx = X1
* x1*a + y1*1 + Ty = Y1
*
* in matrix form M1*h=M2
*
* 2n x 4 4 x 1 2n x 1
*
* -y1 1 0 x1 * a = X1
* x1 0 1 y1 Tx Y1
* Ty
* 1=Z
* ----------------------------
* src1 res src2
*/
// 4 x 1
float res_ar[4]; // alpha, Tx, Ty, 1
Mat res(4, 1, CV_32F, res_ar); // 4 x 1
// 2n x 4
Mat src1(2*ninliers, 4, CV_32F); // 2n x 4
// 2n x 1
Mat src2(2*ninliers, 1, CV_32F); // 2n x 1: [X1, Y1, X2, Y2, X3, Y3]'
for (unsigned int i=0, row_cnt = 0; i<n; i++) {
// use inliers only
if (all_inliers || inliers[i]) {
float x = src[i].x - center.x;
float y = src[i].y - center.y;
// first row
// src1
float* rowPtr = src1.ptr<float>(row_cnt);
rowPtr[0] = -y;
rowPtr[1] = 1.0f;
rowPtr[2] = 0.0f;
rowPtr[3] = x;
// src2
src2.at<float> (row_cnt, 0) = dst[i].x - center.x; // src2 is 2n x 1: index rows
// second row
row_cnt++;
// src1
rowPtr = src1.ptr<float>(row_cnt);
rowPtr[0] = x;
rowPtr[1] = 0.0f;
rowPtr[2] = 1.0f;
rowPtr[3] = y;
// src2
src2.at<float> (row_cnt, 0) = dst[i].y - center.y;
}
}
cv::solve(src1, src2, res, DECOMP_SVD);
// estimators
float alpha_est;
Point2f T_est;
// original
alpha_est = res.at<float>(0, 0);
T_est = Point2f(res.at<float>(1, 0), res.at<float>(2, 0));
float Z = res.at<float>(3, 0);
if (abs(Z-1.0) > 0.1) {
//cout<<"Bad Z in fit3DOF(), Z should be close to 1.0 = "<<Z<<endl;
//return LARGE_NUMBER;
}
param[0] = alpha_est; // rad
param[1] = T_est.x;
param[2] = T_est.y;
// calculate RMSE
float RMSE = RMSE_3Dof(src, dst, param, inliers, center);
return RMSE;
} // fit3DofQUADRATICOLd()
// fits 3DOF (rotation and translation in 2D) with least squares.
float fit3DofQUADRATIC(const vector<Point2f>& src_, const vector<Point2f>& dst_,
float* param, const bool* inliers, const Point2f center) {
const bool debug = false; // print more debug info
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
assert(dst_.size() == src_.size());
int N = src_.size();
// collect inliers
vector<Point2f> src, dst;
int ninliers;
if (all_inliers) {
ninliers = N;
src = src_; // copy constructor
dst = dst_;
} else {
ninliers = 0;
for (int i=0; i<N; i++){
if (inliers[i]) {
ninliers++;
src.push_back(src_[i]);
dst.push_back(dst_[i]);
}
}
}
if (ninliers<2) {
param[0] = 0.0f; // default return when there is not enough points
param[1] = 0.0f;
param[2] = 0.0f;
return LARGE_NUMBER;
}
/* Algorithm: Least-Square Rigid Motion Using SVD by Olga Sorkine
* http://igl.ethz.ch/projects/ARAP/svd_rot.pdf
*
* Subtract centroids, calculate SVD(cov),
* R = V[1, det(VU')]'U', T = mean_q-R*mean_p
*/
// Calculate data centroids
Scalar centroid_src = mean(src);
Scalar centroid_dst = mean(dst);
Point2f center_src(centroid_src[0], centroid_src[1]);
Point2f center_dst(centroid_dst[0], centroid_dst[1]);
if (debug)
cout<<"Centers: "<<center_src<<", "<<center_dst<<endl;
// subtract centroids from data
for (int i=0; i<ninliers; i++) {
src[i] -= center_src;
dst[i] -= center_dst;
}
// compute a covariance matrix
float Cxx = 0.0, Cxy = 0.0, Cyx = 0.0, Cyy = 0.0;
for (int i=0; i<ninliers; i++) {
Cxx += src[i].x*dst[i].x;
Cxy += src[i].x*dst[i].y;
Cyx += src[i].y*dst[i].x;
Cyy += src[i].y*dst[i].y;
}
Mat Mcov = (Mat_<float>(2, 2)<<Cxx, Cxy, Cyx, Cyy);
Mcov /= (ninliers-1);
if (debug)
cout<<"Covariance-like Matrix "<<Mcov<<endl;
// SVD of covariance
cv::SVD svd;
svd = SVD(Mcov, SVD::FULL_UV);
if (debug) {
cout<<"U = "<<svd.u<<endl;
cout<<"W = "<<svd.w<<endl;
cout<<"V transposed = "<<svd.vt<<endl;
}
// rotation (V*Ut)
Mat V = svd.vt.t();
Mat Ut = svd.u.t();
float det_VUt = determinant(V*Ut);
Mat W = (Mat_<float>(2, 2)<<1.0, 0.0, 0.0, det_VUt);
float rot[4];
Mat R_est(2, 2, CV_32F, rot);
R_est = V*W*Ut;
if (debug)
cout<<"Rotation matrix: "<<R_est<<endl;
float cos_est = rot[0];
float sin_est = rot[2];
float ang = atan2(sin_est, cos_est);
// translation (mean_dst - R*mean_src)
Point2f center_srcRot = Point2f(
cos_est*center_src.x - sin_est*center_src.y,
sin_est*center_src.x + cos_est*center_src.y);
Point2f T_est = center_dst - center_srcRot;
// Final estimate msg
if (debug)
cout<<"Estimate = "<<ang*RAD2DEG<<"deg., T = "<<T_est<<endl;
param[0] = ang; // rad
param[1] = T_est.x;
param[2] = T_est.y;
// calculate RMSE
float RMSE = RMSE_3Dof(src_, dst_, param, inliers, center);
return RMSE;
} // fit3DofQUADRATIC()
// RANSAC fit in 3DOF: 1D rot and 2D translation (maximizes the number of inliers)
// NOTE: no data normalization is currently performed
float fit3DofRANSAC(const vector<Point2f>& src, const vector<Point2f>& dst,
float* best_param, bool* inliers,
const Point2f center ,
const float inlierMaxEr,
const int niter) {
const int ITERATION_TO_SETTLE = 2; // iterations to settle inliers and param
const float INLIERS_RATIO_OK = 0.95f; // stopping criterion
// size of data vector
unsigned int N = src.size();
assert(N==dst.size());
// unrealistic case
if(N<2) {
best_param[0] = 0.0f; // ?
best_param[1] = 0.0f;
best_param[2] = 0.0f;
return LARGE_NUMBER;
}
unsigned int ninliers; // current number of inliers
unsigned int best_ninliers = 0; // number of inliers
float best_rmse = LARGE_NUMBER; // error
float cur_rmse; // current distance error
float param[3]; // rad, Tx, Ty
vector <Point2f> src_2pt(2), dst_2pt(2);// min set of 2 points (1 correspondence generates 2 equations)
srand (time(NULL));
// iterations
for (int iter = 0; iter<niter; iter++) {
#ifdef DEBUG_RANSAC
cout<<"iteration "<<iter<<": ";
#endif
// 1. Select a random set of 2 points (not obligatory inliers but valid)
int i1, i2;
i1 = rand() % N; // [0, N[
i2 = i1;
while (i2==i1) {
i2 = rand() % N;
}
src_2pt[0] = src[i1]; // corresponding points
src_2pt[1] = src[i2];
dst_2pt[0] = dst[i1];
dst_2pt[1] = dst[i2];
bool two_inliers[] = {true, true};
// 2. Quadratic fit for 2 points
cur_rmse = fit3DofQUADRATIC(src_2pt, dst_2pt, param, two_inliers, center);
// 3. Recalculate to settle params and inliers using a larger set
for (int iter2=0; iter2<ITERATION_TO_SETTLE; iter2++) {
ninliers = setInliers3Dof(src, dst, inliers, param, inlierMaxEr, center); // changes inliers
cur_rmse = fit3DofQUADRATIC(src, dst, param, inliers, center); // changes cur_param
}
// potential ill-condition or large error
if (ninliers<2) {
#ifdef DEBUG_RANSAC
cout<<" !!! less than 2 inliers "<<endl;
#endif
continue;
} else {
#ifdef DEBUG_RANSAC
cout<<" "<<ninliers<<" inliers; ";
#endif
}
#ifdef DEBUG_RANSAC
cout<<"; recalculate: RMSE = "<<cur_rmse<<", "<<ninliers <<" inliers";
#endif
// 4. found a better solution?
if (ninliers > best_ninliers) {
best_ninliers = ninliers;
best_param[0] = param[0];
best_param[1] = param[1];
best_param[2] = param[2];
best_rmse = cur_rmse;
#ifdef DEBUG_RANSAC
cout<<" --- Solution improved: "<<
best_param[0]<<", "<<best_param[1]<<", "<<param[2]<<endl;
#endif
// exit condition
float inlier_ratio = (float)best_ninliers/N;
if (inlier_ratio > INLIERS_RATIO_OK) {
#ifdef DEBUG_RANSAC
cout<<"Breaking early after "<< iter+1<<
" iterations; inlier ratio = "<<inlier_ratio<<endl;
#endif
break;
}
} else {
#ifdef DEBUG_RANSAC
cout<<endl;
#endif
}
} // iterations
// 5. recreate inliers for the best parameters
ninliers = setInliers3Dof(src, dst, inliers, best_param, inlierMaxEr, center);
return best_rmse;
} // fit3DofRANSAC()
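A hypothetical call, just to illustrate the interface (the data vectors, error bound and iteration count are assumptions):
// Sketch: estimate a 2D rigid transform robustly from noisy correspondences.
std::vector<cv::Point2f> src, dst; // candidate correspondences, filled elsewhere
bool* inliers = new bool[src.size()];
float param[3]; // angle (rad), Tx, Ty
float rmse = fit3DofRANSAC(src, dst, param, inliers,
cv::Point2f(0, 0), // center of rotation
1.0f, // inlier max error in pixels
100); // RANSAC iterations
delete [] inliers;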
Let me first make sure I'm interpreting your question correctly. You have two sets of 2D points, one of which contains all "good" points corresponding to some object of interest, and one of which contains those points under an affine transformation with noisy points added. Right?
If that's correct, then there is a fairly reliable and efficient way to both reject noisy points and determine the transformation between your points of interest. The algorithm that is usually used to reject noisy points ("outliers") is known as RANSAC, and the algorithm used to determine the transformation can take several forms, but the most current state of the art is known as the five-point algorithm and can be found here -- a MATLAB implementation can be found here.
Unfortunately I don't know of a mature implementation of both of those combined; you'll probably have to do some work of your own to implement RANSAC and integrate it with the five point algorithm.
Edit:
Actually, OpenCV has an implementation that is overkill for your task (meaning it will work but will take more time than necessary) but is ready to work out of the box. The function of interest is called cv::findFundamentalMat.
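As a rough sketch of that out-of-the-box route (assuming you already have candidate correspondences pts1/pts2; findFundamentalMat runs RANSAC internally and flags the outliers):
// Sketch: robustly relate two 2D point sets with OpenCV's built-in RANSAC.
std::vector<cv::Point2f> pts1, pts2; // candidate correspondences, outliers allowed
std::vector<uchar> inlierMask;
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC,
3.0, // max distance to the model, in pixels
0.99, // confidence
inlierMask);
// inlierMask[i] == 1 marks pairs consistent with the estimated model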
I believe you are looking for something like David Lowe's SIFT (Scale-Invariant Feature Transform). Another option is SURF (SIFT is patent-protected). The OpenCV computer vision library provides a SURF implementation.
I would try to use distance geometry (http://en.wikipedia.org/wiki/Distance_geometry) for this:
Generate a scalar for each point by summing its distances to all neighbors within a certain radius. Though not perfect, this will be good discriminator for each point.
Then put all the scalars in a map that allows a point (p) to be retrieved by its scalar (s) plus/minus some delta:
M(s+delta) = p (e.g. a k-d tree) (http://en.wikipedia.org/wiki/Kd-tree)
Put all the reference set of 2D points in the map
On the other (test) set of 2D points:
foreach test scaling (esp if you have a good idea what typical scaling values are)
...scale each point by S
...recompute the scalars of the test set of points
......for each point P in test set (or perhaps a sample for faster method)
.........lookup point in reference scalar map within some delta
.........discard P if no mapping found
.........else foreach P' point found
............examine neighbors of P and see if they have corresponding scalars in the reference map within some delta (i.e reference point has neighbors with approx same value)
......... if all points tested have a mapping in the reference set, you have found a mapping of test point P onto reference point P' -> record mapping of test point to reference point
......discard scaling if no mappings recorded
Note this is trivially parallelized in several different places
This is off the top of my head, drawing from research I did years ago. It lacks fine details but the general idea is clear: find points in the noisy (test) graph whose distances to their closest neighbors are roughly the same as in the reference set. Noisy graphs will have to measure the distances with a larger allowed error than less noisy graphs.
The algorithm works perfectly for graphs with no noise.
Edit: there is a refinement of the algorithm that doesn't require looking at different scalings. When computing the scalar for each point, use a relative distance measure instead. This will be invariant to the transform.
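A small sketch of that per-point scalar (brute force; the radius is an assumption, and the "relative" variant from the edit could, say, divide each distance by the distance to the nearest neighbor):
// Sketch: distance-geometry signature per point (sum of neighbor distances).
#include <cmath>
#include <vector>
struct Pt { float x, y; };
std::vector<float> pointSignatures(const std::vector<Pt>& pts, float radius) {
std::vector<float> sig(pts.size(), 0.0f);
for (size_t i = 0; i < pts.size(); i++)
for (size_t j = 0; j < pts.size(); j++) {
if (i == j) continue;
float d = std::hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y);
if (d <= radius) sig[i] += d; // accumulate distances within the radius
}
return sig;
}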
From C++, you could use ITK to do the image registration. It includes many registration functions that will work in the presence of noise.
The KLT (Kanade-Lucas-Tomasi) feature tracker performs an affine consistency check on tracked features. The affine consistency check takes translation, rotation and scaling into account. I don't know if it is of help to you, because you can't use the function (which calculates the affine transformation of a rectangular region) directly. But maybe you can learn from the documentation and source code how the affine transformation can be calculated and adapt it to your problem (clouds of points instead of a rectangular region).
You want the Denton-Beveridge point matching algorithm. Source code is at the bottom of the page linked below, and there is also a paper that explains the algorithm and why RANSAC is a bad choice for this problem.
http://jasondenton.me/pntmatch.html