Adjust a detected 2D map to a reference 2D map - C++

I have a map with some reference positions that correspond to the center (small cross) of some objects like this:
I take pictures to find my objects, but the pictures contain some noise so I can't always find all of the objects; it can look something like this:
From the few found positions I need to know where in the picture the other, not-found objects should be. I've been reading about this for the last couple of days and experimenting, but I can't find a proper way of doing it. Some examples start by calculating the centers of mass and translating them together, then rotating; other examples use least-squares minimization and start with a rotation. I can't use OpenCV or any other APIs, just plain C++. I can use the Eigen library if that helps. Can anyone give me some pointers on this?
EDIT:
I've solved the correspondence between points: the picture is never very different from the reference, so for each found position I can search for its corresponding reference. In brief, I have one 2D matrix with the reference points and another 2D matrix with the found points. In the matrix of found points, the not-found points are saved as NaN just to keep the same matrix size; the NaN points are not used in the calculations.

Since you have already matched the points to one another, finding the transform is straightforward:
Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
// get translation
auto refCom = centerOfMass(refCloud);
auto refAtOrigin = refCloud.colwise() - refCom;
auto targetCom = centerOfMass(targetCloud);
auto targetAtOrigin = targetCloud.colwise() - targetCom;
// get scale
auto scale = targetAtOrigin.colwise().norm().sum() / refAtOrigin.colwise().norm().sum(); // ratio of summed point distances from the centroid (colwise: one norm per point)
// get rotation
auto covMat = refAtOrigin * targetAtOrigin.transpose();
auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
auto rot = svd.matrixV() * svd.matrixU().transpose();
// combine the transformations
Eigen::Affine2d trans = Eigen::Affine2d::Identity();
trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
return trans;
}
refCloud is your reference point set and targetCloud is the set of points you have found in your image. It is important that the clouds match index-wise, so refCloud[n] must be the point corresponding to targetCloud[n]. This means that you have to remove all NaNs from your matrix and cherry-pick the correspondences in your reference point set.
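For completeness, here is a minimal sketch of that cherry-picking step, assuming the found points are stored column-wise with NaN markers as described in the question (the function and variable names are just placeholders):
#include <Eigen/Dense>
#include <utility>
// Build matched clouds by dropping every column where the found point is NaN.
// Assumes both matrices have the same number of columns and that column n in
// each refers to the same physical object.
std::pair<Eigen::Matrix2Xd, Eigen::Matrix2Xd>
matchedPoints(Eigen::Matrix2Xd const& reference, Eigen::Matrix2Xd const& found)
{
    Eigen::Matrix2Xd ref(2, found.cols()), target(2, found.cols());
    Eigen::Index kept = 0;
    for (Eigen::Index c = 0; c < found.cols(); ++c)
    {
        if (found.col(c).hasNaN())
            continue; // object not detected in the picture, skip it
        ref.col(kept) = reference.col(c);
        target.col(kept) = found.col(c);
        ++kept;
    }
    ref.conservativeResize(2, kept);
    target.conservativeResize(2, kept);
    return {ref, target};
}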
Here is a full example. I'm using OpenCV to draw the stuff:
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
using Point = Eigen::Vector2d;
template <typename TMatrix>
Point centerOfMass(TMatrix const& points)
{
return points.rowwise().sum() / points.cols();
}
Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
// get translation
auto refCom = centerOfMass(refCloud);
auto refAtOrigin = refCloud.colwise() - refCom;
auto targetCom = centerOfMass(targetCloud);
auto targetAtOrigin = targetCloud.colwise() - targetCom;
// get scale
auto scale = targetAtOrigin.colwise().norm().sum() / refAtOrigin.colwise().norm().sum(); // ratio of summed point distances from the centroid (colwise: one norm per point)
// get rotation
auto covMat = refAtOrigin * targetAtOrigin.transpose();
auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
auto rot = svd.matrixV() * svd.matrixU().transpose();
// combine the transformations
Eigen::Affine2d trans = Eigen::Affine2d::Identity();
trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
return trans;
}
void drawCloud(cv::Mat& img, Eigen::Matrix2Xd const& cloud, Point const& origin, Point const& scale, cv::Scalar const& color, int thickness = cv::FILLED)
{
for (int c = 0; c < cloud.cols(); c++)
{
auto p = origin + cloud.col(c).cwiseProduct(scale);
cv::circle(img, {int(p.x()), int(p.y())}, 5, color, thickness, cv::LINE_AA);
}
}
int main()
{
// generate sample reference
std::vector<Point> points = {{4, 9}, {4, 4}, {6, 9}, {6, 4}, {8, 9}, {8, 4}, {10, 9}, {10, 4}, {12, 9}, {12, 4}};
Eigen::Matrix2Xd fullRefCloud(2, points.size());
for (int i = 0; i < points.size(); i++)
fullRefCloud.col(i) = points[i];
// generate sample target
Eigen::Matrix2Xd refCloud = fullRefCloud.leftCols(fullRefCloud.cols() * 0.6);
Eigen::Affine2d refTransformation = Eigen::Affine2d::Identity();
refTransformation.translate(Point(8, -4)).rotate(4.3).translate(-centerOfMass(refCloud)).scale(1.5);
Eigen::Matrix2Xd targetCloud = refTransformation * refCloud;
// find the transformation
auto transform = findAffine(refCloud, targetCloud);
std::cout << "Original: \n" << refTransformation.matrix() << "\n\nComputed: \n" << transform.matrix() << "\n";
// apply the computed transformation
Eigen::Matrix2Xd queryCloud = fullRefCloud.rightCols(fullRefCloud.cols() - refCloud.cols());
queryCloud = transform * queryCloud;
// draw it
Point scale = {15, 15}, origin = {100, 300};
cv::Mat img(600, 600, CV_8UC3);
cv::line(img, {0, int(origin.y())}, {800, int(origin.y())}, {});
cv::line(img, {int(origin.x()), 0}, {int(origin.x()), 800}, {});
drawCloud(img, refCloud, origin, scale, {0, 255, 0});
drawCloud(img, fullRefCloud, origin, scale, {255, 0, 0}, 1);
drawCloud(img, targetCloud, origin, scale, {0, 0, 255});
drawCloud(img, queryCloud, origin, scale, {255, 0, 255}, 1);
cv::flip(img, img, 0);
cv::imshow("img", img);
cv::waitKey();
return 0;
}

I managed to make it work with the code from here:
https://github.com/oleg-alexandrov/projects/blob/master/eigen/Kabsch.cpp
I'm calling the Find3DAffineTransform function and passing it my 2D maps; since this function expects 3D maps, I've set all z coordinates to 0 and it works. If I have some time I'll try to adapt it to 2D.
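For reference, a minimal sketch of that zero-padding idea, assuming the signature used in that file (two Eigen::Matrix3Xd point sets in, an Eigen::Affine3d out); the helper name is mine:
#include <Eigen/Dense>
// Lift a 2D point set to 3D by setting z = 0 so it can be fed to the
// Find3DAffineTransform function from the linked Kabsch.cpp.
Eigen::Matrix3Xd liftTo3d(Eigen::Matrix2Xd const& points2d)
{
    Eigen::Matrix3Xd points3d = Eigen::Matrix3Xd::Zero(3, points2d.cols());
    points3d.topRows<2>() = points2d; // x and y are copied, z stays 0
    return points3d;
}
// Hedged usage: auto t = Find3DAffineTransform(liftTo3d(refCloud), liftTo3d(foundCloud));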
Meanwhile a fellow programmer (Regis :-) also found this, which should work:
https://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60
It's the umeyama() function, which returns the transformation between two point sets. It's part of the Eigen library. I didn't have time to test it.
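In case it helps, a minimal sketch of how the umeyama() call could look for the matched 2xN clouds described above (hedged: for 2D input it returns a 3x3 homogeneous matrix; the function and variable names here are placeholders):
#include <Eigen/Dense>
#include <Eigen/Geometry>
// refCloud / targetCloud: the matched 2xN clouds (reference and found points).
// missingRef: reference positions of the objects that were not found.
Eigen::Matrix2Xd predictMissing(Eigen::Matrix2Xd const& refCloud,
                                Eigen::Matrix2Xd const& targetCloud,
                                Eigen::Matrix2Xd const& missingRef)
{
    // 3x3 homogeneous transform mapping refCloud onto targetCloud;
    // the third argument enables scale estimation.
    Eigen::Affine2d t;
    t.matrix() = Eigen::umeyama(refCloud, targetCloud, true);
    // Map the unseen reference points into image coordinates (same pattern as
    // refTransformation * refCloud in the example above).
    return t * missingRef;
}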

What define's Boost's svg_mapper scaling and translation?

This code:
#include <fstream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
namespace bg = boost::geometry;
int main()
{
std::ofstream svg ( "test.svg" );
boost::geometry::svg_mapper<bg::model::d2::point_xy<double>, true, double> mapper ( svg, 6000, 3000 );
bg::model::polygon<bg::model::d2::point_xy<double>> square{
{{0, 0}, {0, 1000}, {1000, 1000}, {1000, 0}, {0, 0}}};
const std::string style{"fill-opacity:1.0;fill:rgb(128,128,128);stroke:rgb(0,0,0);stroke-width:5"};
mapper.add ( square );
mapper.map ( square, style, 1.0 );
}
Produces this svg:
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="100%" height="100%" version="1.1"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink">
<g fill-rule="evenodd"><path d="M 1500,3000 L 1500,0 L 4500,0 L 4500,3000 L 1500,3000 z " style="fill-opacity:1.0;fill:rgb(128,128,128);stroke:rgb(0,0,0);stroke-width:5"/></g>
</svg>
The following conversions happen from the input polygon to the mapped svg geometries:
(0, 0) -> (1500,3000)
(0, 1000) -> (1500,0)
(1000, 1000) -> (4500,0)
(1000, 0) -> (4500,3000)
(0, 0) -> (1500,3000)
Staring at it a bit you see there is some transformation applied, something like this:
+1500 in x
+3000 in y
3x scale in x
-3x scale in y
My question is - What drives that transformation and can I prevent it? And if I can't prevent it, can I retrieve it or calculate it myself?
The reason is that I'm producing many complex SVGs and would like them all to be in the same frame. So if there is a circle at pixel (10, 10) in one, I would like all the images to be the same size with the circle in the exact same location. I tried to accomplish this with viewBox, but the scaling and translation were too hard to predict to keep the images consistent.
svg_mapper calculates a bounding box from all add-ed geometries.
Then, a map_transformer is used to scale down to the desired width/height.
Contrary to what you might expect, add doesn't do anything besides expanding the bounding box. Likewise, after the first map call, no other add has any effect on the bounding-box used for the transformations.
In other words, you can use some kind of fixed bounding box, add only that, and then map your geometries into that "canvas":
Demo
#include <fstream>
#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
namespace bg = boost::geometry;
using V = /*long*/ double;
using P = bg::model::d2::point_xy<V>;
using B = bg::model::box<P>;
int main()
{
auto verify = [](auto& g) {
if (std::string r; !bg::is_valid(g, r)) {
std::cout << "Correcting " << r << "\n";
bg::correct(g);
}
};
V side = 1000;
bg::model::polygon<P> square{
{{0, 0}, {0, side}, {side, side}, {side, 0}, {0, 0}},
};
verify(square);
std::array steps {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1,};
for (unsigned i = 0; i < steps.size(); ++i) {
{
std::ofstream svg("test" + std::to_string(i) + ".svg");
bg::svg_mapper<P, true, V> mapper(svg, 400, 400);
auto clone = square;
V ofs = (steps[i] / 5. - 1.0) * side;
for (auto& p : boost::make_iterator_range(bg::points_begin(clone), bg::points_end(clone)))
bg::add_point(p, P{ofs, ofs});
std::cout << i << ": " << bg::wkt(square) << " " << bg::wkt(clone) << "\n";
mapper.add(B{{-side, -side}, {2 * side, 2 * side}});
//mapper.add(square); // no effect, already within bounding box
//mapper.add(clone); // no effect, already within bounding box
mapper.map(square, "fill-opacity:0.1;fill:rgb(128,0,0)", 1.0);
mapper.map(clone, "fill-opacity:0.1;fill:rgb(0,0,128)", 1.0);
}
}
}
This creates a series of SVGs that I can play as a poor man's animation, showing that the position of the square stays constant:

C++: Get list of simple polygons from polygon with holes

I'm struggling with Boost::Polygon - apparently it can do everything except the thing I want. I have a few boundaries describing a set of polygons and their holes (in 2D space). In general we can even have a hole in a hole (a smaller polygon in the hole of a bigger polygon), or many holes in one polygon. If necessary, I can check which boundary describes a hole and which describes a polygon. Sometimes the boundaries are separate (not containing each other), which means we have many polygons. What I want is a method that gives me a set of simple polygons, without any holes, which together form the input 'holey' polygon.
This is possible with Boost Polygon. You need polygon_set_data::get(), which does the hole fracturing for you when you convert from a polygon concept that supports holes to one that does not. See http://www.boost.org/doc/libs/1_65_0/libs/polygon/doc/gtl_polygon_set_concept.htm for more details.
The following is an example, where we represent a polygon with a hole first, then convert it to a simple polygon with only one ring:
#include <boost/polygon/polygon.hpp>
#include <iostream>
#include <vector>
namespace bp = boost::polygon;
int main(void)
{
using SimplePolygon = bp::polygon_data<int>;
using ComplexPolygon = bp::polygon_with_holes_data<int>;
using Point = bp::point_data<int>;
using PolygonSet = bp::polygon_set_data<int>;
using SimplePolygons = std::vector<bp::polygon_data<int>>;
using namespace boost::polygon::operators;
std::vector<Point> points{{5, 0}, {10, 5}, {5, 10}, {0, 5}};
ComplexPolygon p;
bp::set_points(p, points.begin(), points.end());
{
std::vector<Point> innerPoints{{4, 4}, {6, 4}, {6, 6}, {4, 6}};
std::vector<SimplePolygon> inner(1, SimplePolygon{});
bp::set_points(inner.front(), innerPoints.begin(), innerPoints.end());
bp::set_holes(p, inner.begin(), inner.end());
}
PolygonSet complexPolygons;
complexPolygons += p;
SimplePolygons simplePolygons;
complexPolygons.get<SimplePolygons>(simplePolygons);
std::cout << "Fractured:\n";
for (const auto& polygon : simplePolygons)
{
for (const Point& p : polygon)
{
std::cout << '\t' << std::to_string(p.x()) << ", " << std::to_string(p.y())
<< '\n';
}
}
return 0;
}

opencv: how to draw arrows on orientation image

I'm trying to perform orientation estimation on an input image in OpenCV. I used the Sobel function to get the gradients of the image, and another function called calculateOrientations, which I found on the internet, to calculate the orientations.
The code is as follows:
void computeGradient(cv::Mat inputImg)
{
// Gradient X
cv::Sobel(inputImg, grad_x, CV_16S, 1, 0, 5, 1, 0, cv::BORDER_DEFAULT);
cv::convertScaleAbs(grad_x, abs_grad_x);
// Gradient Y
cv::Sobel(inputImg, grad_y, CV_16S, 0, 1, 5, 1, 0, cv::BORDER_DEFAULT);
cv::convertScaleAbs(grad_y, abs_grad_y);
// convert from CV_8U to CV_32F
abs_grad_x.convertTo(abs_grad_x2, CV_32F, 1. / 255);
abs_grad_y.convertTo(abs_grad_y2, CV_32F, 1. / 255);
// calculate orientations
calculateOrientations(abs_grad_x2, abs_grad_y2);
}
void calculateOrientations(cv::Mat gradientX, cv::Mat gradientY)
{
// Create container element
orientation = cv::Mat(gradientX.rows, gradientX.cols, CV_32F);
// Calculate orientations of gradients --> in degrees
// Loop over all matrix values and calculate the accompagnied orientation
for (int i = 0; i < gradientX.rows; i++){
for (int j = 0; j < gradientX.cols; j++){
// Retrieve a single value
float valueX = gradientX.at<float>(i, j);
float valueY = gradientY.at<float>(i, j);
// Calculate the corresponding single direction, done by applying the arctangens function
float result = cv::fastAtan2(valueX, valueY);
// Store in orientation matrix element
orientation.at<float>(i, j) = result;
}
}
}
Now, I need to check whether the obtained orientation is correct or not. For that I want to draw arrows for each 5x5 block of the orientation matrix. Could someone advise me on how to draw the arrows? Thank you.
The simplest way in OpenCV to indicate direction is to draw a little circle or square at the start or end point of the line. There is no function for arrows, AFAIK. If you need an arrow you have to write it yourself (it is simple, but takes time too). Once I did it this way (not OpenCV, but I hope you can convert it):
double arrow_pos = 0.5; // 0.5 = at the center of line
double len = sqrt((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1));
double co = (x2-x1)/len, si = (y2-y1)/len; // line coordinates are (x1,y1)-(x2,y2)
double const l = 15, sz = linewidth*2; // l - arrow length, linewidth - width of the line being drawn
double x0 = x2 - len*arrow_pos*co;
double y0 = y2 - len*arrow_pos*si;
double x = x2 - (l+len*arrow_pos)*co;
double y = y2 - (l+len*arrow_pos)*si;
TPoint tp[4] = {TPoint(x+sz*si, y-sz*co), TPoint(x0, y0), TPoint(x-sz*si, y+sz*co), TPoint(x+l*0.3*co, y+0.3*l*si)};
Polygon(tp, 3);
Canvas->Polyline(tp, 2);
UPDATE: the arrowedLine(...) function has been available since OpenCV 2.4.10 and 3.0.
The easiest way to draw an arrow in OpenCV is:
arrowedLine(img, pointStart, pointFinish, colorScalar, thickness, line_type, shift, tipLength);
thickness, line_type, shift, and tipLength already have default values, so they can be omitted
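To tie this back to the question, here is a hedged sketch of the per-5x5-block visualisation using arrowedLine, assuming the orientation matrix holds angles in degrees (as fastAtan2 produces); the block size and arrow length are arbitrary choices:
#include <opencv2/opencv.hpp>
#include <cmath>
// Draw one arrow per block of the orientation matrix onto a canvas of the same size.
void drawOrientationArrows(cv::Mat& canvas, const cv::Mat& orientation, int block = 5, float len = 4.f)
{
    for (int i = 0; i + block <= orientation.rows; i += block)
    {
        for (int j = 0; j + block <= orientation.cols; j += block)
        {
            // use the angle at the block centre (one could also average over the block)
            float deg = orientation.at<float>(i + block / 2, j + block / 2);
            float rad = deg * static_cast<float>(CV_PI) / 180.f;
            cv::Point centre(j + block / 2, i + block / 2);
            cv::Point tip(cvRound(centre.x + len * std::cos(rad)),
                          cvRound(centre.y - len * std::sin(rad))); // image y axis points down
            cv::arrowedLine(canvas, centre, tip, cv::Scalar(0, 0, 255), 1, cv::LINE_AA, 0, 0.3);
        }
    }
}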

Oriented Bounding Box is Misshapen and the Wrong Size in OpenGL

I'm writing a program in OpenGL to load a mesh and draw an oriented bounding box around said mesh. The mesh loads correctly but when I draw the bounding box the box is the wrong shape and far too small.
The process I used to calculate this box was to use principal component analysis to find a covariance matrix. I then got the eigenvectors of that matrix and used those as a local co-ordinate system to find the 8 vertices of the cube. Then I calculated a transformation to move the cube from the local co-ordinate system to the global co-ordinate system.
The code for calculating the covariance is here:
std::array<std::array<double, 3>, 3> covarianceCalc2()
{
std::array<std::array<double, 3>, 3> sum = {{{0, 0, 0}, {0, 0, 0}, {0, 0, 0,}}};
std::array<double, 3> tempVec;
double mean = 0;
for(int i = 0; i < meshVertices.size(); i++)
{
mean += meshVertices[i].x;
mean += meshVertices[i].y;
mean += meshVertices[i].z;
}
mean = mean/(meshVertices.size() * 3);
for(int i = 0; i < meshVertices.size(); i++)
{
//mean = (meshVertices[i].x + meshVertices[i].y + meshVertices[i].z)/3;
tempVec[0] = meshVertices[i].x - mean;
tempVec[1] = meshVertices[i].y - mean;
tempVec[2] = meshVertices[i].z - mean;
sum = matrixAdd(sum, vectorTranposeMult(tempVec));
}
sum = matrixMultNum(sum,(double) 1/(meshVertices.size()));
return sum;
}
The code for calculating the eigenvectors is here:
void Compute_EigenV(std::array<std::array<double, 3>, 3> covariance, double eigenValues[3], double eigenVectors_1[3], double eigenVectors_2[3], double eigenVectors_3[3])
{
printf("Matrix Stuff\n");
MatrixXd m(3, 3);
m << covariance[0][0], covariance[0][1], covariance[0][2],
covariance[1][0], covariance[1][1], covariance[1][2],
covariance[2][0], covariance[2][1], covariance[2][2];
// volving SVD
printf("EigenSolver\n");
EigenSolver<MatrixXd> solver(m);
MatrixXd all_eigenVectors = solver.eigenvectors().real();
MatrixXd all_eigenValues = solver.eigenvalues().real();
// find the max index
printf("Find Max Index\n");
int INDEX[3];
double max;
max=all_eigenValues(0,0);
int index=0;
for (int i=1;i<3;i++){
if (max<all_eigenValues(i,0)){
max=all_eigenValues(i,0);
index=i;
}
}
INDEX[0]=index;
// find the min index
printf("Find Min Index\n");
double min;
min=all_eigenValues(0,0);
index=0;
for (int i=1;i<3;i++){
if (min>all_eigenValues(i,0)){
min=all_eigenValues(i,0);
index=i;
}
}
INDEX[1]=3-index-INDEX[0];
INDEX[2]=index;
// give eigenvalues and eigenvectors to the matrix
printf("Give values and vector to matrix\n");
eigenValues[0]=all_eigenValues(INDEX[0],0);
printf("1");
eigenValues[1]=all_eigenValues(INDEX[1],0);
printf("1\n");
eigenValues[2]=all_eigenValues(INDEX[2],0);
printf("Vector 1\n");
VectorXd featureVector_1 = all_eigenVectors.col(INDEX[0]);
eigenVectors_1[0]=featureVector_1(0);
eigenVectors_1[1]=featureVector_1(1);
eigenVectors_1[2]=featureVector_1(2);
printf("Vector 2\n");
VectorXd featureVector_2 = all_eigenVectors.col(INDEX[1]);
eigenVectors_2[0]=featureVector_2(0);
eigenVectors_2[1]=featureVector_2(1);
eigenVectors_2[2]=featureVector_2(2);
printf("Vector 3\n");
VectorXd featureVector_3 = all_eigenVectors.col(INDEX[2]);
eigenVectors_3[0]=featureVector_3(0);
eigenVectors_3[1]=featureVector_3(1);
eigenVectors_3[2]=featureVector_3(2);
}
The code that finds the global co-ordinates is this:
std::array<double, 3> localToGlobal(std::array<double, 3> vec, double eigenVector1[3], double eigenVector2[3], double eigenVector3[3], double mean)
{
std::array<double, 3> tempVec;
std::array<std::array<double, 3>, 3> eigenArray;
eigenArray[0][0] = eigenVector1[0]; eigenArray[0][1] = eigenVector2[0]; eigenArray[0][2] = eigenVector3[0];
eigenArray[1][0] = eigenVector1[1]; eigenArray[1][1] = eigenVector2[1]; eigenArray[1][2] = eigenVector3[1];
eigenArray[2][0] = eigenVector1[2]; eigenArray[2][1] = eigenVector2[2]; eigenArray[2][2] = eigenVector3[2];
tempVec = matrixVectorMult(eigenArray, vec);
tempVec[0] += mean;
tempVec[1] += mean;
tempVec[2] += mean;
return tempVec;
}
The code that calls all of these and draws the box is:
void obbBoundingBox()
{
double eigenValues[3] = {0, 0, 0};
double eigenVectors_1[3] = {0, 0, 0}, eigenVectors_2[3] = {0, 0, 0}, eigenVectors_3[3] = {0, 0, 0};
Compute_EigenV(covarianceCalc2(), eigenValues, eigenVectors_1, eigenVectors_2, eigenVectors_3);
std::array<double, 3> point1 = {findVectorMax(eigenVectors_1), findVectorMax(eigenVectors_2), findVectorMax(eigenVectors_3)};
std::array<double, 3> point2 = {findVectorMax(eigenVectors_1), findVectorMax(eigenVectors_2), findVectorMin(eigenVectors_3)};
std::array<double, 3> point3 = {findVectorMax(eigenVectors_1), findVectorMin(eigenVectors_2), findVectorMin(eigenVectors_3)};
std::array<double, 3> point4 = {findVectorMax(eigenVectors_1), findVectorMin(eigenVectors_2), findVectorMin(eigenVectors_3)};
std::array<double, 3> point5 = {findVectorMin(eigenVectors_1), findVectorMax(eigenVectors_2), findVectorMax(eigenVectors_3)};
std::array<double, 3> point6 = {findVectorMin(eigenVectors_1), findVectorMax(eigenVectors_2), findVectorMin(eigenVectors_3)};
std::array<double, 3> point7 = {findVectorMin(eigenVectors_1), findVectorMin(eigenVectors_2), findVectorMax(eigenVectors_3)};
std::array<double, 3> point8 = {findVectorMin(eigenVectors_1), findVectorMin(eigenVectors_2), findVectorMin(eigenVectors_3)};
double mean = 0;
for(int i = 0; i < meshVertices.size(); i++)
{
mean += meshVertices[i].x;
mean += meshVertices[i].y;
mean += meshVertices[i].z;
}
mean = mean/(meshVertices.size() * 3);
point1 = localToGlobal(point1, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point2 = localToGlobal(point2, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point3 = localToGlobal(point3, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point4 = localToGlobal(point4, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point5 = localToGlobal(point5, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point6 = localToGlobal(point6, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point7 = localToGlobal(point7, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
point8 = localToGlobal(point8, eigenVectors_1, eigenVectors_2, eigenVectors_3, mean);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
glBegin(GL_QUADS);
//Front Face
glVertex3f(point1[0], point1[1], point1[2]);
glVertex3f(point3[0], point3[1], point3[2]);
glVertex3f(point7[0], point7[1], point7[2]);
glVertex3f(point5[0], point5[1], point5[2]);
glEnd();
glBegin(GL_QUADS);
//Left Face
glVertex3f(point5[0], point5[1], point5[2]);
glVertex3f(point7[0], point7[1], point7[2]);
glVertex3f(point8[0], point8[1], point8[2]);
glVertex3f(point6[0], point6[1], point6[2]);
glEnd();
glBegin(GL_QUADS);
//Back Face
glVertex3f(point6[0], point6[1], point6[2]);
glVertex3f(point8[0], point8[1], point8[2]);
glVertex3f(point4[0], point4[1], point4[2]);
glVertex3f(point2[0], point2[1], point2[2]);
glEnd();
glBegin(GL_QUADS);
//Right Face
glVertex3f(point2[0], point2[1], point2[2]);
glVertex3f(point4[0], point4[1], point4[2]);
glVertex3f(point3[0], point3[1], point3[2]);
glVertex3f(point1[0], point1[1], point1[2]);
glEnd();
glBegin(GL_QUADS);
//Top Face
glVertex3f(point2[0], point2[1], point2[2]);
glVertex3f(point1[0], point1[1], point1[2]);
glVertex3f(point5[0], point5[1], point5[2]);
glVertex3f(point6[0], point6[1], point6[2]);
glEnd();
glBegin(GL_QUADS);
//Bottom Face
glVertex3f(point4[0], point4[1], point4[2]);
glVertex3f(point3[0], point3[1], point3[2]);
glVertex3f(point7[0], point7[1], point7[2]);
glVertex3f(point8[0], point8[1], point8[2]);
glEnd();
}
The code looks right (I haven't checked all the helper functions like matrixAdd and matrixVectorMult, but I assume you have tested them).
Problem: you have a small misunderstanding of what you are doing there.
So, to help you a bit, but not code your coursework for you (as that would get us both into trouble), I decided to make a small tutorial in Matlab (which I know you have access to) of what you want to do and why. Some functions are missing, but you should be able to understand what's going on:
clear;clc;
%% your "bunny"
vtx= [ 1 0
1 1
2 2
3 3
1 3];
% Lets draw it
hold on
plot(vtx(:,1),vtx(:,2),'.')
% lets draw XY axis also
plot([0 4],[0 0],'k')
plot([0 0],[0 4],'k')
axis([-1 5 -1 5])
axis square
%% Draw abb
maxX=max(vtx(:,1));
maxY=max(vtx(:,2));
minX=min(vtx(:,1));
minY=min(vtx(:,2));
% Missing: Create a square and draw it
cub=createcube(maxX,maxY,minX,minY)
drawabb(cub);
%% Create obb
C=cov(vtx);
vtxmean=mean(vtx);
[eVect,~]=eig(C);
% Draw new local coord system
plot([vtxmean(1) vtxmean(1)+eVect(1,1)],[vtxmean(2) vtxmean(2)+eVect(1,2)],'k')
plot([vtxmean(1) vtxmean(1)+eVect(2,1)],[vtxmean(2) vtxmean(2)+eVect(2,2)],'k')
Now you can see that if we just take the max and min of the eigenvectors, it doesn't make much sense. That's NOT the OBB. So what are we supposed to do? Well, we can create an axis-aligned box again, but aligned to the NEW axes, not the XY axes!
What would we need for that? Well, we need to know the values of our points in the new coordinate axes, don't we?
Localvtx=fromGlobalToLocal(vtx,eVect,vtxmean);
% get the max and min of the points IN THE NEW COORD SYSTEM!
maxX=max(Localvtx(:,1));
maxY=max(Localvtx(:,2));
minX=min(Localvtx(:,1));
minY=min(Localvtx(:,2));
Fantastic!
Now we can create a square in this coordinate system and, using fromLocalToGlobal, draw it in XY!
obbcub=createcube(maxX,maxY,minX,minY);
obbcubXY=fromLocalToGlobal(obbcub,eVect,vtxmean);
drawcube(obbcubXY);
Logical question: WHY ARE WE DOING ALL THIS!?!?
Well, it's an interesting question indeed. Do you play video games? Have you ever taken "an arrow in the knee"? How does a computer know whether you have shot the guy in the head or in the leg with your sniper rifle, when the guy was jumping and crouching and lying down all the time?
That's where an oriented bounding box comes in! If you know the bounding box of the leg, or the head, independently of the geometrical position of the model, you can compute whether the shot went inside that box or not. (Don't take everything literally; this is a huge field and there are infinite ways of doing things like this.)
See example:
Sidenote: don't use my words, images, or code in your report, as that will be considered cheating! (Just in case.)
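For readers who want the same recipe back in C++, here is a minimal, hedged Eigen sketch of the 2D case (points stored as columns; note the per-axis mean): transform the points into the eigenvector frame, take the axis-aligned min/max there, and map the four corners back to the global frame. The function name is a placeholder, not part of the original answer:
#include <Eigen/Dense>
#include <array>
std::array<Eigen::Vector2d, 4> orientedBox2d(Eigen::Matrix2Xd const& pts)
{
    Eigen::Vector2d mean = pts.rowwise().mean();                 // per-axis mean
    Eigen::Matrix2Xd centred = pts.colwise() - mean;
    Eigen::Matrix2d cov = centred * centred.transpose() / double(pts.cols());
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> solver(cov);
    Eigen::Matrix2d basis = solver.eigenvectors();               // columns are the local axes
    Eigen::Matrix2Xd local = basis.transpose() * centred;        // fromGlobalToLocal
    Eigen::Vector2d lo = local.rowwise().minCoeff();
    Eigen::Vector2d hi = local.rowwise().maxCoeff();
    std::array<Eigen::Vector2d, 4> corners = {
        Eigen::Vector2d(lo.x(), lo.y()), Eigen::Vector2d(hi.x(), lo.y()),
        Eigen::Vector2d(hi.x(), hi.y()), Eigen::Vector2d(lo.x(), hi.y())};
    for (auto& c : corners)
        c = basis * c + mean;                                    // fromLocalToGlobal
    return corners;
}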

Drawing Polygons in OpenCV?

What am I doing wrong here?
vector <vector<Point> > contourElement;
for (int counter = 0; counter < contours -> size (); counter ++)
{
contourElement.push_back (contours -> at (counter));
const Point *elementPoints [1] = {contourElement.at (0)};
int numberOfPoints [] = {contourElement.at (0).size ()};
fillPoly (contourMask, elementPoints, numberOfPoints, 1, Scalar (0, 0, 0), 8);
I keep getting an error on the const Point part. The compiler says
error: cannot convert 'std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > >' to 'const cv::Point*' in initialization
What am I doing wrong? (PS: Obviously ignore the missing bracket at the end of the for loop due to this being only part of my code)
Just for the record (and because the OpenCV documentation is very sparse here), a more reduced snippet using the C++ API:
std::vector<cv::Point> fillContSingle;
[...]
//add all points of the contour to the vector
fillContSingle.push_back(cv::Point(x_coord,y_coord));
[...]
std::vector<std::vector<cv::Point> > fillContAll;
//fill the single contour
//(one could add multiple other similar contours to the vector)
fillContAll.push_back(fillContSingle);
cv::fillPoly( image, fillContAll, cv::Scalar(128));
Let's analyse the offending line:
const Point *elementPoints [1] = { contourElement.at(0) };
You declared contourElement as vector <vector<Point> >, which means that contourElement.at(0) returns a vector<Point> and not a const cv::Point*. So that's the first error.
In the end, you need to do something like:
vector<Point> tmp = contourElement.at(0);
const Point* elementPoints[1] = { &tmp[0] };
int numberOfPoints = (int)tmp.size();
Later, call it as:
fillPoly (contourMask, elementPoints, &numberOfPoints, 1, Scalar (0, 0, 0), 8);
contourElement is a vector of vector<Point>, not of Point :)
so instead of:
const Point *elementPoints
put
const vector<Point> *elementPoints
Some people may arrive here due to an apparent bug in samples/cpp/create_mask.cpp from OpenCV. Considering the explanation above, I edited the "if (event == EVENT_RBUTTONUP)" branch to:
...
mask = Mat::zeros(src.size(), CV_8UC1);
vector<Point> tmp = pts;
const Point* elementPoints[1] = { &tmp[0] };
int npts = (int) pts.size();
cout << "elementsPoints=" << elementPoints << endl;
fillPoly(mask, elementPoints, &npts, 1, Scalar(255, 255, 255), 8);
bitwise_and(src, src, final, mask);
...
Hope it helps someone.