Multiple convex shape corner connection - c++

I have an immutable array structure holding convex shapes as in the image above (they may vary in size and count, but they are always convex and never overlap). What I want to do is connect the corners between them that can be connected without overlapping any edge, as in the image below, where the blue lines represent the connections.
The data I have available are data structures holding the corner positions of the convex shapes, represented as a Vector2 structure similar to the following:
class Vector2
{
public:
float x, y;
};
The convex shape structure looks something like this:
class ConvexShape
{
public:
std::vector<Vector2> edges;
};
What I want to return from the function is an std::vector of a structure similar to the following:
class LinkedVector2 : public Vector2
{
public:
std::vector<LinkedVector2*> links;
};
So each linked vector is supposed to hold a pointer to every other linked vector it is connected to.
The final function will thereby have this format:
std::vector<LinkedVector2>* generateLinks(const std::vector<ConvexShape>& shapes)
{
    std::vector<LinkedVector2>* links{ new std::vector<LinkedVector2>{} };
    // Create a linked vector for each shape's corner.
    // Calculate links.
    return links;
}
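A caveat for the first step in this skeleton: if the LinkedVector2 objects live in a growing std::vector, every push_back can reallocate and invalidate the pointers already stored in links. A minimal sketch of the corner-creation step that avoids this by reserving the full corner count up front (only names from the question are used; storing indices instead of raw pointers would be another way out):
std::size_t cornerCount = 0;
for (const ConvexShape& shape : shapes)
    cornerCount += shape.edges.size();

std::vector<LinkedVector2>* links{ new std::vector<LinkedVector2>{} };
links->reserve(cornerCount); // no reallocation afterwards, so pointers into it stay valid
for (const ConvexShape& shape : shapes)
{
    for (const Vector2& corner : shape.edges)
    {
        LinkedVector2 lv;
        lv.x = corner.x;
        lv.y = corner.y;
        links->push_back(lv);
    }
}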
I then want to save all of these links for use in a later function, which connects two new points to the already linked shapes. This function should not alter the existing connections, and should look something like this:
// Argument 'links' will contain the previously generated links.
std::vector<LinkedVector2>* connectPoints(const Vector2& a, const Vector2& b, const std::vector<LinkedVector2>& links)
{
    std::vector<LinkedVector2>* connections{ new std::vector<LinkedVector2>{} };
    // Add old links to 'connections'.
    // Connect the new links to the old.
    // Add the new links to 'connections'.
    return connections;
}
Could someone help me with how this could be done?

This is a description of an algorithm, with an example implementation to get you going.
Step 1
Preprocess every edge of the two shapes (s0 and s1) and extract the following information:
Distances from every edge in one shape to the vertices in the other
An ordered set of the vertices in one shape facing towards the other
Finding the distances is an exhaustive task (O(|V(s0)| * |V(s1)|)), but each test is very cheap (a line-point distance) and the whole step is embarrassingly parallelisable. The facing vertices are found using the distances from above:
Start with the first vertex of the first shape for which the other shape lies outside one of its two adjacent edges (i.e. at least one adjacent edge has outside values among its distances).
Since the facing set is a unique sequential set of vertices for convex polygons, continue adding vertices...
...until you reach a vertex where all vertices from the other shape lie inside of its adjacent edges
Doing this for both sides results in two sequences of facing vertices in each shape (the green dots per shape):
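Since the target language is C++, here is a rough sketch of these step-1 primitives, assuming the Vector2/ConvexShape types from the question and counter-clockwise vertex order. signedDistance and edgeDistances are names I made up for this sketch; a negative distance means "outside" the edge, matching the convention in the demo below:
#include <cmath>
#include <vector>

// Signed distance from point p to the directed line through a and b.
// With counter-clockwise shapes, a negative value means p lies outside the edge.
float signedDistance(const Vector2& a, const Vector2& b, const Vector2& p)
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    // dot of (p - a) with the edge normal (-dy, dx), normalised by the edge length
    return (-dy * (p.x - a.x) + dx * (p.y - a.y)) / len;
}

// Distances from every edge of s0 to every vertex of s1: O(|V(s0)| * |V(s1)|).
std::vector<std::vector<float>> edgeDistances(const ConvexShape& s0, const ConvexShape& s1)
{
    std::vector<std::vector<float>> distances(s0.edges.size());
    for (std::size_t i = 0; i < s0.edges.size(); ++i)
    {
        const Vector2& a = s0.edges[i];
        const Vector2& b = s0.edges[(i + 1) % s0.edges.size()];
        for (const Vector2& v : s1.edges)
            distances[i].push_back(signedDistance(a, b, v));
    }
    return distances;
}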
Step 2
To connect the two facing sets a scanline approach can be used:
In the ordered sets of facing vertices, the first vertex of one shape is always in line of sight of the last vertex of the other shape (first and last, assuming both shapes have the same orientation). Starting from that pair, we search sequentially through the facing sets, applying the adjacency criterion from above to both the query vertex from the first shape and the candidate vertex from the other, to initialise our loop.
Looping sequentially over the facing vertices of the first shape, remove vertices that have broken line of sight (red line) and add vertices that came within line of sight (green line).
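In C++ the core mutual-visibility test of this scanline translates roughly as follows (a sketch on top of the edgeDistances output from above, where dist0[e][j] is the distance from edge e of s0 to vertex j of s1, and dist1 is the reverse):
// v0 (vertex index in s0) and v1 (vertex index in s1) can potentially see each
// other iff each lies outside at least one of the two edges adjacent to the other.
bool isConnectingVertex(std::size_t v0, std::size_t v1,
                        const std::vector<std::vector<float>>& dist0,
                        const std::vector<std::vector<float>>& dist1)
{
    const std::size_t n0 = dist0.size(), n1 = dist1.size();
    // edges adjacent to v0 in s0 are (v0 - 1) and v0, with wrap-around
    const bool v1Outside = dist0[(v0 + n0 - 1) % n0][v1] < 0 || dist0[v0][v1] < 0;
    // edges adjacent to v1 in s1 are (v1 - 1) and v1, with wrap-around
    const bool v0Outside = dist1[(v1 + n1 - 1) % n1][v0] < 0 || dist1[v1][v0] < 0;
    return v1Outside && v0Outside;
}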
Step 3
Connecting the two outside points to the shapes is equivalent to finding the facing set of one shape in step 1, except that instead of another shape there are now only the individual outside points.
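For a single outside point the "other shape" degenerates to one vertex, so the facing test collapses to checking that point against the two adjacent edges of each vertex. A sketch reusing signedDistance from above:
// Indices of the vertices of 'shape' that face the single point p.
// For a convex shape the result is one contiguous run (possibly wrapping past index 0).
std::vector<std::size_t> facingVertices(const ConvexShape& shape, const Vector2& p)
{
    std::vector<std::size_t> facing;
    const std::size_t n = shape.edges.size();
    for (std::size_t i = 0; i < n; ++i)
    {
        const Vector2& prev = shape.edges[(i + n - 1) % n];
        const Vector2& curr = shape.edges[i];
        const Vector2& next = shape.edges[(i + 1) % n];
        // vertex i faces p if p lies outside either adjacent edge
        if (signedDistance(prev, curr, p) < 0 || signedDistance(curr, next, p) < 0)
            facing.push_back(i);
    }
    return facing;
}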
I've implemented steps 1 and 2 in the following little browser demo as a proof of concept:
Click on the canvas and drag to move the camera
Click inside a shape and drag to move the shape
(function(canvas) {
function v2(x, y) { return { x: x, y: y }; }
function v2mul(lhs, rhs) { lhs.x *= rhs.x; lhs.y *= rhs.y; }
function v2subed(lhs, rhs) { return v2(lhs.x - rhs.x, lhs.y - rhs.y); }
function v2dot(lhs, rhs) { return lhs.x * rhs.x + lhs.y * rhs.y; }
function v2normalized(v) { var len = Math.sqrt(v2dot(v, v)); if(len < 1e-7) len = 1; return v2(v.x / len, v.y / len); }
function v2perped(v) { return v2(-v.y, v.x); }
// Line from origin o : v2 and direction d : v2
function Line(o, d) {
this.o = o;
this.d = d;
}
// Signed distance to a point v : v2, in units of direction this.d
Line.prototype.distance = function(v) {
var o = v2subed(v, this.o);
var d = v2perped(this.d);
return v2dot(o, d);
};
// A polygon is made up of a sequence of points (arguments[i] : v2)
function Polygon() {
this.positions = [].slice.call(arguments);
}
// Transform polygon to new base [bx, by] and translation t
Polygon.prototype.transform = function(bx, by, t) {
this.positions.forEach(function(v) {
var x = bx.x * v.x + by.x * v.y + t.x;
var y = bx.y * v.x + by.y * v.y + t.y;
v.x = x;
v.y = y;
});
};
// Naive point inside polygon test for polygon picking
Polygon.prototype.isInside = function(v) {
if(this.positions.length < 3)
return false;
var o0 = this.positions[this.positions.length - 1];
for(var i = 0, imax = this.positions.length; i < imax; ++i) {
var o1 = this.positions[i];
var line = new Line(o0, v2normalized(v2subed(o1, o0)));
if(line.distance(v) <= 0)
return false;
o0 = o1;
}
return true;
};
// A camera positioned at eye : v2
function Camera(eye) {
this.eye = eye;
}
// Prepare temporaries for screen conversions
Camera.prototype.prepare = function(w, h) {
this.screen = {
off: v2(w / 2, h / 2),
};
};
Camera.prototype.toScreenX = function(x) { return x + this.screen.off.x - this.eye.x; }
Camera.prototype.toScreenY = function(y) { return this.screen.off.y - y + this.eye.y; }
Camera.prototype.fromScreenX = function(x) { return x - this.screen.off.x + this.eye.x; }
Camera.prototype.fromScreenY = function(y) { return this.screen.off.y - y + this.eye.y; }
Camera.prototype.toScreen = function(v) { return v2(this.toScreenX(v.x), this.toScreenY(v.y)); };
Camera.prototype.fromScreen = function(v) { return v2(this.fromScreenX(v.x), this.fromScreenY(v.y)); }
// Compute the distances of the line through e0 in p0 to each vertex in p1
// #post e0.distances.length === p1.positions.length
function computeEdge(e0, p0, p1) {
var line = new Line(p0.positions[e0.start], v2normalized(v2subed(p0.positions[e0.end], p0.positions[e0.start])));
var distances = [];
p1.positions.forEach(function(v) { distances.push(line.distance(v)); });
e0.line = line;
e0.distances = distances;
return e0;
}
// Find vertices in a convex polygon p0 that face p1
// #pre edges.length === p0.positions.length
function computeFacing(edges, p0, p1) {
var facing = [];
var count0 = p0.positions.length;
var count1 = p1.positions.length;
function isFacingVertex(i0) {
var e0 = edges[(i0 + count0 - 1) % count0];
var e1 = edges[i0];
for(var i1 = 0; i1 < count1; ++i1)
if(e0.distances[i1] < 0 || e1.distances[i1] < 0)
return true;
return false;
}
// Find the first vertex in the facing set of two non-intersecting, convex polygons
for(var i0 = 0; i0 < count0; ++i0) {
// For the first chance facing vertex
if(isFacingVertex(i0)) {
if(i0 === 0) {
// Search backwards here, s.t. we can complete the loop in one sitting
var iStart = count0;
for(; iStart > 1 && isFacingVertex(iStart - 1); --iStart);
while(iStart < count0)
facing.push(iStart++);
}
facing.push(i0++);
// In a convex polygon the (single) set of facing vertices is sequential
while(i0 < count0 && isFacingVertex(i0))
facing.push(i0++);
break;
}
}
return facing;
}
// Preprocesses the convex polygon p0 building the edges and facing lists
function preprocessPolygon(p0, p1) {
var result = {
edges: [],
facing: null,
};
for(var i = 0, imax = p0.positions.length; i < imax; ++i)
result.edges.push(computeEdge({ start: i, end: (i + 1) % imax }, p0, p1));
result.facing = computeFacing(result.edges, p0, p1);
return result;
}
// Scanline-approach to find all line of sight connections between the facing vertices of two preprocessed convex polygons p0 : Polygon and p1 : Polygon
// Output is prep.connections where prep.connections[i] : { v0, v1 } describes an unobstructed line of sight edge between vertex index v0 in p0 and v1 in p1
function computeConnections(prep, p0, p1) {
var connections = [];
var facing1count = prep.p1.facing.length;
// For oriented polygons the first facing vertex in p0 must surely face the last facing vertex in p1
var facing1begin = facing1count - 1, facing1end = facing1count;
prep.p0.facing.forEach(function(v0) {
function isConnectingVertex(v1) {
// Is v1 outside of adjacent edge-lines from v0?
var count0 = prep.p0.edges.length;
var ep = prep.p0.edges[(v0 + count0 - 1) % count0];
var en = prep.p0.edges[v0];
if(!(ep.distances[v1] < 0 || en.distances[v1] < 0)) return false;
// Is v0 outside of adjacent edge-lines from v1?
var count1 = prep.p1.edges.length;
ep = prep.p1.edges[(v1 + count1 - 1) % count1];
en = prep.p1.edges[v1];
return ep.distances[v0] < 0 || en.distances[v0] < 0;
}
// Throw away vertices that are no longer facing the current vertex
for(; facing1end > 0 && !isConnectingVertex(prep.p1.facing[facing1end - 1]); --facing1end);
// Add newly facing vertices
for(; facing1begin > 0 && isConnectingVertex(prep.p1.facing[facing1begin - 1]); --facing1begin);
// Generate the connections in facing range
for(var facing1 = facing1begin; facing1 < facing1end; ++facing1)
connections.push({ v0: v0, v1: prep.p1.facing[facing1] });
});
prep.connections = connections;
}
function process(prep, p0, p1) {
delete prep.p0;
delete prep.p1;
delete prep.connections;
prep.p0 = preprocessPolygon(p0, p1);
prep.p1 = preprocessPolygon(p1, p0);
computeConnections(prep, p0, p1);
}
var polygons = null;
var prep = null;
var camera = null;
var ui = null;
function reset() {
polygons = [
new Polygon(v2(25, -75), v2(50, -175), v2(140, -225), v2(255, -200), v2(195, -65), v2(140, -40)),
new Polygon(v2(400, -100), v2(295, -70), v2(260, -80), v2(310, -220), v2(425, -230)),
];
// Scale to a fitting size and move to center
var bx = v2(0.5, 0), by = v2(0, 0.5), off = v2(-120, 70);
polygons[0].transform(bx, by, off);
polygons[1].transform(bx, by, off);
prep = {};
camera = new Camera(v2(0, 0));
ui = { pickedPolygon: -1 };
update();
draw();
}
function update() {
// Reprocess polygons
process(prep, polygons[0], polygons[1]);
}
function draw() {
var g = canvas.getContext("2d");
var w = canvas.width;
var h = canvas.height;
camera.prepare(w, h);
g.fillStyle = "linen";
g.fillRect(0, 0, w, h);
var iPick = 0;
polygons.forEach(function(polygon) {
var highlight = iPick++ === ui.pickedPolygon;
var positions = polygon.positions;
if(positions.length > 2) {
g.beginPath();
g.lineWidth = highlight ? 2 : 1;
g.strokeStyle = "black";
var pLast = camera.toScreen(positions[positions.length - 1]);
g.moveTo(pLast.x, pLast.y);
positions.forEach(function(pos) {
var pScreen = camera.toScreen(pos);
g.lineTo(pScreen.x, pScreen.y);
});
g.stroke();
}
});
prep.connections.forEach(function(connection) {
var v0 = camera.toScreen(polygons[0].positions[connection.v0]);
var v1 = camera.toScreen(polygons[1].positions[connection.v1]);
g.beginPath();
g.lineWidth = 2;
g.strokeStyle = "cyan";
g.moveTo(v0.x, v0.y);
g.lineTo(v1.x, v1.y);
g.stroke();
});
}
(function(c) {
reset();
var dragStartPos = null, dragLastPos = null;
var pickedPolygon = null;
var cameraStartPos = v2(0, 0);
function toScreen(client) {
var rect = c.getBoundingClientRect();
return v2(client.x - rect.left, client.y - rect.top);
}
function startDragging(x, y) {
dragStartPos = v2(x, y);
dragLastPos = v2(x, y);
if(pickedPolygon !== null) {
// Nothing to prepare
} else {
cameraStartPos.x = camera.eye.x;
cameraStartPos.y = camera.eye.y;
}
}
function continueDragging(x, y) {
if(pickedPolygon !== null) {
var dx = x - dragLastPos.x, dy = -(y - dragLastPos.y);
pickedPolygon.transform(v2(1, 0), v2(0, 1), v2(dx, dy));
update();
} else {
var dx = -(x - dragStartPos.x), dy = y - dragStartPos.y;
camera.eye.x = cameraStartPos.x + dx;
camera.eye.y = cameraStartPos.y + dy;
}
dragLastPos.x = x;
dragLastPos.y = y;
}
function stopDragging() {
dragStartPos = null;
dragLastPos = null;
if(pickedPolygon !== null) {
// Nothing to do here...
} else {
cameraStartPos.x = 0;
cameraStartPos.y = 0;
}
}
c.onmousemove = function(e) {
if(dragStartPos !== null)
continueDragging(e.clientX, e.clientY);
else {
pickedPolygon = null;
var iPick = 0;
var cursorPos = camera.fromScreen(toScreen(v2(e.clientX, e.clientY)));
for(var imax = polygons.length; iPick < imax; ++iPick) {
if(polygons[iPick].isInside(cursorPos)) {
pickedPolygon = polygons[iPick];
break;
}
}
ui.pickedPolygon = pickedPolygon !== null ? iPick : -1;
}
draw();
};
c.onmouseleave = function(e) {
if(dragStartPos !== null)
stopDragging();
pickedPolygon = null;
ui.pickedPolygon = -1;
draw();
};
c.onmousedown = function(e) {
if(e.button === 0)
startDragging(e.clientX, e.clientY);
draw();
};
c.onmouseup = function(e) {
if(e.button === 0 && dragStartPos !== null)
stopDragging();
draw();
};
})(canvas);
})(document.getElementById("screen"));
<canvas id="screen" width="300" height="300"></canvas>

Related

Outline of pixels after detecting object (without convex hull)

The idea is to use grabcut (OpenCV) to detect the image inside a rectangle and create a geometry with Direct2D.
My test image is this:
After performing the grab cut, resulting in this image:
the idea is to outline it. I can use an opacity brush to exclude it from the background but I want to use a geometric brush in order to be able to append/widen/combine geometries on it like all other selections in my editor (polygon, lasso, rectangle, etc).
If I apply the convex hull algorithm to the points, I get this:
Which of course is not desired for my case. How do I outline the image?
After getting the image from the grabcut, I keep the points based on luminance:
DWORD* pixels = ...
for (UINT y = 0; y < he; y++)
{
for (UINT x = 0; x < wi; x++)
{
DWORD& col = pixels[y * wi + x];
auto lumthis = lum(col);
if (lumthis > Lum_Threshold)
{
points.push_back({x,y});
}
}
}
Then I sort the points on Y and X:
std::sort(points.begin(), points.end(), [](D2D1_POINT_2F p1, D2D1_POINT_2F p2) -> bool
{
if (p1.y < p2.y)
return true;
if ((int)p1.y == (int)p2.y && p1.x < p2.x)
return true;
return false;
});
Then, traversing the sorted point array from top Y to bottom Y, I create "groups" (contiguous runs of X values) for each line:
struct SECTION
{
float left = 0, right = 0;
};
auto findgaps = [](D2D1_POINT_2F* p,size_t n) -> std::vector<SECTION>
{
std::vector<SECTION> j;
SECTION* jj = 0;
for (size_t i = 0; i < n; i++)
{
if (i == 0)
{
SECTION jp;
jp.left = p[i].x;
jp.right = p[i].x;
j.push_back(jp);
jj = &j[j.size() - 1];
continue;
}
if ((p[i].x - jj->right) < 1.5f)
{
jj->right = p[i].x;
}
else
{
SECTION jp;
jp.left = p[i].x;
jp.right = p[i].x;
j.push_back(jp);
jj = &j[j.size() - 1];
}
}
return j;
};
I'm stuck at this point. I know that from an arbitrary set of points many polygons are possible, but in my case the points have defined what's "left" and what's "right". How would I proceed from here?
For anyone interested, the solution is OpenCV contours. Working example here.
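For reference, a minimal sketch of that contour approach (standard OpenCV API; mask is assumed to be the 8-bit binary image produced by the luminance threshold above):
#include <opencv2/imgproc.hpp>
#include <vector>

// mask: single-channel 8-bit image, non-zero where lum(col) > Lum_Threshold
std::vector<std::vector<cv::Point>> contours;
cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Each contour is an ordered outline polygon; its points can be fed to the
// Direct2D geometry sink instead of the convex hull.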

Why does ball1.boundingRect().center() return the same value as ball2.boundingRect().center()?

I'm programming a physics simulation with circles.
Ball.cpp Code:
Ball::Ball()
{
angle = 0;
setRotation(angle);
//set the speed
speed = 5;
double StartX = 720;
double StartY = 80;
StartX = (qrand() % 800);
StartY = (qrand() % 400);
radius = 40;
setTransformOriginPoint(radius,radius);
setPos (StartX,StartY);
}
QRectF Ball::boundingRect() const
{
return QRect(0,0,2*radius,2*radius);
}
bool Ball::circCollide(QList<QGraphicsItem *> items) {
QPointF c1 = mapToParent(this->boundingRect().center());
foreach (QGraphicsItem * t, items) {
Ball * CastBall = dynamic_cast<Ball *>(t);
if(CastBall)
{
QPointF t1 = mapToScene(CastBall->boundingRect().center());
double distance = QLineF(c1,t1).length();
double radius1 = this->boundingRect().width() / 2;
double radius2 = CastBall->boundingRect().width() / 2;
double radii = radius1 + radius2;
if ( distance <= radii )
{
// qDebug() << "true collision";
return true;
}
}
}
// qDebug() << "false collision";
return false;
}
I've got the problem that this piece of code always returns the same values for the centers of both objects (t1.x == c1.x, t1.y == c1.y), yet this == CastBall returns false, so it isn't the same object; it just has the same coordinates for the center point of the boundingRect.
The coordinates are already equal before this function is called, and that holds for all 3 objects I generate, even though setPos is always called with different values.
First I thought the problem was that boundingRect is declared const, so I made this function in my class
QRectF Ball:: centerRect()
{
return QRect(0,0,2*radius,2*radius);
}
and just replaced every use of boundingRect with it (not a problem, since I already cast to Ball in the method), but it still returned the same value for both centers.
I'm really at my wit's end with this one and hope to find some help.
The problem was the following: the center of the bounding rectangle was not mapped to the coordinates of the ball. The following statement should work:
mapToScene(mapToItem(castBall, castBall->boundingRect().center()));
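Equivalently, each item can map its own local center into scene coordinates, so both centers end up in the same coordinate system (a sketch):
// Each item maps its own local center into the scene.
QPointF c1 = mapToScene(boundingRect().center());
QPointF t1 = CastBall->mapToScene(CastBall->boundingRect().center());
double distance = QLineF(c1, t1).length();
The original code called mapToScene on this while passing CastBall's local center, and since every ball has the identical local boundingRect (0, 0, 2*radius, 2*radius), both centers always came out the same.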

OpenCV groupRectangles - getting grouped and ungrouped rectangles

I'm using OpenCV and want to group together rectangles that have significant overlap. I've tried using groupRectangles for this, which takes a group threshold argument. With a threshold of 0 it doesn't do any grouping at all, and with a threshold of 1 it only returns rectangles that were the result of at least 2 rectangles. For example, given the rectangles on the left in the image below, you end up with the 2 rectangles on the right:
What I'd like to end up with is 3 rectangles. The 2 on the right in the image above, plus the rectangle in the top right of the image to the left that doesn't overlap with any other rectangles. What's the best way to achieve this?
The solution I ended up going with was to duplicate all of the initial rectangles before calling groupRectangles. That way every input rectangle is guaranteed to be grouped with at least one other rectangle, and will appear in the output:
int size = rects.size();
for( int i = 0; i < size; i++ )
{
rects.push_back(Rect(rects[i]));
}
groupRectangles(rects, 1, 0.2);
A little late to the party, but the "duplicating" solution did not work properly for me. I also had another problem where merged rectangles would overlap and would need to be merged again.
So I came up with an overkill solution (it might require a C++14 compiler). Here's a usage example:
std::vector<cv::Rect> rectangles, test1, test2, test3;
rectangles.push_back(cv::Rect(cv::Point(5, 5), cv::Point(15, 15)));
rectangles.push_back(cv::Rect(cv::Point(14, 14), cv::Point(26, 26)));
rectangles.push_back(cv::Rect(cv::Point(24, 24), cv::Point(36, 36)));
rectangles.push_back(cv::Rect(cv::Point(37, 20), cv::Point(40, 40)));
rectangles.push_back(cv::Rect(cv::Point(20, 37), cv::Point(40, 40)));
test1 = rectangles;
test2 = rectangles;
test3 = rectangles;
//Output format: {Rect(x, y, width, height), ...}
//Merge once
mergeRectangles(test1);
//Output rectangles: test1 = {Rect(5, 5, 31, 31), Rect(20, 20, 20, 20)}
//Merge until there are no rectangles to merge
mergeRectangles(test2, true);
//Output rectangles: test2 = {Rect(5, 5, 35, 35)}
//Override default merge (intersection) function to merge all rectangles
mergeRectangles(test3, false, [](const cv::Rect& r1, const cv::Rect& r2) {
return true;
});
//Output rectangles: test3 = {Rect(5, 5, 35, 35)}
Function:
void mergeRectangles(std::vector<cv::Rect>& rectangles, bool recursiveMerge = false, std::function<bool(const cv::Rect& r1, const cv::Rect& r2)> mergeFn = nullptr) {
static auto defaultFn = [](const cv::Rect& r1, const cv::Rect& r2) {
return (r1.x < (r2.x + r2.width) && (r1.x + r1.width) > r2.x && r1.y < (r2.y + r2.height) && (r1.y + r1.height) > r2.y);
};
static auto innerMerger = [](std::vector<cv::Rect>& rectangles, std::function<bool(const cv::Rect& r1, const cv::Rect& r2)>& mergeFn) {
std::vector<std::vector<std::vector<cv::Rect>::const_iterator>> groups;
std::vector<cv::Rect> mergedRectangles;
bool merged = false;
auto findIterator = [&](std::vector<cv::Rect>::const_iterator& iteratorToFind) { // must not be static: it captures 'groups' by reference
for (auto groupIterator = groups.begin(); groupIterator != groups.end(); ++groupIterator) {
auto foundIterator = std::find(groupIterator->begin(), groupIterator->end(), iteratorToFind);
if (foundIterator != groupIterator->end()) {
return groupIterator;
}
}
return groups.end();
};
for (auto rect1_iterator = rectangles.begin(); rect1_iterator != rectangles.end(); ++rect1_iterator) {
auto groupIterator = findIterator(rect1_iterator);
if (groupIterator == groups.end()) {
groups.push_back({rect1_iterator});
groupIterator = groups.end() - 1;
}
for (auto rect2_iterator = rect1_iterator + 1; rect2_iterator != rectangles.end(); ++rect2_iterator) {
if (mergeFn(*rect1_iterator, *rect2_iterator)) {
groupIterator->push_back(rect2_iterator);
merged = true;
}
}
}
for (auto groupIterator = groups.begin(); groupIterator != groups.end(); ++groupIterator) {
auto groupElement = groupIterator->begin();
int x1 = (*groupElement)->x;
int x2 = (*groupElement)->x + (*groupElement)->width;
int y1 = (*groupElement)->y;
int y2 = (*groupElement)->y + (*groupElement)->height;
while (++groupElement != groupIterator->end()) {
if (x1 > (*groupElement)->x)
x1 = (*groupElement)->x;
if (x2 < (*groupElement)->x + (*groupElement)->width)
x2 = (*groupElement)->x + (*groupElement)->width;
if (y1 > (*groupElement)->y)
y1 = (*groupElement)->y;
if (y2 < (*groupElement)->y + (*groupElement)->height)
y2 = (*groupElement)->y + (*groupElement)->height;
}
mergedRectangles.push_back(cv::Rect(cv::Point(x1, y1), cv::Point(x2, y2)));
}
rectangles = mergedRectangles;
return merged;
};
if (!mergeFn)
mergeFn = defaultFn;
while (innerMerger(rectangles, mergeFn) && recursiveMerge);
}
By checking out groupRectangles() in opencv-3.3.0 source code:
if( groupThreshold <= 0 || rectList.empty() )
{
// ......
return;
}
I saw that if groupThreshold is set to less than or equal to 0, the function would just return without doing any grouping.
On the other hand, the following code removes all rectangles which don't have more than groupThreshold similar rectangles.
// filter out rectangles which don't have enough similar rectangles
if( n1 <= groupThreshold )
continue;
That explains why with groupThreshold = 1 only rectangles with at least 2 overlaps appear in the output.
One possible solution could be to modify the source code shown above (replacing n1 <= groupThreshold with n1 < groupThreshold) and re-compile OpenCV.
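Concretely, the patch against the snippet quoted above would be:
// filter out rectangles which don't have enough similar rectangles
if( n1 < groupThreshold )   // was: n1 <= groupThreshold
    continue;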

How to unify normal orientation

I've been trying to produce a mesh whose face normals all point outward.
To do this, I load a mesh from a *.ctm file, then walk over all
triangles to determine each normal using a cross product, and if the normal
points in the negative z direction, I swap v1 and v2 (thus flipping the normal's orientation).
After this is done I save the result to a *.ctm file and view it with Meshlab.
The result in Meshlab still shows normals pointing in both the positive and
negative z directions (as can be seen from the black triangles). Also, when viewing
the normals in Meshlab, they really are pointing backwards.
Can anyone give me some advice on how to solve this?
The source code for the normalization part is:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud,*cloud1);
for(std::vector<pcl::Vertices>::iterator it = meshFixed.polygons.begin(); it != meshFixed.polygons.end(); ++it)
{
alglib::real_2d_array v0;
double _v0[] = {cloud1->points[it->vertices[0]].x,cloud1->points[it->vertices[0]].y,cloud1->points[it->vertices[0]].z};
v0.setcontent(3,1,_v0); //3 rows, 1col
alglib::real_2d_array v1;
double _v1[] = {cloud1->points[it->vertices[1]].x,cloud1->points[it->vertices[1]].y,cloud1->points[it->vertices[1]].z};
v1.setcontent(3,1,_v1); //3 rows, 1col
alglib::real_2d_array v2;
double _v2[] = {cloud1->points[it->vertices[2]].x,cloud1->points[it->vertices[2]].y,cloud1->points[it->vertices[2]].z};
v2.setcontent(3,1,_v2); //3 rows, 1 col
alglib::real_2d_array normal;
normal = cross(v1-v0,v2-v0);
//if z<0 change indices order v1->v2 and v2->v1
alglib::real_2d_array normalizedNormal;
if(normal[2][0]<0)
{
int index1,index2;
index1 = it->vertices[1];
index2 = it->vertices[2];
it->vertices[1] = index2;
it->vertices[2] = index1;
//make normal of length 1
double normalScaling = 1.0/sqrt(dot(normal,normal));
normal[0][0] = -1*normal[0][0];
normal[1][0] = -1*normal[1][0];
normal[2][0] = -1*normal[2][0];
normalizedNormal = normalScaling * normal;
}
else
{
//make normal of length 1
double normalScaling = 1.0/sqrt(dot(normal,normal));
normalizedNormal = normalScaling * normal;
}
//add to normal cloud
pcl::Normal pclNormalizedNormal;
pclNormalizedNormal.normal_x = normalizedNormal[0][0];
pclNormalizedNormal.normal_y = normalizedNormal[1][0];
pclNormalizedNormal.normal_z = normalizedNormal[2][0];
normalsFixed.push_back(pclNormalizedNormal);
}
The result from this code is:
I've found some code in the VCG library to orient the face and vertex normals.
After using this a large part of the mesh has correct face normals, but not all.
The new code:
// VCG library implementation
MyMesh m;
// Convert pcl::PolygonMesh to VCG MyMesh
m.Clear();
// Create temporary cloud in to have handy struct object
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud,*cloud1);
// Now convert the vertices to VCG MyMesh
int vertCount = cloud1->width*cloud1->height;
vcg::tri::Allocator<MyMesh>::AddVertices(m, vertCount);
for(unsigned int i=0;i<vertCount;++i)
m.vert[i].P()=vcg::Point3f(cloud1->points[i].x,cloud1->points[i].y,cloud1->points[i].z);
// Now convert the polygon indices to VCG MyMesh => make VCG faces..
int triCount = meshFixed.polygons.size();
if(triCount==1)
{
if(meshFixed.polygons[0].vertices[0]==0 && meshFixed.polygons[0].vertices[1]==0 && meshFixed.polygons[0].vertices[2]==0)
triCount=0;
}
Allocator<MyMesh>::AddFaces(m, triCount);
for(unsigned int i=0;i<triCount;++i)
{
m.face[i].V(0)=&m.vert[meshFixed.polygons[i].vertices[0]];
m.face[i].V(1)=&m.vert[meshFixed.polygons[i].vertices[1]];
m.face[i].V(2)=&m.vert[meshFixed.polygons[i].vertices[2]];
}
vcg::tri::UpdateBounding<MyMesh>::Box(m);
vcg::tri::UpdateNormal<MyMesh>::PerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
printf("Input mesh vn:%i fn:%i\n",m.VN(),m.FN());
// Start to flip all normals to outside
vcg::face::FFAdj<MyMesh>::FFAdj();
vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
bool oriented, orientable;
if ( vcg::tri::Clean<MyMesh>::CountNonManifoldEdgeFF(m)>0 ) {
std::cout << "Mesh has some not 2-manifold faces, Orientability requires manifoldness" << std::endl; // text
return; // can't continue, mesh can't be processed
}
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented,orientable);
vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
// now convert VCG back to pcl::PolygonMesh
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
cloud->is_dense = false;
cloud->width = vertCount;
cloud->height = 1;
cloud->points.resize (vertCount);
// Now fill the pointcloud of the mesh
for(int i=0; i<vertCount; i++)
{
cloud->points[i].x = m.vert[i].P()[0];
cloud->points[i].y = m.vert[i].P()[1];
cloud->points[i].z = m.vert[i].P()[2];
}
pcl::toROSMsg(*cloud,meshFixed.cloud);
std::vector<pcl::Vertices> polygons;
// Now fill the indices of the triangles/faces of the mesh
for(int i=0; i<triCount; i++)
{
pcl::Vertices vertices;
vertices.vertices.push_back(m.face[i].V(0)-&*m.vert.begin());
vertices.vertices.push_back(m.face[i].V(1)-&*m.vert.begin());
vertices.vertices.push_back(m.face[i].V(2)-&*m.vert.begin());
polygons.push_back(vertices);
}
meshFixed.polygons = polygons;
Which results in: (Meshlab still shows normals are facing both sides)
I finally solved the problem. I'm still using the VCG library; starting from the new code above, I slightly updated the following section:
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented,orientable);
//vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
//vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
I then updated the vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh() function in clean.h. The change is to orient the first face of each group correctly; also, after swapping an edge, the face normal is recalculated and updated.
static void OrientCoherentlyMesh(MeshType &m, bool &Oriented, bool &Orientable)
{
RequireFFAdjacency(m);
assert(&Oriented != &Orientable);
assert(m.face.back().FFp(0)); // This algorithms require FF topology initialized
Orientable = true;
Oriented = true;
tri::UpdateSelection<MeshType>::FaceClear(m);
std::stack<FacePointer> faces;
for (FaceIterator fi = m.face.begin(); fi != m.face.end(); ++fi)
{
if (!fi->IsD() && !fi->IsS())
{
// each face put in the stack is selected (and oriented)
fi->SetS();
// New section of code to orient the initial face correctly
if(fi->N()[2]>0.0)
{
face::SwapEdge<FaceType,true>(*fi, 0);
face::ComputeNormal(*fi);
}
// End of new code section.
faces.push(&(*fi));
// empty the stack
while (!faces.empty())
{
FacePointer fp = faces.top();
faces.pop();
// make consistently oriented the adjacent faces
for (int j = 0; j < 3; j++)
{
//get one of the adjacent face
FacePointer fpaux = fp->FFp(j);
int iaux = fp->FFi(j);
if (!fpaux->IsD() && fpaux != fp && face::IsManifold<FaceType>(*fp, j))
{
if (!CheckOrientation(*fpaux, iaux))
{
Oriented = false;
if (!fpaux->IsS())
{
face::SwapEdge<FaceType,true>(*fpaux, iaux);
// New line to update face normal
face::ComputeNormal(*fpaux);
// end of new section.
assert(CheckOrientation(*fpaux, iaux));
}
else
{
Orientable = false;
break;
}
}
// put the oriented face into the stack
if (!fpaux->IsS())
{
fpaux->SetS();
faces.push(fpaux);
}
}
}
}
}
if (!Orientable) break;
}
}
Besides that, I also updated the function bool CheckOrientation(FaceType &f, int z) to perform a comparison based on the normals' z directions.
template <class FaceType>
bool CheckOrientation(FaceType &f, int z)
{
// Added next section to calculate the difference between normal z-directions
FaceType *original = f.FFp(z);
double nf2,ng2;
nf2=f.N()[2];
ng2=original->N()[2];
// End of additional section
if (IsBorder(f, z))
return true;
else
{
FaceType *g = f.FFp(z);
int gi = f.FFi(z);
// changed if statement from: if (f.V0(z) == g->V1(gi))
if (nf2/std::fabs(nf2) == ng2/std::fabs(ng2)) // compare signs of the z components (std::fabs from <cmath>; plain abs would truncate to int)
return true;
else
return false;
}
}
The result is as I expect and desire from the algorithm:

3D picking lwjgl

I have written some code to perform 3D picking that for some reason doesn't work entirely correctly! (I'm using LWJGL, just so you know.)
This is what the code looks like:
if(Mouse.getEventButton() == 1) {
if (!Mouse.getEventButtonState()) {
Camera.get().generateViewMatrix();
float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio();
float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f));
float displacementRate = (float)Math.tan(Camera.get().getFovy()/2);
screenSpaceX *= displacementRate;
screenSpaceY *= displacementRate;
Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1);
Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1);
Matrix4f tmpView = new Matrix4f();
Camera.get().getViewMatrix().transpose(tmpView);
Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert();
Vector4f worldSpaceNear = new Vector4f();
Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear);
Vector4f worldSpaceFar = new Vector4f();
Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar);
Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
rayDirection.normalise();
Ray clickRay = new Ray(rayPosition, rayDirection);
Vector tMin = new Vector(), tMax = new Vector(), tempPoint;
float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f;
Drawable closestDrawableHit = null;
for(Drawable d : this.worldModel.getDrawableThings()) {
// Calculate AABB for each object... needs to be moved later...
firstVertex = true;
for(Surface surface : d.getSurfaces()) {
for(Vertex v : surface.getVertices()) {
worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x;
worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y;
worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z;
worldPosition = worldPosition.rotate(d.getRotation());
if (firstVertex) {
maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z;
minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z;
firstVertex = false;
} else {
if (worldPosition.x > maxX) {
maxX = worldPosition.x;
}
if (worldPosition.x < minX) {
minX = worldPosition.x;
}
if (worldPosition.y > maxY) {
maxY = worldPosition.y;
}
if (worldPosition.y < minY) {
minY = worldPosition.y;
}
if (worldPosition.z > maxZ) {
maxZ = worldPosition.z;
}
if (worldPosition.z < minZ) {
minZ = worldPosition.z;
}
}
}
}
// ray/slabs intersection test...
// clickRay.getOrigin().x + clickRay.getDirection().x * f = minX
// clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f
// clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f
// -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f
largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x;
temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y;
if(largestEnteringValue < temp) {
largestEnteringValue = temp;
}
temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z;
if(largestEnteringValue < temp) {
largestEnteringValue = temp;
}
smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x;
temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y;
if(smallestExitingValue > temp) {
smallestExitingValue = temp;
}
temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
if(smallestExitingValue < temp) {
smallestExitingValue = temp;
}
if(largestEnteringValue > smallestExitingValue) {
//System.out.println("Miss!");
} else {
if (largestEnteringValue < closestEnteringValue) {
closestEnteringValue = largestEnteringValue;
closestDrawableHit = d;
}
}
}
if(closestDrawableHit != null) {
System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z);
this.worldModel.removeDrawableThing(closestDrawableHit);
}
}
}
I just don't understand what's wrong: the rays are shooting and I do hit stuff that gets removed, but the results are very strange. Sometimes it removes the thing I'm clicking at, sometimes it removes things that aren't even close to what I'm clicking at, and sometimes it removes nothing at all.
Edit:
Okay, so I have continued searching for errors, and by debugging the ray (by painting small dots where it travels) I can now see that there is something obviously wrong with the ray I'm sending out... it has its origin near the world center and always shoots toward the same position no matter where I point my camera...
My initial thought is that there might be some error in the way I calculate my view matrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is)...
Edit2:
This is how i calculate it currently:
private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}};
public Vector getCameraDirectionVector() {
Vector actualEye = this.getActualEyePosition();
return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z);
}
public Vector getActualEyePosition() {
return eye.rotate(this.getRotation());
}
public void generateViewMatrix() {
Vector cameraDirectionVector = getCameraDirectionVector().normalize();
Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize();
Vector up = Vector.cross(side, cameraDirectionVector);
viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x;
viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y;
viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z;
/*
Vector actualEyePosition = this.getActualEyePosition();
Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize();
Vector xaxis = Vector.cross(upVector, zaxis).normalize();
Vector yaxis = Vector.cross(zaxis, xaxis);
viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x;
viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y;
viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z;
viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] =-Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition);
*/
viewMatrix = new Matrix4f();
viewMatrix.load(getViewMatrixAsFloatBuffer());
}
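One thing that stands out in this code: the active branch fills in only the rotation part of the matrix, while the translation terms (the -dot(axis, eyePosition) row) exist only in the commented-out block. Without that translation, the view matrix behaves as if the camera sat at the origin, which would match the symptom of rays always starting near the world center. For comparison, a sketch of the full gluLookAt-style construction (written in C++ with a hypothetical Vec3 helper, column-major as OpenGL expects):
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Column-major lookAt matrix (m[col * 4 + row]), as gluLookAt builds it.
void lookAt(Vec3 eye, Vec3 center, Vec3 up, float m[16])
{
    Vec3 f = normalize(sub(center, eye)); // forward
    Vec3 s = normalize(cross(f, up));     // side
    Vec3 u = cross(s, f);                 // corrected up
    m[0] = s.x;  m[4] = s.y;  m[8]  = s.z;  m[12] = -dot(s, eye);
    m[1] = u.x;  m[5] = u.y;  m[9]  = u.z;  m[13] = -dot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] =  dot(f, eye);
    m[3] = 0.f;  m[7] = 0.f;  m[11] = 0.f;  m[15] = 1.f;
}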
I would be very grateful if anyone could verify whether this is right or wrong, and if it's wrong, supply me with the right way of doing it...
I have read a lot of threads and documentation about this, but I can't seem to wrap my head around it...
I just don't understand what's wrong: the rays are shooting and I do hit stuff that gets removed, but things are not disappearing where I press on the screen.
OpenGL is not a scene graph, it's a drawing library. So after removing something from your internal representation you must redraw the scene, and your code seems to be missing a call to a function that triggers a redraw.
Okay, so I finally solved it with help from the guys at gamedev and a friend; here is a link to the answer where I have posted the code!