How to fill OpenVDB voxels that are inside a given plane? - c++

I have a quad defined by 4 (x,y,z) points (like a plane that has edges). I have an OpenVDB grid. I want to fill all the voxels that are inside my quad (including its edges) with the value 1. Is such a thing possible without setting each voxel of the quad (bounded plane) manually?

If the four points form a rectangle, it could be possible using the
void fill(const CoordBBox& bbox, const ValueType& value, bool active = true);
function that exists in the Grid class. A CoordBBox cannot be rotated, so instead you would have to handle any rotation by changing the transformation of the grid. In pseudo-code it could look like
CoordBBox plane; // created from your points
Transform old = grid.transform();
grid.setTransform(...); // Some transformation that places the grid correctly with respect to the plane
grid.fill(plane, 1);
grid.setTransform(old);
If this is not the case, you would have to set the values yourself.
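If your quad does happen to be an axis-aligned rectangle in index space, a minimal sketch of just the fill call might look like this (the grid type and the bounding coordinates are made up for illustration; build the CoordBBox from your own four points):
#include <openvdb/openvdb.h>

int main() {
    openvdb::initialize();
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(/*background=*/0.0f);
    // A flat, axis-aligned quad, one voxel thick along z (illustrative coordinates only).
    const openvdb::CoordBBox plane(openvdb::Coord(0, 0, 0), openvdb::Coord(10, 10, 0));
    grid->fill(plane, /*value=*/1.0f, /*active=*/true);
    return 0;
}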

Here is an unoptimized method for arbitrarily shaped planar quadrilaterals: you only need to input the four vertices of the plane in 3D space, and the output is the filled planar voxels.
Take the vertex whose two adjacent sides have the largest combined length as A, and let B and D be the vertices adjacent to A. Obtain the voxel coordinates and voxel counts along A-B and A-D with a ray cast; then sample the same number of points along the opposite sides C-B and C-D.
Put the voxels found along the edges at A into one-to-one correspondence with the points sampled along the edges at C, and fill the plane area between each pair with another ray cast.
void VDBVolume::fillPlaneVoxel(const PlanarEquation& planar) {
    auto accessor = volume_->getUnsafeAccessor();
    const auto transform = volume_->transform();
    const openvdb::Vec3f vdbnormal(planar.normal.x(), planar.normal.y(), planar.normal.z());

    int longside_vtx[2];
    calVtxLongSide(planar.vtx, longside_vtx);

    // 1. ray cast long side
    std::vector<Eigen::Vector3f> longsidepoints;
    int neiborvtx[2];
    {
        const Eigen::Vector3f origin = planar.vtx[longside_vtx[0]];
        const openvdb::Vec3R eye(origin.x(), origin.y(), origin.z());
        GetRectNeiborVtx(longside_vtx[0], neiborvtx);
        for(int i = 0; i < 2; ++i) {
            Eigen::Vector3f direction = planar.vtx[neiborvtx[i]] - origin;
            openvdb::Vec3R dir(direction.x(), direction.y(), direction.z());
            dir.normalize();
            const float length = static_cast<float>(direction.norm());
            if(length > 50.f) {
                std::cout << "GetRectNeiborVtx length too large, something wrong: " << i << "\n" << origin.transpose() << "\n"
                          << planar.vtx[neiborvtx[i]].transpose() << std::endl;
                continue;
            }
            const float t0 = -voxel_size_/2;
            const float t1 = length + voxel_size_/2;
            const auto ray = openvdb::math::Ray<float>(eye, dir, t0, t1).worldToIndex(*volume_);
            openvdb::math::DDA<decltype(ray)> dda(ray);
            do {
                const auto voxel = dda.voxel();
                const auto voxel_center_world = GetVoxelCenter(voxel, transform);
                longsidepoints.emplace_back(voxel_center_world);
            } while (dda.step());
        }
    }

    // 2. uniformly sample the same number of points on the short sides
    const int longsidepointnum = longsidepoints.size();
    std::vector<Eigen::Vector3f> shortsidepoints;
    shortsidepoints.resize(longsidepointnum);
    {
        // input: the vertex and its two adjacent vertices (vtxs);
        // output: the sampled point coordinates, sampled according to distance
        GenerateShortSidePoints(longside_vtx[1], neiborvtx, planar.vtx, shortsidepoints);
    }

    // 3. ray cast from longsidepoints to shortsidepoints
    // std::cout << "longsidepointnum: " << longsidepointnum << std::endl;
    for(int pid = 0; pid < longsidepointnum; ++pid) {
        const Eigen::Vector3f origin = longsidepoints[pid];
        const openvdb::Vec3R eye(origin.x(), origin.y(), origin.z());
        const Eigen::Vector3f direction = shortsidepoints[pid] - origin;
        openvdb::Vec3R dir(direction.x(), direction.y(), direction.z());
        dir.normalize();
        const float length = direction.norm();
        if(length > 50.f) {
            std::cout << "length too large, something wrong: " << pid << "\n" << origin.transpose() << "\n"
                      << shortsidepoints[pid].transpose() << std::endl;
            continue;
        }
        const float t0 = -voxel_size_/2;
        const float t1 = length + voxel_size_/2;
        const auto ray = openvdb::math::Ray<float>(eye, dir, t0, t1).worldToIndex(*volume_);
        openvdb::math::DDA<decltype(ray)> dda(ray);
        do {
            const auto voxel = dda.voxel();
            accessor.setValue(voxel, vdbnormal);
        } while (dda.step());
    }
}


Error: qualifiers dropped in binding reference of type "blah blah" to initialize "some other blah blah"

So, I get an error when I create the "boxes" and "boxbounds" variables outside the thread; however, the error disappears when I move them inside the loop that runs in the thread. What can be the reason for this?
void Flyscene::raytraceScene(int width, int height) {
std::cout << "ray tracing ..." << std::endl;
//start of acceleration structure
std::vector<std::vector<Tucano::Face>> boxes = firstBox(mesh);
std::vector<std::vector<Eigen::Vector3f>> boxbounds;
for (int i = 0; i < boxes.size(); i++) {
boxbounds.push_back(getBoxLimits(boxes[i], mesh));
}
/////
// if no width or height passed, use dimensions of current viewport
Eigen::Vector2i image_size(width, height);
if (width == 0 || height == 0) {
image_size = flycamera.getViewportSize();
}
// create 2d vector to hold pixel colors and resize to match image size
vector<vector<Eigen::Vector3f>> pixel_data;
pixel_data.resize(image_size[1]);
for (int i = 0; i < image_size[1]; ++i)
pixel_data[i].resize(image_size[0]);
// origin of the ray is always the camera center
Eigen::Vector3f origin = flycamera.getCenter();
Eigen::Vector3f screen_coords;
// Multi Threading
// Comment this if you don't want multi-threading
//-----------------------------------------------------//
int max_pixels = (image_size[0] * image_size[1]); //width * height
// Get amount of cores of your CPU
int cores = std::thread::hardware_concurrency();
// Keep track of # of pixels (atomic making sure no 2 threads render the same pixel)
volatile std::atomic<std::size_t> curr_pixel(0);
// Stores all cores assigned to a task
std::vector<std::future<void>> future_vector;
cout << "Threads supported: " << cores << "\n";
while (cores--)
future_vector.emplace_back(
std::async([=, &origin, &curr_pixel, &pixel_data]()
{
while (true)
{
int index = curr_pixel++;
if (index >= max_pixels)
break;
std::size_t i = index % image_size[1];
std::size_t j = index / image_size[1];
//cout << "at index: " << index << std::endl;
// create a ray from the camera passing through the pixel (i,j)
auto screen_coords = flycamera.screenToWorld(Eigen::Vector2f(i, j));
// launch raytracing for the given ray and write result to pixel data
pixel_data[i][j] = traceRay(0,origin, screen_coords, boxes, boxbounds);
if (index % 10000 == 0) {
std::cout << "Percentage done (mt): " << (float)(index / 10000) << "%" << std::endl;
}
}
}));
// Call futures (Async jobs), this will activate all process on the cores
for (auto& e : future_vector) {
e.get();
}
However, when I move it inside the lambda, as below, the error goes away:
void Flyscene::raytraceScene(int width, int height) {
std::cout << "ray tracing ..." << std::endl;
// if no width or height passed, use dimensions of current viewport
Eigen::Vector2i image_size(width, height);
if (width == 0 || height == 0) {
image_size = flycamera.getViewportSize();
}
// create 2d vector to hold pixel colors and resize to match image size
vector<vector<Eigen::Vector3f>> pixel_data;
pixel_data.resize(image_size[1]);
for (int i = 0; i < image_size[1]; ++i)
pixel_data[i].resize(image_size[0]);
// origin of the ray is always the camera center
Eigen::Vector3f origin = flycamera.getCenter();
Eigen::Vector3f screen_coords;
// Multi Threading
// Comment this if you don't want multi-threading
//-----------------------------------------------------//
int max_pixels = (image_size[0] * image_size[1]); //width * height
// Get amount of cores of your CPU
int cores = std::thread::hardware_concurrency();
// Keep track of # of pixels (atomic making sure no 2 threads render the same pixel)
volatile std::atomic<std::size_t> curr_pixel(0);
// Stores all cores assigned to a task
std::vector<std::future<void>> future_vector;
cout << "Threads supported: " << cores << "\n";
while (cores--)
future_vector.emplace_back(
std::async([=, &origin, &curr_pixel, &pixel_data]()
{
while (true)
{
int index = curr_pixel++;
if (index >= max_pixels)
break;
std::size_t i = index % image_size[1];
std::size_t j = index / image_size[1];
//cout << "at index: " << index << std::endl;
//start of acceleration structure
std::vector<std::vector<Tucano::Face>> boxes = firstBox(mesh);
std::vector<std::vector<Eigen::Vector3f>> boxbounds;
for (int i = 0; i < boxes.size(); i++) {
boxbounds.push_back(getBoxLimits(boxes[i], mesh));
}
/////
// create a ray from the camera passing through the pixel (i,j)
auto screen_coords = flycamera.screenToWorld(Eigen::Vector2f(i, j));
// launch raytracing for the given ray and write result to pixel data
pixel_data[i][j] = traceRay(0,origin, screen_coords, boxes, boxbounds);
if (index % 10000 == 0) {
std::cout << "Percentage done (mt): " << (float)(index / 10000) << "%" << std::endl;
}
}
}));
// Call futures (Async jobs), this will activate all process on the cores
for (auto& e : future_vector) {
e.get();
}
Here's the traceRay method's signature as well:
Eigen::Vector3f Flyscene::traceRay(int level, Eigen::Vector3f& origin, Eigen::Vector3f& dest, std::vector<std::vector<Tucano::Face>>& boxes, std::vector<std::vector<Eigen::Vector3f>>& boxbounds)
Why do you think this happens?
Here's the full error description:
Error (active) E0433 qualifiers dropped in binding reference of type "std::vector>, std::allocator>>> &" to initializer of type "const std::vector>, std::allocator>>>" raytracing
You need to add mutable to your lambda.
The vectors are passed by reference to traceRay, so they can be modified inside that function. Your lambda captures them by copy (= is used in the capture list), and objects captured by = are read-only inside the lambda body: you cannot modify them, so they cannot bind to a non-const reference.
Your code can be reduced to this example:
#include <vector>

void bar(std::vector<int>& v) {
}

void foo() {
    std::vector<int> v;
    auto l = [=]() /*mutable*/
    {
        bar(v); // works only with mutable uncommented
                // v can be modified only with mutable
    };
    l();
}
When you create the vectors inside the lambda, they are local variables rather than captures, so you can change them in traceRay.
So in the first snippet, add mutable:
std::async([=, &origin, &curr_pixel, &pixel_data]() mutable
{ ^^^^^^^
while (true)
{

Resample curve into even length segments using C++

What would be the best way of resampling a curve into even length segments using C++? What I have is a set of points that represents a 2D curve. In my example below I have a Point struct with x and y components and a vector of points with test positions. Each pair of consecutive points represents a segment on the curve. Example resampled curves are shown in the images below: the red circles are the original positions and the green circles are the target positions after the resample.
struct Point
{
    float x, y;
};

std::vector<Point> Points;

int numPoints = 5;
float positions[] = {
    0.0350462, -0.0589667,
    0.0688311, 0.240896,
    0.067369, 0.557199,
    -0.024258, 0.715255,
    0.0533231, 0.948694,
};

// Add points
int offset = 2;
for (int i = 0; i < numPoints; i++)
{
    offset = i * 2;
    Point pt;
    pt.x = positions[offset];
    pt.y = positions[offset+1];
    Points.push_back(pt);
}
See if this works for you. Resampled points are equidistant from each other on the linear interpolation of the source vector's points.
#include <iostream>
#include <iomanip>
#include <vector>
#include <cmath>
struct Point {
double x, y;
};
// Distance gives the Euclidean distance between two Points
double Distance(const Point& a, const Point& b) {
const double dx = b.x - a.x;
const double dy = b.y - a.y;
const double lsq = dx*dx + dy*dy;
return std::sqrt(lsq);
}
// LinearCurveLength calculates the total length of the linear
// interpolation through a vector of Points. It is the sum of
// the Euclidean distances between all consecutive points in
// the vector.
double LinearCurveLength(std::vector<Point> const &points) {
auto start = points.begin();
if(start == points.end()) return 0;
auto finish = start + 1;
double sum = 0;
while(finish != points.end()) {
sum += Distance(*start, *finish);
start = finish++;
}
return sum;
}
// Gives a vector of Points which are sampled as equally-spaced segments
// taken along the linear interpolation between points in the source.
// In general, consecutive points in the result will not be equidistant,
// because of a corner-cutting effect.
std::vector<Point> UniformLinearInterpolation(std::vector<Point> const &source, std::size_t target_count) {
std::vector<Point> result;
if(source.size() < 2 || target_count < 2) {
// degenerate source vector or target_count value
// for simplicity, this returns an empty result
// but special cases may be handled when appropriate for the application
return result;
}
// total_length is the total length along a linear interpolation
// of the source points.
const double total_length = LinearCurveLength(source);
// segment_length is the length between result points, taken as
// distance traveled between these points on a linear interpolation
// of the source points. The actual Euclidean distance between
// points in the result vector can vary, and is always less than
// or equal to segment_length.
const double segment_length = total_length / (target_count - 1);
// start and finish are the current source segment's endpoints
auto start = source.begin();
auto finish = start + 1;
// src_segment_offset is the distance along a linear interpolation
// of the source curve from its first point to the start of the current
// source segment.
double src_segment_offset = 0;
// src_segment_length is the length of a line connecting the current
// source segment's start and finish points.
double src_segment_length = Distance(*start, *finish);
// The first point in the result is the same as the first point
// in the source.
result.push_back(*start);
for(std::size_t i=1; i<target_count-1; ++i) {
// next_offset is the distance along a linear interpolation
// of the source curve from its beginning to the location
// of the i'th point in the result.
// segment_length is multiplied by i here because iteratively
// adding segment_length could accumulate error.
const double next_offset = segment_length * i;
// Check if next_offset lies inside the current source segment.
// If not, move to the next source segment and update the
// source segment offset and length variables.
while(src_segment_offset + src_segment_length < next_offset) {
src_segment_offset += src_segment_length;
start = finish++;
src_segment_length = Distance(*start, *finish);
}
// part_offset is the distance into the current source segment
// associated with the i'th point's offset.
const double part_offset = next_offset - src_segment_offset;
// part_ratio is part_offset's normalized distance into the
// source segment. Its value is between 0 and 1,
// where 0 locates the next point at "start" and 1
// locates it at "finish". In-between values represent a
// weighted location between these two extremes.
const double part_ratio = part_offset / src_segment_length;
// Use part_ratio to calculate the next point's components
// as weighted averages of components of the current
// source segment's points.
result.push_back({
start->x + part_ratio * (finish->x - start->x),
start->y + part_ratio * (finish->y - start->y)
});
}
// The first and last points of the result are exactly
// the same as the first and last points from the input,
// so the iterated calculation above skips calculating
// the last point in the result, which is instead copied
// directly from the source vector here.
result.push_back(source.back());
return result;
}
int main() {
std::vector<Point> points = {
{ 0.0350462, -0.0589667},
{ 0.0688311, 0.240896 },
{ 0.067369, 0.557199 },
{-0.024258, 0.715255 },
{ 0.0533231, 0.948694 }
};
std::cout << "Source Points:\n";
for(const auto& point : points) {
std::cout << std::setw(14) << point.x << " " << std::setw(14) << point.y << '\n';
}
std::cout << '\n';
auto interpolated = UniformLinearInterpolation(points, 7);
std::cout << "Interpolated Points:\n";
for(const auto& point : interpolated) {
std::cout << std::setw(14) << point.x << " " << std::setw(14) << point.y << '\n';
}
std::cout << '\n';
std::cout << "Source linear interpolated length: " << LinearCurveLength(points) << '\n';
std::cout << "Interpolation's linear interpolated length: " << LinearCurveLength(interpolated) << '\n';
}
For green points equidistant along the polyline:
First pass: walk through the point list and compute the length of every segment and the cumulative length up to the current point. Pseudocode:
cumlen[0] = 0;
for (int i = 1; i < numPoints; i++) {
    len = Sqrt((Points[i].x - Points[i-1].x)^2 + (Points[i].y - Points[i-1].y)^2)
    cumlen[i] = cumlen[i-1] + len;
}
Now find the length of every new piece:
plen = cumlen[numPoints-1] / numpieces;
Second pass: walk through the point list and insert the new points into the appropriate segments.
i = 0;
for (ip = 0; ip < numpieces; ip++) {
    curr = plen * ip;
    while (cumlen[i+1] < curr)
        i++;
    P[ip].x = Points[i].x + (curr - cumlen[i]) * (Points[i+1].x - Points[i].x) /
                            (cumlen[i+1] - cumlen[i]);
    // ...the same for y
}
Examples of real output for numpieces > numPoints and vice versa
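A minimal C++ sketch of this two-pass idea, assuming the Point struct from the question and at least two input points (the function name is made up; unlike the pseudocode above, it also emits the final endpoint):
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { float x, y; };

std::vector<Point> ResampleByCumulativeLength(const std::vector<Point>& pts, int numpieces) {
    // First pass: cumulative length of the polyline up to each point.
    std::vector<float> cumlen(pts.size(), 0.0f);
    for (std::size_t k = 1; k < pts.size(); ++k) {
        const float dx = pts[k].x - pts[k - 1].x;
        const float dy = pts[k].y - pts[k - 1].y;
        cumlen[k] = cumlen[k - 1] + std::sqrt(dx * dx + dy * dy);
    }
    const float plen = cumlen.back() / numpieces;  // length of every new piece

    // Second pass: place each new point inside the source segment that
    // contains its target distance along the polyline.
    std::vector<Point> result;
    std::size_t i = 0;
    for (int ip = 0; ip <= numpieces; ++ip) {
        const float curr = plen * ip;
        while (i + 2 < pts.size() && cumlen[i + 1] < curr) ++i;
        const float t = (curr - cumlen[i]) / (cumlen[i + 1] - cumlen[i]);
        result.push_back({ pts[i].x + t * (pts[i + 1].x - pts[i].x),
                           pts[i].y + t * (pts[i + 1].y - pts[i].y) });
    }
    return result;
}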

Traversing all the points inside a triangle using loops

The following are the three vertices (x, y and z coordinates) of a triangle in 3D:
-0.2035416, 0.1107585, 0.9516008 (vertex A)
-0.0334390, -0.2526040, 0.9751212 (vertex B)
0.2569092, 0.0913718, 0.9817184 (Vertex C)
The projection plane is divided into a grid of (height*width) pixels.
I want to traverse every pixel inside the triangle on the projection plane manually, starting from the bottom of the triangle and moving to the top, and print the coordinates of each pixel inside the triangle to the screen in C++. Say I have already found the top and bottom vertices of the triangle. But now, how do I traverse from bottom to top and print each pixel coordinate? What's the logic behind this?
I have the idea of using two nested for loops like below, but what do I do inside the loops? How do I make the sideways move after each x and y increment?
for (int y = ymin; y <= ymax; ++y) {
    for (int x = xmin; x <= xmax; ++x) {
        // what to do here?
    }
}
Traversing demo:
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <utility>
template <size_t kD>
class Vector{
public:
template <class... Args>
Vector(double coord, Args&&... args) {
static_assert( sizeof...(args)+1 == kD, "Unmatched vector dimension" );
InitCoord(0, coord, std::forward<Args>(args)...);
}
Vector(const Vector &) = default;
Vector &operator=(const Vector &) = default;
double &operator[](const size_t i) {
return coord_[i];
}
double operator[](const size_t i) const {
return coord_[i];
}
friend Vector<kD> operator-(const Vector<kD> &A, const Vector<kD> &B) {
Vector v;
for (size_t i=0; i<kD; ++i)
v[i] = A[i]-B[i];
return v;
}
private:
Vector() = default;
template <class... Args>
void InitCoord(const int pos, double coord, Args&&... args) {
coord_[pos] = coord;
InitCoord(pos+1, std::forward<Args>(args)...);
}
void InitCoord(const int pos) {}
double coord_[kD];
};
class Line {
public:
Line(const double x1, const double y1, const double x2, const double y2)
: x1_(x1), y1_(y1), x2_(x2), y2_(y2) {}
Line(const Vector<2> A, const Vector<2> B)
: x1_(A[0]), y1_(A[1]), x2_(B[0]), y2_(B[1]) {}
double operator()(const double x, const double y) const {
return (y-y1_)*(x2_-x1_) - (x-x1_)*(y2_-y1_);
}
int_fast8_t Sign(const double x, const double y) const {
return Signum( (y-y1_)*(x2_-x1_) - (x-x1_)*(y2_-y1_) );
}
private:
int_fast8_t Signum(const double x) const {
return (0.0 < x) - (x < 0.0);
}
const double x1_,y1_;
const double x2_,y2_;
};
void Transpos(Vector<2> &v) {
v[0] = (v[0]+1)/2*720; // col
v[1] = (v[1]+1)/2*480; // row
}
double CalculateZ(const Vector<2> &D, const Vector<2> &AB, const Vector<2> &AC,
const Vector<3> &AB3, const Vector<3> &AC3) {
const double b = (D[1]*AB[0]-D[0]*AB[1]) / (AC[1]*AB[0]-AC[0]*AB[1]);
const double a = AB[0]==0 ? (D[1]-b*AC[1])/AB[1] : (D[0]-b*AC[0])/AB[0];
std::cout << a << " " << b << std::endl;
return a*AB3[2]+b*AC3[2];
}
int main()
{
const auto A3 = Vector<3>(0.0, 0.0, 7.0);
const auto B3 = Vector<3>(0.0, 0.3, 9.0);
const auto C3 = Vector<3>(0.4, 0.0, 1.0);
const auto AB3 = B3-A3;
const auto AC3 = C3-A3;
const auto BC3 = C3-B3;
// Some projection works here, which I am not good at.
// A B C store the projected triangle coordiate in the [-1,1][-1,1] area
auto A = Vector<2>(0.0, 0.0);
auto B = Vector<2>(0.0, 0.3);
auto C = Vector<2>(0.4, 0.0);
Transpos(A);
Transpos(B);
Transpos(C);
const auto AB2 = B-A;
const auto AC2 = C-A;
const auto BC2 = C-B;
const Line AB(A, B);
const Line AC(A, C);
const Line BC(B, C);
const auto signAB = AB.Sign(C[0],C[1]);
const auto signAC = AC.Sign(B[0],B[1]);
const auto signBC = BC.Sign(A[0],A[1]);
// top
// 0------------720 (col x)
// |
// |
// |
// |
// 480 (row y)
// bottom
for (int row=480-1; row>=0; --row) {
for (int col=0; col<720; ++col) {
if (signAB*AB.Sign(col,row)>=0 && signAC*AC.Sign(col,row)>=0 &&
signBC*BC.Sign(col,row)>=0 )
std::cout << row << "," << col << " Z:"
<< CalculateZ(Vector<2>(col, row)-A, AB2, AC2, AB3, AC3) + A3[2]
<< std::endl;
}
}
return 0;
}
Projection:
first space [-1,1][-1,1]
second space [0,720][0,480]
Let's say we have a (x1,y1) in the first space, then (x_,y_) with x_=(x1+1)/2*720, y_=(y1+1)/2*480 will be in the second space.
More generalize:
first space [xmin,xmax][ymin,ymax]
second space [xmin_,xmax_][ymin_,ymax_]
(x1,y1)
->
( (x1-xmin)/(xmax-xmin)*(xmax_-xmin_)+xmin_ ,
(y1-ymin)/(ymax-ymin)*(ymax_-ymin_)+ymin_ )
If you just want to zoom it, not twist it or something...
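A tiny sketch of that general mapping (the names are illustrative):
struct Range { double min, max; };

// Maps (x1, y1) from the first space into the second space.
void MapPoint(double x1, double y1,
              const Range& xs1, const Range& ys1,   // first space
              const Range& xs2, const Range& ys2,   // second space
              double& x2, double& y2) {
    x2 = (x1 - xs1.min) / (xs1.max - xs1.min) * (xs2.max - xs2.min) + xs2.min;
    y2 = (y1 - ys1.min) / (ys1.max - ys1.min) * (ys2.max - ys2.min) + ys2.min;
}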
Edit #1:
Thanks to @Adrian Colomitchi's advice, which is outstanding, I have improved the demo.
Ax Ay Bx By Cx Cy are now coordinates in the first space; they are then "transposed" into the second space. As a result, the Lines AB, AC, and BC are now "in" the second space, and the two loops are modified accordingly: they now iterate over points of the second space.
How to find z value from (x,y):
AB represents a vector from A(Ax,Ay) to B(Bx,By), i.e. AB = B-A = (Bx-Ax,By-Ay).
For any given point D(Dx,Dy) in the triangle, represent it as AD = a*AB + b*AC: (Dx-Ax, Dy-Ay) = a*(Bx-Ax, By-Ay) + b*(Cx-Ax, Cy-Ay), where Dx, Dy, Ax, Ay, Bx, By, Cx, Cy are known. Find a and b, and then Dz = a*(Bz-Az) + b*(Cz-Az). Dx and Dy in the 3D space can be calculated the same way.
Edit #2:
Z value calculation added to the demo.
I tried to keep the demo simple, but calculating the Z value really involved lots of variables and calculations. I declare a new class called Vector to manage points and vectors, while the Line class remains unchanged.
You need to change the inner loop slightly; don't go from xmin to xmax. For each value of y from ymin to ymax there will be exactly two different pixels (two different x values) which lie exactly on the two edges of the triangle. Calculate those points and then all points between them will be inside the triangle. And you'll have to handle some edge cases such as when one of the edges is horizontal.
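A minimal sketch of this scanline idea, assuming the three vertices are already in pixel coordinates; the names are made up, and degenerate (zero-height) triangles and exact rounding at the edges are glossed over:
#include <algorithm>
#include <cmath>
#include <iostream>
#include <utility>

struct Px { float x, y; };

// x coordinate where edge (a, b) crosses the horizontal line y.
float EdgeX(const Px& a, const Px& b, float y) {
    return a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y);
}

void FillTriangle(Px v0, Px v1, Px v2) {
    Px v[3] = { v0, v1, v2 };
    std::sort(v, v + 3, [](const Px& a, const Px& b) { return a.y < b.y; });
    for (int y = (int)std::ceil(v[0].y); y <= (int)std::floor(v[2].y); ++y) {
        // One boundary always lies on the long edge v0-v2; the other lies on
        // v0-v1 below the middle vertex and on v1-v2 above it.
        float xa = EdgeX(v[0], v[2], (float)y);
        float xb = (y < v[1].y || v[1].y == v[2].y)
                       ? EdgeX(v[0], v[1], (float)y)
                       : EdgeX(v[1], v[2], (float)y);
        if (xa > xb) std::swap(xa, xb);
        for (int x = (int)std::ceil(xa); x <= (int)std::floor(xb); ++x)
            std::cout << x << "," << y << "\n";  // pixel (x, y) is inside the triangle
    }
}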
First you must transform your {0,1} ranges (vert & horz) to pixel coordinates. You speak of 720x480. This is not a square, but a rectangle. If you decide to keep one-to-one scale, you'll get a distorted triangle. If not, perhaps you only use 480x480 pixels.
Second, now that you have your three vertices in pixel space, you can iterate over every pixel in this pixel space and tell whether it belongs to the triangle or not. The 'InTriangle' function for this job is what @felix posted in the code of his solution:
if (signAB*AB(i,j)>=0 && signAC*AC(i,j)>=0 && signBC*BC(i,j)>=0 )

OpenCV: can projectPoints return negative values?

I'm using the cv::projectPoints to get correspondent pixels of a vector of 3D points.
The points are all near each other.
The problem is that for some points I get correct pixel coordinates, but for others I get strange negative values like -22599...
Is it normal that cv::projectPoints return negative values or is it a bug in my code?
void SingleCameraTriangulator::projectPointsToImage2(const std::vector< cv::Vec3d >& pointsGroup, const double scale, std::vector< Pixel >& pixels)
{
    cv::Vec3d t2, r2;
    decomposeTransformation(*g_12_, r2, t2);
    cv::Mat imagePoints2;
    cv::projectPoints(pointsGroup, r2, t2, *camera_matrix_, *distortion_coefficients_, imagePoints2);
    for (std::size_t i = 0; i < imagePoints2.rows; i++)
    {
        cv::Vec2d pixel = imagePoints2.at<cv::Vec2d>(i);
        Pixel p;
        p.x_ = pixel[0];
        p.y_ = pixel[1];
        if ( (p.x_ < 0) || (p.x_ > ((1 / scale) * img_1_->cols)) || (p.y_ < 0) || (p.y_ > ((1 / scale) * img_1_->rows)) )
        {
            cv::Vec3d point = pointsGroup[i];
            std::cout << point << " - " << pixel << " - " << pixel*scale << " problem" << std::endl;
        }
        p.i_ = getBilinearInterpPix32f(*img_2_, scale * p.x_, scale * p.y_);
        pixels.push_back(p);
    }
}
Thank you in advance for any suggestions.
reprojectImageTo3D (are you using it to get the 3D points?) gives large z coordinates (around 10000) for outlier points, so I think your problem is there.
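If that is indeed the cause, one way to act on it is to drop the outlier 3D points before projecting. This is purely an illustrative sketch, not part of the original code: the function name and the 10000 cutoff (taken from the value mentioned above) are assumptions to adjust for your data.
#include <opencv2/calib3d/calib3d.hpp>
#include <cmath>
#include <vector>

std::vector<cv::Point2d> ProjectInliers(const std::vector<cv::Vec3d>& points,
                                        const cv::Vec3d& rvec, const cv::Vec3d& tvec,
                                        const cv::Mat& cameraMatrix,
                                        const cv::Mat& distCoeffs) {
    std::vector<cv::Vec3d> inliers;
    for (const auto& p : points)
        if (std::abs(p[2]) < 10000.0)  // reject the large-z outliers from reprojectImageTo3D
            inliers.push_back(p);

    std::vector<cv::Point2d> pixels;
    if (!inliers.empty())
        cv::projectPoints(inliers, rvec, tvec, cameraMatrix, distCoeffs, pixels);
    return pixels;
}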

Camera motion compensation

I am using OpenCV to implement camera motion compensation for an application. I know I need to calculate the optical flow and then find the fundamental matrix between two frames to transform the image.
Here is what I have done so far:
void VideoStabilization::stabilize(Image *image) {
if (image->getWidth() != width || image->getHeight() != height) reset(image->getWidth(), image->getHeight());
IplImage *currImage = toCVImage(image);
IplImage *currImageGray = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
cvCvtColor(currImage, currImageGray, CV_BGRA2GRAY);
if (baseImage) {
CvPoint2D32f currFeatures[MAX_CORNERS];
char featuresFound[MAX_CORNERS];
opticalFlow(currImageGray, currFeatures, featuresFound);
IplImage *result = transformImage(currImage, currFeatures, featuresFound);
if (result) {
updateImage(image, result);
cvReleaseImage(&result);
}
}
cvReleaseImage(&currImage);
if (baseImage) cvReleaseImage(&baseImage);
baseImage = currImageGray;
updateGoodFeatures();
}
void VideoStabilization::updateGoodFeatures() {
const double QUALITY_LEVEL = 0.05;
const double MIN_DISTANCE = 5.0;
baseFeaturesCount = MAX_CORNERS;
cvGoodFeaturesToTrack(baseImage, eigImage,
tempImage, baseFeatures, &baseFeaturesCount, QUALITY_LEVEL, MIN_DISTANCE);
cvFindCornerSubPix(baseImage, baseFeatures, baseFeaturesCount,
cvSize(10, 10), cvSize(-1,-1), TERM_CRITERIA);
}
void VideoStabilization::opticalFlow(IplImage *currImage, CvPoint2D32f *currFeatures, char *featuresFound) {
const unsigned int WIN_SIZE = 15;
const unsigned int PYR_LEVEL = 5;
cvCalcOpticalFlowPyrLK(baseImage, currImage,
NULL, NULL,
baseFeatures,
currFeatures,
baseFeaturesCount,
cvSize(WIN_SIZE, WIN_SIZE),
PYR_LEVEL,
featuresFound,
NULL,
TERM_CRITERIA,
0);
}
IplImage *VideoStabilization::transformImage(IplImage *image, CvPoint2D32f *features, char *featuresFound) const {
unsigned int featuresFoundCount = 0;
for (unsigned int i = 0; i < MAX_CORNERS; ++i) {
if (featuresFound[i]) ++featuresFoundCount;
}
if (featuresFoundCount < 8) {
std::cout << "Not enough features found." << std::endl;
return NULL;
}
CvMat *points1 = cvCreateMat(2, featuresFoundCount, CV_32F);
CvMat *points2 = cvCreateMat(2, featuresFoundCount, CV_32F);
CvMat *fundamentalMatrix = cvCreateMat(3, 3, CV_32F);
unsigned int pos = 0;
for (unsigned int i = 0; i < featuresFoundCount; ++i) {
while (!featuresFound[pos]) ++pos;
cvSetReal2D(points1, 0, i, baseFeatures[pos].x);
cvSetReal2D(points1, 1, i, baseFeatures[pos].y);
cvSetReal2D(points2, 0, i, features[pos].x);
cvSetReal2D(points2, 1, i, features[pos].y);
++pos;
}
int fmCount = cvFindFundamentalMat(points1, points2, fundamentalMatrix, CV_FM_RANSAC, 1.0, 0.99);
if (fmCount < 1) {
std::cout << "Fundamental matrix not found." << std::endl;
return NULL;
}
std::cout << fundamentalMatrix->data.fl[0] << " " << fundamentalMatrix->data.fl[1] << " " << fundamentalMatrix->data.fl[2] << "\n";
std::cout << fundamentalMatrix->data.fl[3] << " " << fundamentalMatrix->data.fl[4] << " " << fundamentalMatrix->data.fl[5] << "\n";
std::cout << fundamentalMatrix->data.fl[6] << " " << fundamentalMatrix->data.fl[7] << " " << fundamentalMatrix->data.fl[8] << "\n";
cvReleaseMat(&points1);
cvReleaseMat(&points2);
IplImage *result = transformImage(image, *fundamentalMatrix);
cvReleaseMat(&fundamentalMatrix);
return result;
}
MAX_CORNERS is 100 and it usually finds around 70-90 features.
With this code, I get a weird fundamental matrix, like:
-0.000190809 -0.00114947 1.2487
0.00127824 6.57727e-05 0.326055
-1.22443 -0.338243 1
Since I just hold the camera in my hand and try not to shake it (and there weren't any objects moving), I expected the matrix to be close to the identity. What am I doing wrong?
Also, I'm not sure what to use to transform the image. cvWarpAffine needs a 2x3 matrix; should I discard the last row or use another function?
What you're looking for is not the fundamental matrix but rather an affine or perspective transform.
The fundamental matrix describes the relation between two cameras having significantly different viewpoints. It is calculated such that if you have two points x (on one image) and x' (on the other) that are projections of the same point in space, then x'ᵀ F x (the product) is zero. If x and x' are nearly identical... then the only solution is to make F nearly zero (and practically useless). That's why you've got what you have.
The matrix that should indeed be near the identity is a transformation A that transforms the points x into x' = A x (the old image into the new one). Depending on what types of transformations you want to include (affine or perspective), you could (theoretically) use the functions cvGetAffineTransform or cvGetPerspectiveTransform to calculate the transform. For that, you would need 3 or 4 point pairs, respectively.
However, the best choice (I think) is cvFindHomography. It estimates a perspective transform based on all of the point pairs available, using outlier filtering algorithms (RANSAC, for example), and gives you a 3x3 matrix.
Then you can use cvWarpPerspective to transform the images themselves.
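A minimal sketch of that suggestion using OpenCV's C++ API (cv::findHomography / cv::warpPerspective, the counterparts of the C functions named above); basePts and currPts stand for the matched feature pairs from your optical flow step:
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

cv::Mat StabilizeFrame(const cv::Mat& frame,
                       const std::vector<cv::Point2f>& basePts,
                       const std::vector<cv::Point2f>& currPts) {
    // Estimate the perspective transform that maps the current frame onto the
    // base frame; RANSAC filters out bad optical-flow matches.
    cv::Mat H = cv::findHomography(currPts, basePts, cv::RANSAC, 1.0);
    cv::Mat warped;
    cv::warpPerspective(frame, warped, H, frame.size());
    return warped;
}
For a hand-held, mostly static scene this homography should come out close to the identity matrix, unlike the fundamental matrix above.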