I am trying to generate normal maps given a mesh, a camera pose, and camera intrinsics.
My plan is to calculate the vertex normal for each point in the cloud, then project the points onto an image plane using the corresponding camera pose and intrinsics. More specifically, I would first calculate the vertex normals, then convert the point coordinates from world coordinates into camera coordinates with the camera pose. Finally, using the camera intrinsics, the point cloud can be projected onto an image where each pixel represents the surface normal of the corresponding 3D vertex.
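To make the plan concrete, below is a minimal sketch of the projection step I have in mind (pinhole model; the Eigen/OpenCV types and the normal-to-RGB mapping are illustrative assumptions, and there is no depth test, so an occluded point can overwrite a nearer one):
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <vector>

// K: 3x3 intrinsics; R, t: world-to-camera pose; pts/nrm: per-vertex positions and normals
cv::Mat renderNormalMap(const Eigen::Matrix3f& K,
                        const Eigen::Matrix3f& R, const Eigen::Vector3f& t,
                        const std::vector<Eigen::Vector3f>& pts,
                        const std::vector<Eigen::Vector3f>& nrm,
                        int width, int height)
{
    cv::Mat img(height, width, CV_8UC3, cv::Scalar(0, 0, 0));
    for (size_t i = 0; i < pts.size(); ++i) {
        Eigen::Vector3f pc = R * pts[i] + t;            // world -> camera
        if (pc.z() <= 0) continue;                      // behind the camera
        Eigen::Vector3f uvw = K * pc;                   // camera -> pixel (homogeneous)
        int u = static_cast<int>(uvw.x() / uvw.z());
        int v = static_cast<int>(uvw.y() / uvw.z());
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        Eigen::Vector3f n = (R * nrm[i]).normalized();  // rotate the normal into the camera frame
        // map each component from [-1,1] to [0,255]; OpenCV stores channels as BGR
        img.at<cv::Vec3b>(v, u) = cv::Vec3b(
            static_cast<uchar>((n.z() * 0.5f + 0.5f) * 255),
            static_cast<uchar>((n.y() * 0.5f + 0.5f) * 255),
            static_cast<uchar>((n.x() * 0.5f + 0.5f) * 255));
    }
    return img;  // can be saved with cv::imwrite("normals.png", img)
}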
Below is my code:
#include <iostream>
#include <thread>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/features/from_meshes.h>
#include <pcl/visualization/pcl_visualizer.h>

using namespace std;
using namespace pcl;

void readPLY(PolygonMesh::Ptr mesh, string fname, bool printResult=false)
{
    PLYReader reader;
    int success = reader.read(fname, *mesh); // load the file
    if (success == -1) {
        cout << "Couldn't read file " << fname << endl;
        exit(-1);
    }
    if (printResult){
        cout << "Loaded "
             << mesh->cloud.width * mesh->cloud.height
             << " data points from "
             << fname
             << " with the following fields: "
             << endl;

        // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
        PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
        fromPCLPointCloud2(mesh->cloud, *cloud);

        // print the first 10 vertices
        cout << "Vertices:" << endl;
        for (size_t i=0; i<10 && i<cloud->points.size(); ++i)
            cout << " " << cloud->points[i].x
                 << " " << cloud->points[i].y
                 << " " << cloud->points[i].z << endl;

        // print the first 10 polygons
        cout << "Polygons:" << endl;
        for (size_t i=0; i<10 && i<mesh->polygons.size(); ++i){
            cout << mesh->polygons[i] << endl;
        }
    }
}

void computeNormal(PolygonMesh::Ptr mesh,
                   PointCloud<Normal>::Ptr normal,
                   bool printResult=false)
{
    // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
    PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
    fromPCLPointCloud2(mesh->cloud, *cloud);

    // compute surface normals
    pcl::features::computeApproximateNormals(*cloud, mesh->polygons, *normal);

    // print results
    if (printResult){
        cout << "Normal cloud contains "
             << normal->width * normal->height
             << " points" << endl;

        // print the first 10 vertex normals
        cout << "Vertex normals:" << endl;
        for (size_t i=0; i<10 && i<normal->points.size(); ++i)
            cout << " " << normal->points[i] << endl;
    }
}

int main (int argc, char** argv)
{
    // ./main [path/to/ply] (--debug)
    if (argc < 2) {
        cout << "Usage: ./main [path/to/ply] (--debug)" << endl;
        return -1;
    }
    string fname = argv[1];

    // check if the debug flag is set
    bool debug = false;
    for (int i=0; i<argc; ++i){
        string arg = argv[i];
        if (arg == "--debug")
            debug = true;
    }

    // read file
    PolygonMesh::Ptr mesh (new PolygonMesh);
    readPLY(mesh, fname, debug);

    // calculate normals
    PointCloud<Normal>::Ptr normal (new PointCloud<Normal>);
    computeNormal(mesh, normal, debug);
}
Currently, I have already obtained surface normals for each vertex with pcl::features::computeApproximateNormals. Is there a way to use PCL to project the normals onto an image plane, with the xyz-elements of the normal mapped to the RGB channels, and save the image to a file?
Welcome to Stack Overflow. What the documentation says is:
Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface in that point.
From what you say, I gather that you already have the surfaces, for which you can easily calculate surface normals. Normal estimation is used because 3D point cloud data is basically a bunch of sample points from the real world; you do not have surface information in this kind of data. What you do is estimate a surface around each point using planar fitting (2D regression), and from that you obtain the surface normal. You cannot compare these two methods; they essentially serve different purposes.
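For reference, this is roughly what the estimation-based route looks like in PCL, as a minimal sketch (the 3 cm search radius is an arbitrary assumption and has to be tuned to the cloud's scale):
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

// cloud is a pcl::PointCloud<pcl::PointXYZ>::Ptr as in the question
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
ne.setInputCloud(cloud);
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>());
ne.setSearchMethod(tree);                    // neighbor lookup structure
ne.setRadiusSearch(0.03);                    // fit a local plane to neighbors within 3 cm
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
ne.compute(*normals);                        // one estimated normal per input point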
For question two: Yes. Refer to this SO answer.
I'd like to perform the following action using CGAL:
Intersect a surface mesh (the cut mesh) with another surface mesh (the cutted mesh) to divide the cutted mesh into 2 different meshes.
To do so I use the CGAL mesh slicer, which gives me a set of polylines. Unfortunately, the slicer doesn't provide the information of which face of the cutted mesh each polyline belongs to; it only performs the slicing itself.
How can I retrieve this information?
I need it to subdivide the cutted mesh along the polylines, and then divide it into 2 separate meshes.
In my context the cut mesh is a surface mesh (screenshot omitted); a second screenshot showed the cut mesh together with the cutted mesh.
In my code, I generate a Plane from each face of the cut mesh and perform a slice operation against the cutted mesh.
Here is my code:
//Import off files and instantiate meshes
const char* filename1 = "test.off";
const char* filename2 = "cutMesh2.off";
std::ifstream input(filename1);
Mesh cuttedMesh, cutMesh;
if (!input || !(input >> cuttedMesh))
{
    std::cerr << "First mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();
input.open(filename2);
if (!input || !(input >> cutMesh))
{
    std::cerr << "Second mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

// AABB slicer constructed from the cutted mesh
AABB_tree tree(edges(cuttedMesh).first, edges(cuttedMesh).second, cuttedMesh);
tree.accelerate_distance_queries();
CGAL::Polygon_mesh_slicer<Mesh, K> slicer_aabb(cuttedMesh, tree);

std::cout << cutMesh.num_vertices() << std::endl;

// For each face of the cut mesh
BOOST_FOREACH(face_descriptor f, faces(cutMesh))
{
    std::cout << "Face " << f << std::endl;
    Point points[3];
    int i = 0;
    // for each point of the current face
    BOOST_FOREACH(vertex_descriptor v, CGAL::vertices_around_face(cutMesh.halfedge(f), cutMesh))
    {
        points[i] = cutMesh.point(v);
        ++i;
    }

    Polylines polylines;
    // Perform the slice between the current face of the cut mesh and the cutted mesh
    slicer_aabb(K::Plane_3(points[0], points[1], points[2]), std::back_inserter(polylines));
    std::cout << "the slicer intersects " << polylines.size() << " polylines" << std::endl;

    // for each polyline computed by this face of the cut mesh
    BOOST_FOREACH(Polyline_type polyline, polylines)
    {
        std::cout << "Polyline : " << polyline.size() << " points" << std::endl;
        BOOST_FOREACH(Point point, polyline)
        {
            std::cout << "Point : " << point << std::endl;
        }
    }
    std::cout << std::endl;
    polylines.clear();
}
Thanks for your help.
The answer to the post was proposed by sloriot:
Try using the non-documented clip() function located in Polygon_mesh_processing/include/CGAL/Polygon_mesh_processing/internal/clip.h
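In later CGAL releases this function was promoted to the public API as CGAL::Polygon_mesh_processing::clip() in <CGAL/Polygon_mesh_processing/clip.h>. A minimal sketch of how it might be called against the meshes from the question (the clip_volume named parameter comes from the documented versions; check it against the release you actually use):
#include <CGAL/Polygon_mesh_processing/clip.h>

namespace PMP = CGAL::Polygon_mesh_processing;

// Clips cuttedMesh in place by cutMesh; with clip_volume(false) only the
// surface is clipped, keeping the part on the negative side of cutMesh.
bool ok = PMP::clip(cuttedMesh, cutMesh,
                    PMP::parameters::clip_volume(false));

Recent releases also document PMP::split(), which keeps both sides of the cut instead of discarding one.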
I have written an OpenCV program that reads a video, looks for red pixels in each frame, and exports the frame as a png file if the number of red pixels exceeds a certain amount. The code works well, but I am looking for ways to further reduce computation time because the videos are 4-5 hrs long. I was reading posts on using parallel_pipeline and was wondering whether using it would substantially speed up the process. Based on what I read, it seems that I will have to assign a thread to each major task (reading video frames, color detection/thresholding with inRange, and image saving). So my questions are:
1) Would this speed up the process compared to the default multithreading that opencv does?
2) Given what the code needs to do, are there more appropriate ways for multithreading than parallel_pipeline?
I am fairly new to this topic, so any help is much appreciated!
/**
 * #CheckMotionParallel
 * #Motion detection using color detection and image thresholding
 */

//opencv
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>
//C
#include <stdio.h>
//C++
#include <iostream>
#include <sstream>
#include "tbb/blocked_range.h"
#include "tbb/parallel_for.h"
#include "tbb/parallel_reduce.h"
#include "tbb/task_scheduler_init.h"
#include "tbb/mutex.h"
#include "tbb/tbb_thread.h"
#include "tbb/blocked_range2d.h"

using namespace cv;
using namespace std;
using namespace tbb;

void help();

void help()
{
    cout
    << "--------------------------------------------------------------------------" << endl
    << "Note for program CheckMotion" << endl
    << "CheckMotion does the following" << endl
    << "1) It searches each frame in a video and looks for a specified range of colors in the frame" << endl
    << "2) Pixels falling within the range will be converted to white while everything else is turned to black" << endl
    << "3) For each frame, the program gives: frame number/time stamp, total pixel count, and white pixel count" << endl
    << "4) For frames whose white pixel count exceeds a threshold, it will export those frames as individual png files" << endl
    << "--------------------------------------------------------------------------" << endl
    << endl;
}

int64 startTime;
int NumThreads = task_scheduler_init::default_num_threads();

int main(int argc, char**)
{
    //Print out program note
    help();

    ///Part I: Read-in the video
    VideoCapture cap("/Users/chi/Desktop/Video analyses/testvideo4.mp4");

    //Error message if the video cannot be opened
    //Create an object denoting the frames
    //Create a window for showing the video as CheckMotion runs
    //For loop looking through frames
    if(cap.isOpened()) {
        startTime = getTickCount();
        Mat frame;
        for(;;)
        {
            //Show each frame in the video window previously created
            double tfreq = getTickFrequency();
            double secs = ((double) getTickCount()-startTime)/tfreq;
            cap >> frame;
            // namedWindow("Frame");
            // imshow("Frame",frame);
            waitKey(10);

            //Create a string for the frame number that gets updated on each cycle of the loop
            stringstream ss;
            ss << cap.get(CAP_PROP_POS_FRAMES);
            string FrameNumberString = ss.str();

            //Create filenames for later use in result output and image save, using the frame number as ref
            stringstream maskedfilename;
            stringstream rawfilename;
            maskedfilename << "/Users/chi/Desktop/test/masked" << FrameNumberString.c_str() << ".png";
            rawfilename << "/Users/chi/Desktop/test/raw" << FrameNumberString.c_str() << ".png";

            ///Part II: Image thresholding and image saving
            //Create an object representing the new image after thresholding
            Mat masked;

            //inRange converts the pixels that fall within the specified range to white and everything else to black
            //The range is specified by a lower [Scalar(10,0,90)] and an upper [Scalar(50,50,170)] threshold
            //A color is defined by its BGR score
            //The thresholded image will then be represented by the object "masked"
            inRange(frame, Scalar(10,0,90), Scalar(50,50,170), masked);

            //Integer variables for the total pixel count and white pixel count of each frame
            int totalpixel;
            int whitepixel;

            //Total pixel count equals the number of rows times the number of columns of the frame
            totalpixel = masked.rows*masked.cols;

            //Use countNonZero to count the number of white pixels
            whitepixel = countNonZero(masked);

            //Output frame number, total pixel count and white pixel count for each frame
            //Exit the loop when reaching the last frame (i.e. pixel count drops to 0)
            if(totalpixel==0){
                cout << "End of the video" << endl;
                cout << "Number of threads: " << NumThreads << endl;
                cap.release();
                break;
            }
            else {
                cout
                << "Frame:" << ss.str() << endl
                << "Number of total pixels:" << totalpixel << endl
                << "Pixels of target colors:" << whitepixel << endl
                << "Run time = " << fixed << secs << "seconds" << endl
                << endl;

                //Save the frames whose white pixel count is larger than a user-determined value (50 in the present case)
                //Save both the original as well as the processed images
                if (whitepixel > 50){
                    imwrite(rawfilename.str(),frame);
                    imwrite(maskedfilename.str(),masked);
                }
            }
        }
    }
}
Just remove this line :)
waitKey(10);
It blocks for at least 10 ms on every frame, which alone adds over an hour across a 4-5 hour video. Then replace endl with '\n'; endl forces a flush of the output stream on every use.
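As for question 2, here is a minimal sketch of the three-stage parallel_pipeline the question describes, using the classic TBB API that matches the includes above (the input path and thresholds are placeholders taken from the question). Decoding stays serial because VideoCapture is stateful; thresholding and saving can run in parallel since each frame is independent.
#include <opencv2/opencv.hpp>
#include "tbb/pipeline.h"
#include <sstream>

struct Job {                 // per-frame data handed between pipeline stages
    cv::Mat frame, masked;
    int index;
};

int main()
{
    cv::VideoCapture cap("testvideo4.mp4");    // placeholder path
    if (!cap.isOpened()) return 1;
    int frameIndex = 0;

    tbb::parallel_pipeline(
        8,                                     // tokens in flight; tune to core count
        // Stage 1: decode frames; serial because VideoCapture is stateful
        tbb::make_filter<void, Job*>(
            tbb::filter::serial_in_order,
            [&](tbb::flow_control& fc) -> Job* {
                Job* job = new Job;
                if (!cap.read(job->frame)) { delete job; fc.stop(); return NULL; }
                job->index = frameIndex++;
                return job;
            }) &
        // Stage 2: color thresholding; independent per frame, so fully parallel
        tbb::make_filter<Job*, Job*>(
            tbb::filter::parallel,
            [](Job* job) -> Job* {
                cv::inRange(job->frame, cv::Scalar(10, 0, 90),
                            cv::Scalar(50, 50, 170), job->masked);
                return job;
            }) &
        // Stage 3: count white pixels and save qualifying frames
        tbb::make_filter<Job*, void>(
            tbb::filter::parallel,             // writes go to distinct files
            [](Job* job) {
                if (cv::countNonZero(job->masked) > 50) {
                    std::ostringstream name;
                    name << "raw" << job->index << ".png";
                    cv::imwrite(name.str(), job->frame);
                }
                delete job;
            }));
    return 0;
}
Whether this beats OpenCV's internal threading depends on where the time goes: if decoding dominates, the serial first stage remains the bottleneck and the pipeline mainly overlaps decoding with thresholding and disk I/O.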
I am using VTK to generate a 3D Delaunay triangulation from a list of input points obtained with a terrestrial laser scanner. So far, I have successfully generated some 3D triangulations and saved them in the .vtk or .vtu file formats. However, I need to save them in one of the popular formats such as .ply, .stl, .obj, or .wrl. Could you please tell me how I can save a 3D triangulation in one of these formats? My other question is about the SetAlpha and SetTolerance parameters; could you please explain these parameters in detail?
Thanks in advance
int main( int argc, char* argv[] )
{
    //load the point cloud
    vtkSmartPointer<vtkSimplePointsReader> reader = vtkSmartPointer<vtkSimplePointsReader>::New();
    reader->SetFileName("kucuk50k.xyz");
    reader->Update();
    vtkPolyData* polydata = reader->GetOutput();
    std::cout << "The point cloud is loaded" << std::endl;
    //end of point cloud loading
    std::cout << "----------------------------------------------" << std::endl;

    // Generate a mesh from the input points. If Alpha is non-zero, then
    // tetrahedra, triangles, edges and vertices that lie within the
    // alpha radius are output.
    std::cout << "Start delaunay 3d triangulation" << std::endl;
    vtkSmartPointer<vtkDelaunay3D> delaunay3DAlpha = vtkSmartPointer<vtkDelaunay3D>::New();
    delaunay3DAlpha->SetInputConnection(reader->GetOutputPort());
    delaunay3DAlpha->SetAlpha(0.01);
    std::cout << "3d Delaunay computed" << std::endl;
    std::cout << "----------------------------------------------" << std::endl;

    std::cout << "Start writing the triangulation" << std::endl;
    vtkSmartPointer<vtkXMLUnstructuredGridWriter> ugWriter = vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
    ugWriter->SetInputConnection(delaunay3DAlpha->GetOutputPort());
    ugWriter->SetFileName("delaunayy_50k.vtk");
    ugWriter->Write();
    std::cout << "VTK file created" << std::endl;
    return EXIT_SUCCESS;
}
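One route to .ply, sketched against the pipeline above: vtkPLYWriter (and likewise vtkSTLWriter) expects vtkPolyData, while vtkDelaunay3D outputs a vtkUnstructuredGrid of tetrahedra, so the outer surface has to be extracted first, e.g. with vtkDataSetSurfaceFilter:
#include <vtkDataSetSurfaceFilter.h>
#include <vtkPLYWriter.h>

// Reduce the tetrahedral mesh to its boundary surface (triangles)
vtkSmartPointer<vtkDataSetSurfaceFilter> surface =
    vtkSmartPointer<vtkDataSetSurfaceFilter>::New();
surface->SetInputConnection(delaunay3DAlpha->GetOutputPort());

// Write the extracted surface as PLY
vtkSmartPointer<vtkPLYWriter> plyWriter = vtkSmartPointer<vtkPLYWriter>::New();
plyWriter->SetInputConnection(surface->GetOutputPort());
plyWriter->SetFileName("delaunay_50k.ply");
plyWriter->Write();
As for the second question, as I read the VTK documentation: SetAlpha() is the alpha-shape radius, so with a non-zero alpha only tetrahedra, triangles, edges and vertices whose circumsphere is smaller than alpha are output; SetTolerance() controls how close two input points may be, as a fraction of the bounding-box diagonal, before they are merged into one.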
You are trying to compute a triangular surface from a scanned terrain, right? Is there a good reason to use 3D Delaunay, which creates tetrahedra, instead of 2.5D Delaunay? You may have a look at my Fade2.5D library; there is a free student version. A limitation of 2.5D, however, is that you can't represent vertical walls, and thus buildings are not directly supported.
I'm a beginner in OpenCV and currently I'm using Visual Studio 2013 (64-bit) and OpenCV 3.2 (C++) to construct a two-view geometry and try to display the matched 3D points in MeshLab. I use triangulatePoints() to get Points4D, which is a 4*N matrix that contains the coordinates of the matched points from the two images. This is the documentation of writeCloud().
triangulatePoints(CameraMatrix_1, CameraMatrix_2, matchpoints_1, matchpoints_2, Points4D);
writeCloud("twoview.ply", cloud, noArray(), noArray(), false);
My question is: what should the cloud input of writeCloud() be so that I can save those 3D points into a .ply file and display them? Assume that I do not assign colors to the point cloud for now.
Also, I have tried using MATLAB to generate a pointcloud.ply file and analyse it with readCloud(); the following code successfully reads a point cloud and saves it into another one. But strangely, the cv::Mat twoviewcloud here is a 1*N matrix. How could you construct a point cloud from a one-dimensional array? I am totally confused.
Mat twoviewcloud = readCloud("pointcloud.ply");
writeCloud("trial.ply", twoviewcloud, noArray(), noArray(), false);
I would sincerely thank you if someone could give me some hint!
OK, so I am still confused about how to use the original OpenCV function writeCloud(); however, I could just implement my own function to write the .ply file. Here is the code. It is quite simple, actually, and you can read the wiki page for the detailed .ply format.
struct dataType { Point3d point; int red; int green; int blue; };
typedef dataType SpacePoint;
vector<SpacePoint> pointCloud;

// ... fill pointCloud with the triangulated points ...

// write the .ply header, then one "x y z" line per vertex
ofstream outfile("pointcloud.ply");
outfile << "ply\n" << "format ascii 1.0\n" << "comment VTK generated PLY File\n";
outfile << "obj_info vtkPolyData points and polygons : vtk4.0\n" << "element vertex " << pointCloud.size() << "\n";
outfile << "property float x\n" << "property float y\n" << "property float z\n" << "element face 0\n";
outfile << "property list uchar int vertex_indices\n" << "end_header\n";
for (size_t i = 0; i < pointCloud.size(); i++)
{
    Point3d point = pointCloud.at(i).point;
    outfile << point.x << " ";
    outfile << point.y << " ";
    outfile << point.z << " ";
    outfile << "\n";
}
outfile.close();
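Coming back to the original writeCloud() question: it accepts any Mat or vector of 3-channel points, which is also why the 1*N matrix returned by readCloud() is a perfectly valid cloud; each element is one Point3f. A sketch of converting the 4*N homogeneous output of triangulatePoints() into such a cloud (the convertTo() call is there because the output depth depends on the input matrices):
#include <opencv2/viz.hpp>
#include <vector>

// Turn the 4xN homogeneous matrix from triangulatePoints() into 3D points
std::vector<cv::Point3d> toCloud(const cv::Mat& points4D)
{
    cv::Mat p;
    points4D.convertTo(p, CV_64F);       // normalize the element type
    std::vector<cv::Point3d> cloud;
    cloud.reserve(p.cols);
    for (int i = 0; i < p.cols; ++i) {
        double w = p.at<double>(3, i);   // divide out the homogeneous coordinate
        cloud.emplace_back(p.at<double>(0, i) / w,
                           p.at<double>(1, i) / w,
                           p.at<double>(2, i) / w);
    }
    return cloud;
}

// usage, matching the call from the question:
// std::vector<cv::Point3d> cloud = toCloud(Points4D);
// cv::viz::writeCloud("twoview.ply", cloud, cv::noArray(), cv::noArray(), false);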
I've successfully parsed the IQE (Inter-Quake Export) format and now I'm stuck at displaying it in bind pose.
All vertices have a weird transformation, where the root bone (which covers the full mesh for orientation) is not the only bone with influence on a vertex. You can see it at the arm/shoulder/neck area of the mesh. This mesh has 3 bones: one root bone covering the whole mesh and two arm bones.
You can see how the mesh should look in the background (exported as OBJ).
For better understanding, I have the following system:
1. I load all vertex data into one big VBO (vertices, UVs, normals, tangents, bitangents, boneIndices(4) (indices into the joint list) and boneWeights(4)); how these buffers feed the skinning sum is sketched after this list.
2. I add all joints to a joint list and create a tree system (a simple linked list with position, rotation and parent pointer).
3. I have a separate list called boneMatrices where I store, well, my bone matrices. Currently this happens every frame; later I will precalculate the matrices for each animation frame.
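For reference, the per-vertex transform these buffers are meant to drive is standard linear blend skinning, sketched below on the CPU side (pixel::vector3 and the operators are hypothetical stand-ins mirroring the pixel:: types used later):
// Blend up to 4 bone influences for a single vertex position
pixel::vector3 skinned(0, 0, 0);
for (int k = 0; k < 4; k++)
{
    int   bone   = boneIndices[k];       // index into boneMatrices
    float weight = boneWeights[k];       // the four weights should sum to 1
    // transform by the bone's final matrix, then blend by weight
    skinned = skinned + (boneMatrices[bone] * position) * weight;
}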
I try to calculate the bone matrix the following way:
for (int i = 0; i < this->jointList.size(); i++)
{
    pixel::CJoint *joint = this->jointList.at(i);

    std::cout << "Joint ------- " << joint->name << " -------- values: \n";
    std::cout << "Translation: " << joint->position.x << " " << joint->position.y << " " << joint->position.z << "\n";
    std::cout << "Quaternion: " << joint->rotation.x << " " << joint->rotation.y << " " << joint->rotation.z << " " << joint->rotation.w << "\n";

    pixel::matrix4 rotation = pixel::CMatrix::fromQuaternion(joint->rotation);
    pixel::matrix4 offset = pixel::CMatrix::translateMatrix(joint->position);

    pixel::matrix4 baseMatrix = rotation * offset; // translation * rotation
    joint->bindPose = baseMatrix;
    joint->invBindPose = pixel::CMatrix::inverseMatrix(baseMatrix);

    if (joint->parent != NULL)
    {
        std::cout << "Joint: " << joint->name << " is child of " << joint->parent->name << " \n";
        joint->bindPose = joint->bindPose * joint->parent->invBindPose;
        joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
    }

    std::cout << "\n";
}
I store the transpose of joint->invBindPose (otherwise the mesh is upside down) in boneMatrices and send it to the shader:
boneMatrices is a std::vector of matrix4.
this->material.setParameter("boneMatrix", this->boneMatrices.at(0), this->boneMatrices.size());
The root bone bind calculation has to be right (at least I think so), because the head is at the right place and the eyes are too (which don't have any bone influence currently).
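For comparison, the conventional bind-pose accumulation multiplies each joint's local transform onto the parent's global bind pose and only inverts the result at the end; sketched below with the same pixel:: types (multiplication order depends on the matrix convention of the library, so treat this as a reference rather than a drop-in fix):
// Sketch: conventional global bind pose accumulation.
// Assumes jointList is sorted so that parents precede their children.
for (size_t i = 0; i < jointList.size(); ++i)
{
    pixel::CJoint *joint = jointList.at(i);

    // local bind transform of this joint relative to its parent
    pixel::matrix4 local =
        pixel::CMatrix::translateMatrix(joint->position) *
        pixel::CMatrix::fromQuaternion(joint->rotation);

    // global bind pose: parent's global bind pose times the local transform
    joint->bindPose = (joint->parent != NULL)
        ? joint->parent->bindPose * local
        : local;

    // the matrix uploaded for skinning is the inverse of the global bind pose
    joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
}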