Graphing 3D Plot with C++ GNUPlot? (C++ 14, VS 22)

I'm attempting to plot a set of 3D points in C++ using the gnuplot-iostream library.
I'm using C++ 14 with Visual Studio 2022.
I understand how to plot 2D points with this library, but I'm quite confused about how to plot a set of 3D points.
Let's say my 3D points are (0, 0, 0), (1, 1, 1), and (2, 2, 2).
One of the things I tried was to create a vector for each 3D point.
I tried the following example code from the GNUPlot examples page:
#include <string>
#include <vector>
#include "gnuplot-iostream.h"
using namespace std;

int main()
{
    vector<vector<float>> data = {};
    data.push_back(vector<float>{0, 0, 0});
    data.push_back(vector<float>{1, 1, 1});
    data.push_back(vector<float>{2, 2, 2});

    Gnuplot gp;
    gp << "set title 'test'\n";
    gp << "set dgrid3d 30,30\n";
    gp << "set hidden3d\n";
    gp << "splot" << gp.file1d(data) << "u 1:2:3 with lines title 'test'" << std::endl;
    return 0;
}
However, that gives me the following plot:
I also tried to create a dedicated struct for the 3D points, but that resulted in an error inside the gnuplot-iostream header.
When graphing 2D points I used std::pair to represent each point, so what datatype should I use to represent 3D points?
Thank you for reading my post, any guidance is appreciated.

It was much simpler than expected: removing dgrid3d and replacing lines with points does the trick (thanks ypnos):
#include <string>
#include <vector>
#include "gnuplot-iostream.h"
using namespace std;

int main()
{
    vector<vector<float>> data = {};
    data.push_back(vector<float>{0, 0, 0});
    data.push_back(vector<float>{1, 1, 1});
    data.push_back(vector<float>{2, 2, 2});

    Gnuplot gp;
    gp << "set title 'test'\n";
    gp << "splot" << gp.file1d(data) << "u 1:2:3 with points pt 5 title 'test'" << std::endl;
    return 0;
}
Output:

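A note on the original datatype question: gnuplot-iostream also accepts containers of tuples (assuming your version of the library supports std::tuple, as recent versions do), so a std::tuple per 3D point is the three-column analogue of the std::pair used for 2D points. A minimal sketch:

#include <string>
#include <tuple>
#include <vector>
#include "gnuplot-iostream.h"
using namespace std;

int main()
{
    // One std::tuple per 3D point, analogous to std::pair for 2D points.
    vector<tuple<float, float, float>> data;
    data.push_back(make_tuple(0.f, 0.f, 0.f));
    data.push_back(make_tuple(1.f, 1.f, 1.f));
    data.push_back(make_tuple(2.f, 2.f, 2.f));

    Gnuplot gp;
    gp << "set title 'test'\n";
    gp << "splot" << gp.file1d(data) << "u 1:2:3 with points pt 5 title 'test'" << std::endl;
    return 0;
}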
Related

Plot Contour with Gaps in GNUPlot C++? (C++ 14, VS 22)

I'm attempting to create a contour plot with the gnuplot-iostream C++ library, but I want to be able to plot with holes in the data (no interpolation where there is no data).
I'm using C++ 14 with Visual Studio 2022.
I have the following example code:
#include <cmath>
#include <iostream>
#include <string>
#include <vector>
#include "gnuplot-iostream.h"
using namespace std;

int main()
{
    vector<vector<float>> data = {};
    for (float x = -5; x <= 5; x += 0.1f)
    {
        for (float y = -5; y <= 5; y += 0.1f)
        {
            float dist = sqrt(pow(x, 2) + pow(y, 2));
            float z = 1 / dist;
            if ((dist > 0.1f) && (z < 0.45f))
                data.push_back(vector<float>{x, y, z});
        }
    }

    Gnuplot gp;
    gp << "set title 'test'\n";
    gp << "set dgrid3d 100,100\n";
    gp << "splot" << gp.file1d(data) << "with pm3d title 'test'" << endl;
    cin.get();
    return 0;
}
Which produces the following contour plot:
However, in the plot above, the middle "crater" doesn't actually have any data in it:
The function automatically interpolates any areas without data to create the contour plot.
Is there any way to stop that from happening, so that gnuplot leaves areas without data empty instead of interpolating them?
Currently I'm using dgrid3d to build the grid, but it does not appear to be possible to achieve this with it. Is there a different plotting approach that would be better suited to what I'm trying to accomplish?
Thanks for reading my post, any guidance is appreciated.
First a comment that "contour plot" means something else entirely to gnuplot. There are no contours in this plot.
The issue here is that by setting dgrid3d you reconstruct the full grid with no hole in it. Don't do that. You are generating the data as a grid anyhow, it's just that some of the grid points have a special status. Here are two ways to do it in gnuplot directly. Note that dgrid3d is not used for either method. I don't know anything about iostream so I leave that layer of coding to you.
Method 1 -
Write NaN ("not-a-number") for the points to be omitted.
set samples 101,101
set isosamples 101,101
set urange [-5:5]
set vrange [-5:5]
dist(x,y) = sqrt(x**2 + y**2)
z(x,y) = 1/dist(x,y)
splot '++' using (dist($1,$2)>0.1 && z($1,$2)<0.45 ? $1 : NaN):2:(z($1,$2)) with pm3d
Method 2 -
Write the z value for all points but tell gnuplot to clip at z=0.45
set samples 101,101
set isosamples 101,101
set urange [-5:5]
set vrange [-5:5]
dist(x,y) = sqrt(x**2 + y**2)
z(x,y) = 1/dist(x,y)
set zrange [*:0.45]
splot '++' using 1:2:(z($1,$2)) with pm3d
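For the gnuplot-iostream side, here is a rough sketch of Method 1 applied to the original C++ code: build the full grid, but write NaN for the z value of every point that should be omitted, and send the data as a 2D grid so pm3d receives blank-line-separated blocks. This is not part of the answer above; it assumes gnuplot-iostream's file2d helper, which writes nested containers as gridded blocks.

#include <cmath>
#include <limits>
#include <tuple>
#include <vector>
#include "gnuplot-iostream.h"
using namespace std;

int main()
{
    // Full grid of (x, y, z); z is NaN wherever the point should be omitted,
    // so gnuplot leaves a hole instead of interpolating.
    vector<vector<tuple<float, float, float>>> grid;
    for (float x = -5; x <= 5; x += 0.1f)
    {
        vector<tuple<float, float, float>> row;
        for (float y = -5; y <= 5; y += 0.1f)
        {
            float dist = sqrt(x * x + y * y);
            float z = 1 / dist;
            if (!(dist > 0.1f && z < 0.45f))
                z = numeric_limits<float>::quiet_NaN();
            row.push_back(make_tuple(x, y, z));
        }
        grid.push_back(row);
    }

    Gnuplot gp;
    gp << "set title 'test'\n";
    // No dgrid3d: the data is already gridded, and NaN marks the hole.
    gp << "splot" << gp.file2d(grid) << "with pm3d title 'test'" << std::endl;
    return 0;
}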

CGAL Inexact Volume calculation

My Problem
I would like to calculate the exact volume of an intersection of polygon meshes. Unfortunately the result is wrong.
I think it has something to do with me choosing the wrong options.
I have STEP/OFF files (both work). I move a smaller cylinder into a bigger cylinder.
Then I calculate the intersection. I do not use a point map.
If I calculate the volume of the first three intersections, CGAL tells me their volume is zero, but it is not.
Why the result is wrong
I know this because:
I described the problem analytically and solved the integral with Matlab => the volume is not 0.
I solved the problem using FreeCAD => the volume is the same as in Matlab.
I computed the volume in CGAL => the result does not match 1 and 2.
I wrote the results of all the intersections into files, and the files for the problematic steps are not empty. With a mesh viewer (gmsh or MeshLab) I can confirm their height, width and length. The meshes do intersect, so the volume cannot be 0.
What I have done
I have read this:
The Exact Computation Paradigm
Robustness and Precision Issues in Geometric Computation
FAQ: I am using double (or float or ...) as my number type and get assertion failures or incorrect output. Is this a bug in CGAL?
I did not understand how these three apply to my situation.
I am using the Exact_predicates_exact_constructions_kernel and I defined CGAL_DONT_USE_LAZY_KERNEL. I have also tried the other kernels, with and without CGAL_DONT_USE_LAZY_KERNEL; the result does not change.
I do not use the same variable for the input and the output of the intersection as in Polygon_mesh_processing/corefinement_consecutive_bool_op.cpp, so I do not use a point map for the result.
If needed I will supply the entire example, but I think I did something wrong, and the includes and the way I calculate the volume should suffice.
// originalExampleFrom corefinement_parallel_union_meshes.cpp;
#include <CGAL/Exact_predicates_exact_constructions_kernel_with_sqrt.h>
//#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polygon_mesh_processing/transform.h>
#include <CGAL/Polygon_mesh_processing/intersection.h>
#include <CGAL/Named_function_parameters.h>
#include <CGAL/boost/graph/named_params_helper.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/aff_transformation_tags.h>
#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <CGAL/Polygon_mesh_processing/repair.h>
#include <CGAL/Polygon_mesh_processing/IO/polygon_mesh_io.h>
#include <CGAL/Aff_transformation_3.h>
#include <iostream>
#include <fstream> // for write to file
#include <cassert>
#include <typeinfo>
//#define CGAL_DONT_USE_LAZY_KERNEL
/*
The corefinement operation (which is also internally used in the three Boolean operations) will correctly change the topology of the input surface mesh
if the point type used in the point property maps of the input meshes is from a CGAL Kernel with exact predicates.
If that kernel does not have exact constructions, the embedding of the output surface mesh might have self-intersections.
In case of consecutive operations, it is thus recommended to use a point property map with points from a kernel
with exact predicates and exact constructions (such as CGAL::Exact_predicates_exact_constructions_kernel).
In practice, this means that with exact predicates and inexact constructions, edges will be split at each intersection with a triangle but the position of the intersection point might create self-intersections due to the limited precision of floating point numbers.
*/
typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel; // read the text above about the kernel
typedef Kernel::Point_3 Point_3;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
#define CGAL_DONT_USE_LAZY_KERNEL
namespace PMP = CGAL::Polygon_mesh_processing;
bool simulateDrillingRadial(Mesh cutter, Mesh rotor, Mesh &out, double step) { //Kernel::Point_3 *volume
bool validIntersection = false;
CGAL::Aff_transformation_3<Kernel> trans(CGAL::Translation(),
Kernel::Vector_3(0, step, Z_INIT)); // step * 2
PMP::transform(trans, cutter);
assert(!CGAL::Polygon_mesh_processing::does_self_intersect(rotor));
assert(!CGAL::Polygon_mesh_processing::does_self_intersect(cutter));
assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(cutter));
assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(rotor));
validIntersection = CGAL::exact(
PMP::corefine_and_compute_intersection(cutter, rotor, out));
#ifndef NDEBUG
CGAL::IO::write_polygon_mesh("union" + std::to_string(step) + ".off", out,
CGAL::parameters::stream_precision(17));
assert(validIntersection);
std::cout << "Cutter Volume: " << PMP::volume(cutter) << std::endl;
std::cout << "Rotor Volume: " << PMP::volume(rotor) << std::endl;
std::cout << "Out Volume: " << CGAL::exact(PMP::volume(out)) << std::endl;
std::cout << "Numb. of Step: " << step << std::endl;
std::cout << "Bohrtiefe : " << Y_INIT - DRMAX * 1.0 / ND * step
<< std::endl;
#endif
return validIntersection;
}
The rest of the code:
bool simulateDrillingRadial(Mesh cutter, Mesh rotor, Mesh &out, double step) { //Kernel::Point_3 *volume
bool validIntersection = false;
CGAL::Aff_transformation_3<Kernel> trans(CGAL::Translation(),
Kernel::Vector_3(0, step, Z_INIT)); // step * 2
PMP::transform(trans, cutter);
assert(!CGAL::Polygon_mesh_processing::does_self_intersect(rotor));
assert(!CGAL::Polygon_mesh_processing::does_self_intersect(cutter));
assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(cutter));
assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(rotor));
validIntersection = CGAL::exact(
PMP::corefine_and_compute_intersection(cutter, rotor, out));
#ifndef NDEBUG
CGAL::IO::write_polygon_mesh("union" + std::to_string(step) + ".step", out,
CGAL::parameters::stream_precision(17));
assert(validIntersection);
std::cout << "Cutter Volume: " << PMP::volume(cutter) << std::endl;
std::cout << "Rotor Volume: " << PMP::volume(rotor) << std::endl;
std::cout << "Out Volume: " << CGAL::exact(PMP::volume(out)) << std::endl;
std::cout << "Bohrtiefe : " << step << std::endl;
#endif
return validIntersection;
}
int main(int argc, char **argv) {
bool validRead = false;
bool validIntersection = false;
Mesh cutter, rotor; //out; // , out;
Mesh out[ND];
Kernel::Point_3 centers[ND];
//Kernel::FT volume[ND];
//GetGeomTraits<TriangleMesh, CGAL_NP_CLASS>::type::FT volume[ND];
double steps[40] = { 40.0000, 39.9981, 39.9926, 39.9833, 39.9703, 39.9535,
39.9330, 39.9086, 39.8804, 39.8482, 39.8121, 39.7719, 39.7276,
39.6791, 39.6262, 39.5689, 39.5070, 39.4404, 39.3689, 39.2922,
39.2103, 39.1227, 39.0293, 38.9297, 38.8235, 38.7102, 38.5893,
38.4601, 38.3219, 38.1736, 38.0140, 37.8415, 37.6542, 37.4490,
37.2221, 36.9671, 36.6740, 36.3232, 35.8661, 35.0000 };
const std::string cutterFile = CGAL::data_file_path("Cutter.off");
const std::string rotorFile = CGAL::data_file_path("Rotor.off");
validRead = (!PMP::IO::read_polygon_mesh(cutterFile, cutter)
|| !PMP::IO::read_polygon_mesh(rotorFile, rotor));
assert(!validRead);
PMP::triangulate_faces(cutter);
PMP::triangulate_faces(rotor);
PMP::transform(rotAroundX(M_PI / 2), cutter);
for (int i = 0; i < ND; i++) {
//simulateDrillingRadial(Mesh & cutter, Mesh & rotor, Mesh & out, unsigned int step)
simulateDrillingRadial(cutter, rotor, out[i], steps[i] + 10);
}
writeToCSV("tmp.csv", ND, out, steps);
return 0;
}
Change the output format of your mesh from *.off to *.stl, then open the intersection mesh in software such as Autodesk Netfabb, which can detect and repair errors in the meshes being loaded. I think there's a high chance the functions you're using generate meshes with defects. Possible defects include holes, duplicate triangles, and self-intersections. Strictly speaking, such meshes do not unambiguously define a solid body, and they have no volume.
If you confirm that's the problem, there are two ways to fix it:
Replace or fix the intersection algorithm so that it produces watertight meshes with no self-intersections or duplicate triangles. Maybe the Nef polyhedra from the same library will help.
Replace the volume computation algorithm with one that is tolerant of the defects in your intersection meshes.
I realize the answer is rather vague. The reason is that the problem is very hard to solve well; very smart people have published research papers on it over several decades, and some companies even sell commercial libraries just for reliable Boolean operations on 3D meshes.
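If you prefer to stay inside CGAL rather than exporting to external software, a related sanity check (not part of the answer above, just a sketch using CGAL's own predicates) is to verify that the intersection mesh is closed, free of self-intersections, and bounds a volume before calling PMP::volume. It assumes the Mesh typedef and the PMP namespace alias from the question:

#include <iostream>
#include <CGAL/boost/graph/helpers.h>
#include <CGAL/Polygon_mesh_processing/self_intersections.h>
#include <CGAL/Polygon_mesh_processing/orientation.h>
#include <CGAL/Polygon_mesh_processing/measure.h>

// Assumes the Mesh typedef and the PMP namespace alias from the question.
bool checkAndPrintVolume(const Mesh &out)
{
    if (!CGAL::is_closed(out)) {
        std::cout << "Intersection mesh is not closed (has border edges)" << std::endl;
        return false;
    }
    if (PMP::does_self_intersect(out)) {
        std::cout << "Intersection mesh self-intersects" << std::endl;
        return false;
    }
    if (!PMP::does_bound_a_volume(out)) {
        std::cout << "Intersection mesh does not bound a volume" << std::endl;
        return false;
    }
    std::cout << "Out Volume: " << CGAL::to_double(PMP::volume(out)) << std::endl;
    return true;
}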

Why is my C++ OpenCV 3.4.1 Neural Network predicting so badly?

I am trying to develop an Artificial Neural Network in C++ using OpenCV 3.4.1 with the aim of being able to recognise 33 different characters, including both numbers and letters, but the results I am obtaining are always wrong.
I have tested my code with different parameter values, such as the alpha and beta of the sigmoid activation function used for training, the backpropagation parameters, and the number of hidden nodes, but although the result sometimes varies, it normally tends to be a vector of the following shape:
Classification result:
[20.855789, -0.033862107, -0.0053131776, 0.026316155, -0.0032050854,
0.036046479, -0.025410429, -0.017537225, 0.015429396, -0.023276867, 0.013653283, -0.025660357, -0.051959664, -0.0032470606, 0.032143779, -0.011631044, 0.022339549, 0.041757714, 0.04414707, -0.044756029, 0.042280547, 0.012204648, 0.026924053, 0.016814215, -0.028257577, 0.05190875, -0.0070033628, -0.0084492415, -0.040644459, 0.00022287761, -0.0376678, -0.0021550131, -0.015310903]
That is, regardless of which character I test, it always predicts that the analysed character is the one in the first position of the characters vector, which corresponds to the number '1'.
The training data is read from an .xml file I have created, which contains 474 samples (rows) with 265 attributes each (columns). The training classes, following advice from a previous question on this forum, are read from another .xml file that contains 474 rows, one for each training sample, and 33 columns, one for each character/class.
I attach the code below so that you can perhaps spot what I am doing wrong; thanks in advance for any help you can offer! :)
//Create the Neural Network
Mat_<int> layerSizes(1, 3);
layerSizes(0, 0) = numFeaturesPerSample;
layerSizes(0, 1) = nlayers;
layerSizes(0, 2) = numClasses;

//Set ANN params
Ptr<ANN_MLP> network = ANN_MLP::create();
network->setLayerSizes(layerSizes);
network->setActivationFunction(ANN_MLP::SIGMOID_SYM, 0.6, 1);
network->setTrainMethod(ANN_MLP::BACKPROP, 0.1, 0.1);
Ptr<TrainData> trainData = TrainData::create(TrainingData, ROW_SAMPLE, classes);
network->train(trainData);

//Predict
if (network->isTrained())
{
    trained = true;
    Mat results;
    cout << "Predict:" << endl;
    network->predict(features, results);
    cout << "Prediction done!" << endl;
    cout << endl << "Classification result: " << endl << results << endl;
    //We need to know where in the output the max value is; its column (x) is the class.
    Point maxLoc;
    double maxVal;
    minMaxLoc(results, 0, &maxVal, 0, &maxLoc);
    return maxLoc.x;
}
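For reference, the class matrix described above (474 rows, 33 columns, one column per class) is the one-hot layout ANN_MLP expects for its output layer. A minimal sketch of building such a matrix from integer labels (the names labels and numClasses are illustrative, not from the question):

#include <vector>
#include <opencv2/core.hpp>

// Build a one-hot response matrix: one row per training sample, one column per class.
// labels[i] holds the class index (0..numClasses-1) of sample i.
cv::Mat makeOneHotResponses(const std::vector<int> &labels, int numClasses)
{
    cv::Mat responses = cv::Mat::zeros((int)labels.size(), numClasses, CV_32F);
    for (int i = 0; i < (int)labels.size(); ++i)
        responses.at<float>(i, labels[i]) = 1.0f;
    return responses;
}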

stereoCalibrate() changes focal lengths even when it was not supposed to

I noticed that OpenCV's stereoCalibrate() changes the focal lengths in the camera matrices even though I've set the appropriate flag (i.e. CV_CALIB_FIX_FOCAL_LENGTH). I'm using two identical cameras with the same focal length set mechanically on the lens, and I also know the sensor size, so I can compute the intrinsic camera matrix manually, which is what I do.
Here is some output from the stereo calibration program: the camera matrices before and after stereoCalibrate().
std::cout << "Before calibration: " << std::endl;
std::cout << "C1: " << _cameraMatrixA << std::endl;
std::cout << "C2: " << _cameraMatrixB << std::endl;

double error = cv::stereoCalibrate(objectPoints, imagePointsA, imagePointsB,
    _cameraMatrixA, _distCoeffsA, _cameraMatrixB, _distCoeffsB, _imageSize,
    R, T, E, F,
    cv::TermCriteria((cv::TermCriteria::COUNT + cv::TermCriteria::EPS), 30, 9.999999999999e-7),
    CV_CALIB_FIX_FOCAL_LENGTH | CV_CALIB_FIX_PRINCIPAL_POINT);

std::cout << "After calibration: " << std::endl;
std::cout << "C1: " << _cameraMatrixA << std::endl;
std::cout << "C2: " << _cameraMatrixB << std::endl;
Before calibration:
C1: [6203.076923076923, 0, 1280; 0, 6203.076923076923, 960; 0, 0, 1]
C2: [6203.076923076923, 0, 1280; 0, 6203.076923076923, 960; 0, 0, 1]
After calibration:
C1: [6311.77650416514, 0, 1279.5; 0, 6331.34531760757, 959.5; 0, 0, 1]
C2: [6152.655897294907, 0, 1279.5; 0, 6206.591406832492, 959.5; 0, 0, 1]
I think this is weird OpenCV behavior. Has anyone faced a similar problem? I know it is easy to work around: I can just restore the focal lengths in the camera matrices after stereo calibration.
In order to do what you want, you have to call stereoCalibrate with flags:
CV_CALIB_USE_INTRINSIC_GUESS | CV_CALIB_FIX_FOCAL_LENGTH | CV_CALIB_FIX_PRINCIPAL_POINT
If you do not use the CV_CALIB_USE_INTRINSIC_GUESS flag, stereoCalibrate will first initialize the camera matrices and distortion coefficients itself and then fix part of them in the subsequent optimization. This is stated in the documentation, although rather unclearly and without mentioning that critical flag:
Besides the stereo-related information, the function can also perform a full calibration of each of two cameras. However, due to the high dimensionality of the parameter space and noise in the input data, the function can diverge from the correct solution. If the intrinsic parameters can be estimated with high accuracy for each of the cameras individually (for example, using calibrateCamera() ), you are recommended to do so [...].
Using CV_CALIB_USE_INTRINSIC_GUESS in addition to any of the CV_CALIB_FIX_* flags tells the function to use what you are passing as input, otherwise, this input is simply ignored and overwritten.
The CV_CALIB_FIX_FOCAL_LENGTH flag then causes the optimization routine to just use the fx and fy that were passed in the intrinsic matrix.
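Put together, the call from the question would only need the extra flag added (a sketch reusing the question's variable names):

// Same call as in the question, but with CV_CALIB_USE_INTRINSIC_GUESS added, so the
// focal lengths and principal points passed in _cameraMatrixA/_cameraMatrixB are
// actually used (and kept fixed) instead of being re-estimated.
double error = cv::stereoCalibrate(objectPoints, imagePointsA, imagePointsB,
    _cameraMatrixA, _distCoeffsA, _cameraMatrixB, _distCoeffsB, _imageSize,
    R, T, E, F,
    cv::TermCriteria((cv::TermCriteria::COUNT + cv::TermCriteria::EPS), 30, 9.999999999999e-7),
    CV_CALIB_USE_INTRINSIC_GUESS | CV_CALIB_FIX_FOCAL_LENGTH | CV_CALIB_FIX_PRINCIPAL_POINT);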

Transposing a non-square matrix into another

I am trying to transpose one MatrixX* into another (not square, but with the correct dimensions). However, the best I could find is the Transpose<Derived>::transpose() function.
Is there a call which puts the result into an already allocated matrix instead of allocating a new one?
EDIT:
Actually, I was using Eigen::Map on top of the matrix.
typedef Eigen::Matrix<std::uint8_t, Eigen::Dynamic, Eigen::Dynamic> matrix_type;
typedef Eigen::Map<matrix_type> map_type;

const map_type src(src_ptr, width, height);
map_type dest(dest_ptr, height, width);
dest.transposeInPlace();
Using transposeInPlace() triggers an assert in Derived& DenseBase<Derived>::lazyAssign(const DenseBase<OtherDerived>& other).
Try using the transposeInPlace() function.
Here is the documentation: http://eigen.tuxfamily.org/dox/TutorialMatrixArithmetic.html
For in-place transposition, as for instance in a = a.transpose(), simply use the transposeInPlace() function:
MatrixXf a(2,3); a << 1, 2, 3, 4, 5, 6;
cout << "Here is the initial matrix a:\n" << a << endl;
a.transposeInPlace();
cout << "and after being transposed:\n" << a << endl;
UPDATE: As Zeta mentioned in a comment, the matrix object must be resizable; this is always true for MatrixX* objects.
Using Eigen::Map on top of a matrix indeed results in an assert, since it seems transposeInPlace is not yet possible for Eigen::Map (arguably a bug).
Luckily for me, using a regular transpose() assignment was fine, since Eigen evaluates the expression lazily into the destination.
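For completeness, a sketch of what worked here: mapping both buffers and assigning the lazy transpose expression directly into the destination map, so no extra matrix is allocated (the function and pointer names are illustrative, not from the question):

#include <cstdint>
#include <Eigen/Dense>

typedef Eigen::Matrix<std::uint8_t, Eigen::Dynamic, Eigen::Dynamic> matrix_type;
typedef Eigen::Map<matrix_type> map_type;
typedef Eigen::Map<const matrix_type> const_map_type;

// Transpose a width x height image at src_ptr into the pre-allocated
// height x width buffer at dest_ptr. The transpose expression is evaluated
// lazily straight into dest, so no temporary matrix is allocated, as long
// as the two buffers do not alias.
void transposeInto(const std::uint8_t *src_ptr, std::uint8_t *dest_ptr,
                   int width, int height)
{
    const_map_type src(src_ptr, width, height);
    map_type dest(dest_ptr, height, width);
    dest = src.transpose();
}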