Armadillo reading MAT file error - c++

I'm currently cross-compiling on the BeagleBone Black in a Visual Studio environment using Armadillo to translate MATLAB code into C++.
This is a signal processing project, so I need a way to read and write binary data files, specifically .mat files. Thankfully, the Armadillo documentation says that you can load .mat files directly into a matrix using .load().
I attempted that at first, but it doesn't seem to read the file correctly, nor does it read all the entries. My reference file is a 2000x6 matrix, but the resulting Armadillo matrix is 5298x1. I know that without an Armadillo-style header the data is loaded as a column vector that I then need to reshape with .reshape(), yet it simply isn't receiving all the entries, and by inspection the entries it did read are wrong.
I'm not sure what the problem is. I've placed the data reference .mat files in the Debug folder for the remote project on the BBB, where the .out compiled file is created. Is there another way I should integrate it?
Also, help with mimicking the armadillo header or other suggestions are welcome.
If you need anything, please let me know.
Here is the test program I am using:
#include <iostream>
#include <armadillo>

using namespace std;
using namespace arma;

int main()
{
    mat data_ref;
    data_ref.load("Epoxy_6A_Healthy_Output_200kHz_Act1_001.mat");
    cout << "For Data_ref, there are " << data_ref.n_cols << " columns and " << data_ref.n_rows << " rows.\n";
    cout << "First item: " << data_ref(0) << "\n6th item: " << data_ref(6) << "\n2000th item: " << data_ref(2000);

    data_ref.reshape(2000, 6);
    cout << "For Data_ref, there are " << data_ref.n_cols << " columns and " << data_ref.n_rows << " rows.\n";
    cout << "First item: " << data_ref(0,0) << "\nLast Item: " << data_ref(1999,5);
    cout << "\nDone";
    return 0;
}
The first element in the .mat file is 0.0, and the last element is 0.0014.
Here is the output.
For Data_ref, there are 1 columns and 5298 rows.
First item: 8.48749e-53
6th item: 9.80727e+256
2000th item: -2.4474e+238
For Data_ref, there are 6 columns and 2000 rows.
First item: 8.48749e-53
Last Item: 0
Done
Thanks

Armadillo does not support MATLAB's .mat format; in the documentation they are referring to Armadillo's own mat binary format. You may, however, save the data from MATLAB in the HDF5 binary format and import it into Armadillo, but then you have to install the HDF5 library and reconfigure Armadillo. See the hdf5_binary section in the documentation.
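One way around it, if you can re-export the data: MATLAB's -v7.3 MAT-files are HDF5-based, so you can save the matrix from MATLAB and load it through Armadillo's HDF5 support. Below is a minimal sketch, not a drop-in solution: it assumes Armadillo was built with ARMA_USE_HDF5 and linked against libhdf5, that recent-enough Armadillo with hdf5_name() is available, and that the file and dataset names match what MATLAB wrote (they are illustrative here).

// In MATLAB (illustrative):
//   save('data_ref.mat', 'data_ref', '-v7.3')      % HDF5-based MAT-file
//   % or explicitly: h5create('data_ref.h5', '/data_ref', size(data_ref));
//   %                h5write('data_ref.h5', '/data_ref', data_ref);

#include <armadillo>
#include <iostream>

int main()
{
    arma::mat data_ref;

    // hdf5_name selects the dataset inside the HDF5 file; "data_ref" is
    // whatever name the variable had in MATLAB.
    bool ok = data_ref.load(arma::hdf5_name("data_ref.mat", "data_ref"));
    if (!ok)
    {
        std::cerr << "failed to load data_ref.mat as HDF5" << std::endl;
        return 1;
    }

    // MATLAB stores data column-major while HDF5 is row-major, so the matrix
    // may come out transposed; transpose to recover the original 2000x6 layout.
    arma::inplace_trans(data_ref);

    std::cout << data_ref.n_rows << " x " << data_ref.n_cols << std::endl;
    return 0;
}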

Related

Function which generates random points in a polygon in C++ and PostgreSQL

installing a PostgreSQL database with the spatial extension (PostGIS)
create a table points (X, Y, Z) in the database
in C# or C++ write a program that will connect to the database
please write a function that will generate random points within the Polish borders and save them in the points table (at least 1000 points; for the Z value, i.e. the height, please assume the range from 0 to 300 meters)
please write a test that will check if the generated points are at least 3 km apart
please create a voivodeship table and import the voivodeship borders there
please write a function that will list, for each voivodeship, all generated points that fall within its outline
SHP files with country borders and voivodeship borders:
https://gis-support.pl/baza-wiedzy-2/dane-do-pobrania/granice-administracyjne/
Additional task:
please draw all voivodeships and points in the program window
please color the points in a different color for each voivodeship
for the generated points, please perform a triangulation and show the result in the program window
My question is: how am I supposed to write this function, in Qt, through a QGIS plugin in C++, or how else? I don't understand how I am supposed to do this task. Thanks in advance.
I added the files with the Polish borders and voivodeships to PostgreSQL and created the x,y,z table.
Then I connected to the database via the pqxx library.
#include <string>
#include <iostream>
#include <pqxx/pqxx>
#include <random>
#include <pqxx/stream_to.hxx>
#include <pqxx/transaction_base.hxx>
#include <pqxx/stream_from.hxx>
int main()
{
    std::string connectionString = "host=localhost port=5432 dbname=test user=postgres password =Kotarba2020";
    try
    {
        pqxx::connection connectionObject(connectionString.c_str());
        pqxx::work worker(connectionObject);
        pqxx::result response = worker.exec("SELECT * FROM Wojewodztwa");
        for (size_t i = 0; i < response.size(); i++)
        {
            std::cout << "Id: " << response[i][0] << " X: " << response[i][1] << " Y: " << response[i][2] << " Z: " << response[i][3] << std::endl;
        }
    }
    catch (const std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    system("pause");
}
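One possible direction for the point-generation part, as a sketch rather than a full solution: since the borders are already in PostgreSQL/PostGIS, the database can do the point-in-polygon test, so Qt or a QGIS plugin is not strictly needed for that step. The table and column names below (a single-row table kraj with a geometry column geom, a table points(x, y, z)), the SRID 4326, and the bounding box of Poland are illustrative assumptions that have to match your own schema and data.

#include <pqxx/pqxx>
#include <random>
#include <string>
#include <iostream>

int main()
{
    try
    {
        pqxx::connection conn("host=localhost port=5432 dbname=test user=postgres");
        std::mt19937 gen{std::random_device{}()};
        // Rough bounding box of Poland in WGS84 (illustrative; must match the SRID of the border geometry).
        std::uniform_real_distribution<double> lon(14.1, 24.2);
        std::uniform_real_distribution<double> lat(49.0, 54.9);
        std::uniform_real_distribution<double> height(0.0, 300.0);

        int inserted = 0;
        while (inserted < 1000)
        {
            const double x = lon(gen), y = lat(gen), z = height(gen);
            pqxx::work tx(conn);
            // Keep the candidate point only if it lies inside the country polygon.
            pqxx::result r = tx.exec(
                "SELECT ST_Contains(geom, ST_SetSRID(ST_MakePoint(" +
                std::to_string(x) + "," + std::to_string(y) + "), 4326)) FROM kraj");
            if (!r.empty() && r[0][0].as<bool>())
            {
                tx.exec("INSERT INTO points(x, y, z) VALUES (" +
                        std::to_string(x) + "," + std::to_string(y) + "," + std::to_string(z) + ")");
                tx.commit();
                ++inserted;
            }
        }
        std::cout << "inserted " << inserted << " points" << std::endl;
    }
    catch (const std::exception& e)
    {
        std::cerr << e.what() << std::endl;
        return 1;
    }
}

The same ST_Contains idea, joined against the voivodeship table, can list which points fall in which voivodeship, and the 3 km spacing test can also be done in SQL (for example with ST_DistanceSphere in recent PostGIS, or ST_Distance on a projected SRID).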

Why is the point-cloud-library's loadPCDFile so slow?

I am reading 2.2 million points from a PCD file, and loadPCDFile takes about 13 seconds in both Release and Debug mode. Given that visualization programs like CloudCompare can read the file in what seems like milliseconds, I suspect I am making this harder than it needs to be.
What am I doing wrong?
The top of my PCD file:
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS rgb x y z _
SIZE 4 4 4 4 1
TYPE F F F F U
COUNT 1 1 1 1 4
WIDTH 2206753
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 2206753
DATA binary
¥•ÃöèÝÃájfD ®§”ÃÍÌÝÃá:fD H”ø¾ÝÃH!fD .....
From my code, reading the file:
#include <iostream>
#include <vector>
#include <pcl/common/common.h>
#include <pcl/common/common_headers.h>
#include <pcl/common/angles.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/console/parse.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/features/normal_3d.h>
#include <boost/thread/thread.hpp>
int main() {
    (...)
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr largeCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    largeCloud->points.resize(3000000); // Tried to force resizing only once. Did not help much.
    if (pcl::io::loadPCDFile<pcl::PointXYZRGB>("MY_POINTS.pcd", *largeCloud) == -1) {
        PCL_ERROR("Couldn't read file MY_POINTS.pcd\n");
        return(-1);
    }
    (...)
    return 0;
}
(Using PCL 1.8 and Visual Studio 2015)
Summary of below...
PCL is slightly slower at loading CloudCompare-formatted PCD files. Looking at the headers, CloudCompare seems to add an extra padding field "_" to each point that PCL doesn't like and has to format out, but that only accounts for a 30%-40% difference in load time.
With the same size point cloud (3M points), my computer took 13 seconds to load the CloudCompare file when the program was compiled in Debug mode and only 0.25 s in Release mode, so I think you are effectively running in Debug. Depending on how you compiled/installed PCL, you may need to rebuild PCL to generate the appropriate Release build; my guess is that whatever you think you are doing to switch from Debug to Release is not in fact engaging the PCL Release library.
In PCL, across almost all functions, moving from Debug to Release will often give you one to two orders of magnitude faster processing (due to PCL's heavy use of large array objects that have to be managed differently in Debug mode for visibility).
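If you want to reproduce the measurement without the TimeStamp class used in the test code below (it is just my own timing helper, not part of PCL), here is a std::chrono-based sketch with an illustrative file path:

#include <chrono>
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);

    // Time only the load call itself.
    auto start = std::chrono::steady_clock::now();
    pcl::io::loadPCDFile("C:/SO/testTorusColor.pcd", *cloud);
    auto stop = std::chrono::steady_clock::now();

    std::cout << "loaded " << cloud->points.size() << " points in "
              << std::chrono::duration<double>(stop - start).count() << " s" << std::endl;
    return 0;
}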
Testing PCL with cloud compare files
Here is the code that I ran to produce the following outputs:
std::cout << "Press enter to load cloud compare sample" << std::endl;
std::cin.get();
TimeStamp stopWatch = TimeStamp();
pcl::PointCloud<pcl::PointXYZRGB>::Ptr tempCloud2(new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::io::loadPCDFile("C:/SO/testTorusColor.pcd", *tempCloud2);
stopWatch.fullStamp(true);
std::cout <<"Points loaded: "<< tempCloud2->points.size() << std::endl;
std::cout << "Sample point: " << tempCloud2->points.at(0) << std::endl;
std::cout << std::endl;
std::cout << "Press enter to save cloud in pcl format " << std::endl;
std::cin.get();
pcl::io::savePCDFileBinary("C:/SO/testTorusColorPCLFormatted.pcd", *tempCloud2);
std::cout << "Press enter to load formatted cloud" << std::endl;
std::cin.get();
stopWatch = TimeStamp();
pcl::PointCloud<pcl::PointXYZRGB>::Ptr tempCloud3(new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::io::loadPCDFile("C:/SO/testTorusColorPCLFormatted.pcd", *tempCloud3);
stopWatch.fullStamp(true);
std::cout << "Points loaded: " << tempCloud3->points.size() << std::endl;
std::cout << "Sample point: " << tempCloud3->points.at(0) << std::endl;
std::cout << std::endl;
std::cin.get();
Cloud compare generated colored cloud (3M points with color):
Running in Debug, reproduced your approximate load time with a 3M pt cloud:
Running in Release:
I was running into exactly this situation.
It simply comes down to file storage style. Your file (taking that long to load) is almost certainly an ASCII-style point cloud file. If you want to load it much faster (roughly 100x), convert it to binary format. For reference, I load a 1M point cloud in about a quarter of a second (but that is system dependent).
pcl::PointCloud<pcl::PointXYZ>::Ptr tempCloud(new pcl::PointCloud<pcl::PointXYZ>);
The load call is the same:
pcl::io::loadPCDFile(fp, *tempCloud);
but in order to save as binary use this:
pcl::io::savePCDFileBinary(fp, *tempCloud);
Just in case it helps, here is a snippet of the code I use to load and save clouds. I structure them a bit, but it is likely based on an example, so I don't know how important that is; you may want to play with it if you switch to binary and are still seeing long load times.
//save pt cloud
std::string filePath = getUserInput("Enter file name here");
int fileType = stoi(getUserInput("0: binary, 1:ascii"));
if (filePath.size() == 0)
    printf("failed file save!\n");
else
{
    pcl::PointCloud<pcl::PointXYZ> tempCloud;
    copyPointCloud(*currentWorkingCloud, tempCloud);
    tempCloud.width = currentWorkingCloud->points.size();
    tempCloud.height = 1;
    tempCloud.is_dense = false;
    filePath = "../PointCloudFiles/" + filePath;
    std::cout << "Cloud saved to:_" << filePath << std::endl;
    if (fileType == 0)
        { pcl::io::savePCDFileBinary(filePath, tempCloud); }
    else
        { pcl::io::savePCDFileASCII(filePath, tempCloud); }
}

//load pt cloud
std::string filePath = getUserInput("Enter file name here");
if (filePath.size() == 0)
    printf("failed user input!\n");
else
{
    filePath = "../PointCloudFiles/" + filePath;
    pcl::PointCloud<pcl::PointXYZ>::Ptr tempCloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile(filePath, *tempCloud) == -1) //* load the file
    {
        printf("failed file load!\n");
    }
    else
    {
        copyPointCloud(*tempCloud, *currentWorkingCloud);
        std::cout << "Cloud loaded from:_" << filePath << std::endl;
    }
}
This looks correct when compared with a PCL example. I think the main work of loadPCDFile is done in the function pcl::PCDReader::read, which is located in the file pcd_io.cpp. Looking at the code path for binary data, which applies in your case, there are 3 nested for loops which check whether the numerical data of each field is valid. The exact code comment is
// Once copied, we need to go over each field and check if it has NaN/Inf values and assign cloud
That could be time consuming. However, I am speculating.

How to load a jpeg file using the DLIB library?

After attempting to run an example program downloaded from Here, I understand that for working with jpeg files I must add the #define DLIB_JPEG_SUPPORT directive to the project, but before that it's necessary to download the jpeg library and add it to the project. I did these steps:
1. Downloaded jpegsr9a.zip from here and unzipped it.
2. Downloaded WIN32.mak and pasted it into the jpeg root folder.
3. Opened the Developer Command Prompt from the Visual Studio 2013 tools.
4. In the command prompt typed: nmake -f makefile.vc setup-v10
5. After these steps jpeg.sln was created. Note that when I open jpeg.sln in VS2013 a message comes up; maybe the root of the problem starts here, I don't know.
6. Built jpeg.sln with the proper configuration (I built it many times with different configurations; recently I built it using this).
At the end of the build the error "unable to start jpeg.lib" came up, but in the Release or Debug folder (depending on the configuration) jpeg.lib was created.
In the main project, which uses DLIB for detecting faces, I added the jpeg root folder to Additional Include Directories and jpegroot/release to Additional Library Directories, then changed Use Library Dependencies to "yes" and also added jpeg.lib to the dependencies.
While building the project, errors come up:
This is the source which I am trying to build and run:
//#define HAVE_BOOLEAN
#define DLIB_JPEG_SUPPORT
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/image_transforms.h>
#include <dlib/gui_widgets.h>
#include <dlib/image_io.h>
#include <iostream>
//
using namespace dlib;
using namespace std;
// ----------------------------------------------------------------------------------------
int main(int argc, char** argv)
{
    try
    {
        // This example takes in a shape model file and then a list of images to
        // process. We will take these filenames in as command line arguments.
        // Dlib comes with example images in the examples/faces folder so give
        // those as arguments to this program.
        if (argc == 1)
        {
            cout << "Call this program like this:" << endl;
            cout << "./face_landmark_detection_ex shape_predictor_68_face_landmarks.dat faces/*.jpg" << endl;
            cout << "\nYou can get the shape_predictor_68_face_landmarks.dat file from:\n";
            cout << "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
            return 0;
        }

        // We need a face detector. We will use this to get bounding boxes for
        // each face in an image.
        frontal_face_detector detector = get_frontal_face_detector();
        // And we also need a shape_predictor. This is the tool that will predict face
        // landmark positions given an image and face bounding box. Here we are just
        // loading the model from the shape_predictor_68_face_landmarks.dat file you gave
        // as a command line argument.
        shape_predictor sp;
        deserialize(argv[1]) >> sp;

        image_window win, win_faces;
        // Loop over all the images provided on the command line.
        for (int i = 2; i < argc; ++i)
        {
            cout << "processing image " << argv[i] << endl;
            array2d<rgb_pixel> img;
            load_image(img, argv[i]);
            // Make the image larger so we can detect small faces.
            pyramid_up(img);

            // Now tell the face detector to give us a list of bounding boxes
            // around all the faces in the image.
            std::vector<rectangle> dets = detector(img);
            cout << "Number of faces detected: " << dets.size() << endl;

            // Now we will go ask the shape_predictor to tell us the pose of
            // each face we detected.
            std::vector<full_object_detection> shapes;
            for (unsigned long j = 0; j < dets.size(); ++j)
            {
                full_object_detection shape = sp(img, dets[j]);
                cout << "number of parts: " << shape.num_parts() << endl;
                cout << "pixel position of first part: " << shape.part(0) << endl;
                cout << "pixel position of second part: " << shape.part(1) << endl;
                // You get the idea, you can get all the face part locations if
                // you want them. Here we just store them in shapes so we can
                // put them on the screen.
                shapes.push_back(shape);
            }

            // Now let's view our face poses on the screen.
            win.clear_overlay();
            win.set_image(img);
            win.add_overlay(render_face_detections(shapes));

            // We can also extract copies of each face that are cropped, rotated upright,
            // and scaled to a standard size as shown here:
            dlib::array<array2d<rgb_pixel> > face_chips;
            extract_image_chips(img, get_face_chip_details(shapes), face_chips);
            win_faces.set_image(tile_images(face_chips));

            cout << "Hit enter to process the next image..." << endl;
            cin.get();
        }
    }
    catch (exception& e)
    {
        cout << "\nexception thrown!" << endl;
        cout << e.what() << endl;
    }
}
// ----------------------------------------------------------------------------------------
I could choose other alternatives, but I have spent too much time to get this far. I want to know how I can solve this problem and load jpeg files when using DLIB.
I also read these links:
Compiling libjpeg
http://www.dahlsys.com/misc/compiling_ijg_libjpeg/index.html
dlib load jpeg files
http://sourceforge.net/p/dclib/discussion/442518/thread/8a0d42dc/
I solved my problem with the instructions below; please follow them.
- Add the dlib include directory in VC++
- Include source.cpp (i.e. dlib/all/source.cpp) in the project
- Add the files in dlib/external/libjpeg to the project
- Define DLIB_JPEG_SUPPORT in the preprocessor definitions
You don't need to use any additional library.
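To make that concrete, here is a minimal sketch of what the project then compiles; the image path is illustrative, and DLIB_JPEG_SUPPORT must be defined for every translation unit (including dlib/all/source.cpp), which is easiest to do in the project-wide preprocessor definitions rather than per file.

#define DLIB_JPEG_SUPPORT   // normally set project-wide, shown here for clarity
#include <dlib/array2d.h>
#include <dlib/pixel.h>
#include <dlib/image_io.h>
#include <iostream>

int main()
{
    try
    {
        dlib::array2d<dlib::rgb_pixel> img;
        dlib::load_image(img, "faces/example.jpg");   // illustrative path
        std::cout << "loaded " << img.nr() << " x " << img.nc() << " pixels" << std::endl;
    }
    catch (std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    return 0;
}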

How to have the exact same formatting of numbers in C++ and R?

I am writing a functional test in R for a program in C++. The typical output file has some columns as "string" and the other columns as "double". Then, I would like to use diff to compare the expected output file returned by R with the observed output file returned by C++.
In pseudo-C++, I simply do this:
stringstream ssTxt;
ssTxt.precision(7);
ssTxt.setf(ios::scientific);
for (i = 0; i < 10; ++i) {
    ssTxt << names[i] << " " << values[0][i] << " " << values[1][i] << " " << values[2][i] << endl;
    // write in file and clear ssTxt
}
Here is a typical output:
item1 2.8200000e-01 500 4.1846912e-04
In R, I do that:
results <- data.frame(name="item1", val1=0.282, val2=500, val3=0.00041846912873)
write.table(x=format(results, digits=8, nsmall=7, scientific=TRUE), file=...)
Here is the output corresponding to the same data:
item1 2.82e-01 500 4.1846912e-04
As you can see, it almost works, but R doesn't add the trailing zeros after "2.82". I would rather change the R code than the C++ code. So how can I do that?
Have you tried sprintf or formatC?
To be specific, I think
sprintf("%e", 2.82e-01)
should do the trick, but as mentioned above have a look at help(sprintf), which describes the various formatting capabilities of this function ...
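For reference, here is a self-contained version of the pseudo-C++ from the question, so there is a concrete target line to diff the R output against. This is only a sketch: the data values are illustrative, and the middle column is assumed to be an integer, since it is printed without an exponent.

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    // Illustrative data standing in for the real names/values.
    std::vector<std::string> names = {"item1"};
    std::vector<double>      val1  = {0.282};
    std::vector<int>         val2  = {500};
    std::vector<double>      val3  = {0.00041846912};

    std::ofstream file("expected_output.txt");
    std::stringstream ssTxt;
    ssTxt.precision(7);
    ssTxt.setf(std::ios::scientific);
    for (std::size_t i = 0; i < names.size(); ++i)
    {
        ssTxt << names[i] << " " << val1[i] << " " << val2[i] << " " << val3[i] << "\n";
        file << ssTxt.str();   // write in file and clear ssTxt
        ssTxt.str("");
    }
    // expected_output.txt now contains:  item1 2.8200000e-01 500 4.1846912e-04
    return 0;
}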

Why does my output go to cout rather than to file?

I am doing some scientific work on a system with a queue. The cout output goes to a log file whose name is specified with command-line options when submitting to the queue. However, I also want a separate output to a file, which I implement like this:
ofstream vout("potential.txt"); ...
vout<<printf("%.3f %.5f\n",Rf*BohrToA,eval(0)*hatocm);
However, it gets mixed in with the output going to cout, and I only get some cryptic repeating numbers in my potential.txt. Is this a buffering problem? Other output to other files works... maybe I should move this one away from a cout-heavy area?
You are sending the value returned by printf to vout, not the formatted string: printf writes to stdout and returns the number of characters printed, and that integer is what ends up in your file.
You should simply do:
vout << Rf*BohrToA << " " << eval(0)*hatocm << "\n";
You are getting your C and C++ mixed together.
printf is a function from the C library which prints a formatted string to standard output; ofstream and its << operator are how you print to a file in C++ style.
You have two options here: you can either print it out the C way or the C++ way.
C style:
FILE* vout = fopen("potential.txt", "w");
fprintf(vout, "%.3f %.5f\n",Rf*BohrToA,eval(0)*hatocm);
C++ style:
#include <iomanip>
//...
ofstream vout("potential.txt");
vout << fixed << setprecision(3) << (Rf*BohrToA) << " ";
vout << setprecision(5) << (eval(0)*hatocm) << endl;
If this is on a *nix system, you can simply write your program to send its output to stdout and then use a pipe and the tee command to direct the output to one or more files as well. e.g.
$ command parameters | tee outfile
will cause the output of command to be written to outfile as well as the console.
You can also do this on Windows if you have the appropriate tools installed (such as GnuWin32).