I am trying to read a .mat file saved with -v7.3 in C++. Since a -v7.3 .mat file is essentially an HDF5 file, I am reading it with the HDF5 API. I am able to open groups, references, and datasets, and I can read datasets stored as struct, int, double, or character arrays.
However, one dataset shows a class type name instead of its data, and I don't know how to read it. I attached an image for better understanding.
The "error" field shows a class type name as its value. When I open it in MATLAB it looks like the picture below -
I also tried reading it as a compound data type, but that did not work either. Can you suggest any way to read this data from a -v7.3 .mat file in C++?
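For reference, reading an ordinary numeric dataset from a -v7.3 file with the HDF5 C API looks roughly like this (a minimal sketch; the file and dataset names are placeholders, not the real ones):
#include <hdf5.h>
#include <iostream>
#include <vector>

int main() {
    // A -v7.3 .mat file opens like any other HDF5 file.
    hid_t file = H5Fopen("data.mat", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "/myVariable", H5P_DEFAULT);

    // Query the dataspace to size the read buffer.
    hid_t space = H5Dget_space(dset);
    hssize_t n  = H5Sget_simple_extent_npoints(space);

    std::vector<double> buf(static_cast<size_t>(n));
    H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf.data());

    std::cout << "first value: " << buf[0] << "\n";

    H5Sclose(space);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}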
I recently used cppflow on VS 2019 on Windows 10.
My original data has 4 columns per row, and I want to use a neural network to classify precipitation particles. I have trained and saved my model (.pb) in Python. Since my data is text in a .txt file, and the example described in the cppflow documentation uses pictures as input, how do I feed .txt text data into cppflow?
Read the .txt file into a std::vector, then convert it into a cppflow::tensor.
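Something like this sketch, assuming the rows are whitespace-separated floats and the model was exported as a TensorFlow SavedModel directory; the file path and the input/output tensor names are placeholders (check yours with saved_model_cli):
#include <fstream>
#include <iostream>
#include <vector>
#include "cppflow/cppflow.h"

int main() {
    // Read all numbers from the text file into a flat vector.
    std::vector<float> values;
    std::ifstream in("data.txt");                 // placeholder file name
    for (float v; in >> v; ) values.push_back(v);

    // Reshape the flat data into a [rows, 4] tensor (4 features per row).
    const int64_t rows = static_cast<int64_t>(values.size()) / 4;
    cppflow::tensor input(values, {rows, 4});

    // Load the SavedModel directory and run inference.
    cppflow::model model("saved_model_dir");      // placeholder path
    auto output = model({{"serving_default_input:0", input}},
                        {"StatefulPartitionedCall:0"});

    auto result = output[0].get_data<float>();
    std::cout << "first output value: " << result[0] << std::endl;
    return 0;
}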
I am trying to create a program that writes out to a CSV (comma-separated) file. Is there a way to control things like column width, or whether a cell is left- or right-justified, from my code, so that when I open the file in Excel it looks better than a bunch of strings crammed into tiny cells? My goal is for the user to do as little thinking as possible. If they open the file and have to resize everything just to read it, that seems a little crummy.
CSV is a plain text file format. It doesn't support any visual formatting. For that, you need to write the data to another file format such as .xlsx or .ods.
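For example, with the libxlsxwriter C library (usable from C++) you can set column widths and cell alignment while writing an .xlsx file; this is just a sketch with made-up data, not a drop-in replacement for your CSV code:
#include "xlsxwriter.h"

int main() {
    lxw_workbook  *workbook  = workbook_new("report.xlsx");
    lxw_worksheet *worksheet = workbook_add_worksheet(workbook, NULL);

    // Make columns A-C 20 characters wide so nothing is cramped.
    worksheet_set_column(worksheet, 0, 2, 20, NULL);

    // A cell format that right-justifies its contents.
    lxw_format *right = workbook_add_format(workbook);
    format_set_align(right, LXW_ALIGN_RIGHT);

    worksheet_write_string(worksheet, 0, 0, "item",  NULL);
    worksheet_write_string(worksheet, 0, 1, "count", NULL);
    worksheet_write_number(worksheet, 1, 1, 42, right);

    workbook_close(workbook);
    return 0;
}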
I am trying to save multiple datasets into a single hdf5 file using armadillo's new feature to give custom names to datasets (using armadillo version 8.100.1).
However, only the last saved dataset will end up in the file. Is there any way to append to an existing hdf5 file with armadillo instead of replacing it?
Here is my example code:
#define ARMA_USE_HDF5
#include <armadillo>
int main() {
    arma::mat A(2,2, arma::fill::randu);
    arma::mat B(3,3, arma::fill::eye);

    A.save(arma::hdf5_name("multi-hdf5.mat", "dataset1"), arma::hdf5_binary);
    B.save(arma::hdf5_name("multi-hdf5.mat", "dataset2"), arma::hdf5_binary);

    return 0;
}
The hdf5 file is read out using the h5dump utility.
Unfortunately, I don't think you can do that. I'm an HDF5 developer, not an armadillo developer, but I took a peek at their source for you.
The save functions look like they are designed to dump a single matrix to a single file. In the function save_hdf5_binary() (diskio_meat.hpp:1255 for one version) they call H5Fcreate() with the H5F_ACC_TRUNC flag, which will clobber any existing file. There's no 'open if file exists' or clobber/non-clobber option. The only H5Fopen() calls are in the hdf5_binary_load() functions and those don't keep the file open for later writing.
This clobbering is what is happening in your case, btw. A.save() creates a file containing dataset1, then B.save() clobbers that file with a new file containing dataset2.
Also, for what it's worth, 'appending to an HDF5 file' is not really the right way to think about that. HDF5 files are not byte/character streams like a text file. Appending to a dataset, yes. Files, no. Think of it like a relational database: You might append data to a table, but you probably wouldn't say that you were appending data to the database.
The latest version of Armadillo already covers this. You have to pass hdf5_opts::append in the save() call, so if you want to save a matrix A you can write
A.save(hdf5_name(filename, dataset, hdf5_opts::append));
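Applied to the example program above, only the second save() needs the flag (a sketch, assuming an Armadillo version where hdf5_opts::append is available):
#define ARMA_USE_HDF5
#include <armadillo>

int main() {
    arma::mat A(2,2, arma::fill::randu);
    arma::mat B(3,3, arma::fill::eye);

    // The first save() creates the file; the second appends a new dataset
    // to it instead of truncating the file.
    A.save(arma::hdf5_name("multi-hdf5.mat", "dataset1"), arma::hdf5_binary);
    B.save(arma::hdf5_name("multi-hdf5.mat", "dataset2", arma::hdf5_opts::append), arma::hdf5_binary);

    return 0;
}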
How to read image data from .cr2 (raw image format by Canon) in C++?
The only operation I need to perform is to read the pixel data of the .cr2 file directly, if that is possible; otherwise I would like to convert it to any lossless image format and read its pixel data.
Any suggestions?
I would go with ImageMagick too. You don't have to convert all your files up front, you can do them one at a time as you need them.
In your program, rather than opening the CR2 file, just open a pipe (popen() call) that is executing an ImageMagick command like
convert file.cr2 ppm:-
then you can read the extremely simple PPM format, which is described here - basically just a line of ASCII text that tells you the file type, then another line of ASCII text that tells you the image dimensions, followed by a max value, and then the data in binary.
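A rough sketch of that approach, assuming an 8-bit image (maxval of 255) and no comment lines in the header; the file name is a placeholder:
#include <cstdio>
#include <stdexcept>
#include <string>
#include <vector>

int main() {
    // Ask ImageMagick to decode the raw file and stream it out as binary PPM.
    FILE *pipe = popen("convert file.cr2 ppm:-", "r");
    if (!pipe) throw std::runtime_error("popen failed");

    // Header: "P6", width, height, maxval as ASCII, then a single whitespace byte.
    char magic[3] = {0};
    int width = 0, height = 0, maxval = 0;
    if (fscanf(pipe, "%2s %d %d %d", magic, &width, &height, &maxval) != 4
        || std::string(magic) != "P6" || maxval > 255)
    {
        pclose(pipe);
        throw std::runtime_error("unexpected PPM header");
    }
    fgetc(pipe);  // consume the whitespace byte that ends the header

    // Pixel data: interleaved 8-bit RGB, 3 bytes per pixel.
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 3);
    if (fread(pixels.data(), 1, pixels.size(), pipe) != pixels.size()) {
        pclose(pipe);
        throw std::runtime_error("short read from convert");
    }
    pclose(pipe);

    std::printf("%d x %d image, first pixel R=%d\n", width, height, pixels[0]);
    return 0;
}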
Later on you can actually use the ImageMagick library and API if you need to.
I want to use OpenCV's KNN algorithm to classify samples with 4 features into one of two classes. In a text file, I have my training data in the following format:
feature_1,feature_2,feature_3,feature_4,class
where feature_1, feature_3, feature_4 and class are integers and feature_2 is of type float. The first line of the text file contains the headings for each feature.
However, the OpenCV documentation (http://docs.opencv.org/modules/ml/doc/k_nearest_neighbors.html) states that the train function requires the training data in the Mat data structure.
I'm confused about how to convert my text file of training data to a Mat. If anyone can help me out with this, I would really appreciate it.
Basically, OpenCV implements CvMLData, which can read CSV files (and your file is a comma-separated file).
See the documentation: http://docs.opencv.org/modules/ml/doc/mldata.html
Once you create a CvMLData object, you can use the read_csv method:
read_csv(const char* filename)
to load the file, then use get_values() to get a pointer to the input data as a Mat and get_responses() to get a pointer to the labels as a Mat.
To set which column is considered the "response" (label), use the set_response_idx method.
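A minimal sketch with the legacy 2.4-era ml module; the file name and the 0-based response column index are assumptions for your 5-column layout, and you may need to strip the heading line first if read_csv does not accept it:
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

int main() {
    CvMLData data;
    data.read_csv("training.csv");   // placeholder file name
    data.set_response_idx(4);        // 5th column (index 4) holds the class label

    // CvMLData hands back legacy CvMat*; wrap them as cv::Mat without copying.
    cv::Mat all       = cv::cvarrToMat(data.get_values());
    cv::Mat responses = cv::cvarrToMat(data.get_responses());

    // get_values() includes every column, so keep only the four feature columns.
    cv::Mat samples = all.colRange(0, 4).clone();

    CvKNearest knn;
    knn.train(samples, responses);
    return 0;
}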