Normally, saving and loading a PyTorch model is quite simple.
In Python I can use torch.save(model, FilePath), and in C++ torch::jit::load(FilePath); the saved model and the C++ loading code can be placed in one directory. However, there is a limitation: binary files cannot be included in the directory in production mode (please don't ask me why; I'm wondering about that too).
So I want to know how to save a PyTorch model from Python without serialization and load it in C++. Is that possible?
Use the ONNX file format. PyTorch can export models in this format, and you should then be able to find a library to load it in C++.
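As a minimal sketch of the C++ side, assuming the model was exported from Python with torch.onnx.export and that ONNX Runtime is used as the loader (the file name, input shape, and tensor names below are hypothetical):

    #include <onnxruntime_cxx_api.h>
    #include <iostream>
    #include <vector>

    int main() {
        // Create the runtime environment and load the exported model.
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
        Ort::SessionOptions opts;
        Ort::Session session(env, "model.onnx", opts);  // hypothetical file name

        // Build a dummy input tensor; the shape must match the exported model.
        std::vector<int64_t> shape{1, 3, 224, 224};
        std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
        Ort::MemoryInfo mem =
            Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
        Ort::Value tensor = Ort::Value::CreateTensor<float>(
            mem, input.data(), input.size(), shape.data(), shape.size());

        // "input"/"output" are assumptions; query the real names with
        // session.GetInputNameAllocated / GetOutputNameAllocated.
        const char* in_names[] = {"input"};
        const char* out_names[] = {"output"};
        auto outputs = session.Run(Ort::RunOptions{nullptr},
                                   in_names, &tensor, 1, out_names, 1);
        std::cout << "got " << outputs.size() << " output tensor(s)\n";
    }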
I have a character/font dataset found in UCI repository:
https://archive.ics.uci.edu/ml/datasets/Character+Font+Images
Take any CSV file as an example, for instance 'AGENCY.csv'. I am struggling to load it into OpenCV using C++ functions. It seems that the structure of the dataset is quite different from what is normally assumed by
cv::ml::TrainData::loadFromCSV
Any ideas how to do this neatly, or do I need to pre-process the CSV files directly?
You can try to load the CSV file like this:

    CvMLData data;
    data.read_csv( filename );
For details on OpenCV's ML CSV handling, refer to this page:
http://www.opencv.org.cn/opencvdoc/2.3.1/html/modules/ml/doc/mldata.html
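With the newer cv::ml API that the question mentions, a minimal sketch (the header-line count, response column, and delimiter are assumptions about the UCI file layout; adjust them to the actual CSV):

    #include <opencv2/ml.hpp>
    #include <iostream>

    int main() {
        // headerLineCount = 1 skips a column-name row; responseStartIdx = 0
        // treats the first column as the label. Both are assumptions about
        // the AGENCY.csv layout -- adjust to match the actual file.
        cv::Ptr<cv::ml::TrainData> data = cv::ml::TrainData::loadFromCSV(
            "AGENCY.csv",
            /*headerLineCount=*/1,
            /*responseStartIdx=*/0,
            /*responseEndIdx=*/-1,
            /*varTypeSpec=*/"",
            /*delimiter=*/',');
        if (data.empty()) {
            std::cerr << "failed to load CSV\n";
            return 1;
        }
        std::cout << "samples: " << data->getNSamples()
                  << ", features: " << data->getNVars() << "\n";
    }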
When deploying a TF model through its C++ interface, a freeze_graph step over the graph file and the checkpoint file is required. This step produces a dumped binary protobuf file.
However, protobuf cannot serialize a file larger than 2 GiB.
In this case, what is the right way to load a big model in TF through its C++ interface? Any clues will be appreciated.
You can still load the model if you don't freeze the graph and use SavedModel instead: a SavedModel keeps the variables in separate checkpoint shards, so no single protobuf has to hold the whole model.
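A minimal sketch of the C++ side, assuming the model was exported with tf.saved_model from Python (the export directory path is hypothetical):

    #include "tensorflow/cc/saved_model/loader.h"
    #include "tensorflow/cc/saved_model/tag_constants.h"
    #include <iostream>

    int main() {
        tensorflow::SavedModelBundle bundle;
        tensorflow::SessionOptions session_options;
        tensorflow::RunOptions run_options;

        // Load the SavedModel; variables are restored from the checkpoint
        // shards, so no single >2GiB protobuf is ever parsed.
        tensorflow::Status status = tensorflow::LoadSavedModel(
            session_options, run_options,
            "/path/to/export_dir",              // hypothetical export directory
            {tensorflow::kSavedModelTagServe},  // tags used at export time
            &bundle);
        if (!status.ok()) {
            std::cerr << status.ToString() << "\n";
            return 1;
        }
        // bundle.session is a ready-to-use tensorflow::Session*, and
        // bundle.meta_graph_def holds the signatures with tensor names.
    }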
I've found a few resources on how to import a model with the TensorFlow C++ API after exporting it to a .pb file, but my understanding is that the .pb approach has been replaced with a newer one that uses tf.train.Saver.save to produce .meta, .index, .data-00000-of-00001, and checkpoint files. I cannot find anything on how to import a model from these file types with the C++ API.
How can I do this?
I use the TFLearn wrapper on top of TensorFlow, but the process should be identical for plain TensorFlow models. You can save a checkpoint from TFLearn this way, or you can still freeze the graph if you want a .pb model. Both a checkpoint and a frozen model file can be loaded in C++, and in either case the inference code in C++ is identical.
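A minimal sketch of the checkpoint route, assuming TF1-style files model.meta / model.index / model.data-00000-of-00001 (the "model" prefix is hypothetical); it loads the graph from the .meta file and runs the saver's restore op to fill in the variables:

    #include "tensorflow/core/platform/env.h"
    #include "tensorflow/core/protobuf/meta_graph.pb.h"
    #include "tensorflow/core/public/session.h"
    #include <memory>

    int main() {
        // Read the graph (plus saver info) from the .meta file.
        tensorflow::MetaGraphDef meta;
        TF_CHECK_OK(tensorflow::ReadBinaryProto(
            tensorflow::Env::Default(), "model.meta", &meta));

        std::unique_ptr<tensorflow::Session> session(
            tensorflow::NewSession(tensorflow::SessionOptions()));
        TF_CHECK_OK(session->Create(meta.graph_def()));

        // Feed the checkpoint prefix ("model", not a file name) to the
        // saver's filename tensor and run its restore op. On newer TF
        // versions the scalar type is tensorflow::tstring.
        tensorflow::Tensor ckpt_path(tensorflow::DT_STRING,
                                     tensorflow::TensorShape());
        ckpt_path.scalar<std::string>()() = "model";
        TF_CHECK_OK(session->Run(
            {{meta.saver_def().filename_tensor_name(), ckpt_path}},
            {}, {meta.saver_def().restore_op_name()}, nullptr));
        // The session is now ready for inference via session->Run(...).
    }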
I am currently developing an application for facial recognition.
The algorithms are implemented and trained using the MatConvNet library (http://www.vlfeat.org/matconvnet/). At the end, I have a network stored as a .mat file.
I would like to know if it is possible to extract the weights of the network from its .mat file, write them to an XML file, and read them with Caffe C++. I would like to reuse them in Caffe C++ in order to do some testing and a hardware implementation. Is there an efficient and practical way to proceed?
Thank you very much for your help.
The layer whose parameters you'd like to store must be set as 'precious'. In net.vars you can access the parameters and write them.
There is a conversion script that converts MatConvNet models to Caffe models here, which you may find useful.
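If you do go the XML route from the question, a minimal sketch of the Caffe C++ side (it assumes the weights were written in OpenCV FileStorage XML format, that a deploy.prototxt mirroring the MatConvNet model already exists, and that the layer/node names "conv1" / "conv1_w" are hypothetical):

    #include <caffe/caffe.hpp>
    #include <opencv2/core.hpp>
    #include <cstring>

    int main() {
        // Build the net from a prototxt that mirrors the MatConvNet model.
        caffe::Net<float> net("deploy.prototxt", caffe::TEST);

        // Read one layer's weight matrix from the XML file.
        cv::FileStorage fs("weights.xml", cv::FileStorage::READ);
        cv::Mat w;
        fs["conv1_w"] >> w;

        // Copy it into the matching Caffe blob (blob 0 = weights, 1 = bias).
        auto layer = net.layer_by_name("conv1");
        caffe::Blob<float>* blob = layer->blobs()[0].get();
        CHECK_EQ(blob->count(), static_cast<int>(w.total()))
            << "weight shape mismatch";
        std::memcpy(blob->mutable_cpu_data(), w.ptr<float>(),
                    blob->count() * sizeof(float));
    }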
You can't use the weights of a network trained with MatConvNet directly in Caffe; you can only convert your model from MatConvNet to Caffe (https://github.com/vlfeat/matconvnet/blob/4ce2871ec55f0d7deed1683eb5bd77a8a19a50cd/utils/import-caffe.py). But this script does not support all layers, and you may have difficulties employing it.
The best way is to define your Caffe prototxt in Python to match the MatConvNet model.
I have been given the task of creating an application that scans a directory containing .stl files and generates JPG thumbnails of the models; no viewer or manipulation is required. Is there an existing solution, or should I create my own?
Assimp claims to be able to read .stl files; perhaps you could write a small wrapper around its API that loads the model, renders it offscreen, captures the framebuffer, and saves an image.
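A minimal sketch of the loading half with Assimp (the file path is hypothetical; rendering and JPG encoding are only outlined in comments):

    #include <assimp/Importer.hpp>
    #include <assimp/postprocess.h>
    #include <assimp/scene.h>
    #include <iostream>

    int main() {
        // Load the STL; triangulation and normal generation leave the mesh
        // ready for a simple offscreen render.
        Assimp::Importer importer;
        const aiScene* scene = importer.ReadFile(
            "model.stl",  // hypothetical path
            aiProcess_Triangulate | aiProcess_GenNormals);
        if (!scene || !scene->HasMeshes()) {
            std::cerr << importer.GetErrorString() << "\n";
            return 1;
        }
        const aiMesh* mesh = scene->mMeshes[0];
        std::cout << "vertices: " << mesh->mNumVertices
                  << ", faces: " << mesh->mNumFaces << "\n";
        // From here: upload the vertices to an offscreen GL context (e.g. an
        // FBO), render, read back with glReadPixels, and encode to JPG.
    }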