TensorFlow C++ API: How to read a Tensor from files?

I saved my training data (float vectors) in some files and tried to load it as a Tensor using the TensorFlow C++ reader classes.
Here is my code:
using namespace tensorflow;
using namespace tensorflow::ops;
using namespace tensorflow::sparse;
Scope root = Scope::NewRootScope();
auto indexReader = FixedLengthRecordReader(root, sizeof(uint32_t));
auto queue = FIFOQueue(root, {DataType::DT_STRING});
auto file = Input::Initializer(std::string("mydata.feat"));
std::cerr << file.tensor.DebugString() << std::endl;
auto enqueue = QueueEnqueue(root, queue, {file});
std::cerr << Input(QueueSize(root, queue).size).tensor().DebugString() << std::endl;
auto rawInputIndex = ReaderRead(root, indexReader, queue);
std::cerr << Input(rawInputIndex.key).tensor().DebugString() << std::endl;
auto decodedInputIndex = DecodeRaw(root, rawInputIndex.value, DataType::DT_UINT8);
std::cerr << Input(decodedInputIndex.output).tensor().DebugString() << std::endl;
It compiles fine, but cerr always shows an empty Tensor. (Below is the output of running my program in the shell.)
Tensor<type: string shape: [] values: mydata.feat>
Tensor<type: float shape: [0] values: >
Tensor<type: float shape: [0] values: >
Tensor<type: float shape: [0] values: >
I don't know why it doesn't work.
Alternatively, is there any C++ example code for the ReaderRead or FIFOQueue classes? I cannot find any anywhere...

What you're doing is building a graph. To run this graph you need to create a Session and run it. See the label_image example in the TensorFlow codebase for an example of how to do this.
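For illustration, here is a minimal, untested sketch of running the graph built above with a ClientSession; enqueue, rawInputIndex, and decodedInputIndex refer to the ops defined in the question:
// Sketch only; needs tensorflow/cc/client/client_session.h.
tensorflow::ClientSession session(root);
// Run the enqueue op once so the queue actually contains the filename.
TF_CHECK_OK(session.Run(tensorflow::ClientSession::FeedType{}, {}, {enqueue.operation}, nullptr));
// Only now are the tensors computed; fetch the reader outputs.
std::vector<tensorflow::Tensor> outputs;
TF_CHECK_OK(session.Run({rawInputIndex.key, decodedInputIndex.output}, &outputs));
std::cerr << outputs[0].DebugString() << std::endl;  // record key
std::cerr << outputs[1].DebugString() << std::endl;  // decoded bytes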

Related

CGAL: How can I copy properties from Point_set to Surface mesh

First off, I'm aware of the CGAL GIS tutorial, but I just can't seem to copy properties from a Point_set to a surface mesh.
Anyway, I'm loading the LIDAR point cloud into the point set as follows:
using Kernel = CGAL::Exact_predicates_inexact_constructions_kernel;
using Point = Kernel::Point_3;
using Point_set = CGAL::Point_set_3<Point>;
Point_set point_set;
std::ifstream ifile("input.ply", std::ios_base::binary);
ifile >> point_set;
std::cerr << point_set.size() << " point(s) read" << std::endl;
ifile.close();
I can get the properties via
auto props = point_set.properties();
for (const auto& item : props)
    std::cerr << item << std::endl;
// I do know that there exists a property "classification" that is of unsigned char type
Point_set::Property_map<unsigned char> original_class_map
    = point_set.property_map<unsigned char>("classification").first;
Then I set up the triangulation and added a vertex property, using the code from the above-mentioned CGAL tutorial. The code below sets the point's z coordinate as the property.
auto idx_to_point_with_info
    = [&](const Point_set::Index& idx) -> std::pair<Point, Point_set::Index> {
          return std::make_pair(point_set.point(idx), idx);
      };
// Projection_traits is as in the CGAL GIS tutorial (e.g. CGAL::Projection_traits_xy_3<Kernel>)
using Vbi = CGAL::Triangulation_vertex_base_with_info_2<Point_set::Index, Projection_traits>;
using Fbi = CGAL::Triangulation_face_base_with_info_2<int, Projection_traits>;
using TDS = CGAL::Triangulation_data_structure_2<Vbi, Fbi>;
using TIN_with_info = CGAL::Delaunay_triangulation_2<Projection_traits, TDS>;
TIN_with_info tin_with_info(
    boost::make_transform_iterator(point_set.begin(), idx_to_point_with_info),
    boost::make_transform_iterator(point_set.end(), idx_to_point_with_info));
auto classification_value = [&](const TIN_with_info::Vertex_handle vh) -> double
{
    return vh->point().z();
};
for (TIN_with_info::Vertex_handle vh : tin_with_info.all_vertex_handles())
{ // should work without classification_value, just plain vh->info() = vh->point().z();
    vh->info() = classification_value(vh);
}
using Mesh = CGAL::Surface_mesh<Point>;
Mesh tin_class_mesh;
Mesh::Property_map<Mesh::Vertex_index, double> class_map
    = tin_class_mesh.add_property_map<Mesh::Vertex_index, double>("v:class").first;
CGAL::copy_face_graph(tin_with_info, tin_class_mesh,
    CGAL::parameters::vertex_to_vertex_output_iterator(
        boost::make_function_output_iterator(class_lambda)));
std::cerr << tin_class_mesh.number_of_vertices() << " vs " << point_set.size() << std::endl;
Now, this works just fine; I successfully set the z coordinate as a property on the mesh.
But I just can't figure out how to copy the classification property from point_set to tin_class_mesh. I know that I'd need to change double to unsigned char in the code, but I don't know how to access the property from point_set and assign it to the corresponding vertex in tin_class_mesh. What am I doing wrong?
As a side note, the interesting part here is that tin_class_mesh.number_of_vertices() differs slightly from point_set.size(). Why is that?
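For reference, a minimal sketch of one possible approach, under two assumptions: the vertex info still holds the original Point_set::Index (i.e., the z-assignment loop above is skipped), and the property name "v:classification" is illustrative:
Mesh::Property_map<Mesh::Vertex_index, unsigned char> mesh_class_map
    = tin_class_mesh.add_property_map<Mesh::Vertex_index, unsigned char>("v:classification", 0).first;
// Each pair passed to the output iterator maps a triangulation vertex to its
// mesh vertex, so the stored Point_set::Index can look up the classification.
CGAL::copy_face_graph(tin_with_info, tin_class_mesh,
    CGAL::parameters::vertex_to_vertex_output_iterator(
        boost::make_function_output_iterator(
            [&](const std::pair<TIN_with_info::Vertex_handle, Mesh::Vertex_index>& vv) {
                mesh_class_map[vv.second] = original_class_map[vv.first->info()];
            })));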

Different output from Libtorch C++ and PyTorch

I'm using the same traced model in PyTorch and Libtorch, but I'm getting different outputs.
Python Code:
import cv2
import numpy as np
import torch
import torchvision
from torchvision import transforms as trans
# device for pytorch
device = torch.device('cuda:0')
torch.set_default_tensor_type('torch.cuda.FloatTensor')
model = torch.jit.load("traced_facelearner_model_new.pt")
model.eval()
# read the example image used for tracing
image = cv2.imread("videos/example.jpg")
test_transform = trans.Compose([
    trans.ToTensor(),
    trans.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
resized_image = cv2.resize(image, (112, 112))
tens = test_transform(resized_image).to(device).unsqueeze(0)
output = model(tens)
print(output)
C++ Code:
#include <iostream>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <torch/script.h>

int main()
{
    try
    {
        torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt");
        model.to(torch::kCUDA);
        model.eval();

        cv::Mat visibleFrame = cv::imread("example.jpg");
        cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112));
        at::Tensor tensor_image = torch::from_blob(visibleFrame.data,
            { 1, visibleFrame.rows, visibleFrame.cols, 3 }, at::kByte);
        tensor_image = tensor_image.permute({ 0, 3, 1, 2 });
        tensor_image = tensor_image.to(at::kFloat);
        tensor_image[0][0] = tensor_image[0][0].sub(0.5).div(0.5);
        tensor_image[0][1] = tensor_image[0][1].sub(0.5).div(0.5);
        tensor_image[0][2] = tensor_image[0][2].sub(0.5).div(0.5);
        tensor_image = tensor_image.to(torch::kCUDA);

        std::vector<torch::jit::IValue> input;
        input.emplace_back(tensor_image);
        // Execute the model and turn its output into a tensor.
        auto output = model.forward(input).toTensor();
        output = output.to(torch::kCPU);
        std::cout << "Embds: " << output << std::endl;
        std::cout << "Done!\n";
    }
    catch (const std::exception& e)
    {
        std::cout << "exception: " << e.what() << std::endl;
    }
}
The model gives a (1x512) output tensor, as shown below.
Python output
tensor([[-1.6270e+00, -7.8417e-02, -3.4403e-01, -1.5171e+00, -1.3259e+00,
-1.1877e+00, -2.0234e-01, -1.0677e+00, 8.8365e-01, 7.2514e-01,
2.3642e+00, -1.4473e+00, -1.6696e+00, -1.2191e+00, 6.7770e-01,
...
-7.1650e-01, 1.7661e-01]], device='cuda:0',
grad_fn=<...>)
C++ output
Embds: Columns 1 to 8 -84.6285 -14.7203 17.7419 47.0915 31.8170 57.6813 3.6089 -38.0543
Columns 9 to 16 3.3444 -95.5730 90.3788 -10.8355 2.8831 -14.3861 0.8706 -60.7844
...
Columns 505 to 512 36.8830 -31.1061 51.6818 8.2866 1.7214 -2.9263 -37.4330 48.5854
[ CPUFloatType{1,512} ]
Using
Pytorch 1.6.0
Libtorch 1.6.0
Visual studio 2019
Windows 10
Cuda 10.1
Before the final normalization, you need to scale your input to the range 0-1 and then carry out the normalization you are doing. Converting to float and then dividing by 255 should get you there. Here is the snippet I wrote; there might be some syntax errors, but those should be visible.
Try this:
#include <iostream>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <torch/script.h>

int main()
{
    try
    {
        torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt");
        model.to(torch::kCUDA);

        cv::Mat visibleFrame = cv::imread("example.jpg");
        cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112));
        at::Tensor tensor_image = torch::from_blob(visibleFrame.data,
            { visibleFrame.rows, visibleFrame.cols, 3 }, at::kByte);
        // Scale to [0, 1] first, then normalize to [-1, 1].
        tensor_image = tensor_image.to(at::kFloat).div(255).unsqueeze(0);
        tensor_image = tensor_image.permute({ 0, 3, 1, 2 });
        tensor_image.sub_(0.5).div_(0.5);
        tensor_image = tensor_image.to(torch::kCUDA);

        // Execute the model and turn its output into a tensor.
        auto output = model.forward({tensor_image}).toTensor();
        output = output.cpu();
        std::cout << "Embds: " << output << std::endl;
        std::cout << "Done!\n";
    }
    catch (const std::exception& e)
    {
        std::cout << "exception: " << e.what() << std::endl;
    }
}
I don't have access to a system to run this, so if you run into anything, comment below.

Parameter passing from Python script to C++ with boost-python

I am currently embedding Python in C++ using boost-python and boost-numpy.
I have the following Python test script:
import numpy as np
import time

def test_qr(m, n):
    print("create numpy array")
    A = np.random.rand(m, n)
    print("Matrix A is {}".format(A))
    print("Lets QR factorize this thing! Mathematics is great !!")
    ts = time.time()
    Q, R = np.linalg.qr(A)
    te = time.time()
    print("It took {} seconds to factorize A".format(te - ts))
    print("The Q matrix is {}".format(Q))
    print("The R matrix is {}".format(R))
    return Q, R

def sum(m, n):
    return m + n
I am able to execute a part of the code in C++ like this:
namespace p = boost::python;
namespace np = boost::python::numpy;

int main() {
    Py_Initialize();  // initialize Python environment
    np::initialize(); // initialize numpy environment
    p::object main_module = p::import("__main__");
    p::object main_namespace = main_module.attr("__dict__");
    // execute code in the main_namespace
    p::exec_file("/Users/Michael/CLionProjects/CythonTest/test_file.py", main_namespace); // loads the Python script
    p::exec("m = 100\n"
            "n = 100\n"
            "Q,R = test_qr(m,n)", main_namespace);
    // extract results as numpy array types
    np::ndarray Q_matrix = p::extract<np::ndarray>(main_namespace["Q"]);
    np::ndarray R_matrix = p::extract<np::ndarray>(main_namespace["R"]);
    std::cout << "C++ Q Matrix: \n" << p::extract<char const *>(p::str(Q_matrix)) << std::endl;
    std::cout << "C++ R Matrix: \n" << p::extract<char const *>(p::str(R_matrix)) << std::endl;
    std::cout << "code also works with numpy, ask for a raise" << std::endl;
    p::object sum = main_namespace.attr("sum")(10, 10);
    int result = p::extract<int>(main_namespace.attr("sum")(10, 10));
    std::cout << "sum result works " << result << std::endl;
    return 0;
}
Now I am trying to use the sum function in the Python script, but I do not always want to write a string like:
p::exec("m = 100\n"
        "n = 100\n"
        "Q,R = test_qr(m,n)", main_namespace);
How can this be done without using the exec function?
I have tried things like:
p::object sum = main_namespace.attr("sum")(10,10);
int result = p::extract<int>(main_namespace.attr("sum")(10,10));
std::cout<<"sum result works " << result << std::endl;
As mentioned in the Boost documentation.
I also tried using the call_method function, but it didn't work.
I either get a boost::python::error_already_set exception, which means something went wrong on the Python side (though I do not know what), or an exit code 11.
The issue is rather trivial. Let's look at the tutorial you mention:
object main_module = import("__main__");
object main_namespace = main_module.attr("__dict__");
object ignored = exec("result = 5 ** 2", main_namespace);
int five_squared = extract<int>(main_namespace["result"]);
Notice how they extract the result object in the last line: main_namespace["result"]
The main_namespace object is a Python dictionary, and rather than extracting its attribute, you're just looking up a value stored under a particular key. Hence, indexing with [] is the way to go.
C++ code:
#define BOOST_ALL_NO_LIB
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>
#include <iostream>

namespace bp = boost::python;

int main()
{
    try {
        Py_Initialize();
        bp::object module = bp::import("__main__");
        bp::object globals = module.attr("__dict__");
        bp::exec_file("bpcall.py", globals);
        bp::object sum_fn = globals["sum"];
        int result = bp::extract<int>(sum_fn(1, 2));
        std::cout << "Result (C++) = " << result << "\n";
    } catch (const bp::error_already_set&) {
        PyErr_Print();
    }
    Py_Finalize();
}
Python script:
def sum(m, n):
    return m + n
Output:
Result (C++) = 3

Simplify combinatorial map using CGAL

I want to simplify (edge-collapse) a mesh, read from an .off file as a combinatorial map, using CGAL:
std::ifstream ifile(fileName.toStdString().c_str());
if (ifile)
{
    CGAL::load_off(lcc, ifile);
    lcc.display_characteristics(std::cout) << ", is_valid=" << CGAL::is_valid(lcc) << std::endl;
}
namespace SMS = CGAL::Surface_mesh_simplification;
SMS::Count_stop_predicate<LCC> stop(lcc.number_of_halfedges()/2 - 1);
int r = SMS::edge_collapse
    (lcc
    ,stop
    ,CGAL::parameters::halfedge_index_map(get(CGAL::halfedge_index, lcc))
                      .vertex_index_map(get(boost::vertex_index, lcc))
                      .get_cost(SMS::Edge_length_cost<LCC>())
                      .get_placement(SMS::Midpoint_placement<LCC>())
    );
std::cout << "\nFinished...\n" << r << " edges removed.\n"
          << (lcc.number_of_darts()/2) << " final edges.\n";
lcc.display_characteristics(std::cout) << ", is_valid=" << CGAL::is_valid(lcc) << std::endl;
The output:
#Darts=16674, #0-cells=2775, #1-cells=8337, #2-cells=5558, #ccs=1, is_valid=1
Finished...
0 edges removed.
8337 final edges.
#Darts=16674, #0-cells=2775, #1-cells=8337, #2-cells=5558, #ccs=1, is_valid=1
The method does nothing. I tried more than one .off file; each one previews properly, but it cannot be simplified.
I'd appreciate any help.
See the example given here; it works perfectly.
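In case the linked example moves, here is a minimal sketch of the same simplification pipeline on a CGAL::Surface_mesh rather than a combinatorial map (input.off is a placeholder file name; the header paths assume a CGAL release where Count_stop_predicate is available, as in the question):
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Surface_mesh_simplification/edge_collapse.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Count_stop_predicate.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Edge_length_cost.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Midpoint_placement.h>
#include <fstream>
#include <iostream>

typedef CGAL::Simple_cartesian<double> Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
namespace SMS = CGAL::Surface_mesh_simplification;

int main()
{
    Mesh mesh;
    std::ifstream is("input.off");
    is >> mesh;
    // Stop once the edge count drops to half of the original.
    SMS::Count_stop_predicate<Mesh> stop(mesh.number_of_edges() / 2);
    int removed = SMS::edge_collapse(mesh, stop,
        CGAL::parameters::get_cost(SMS::Edge_length_cost<Mesh>())
                         .get_placement(SMS::Midpoint_placement<Mesh>()));
    std::cout << removed << " edges removed, "
              << mesh.number_of_edges() << " final edges.\n";
    return 0;
}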

Reading *.mhd/*.raw format 3D images in ITK

How do I load and write .mhd/.raw format 3D images in ITK? I have tried the following code, but the image is not loaded: the dimensions of the loaded image are displayed as 0,0,0.
Can someone please point out the mistake I am making?
typedef float InputPixelType;
const unsigned int DimensionOfRaw = 3;
typedef itk::Image<InputPixelType, DimensionOfRaw> InputImageType;
//typedef itk::RawImageIO<InputPixelType, DimensionOfRaw> ImageIOType;
typedef itk::ImageFileReader<InputImageType> ReaderType;

/*
 * Loader and saver of raws, as well as the function that takes a result
 * (from an inference matrix/vector) and creates a raw out of it.
 */
InputImageType::Pointer loadRawImageItk(std::string RawFullFilepathname, ReaderType::Pointer& RawImageIO) {
    //http://www.itk.org/Doxygen/html/classitk_1_1Image.html
    //http://www.itk.org/Doxygen/html/classitk_1_1ImageFileReader.html
    typedef itk::ImageFileReader<InputImageType> ReaderType;
    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName(RawFullFilepathname);

    //ImageIOType::Pointer RawImageIO = ImageIOType::New();
    reader->SetImageIO(RawImageIO);

    try {
        reader->Update();
    } catch (itk::ExceptionObject& e) {
        std::cerr << e.GetDescription() << std::endl;
        exit(1); // You can choose to do something else, of course.
    }

    InputImageType::Pointer inputImage = reader->GetOutput();
    return inputImage;
}

int saveRawImageItk(std::string RawFullFilepathname, InputImageType::Pointer& outputImageItkType, ImageIOType::Pointer& RawImageIO) {
    std::cout << "Saving image to: " << RawFullFilepathname << "\n";
    typedef itk::ImageFileWriter<InputImageType> Writer1Type;
    Writer1Type::Pointer writer1 = Writer1Type::New();
    writer1->SetInput(outputImageItkType);
    writer1->SetFileName(RawFullFilepathname);
    writer1->SetImageIO(RawImageIO); // seems like this is useless.

    // Execution of the writer is triggered by invoking the Update() method.
    try
    {
        writer1->Update();
    }
    catch (itk::ExceptionObject& e)
    {
        std::cerr << "exception in file writer" << std::endl;
        std::cerr << e.GetDescription() << std::endl;
        std::cerr << e.GetLocation() << std::endl;
        return 1;
    }
    return 0;
}
I have just read .mhd and .raw files in Python successfully using the following SimpleITK code:
import SimpleITK as sitk
import numpy as np

def load_itk_image(filename):
    itkimage = sitk.ReadImage(filename)
    numpyImage = sitk.GetArrayFromImage(itkimage)
    return numpyImage
Maybe you can use it as a reference. Perhaps you should use the ReadImage function instead of the ImageFileReader? You can give it a try.
A few good examples of file reading depending on a known format are found here.
reader->SetImageIO( RawImageIO );
seems to be the incorrect thing to do here if you are loading both .mhd and .raw files, as they are separate formats (MetaImage vs. raw), where you do and don't, respectively, know the image size, origin, spacing, etc., based on the presence or absence of a header.
How are you determining the size of the image and getting (0,0,0)? image->GetSize()?
Can you provide test data?
https://itk.org/Wiki/ITK/Examples/IO/ReadUnknownImageType
https://itk.org/ITKExamples/src/IO/ImageBase/RegisterIOFactories/Documentation.html
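For completeness, a minimal sketch of loading an .mhd volume while letting the IO factory pick the correct ImageIO from the header (volume.mhd is a placeholder file name):
#include "itkImage.h"
#include "itkImageFileReader.h"
#include <iostream>

int main()
{
    typedef itk::Image<float, 3> ImageType;
    itk::ImageFileReader<ImageType>::Pointer reader = itk::ImageFileReader<ImageType>::New();
    reader->SetFileName("volume.mhd"); // no SetImageIO: the factory selects MetaImageIO
    try
    {
        reader->Update();
    }
    catch (itk::ExceptionObject& e)
    {
        std::cerr << e.GetDescription() << std::endl;
        return 1;
    }
    ImageType::Pointer image = reader->GetOutput();
    std::cout << image->GetLargestPossibleRegion().GetSize() << std::endl;
    return 0;
}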