How to populate torch::tensor with a C++ array?

This is very basic: I normally use Eigen3 for my math operations, but I need to use libtorch for a network forward pass. Now I want to populate a torch::tensor with the data from my Eigen3 matrix (or a plain C++ array), but without a for loop. How can I do this?
Here is the solution with a loop:
Eigen::Matrix<double, N, 1> inputEigen; // previously initialized
torch::Tensor inputTorch = torch::ones({1, N}); // my torch tensor for the forward pass
for (int i = 0; i < N; i++) {
    inputTorch[0][i] = inputEigen[i]; // batch size == 1
}
std::vector<torch::jit::IValue> inputs;
inputs.push_back(inputTorch);
at::Tensor output = net.forward(inputs).toTensor();
This works fine for now, but N might become really large, and I am looking for a way to directly set the underlying data of my torch::tensor from an existing C++ array.

Libtorch provides the torch::from_blob function, which takes a void* pointer to the data, an IntArrayRef with the dimensions of the interpreted data, and optionally a TensorOptions (e.g. the dtype). So that would give something like:
Eigen::Matrix<double, N, 1> inputEigen; // previously initialized
torch::Tensor inputTorch = torch::from_blob(inputEigen.data(), {1, N}, torch::kDouble).clone(); // dims + dtype (the default dtype is float, but the Eigen data is double)
Please note the call to clone, which you may or may not need depending on your use case: from_blob does not take ownership of the underlying data, so without the clone the tensor's storage remains shared with (and may be destroyed together with) your Eigen matrix.
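Putting it together, a minimal sketch of the whole forward pass, assuming the module net expects a float32 input of shape {1, N} (the common case); in that case the .to() conversion already copies into new storage, so the extra clone is not needed:
Eigen::Matrix<double, N, 1> inputEigen; // previously initialized
torch::Tensor inputTorch =
    torch::from_blob(inputEigen.data(), {1, N}, torch::kDouble) // zero-copy view over the Eigen data
        .to(torch::kFloat);                                     // converts to float32, copying into new storage
std::vector<torch::jit::IValue> inputs;
inputs.push_back(inputTorch);
at::Tensor output = net.forward(inputs).toTensor();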

Related

How to efficiently pass a subvector of std::vector<cv::Point3f> as a parameter (to a function I don't own)

I am trying to use an OpenCV function that accepts std::vector<cv::Point3f> among other parameters. In my program, I have an std::vector<cv::Point3f> worldPoints and a std::vector<int> mask, both larger than the subset I actually want to pass.
What I want to do is pass to the opencv function only the entries that have a respective non-zero mask, as efficiently as possible.
std::vector<cv::Point3f> worldPointsSubset;
for (int i = 0; i < mask.size(); i++) {
    if (mask[i] != 0) {
        worldPointsSubset.push_back(worldPoints[i]);
    }
}
// Then use worldPointsSubset in function
Is there any other way around, possibly involving no copying of data?
EDIT 1: The function I am referring to is solvePnPRansac()
The function that you call requires a vector of Point3f, so if the only thing you have is a masked vector, then you have to copy the data first. There is no way around this if the function doesn't accept a vector and its mask.
To see whether this copy is an issue, first measure whether it is actually a bottleneck. If it is, the first thing to do is count the number of points you need and reserve that capacity in worldPointsSubset, as sketched below.
There is also no way to reinterpret the data of a std::vector of one element type as a std::vector of another (say, std::vector<int> as std::vector<cv::Point3f>) without a copy, because even if the values look the same, the element sizes and layouts can differ.
What you can do is change the type you work with from the start: store your data directly as std::vector<cv::Point3f>, work with cv::Point3f throughout, and pass that vector to solvePnPRansac() when needed.
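For illustration, a minimal sketch of that count-and-reserve approach (the helper name maskedSubset is mine, not part of OpenCV):
#include <opencv2/core.hpp>
#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<cv::Point3f> maskedSubset(const std::vector<cv::Point3f>& worldPoints,
                                      const std::vector<int>& mask) {
    // Count the surviving points first so the copy allocates exactly once.
    const auto kept = std::count_if(mask.begin(), mask.end(),
                                    [](int m) { return m != 0; });
    std::vector<cv::Point3f> subset;
    subset.reserve(static_cast<std::size_t>(kept));
    for (std::size_t i = 0; i < mask.size() && i < worldPoints.size(); ++i) {
        if (mask[i] != 0) {
            subset.push_back(worldPoints[i]);
        }
    }
    return subset; // pass this to solvePnPRansac()
}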

Reshaping Tensor in C

How can I reshape TF_Tensor* using Tensorflow's C_api as it's being done in C++?
TensorShape inputShape({1,1,80,80});
Tensor inputTensor;
Tensor newTensor;
bool result = inputTensor.CopyFrom(newTensor, inputShape);
I don't see a similar method using the tensorflow's c_api.
The TensorFlow C API operates on a (data, dims) model: the data is treated as a flat raw array accompanied by the desired dimensions.
Step 1: Allocating a new Tensor
Have a look at TF_AllocateTensor:
TF_CAPI_EXPORT extern TF_Tensor* TF_AllocateTensor(TF_DataType,
                                                   const int64_t* dims,
                                                   int num_dims, size_t len);
Here:
TF_DataType: the TF enum value for the element type you need (e.g. TF_FLOAT).
dims: array with the dimensions of the tensor to be allocated, e.g. {1, 1, 80, 80}.
num_dims: the length of dims (4 above).
len: the total byte size, i.e. the product of the dimensions times the element size: 1*1*80*80*sizeof(DataType) = 6400*sizeof(DataType).
Step 2: Copying data
// Get the tensor buffer
auto buff = (DataType *)TF_TensorData(output_of_tf_allocate);
// std::memcpy() ...
Here is some sample code from a project I did a while back on writing a very light Tensorflow C-API Wrapper.
So, essentially your reshape will involve allocating your new tensor and copying the data from the original tensor into buff.
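For illustration, here is a sketch of those two steps combined, assuming a float tensor, a target shape of {1, 1, 80, 80}, and an old_tensor holding the same number of elements (the helper name reshape_to_1x1x80x80 is mine):
#include <cstring>
#include <tensorflow/c/c_api.h>

TF_Tensor* reshape_to_1x1x80x80(TF_Tensor* old_tensor) {
    // Step 1: allocate a tensor with the new dimensions.
    const int64_t dims[4] = {1, 1, 80, 80};
    const size_t len = 1 * 1 * 80 * 80 * sizeof(float);
    TF_Tensor* new_tensor = TF_AllocateTensor(TF_FLOAT, dims, 4, len);
    // Step 2: both buffers are flat arrays, so the "reshape" is a plain byte copy.
    std::memcpy(TF_TensorData(new_tensor), TF_TensorData(old_tensor), len);
    return new_tensor; // the caller releases it with TF_DeleteTensor
}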
The TensorFlow C API isn't meant for regular usage and is therefore harder to learn and lacking in documentation. I figured a lot of this out by experimentation. Any suggestions from more experienced developers out there?

std::vector<Eigen::Vector3d> to Eigen::MatrixXd

I would like to know whether there is an easier way to solve my problem than using a for loop. Here is the situation:
In general, I would like to gather data points from my sensor (the message is of type Eigen::Vector3d, and I can't change this because it's a huge framework).
The gathered points should be saved in an Eigen MatrixXd (in order to process them further as a matrix in the optimization algorithm). The dimensions of the matrix are partially unknown a priori, because the number of measurements I take is up to me (one dimension is 3, for the x, y, z coordinates).
For the time being, I created a std::vector<Eigen::Vector3d> in which I collect points via push_back, and after collecting I would like to convert it to a MatrixXd using Map:
sensor_input = Eigen::Map<Eigen::MatrixXd>(sensor_input_vector.data(),3,sensor_input_vector.size());
But I get an error with the note: no known conversion for argument 1 from Eigen::Matrix<double, 3, 1>* to Eigen::Map<Eigen::Matrix<double, -1, -1>, 0, Eigen::Stride<0, 0> >::PointerArgType {aka double*}
Can you tell me how I could implement this using the Map function?
Short answer: You need to write (after making sure that your input is not empty):
sensor_input = Eigen::Map<Eigen::MatrixXd>(sensor_input_vector[0].data(),3,sensor_input_vector.size());
The reason is that Eigen::Map expects a pointer to the underlying Scalar type (in this case double*), whereas std::vector::data() returns a pointer to the first element inside the vector (i.e., an Eigen::Vector3d*).
Now sensor_input_vector[0].data() gives you a pointer to the first double of the first Vector3d in your std::vector. Alternatively, you could reinterpret_cast like so:
sensor_input = Eigen::Map<Eigen::MatrixXd>(reinterpret_cast<double*>(sensor_input_vector.data()),3,sensor_input_vector.size());
In many cases you can avoid copying the data into an Eigen::MatrixXd altogether and work directly with the Eigen::Map. Instead of MatrixXd you can also use Matrix3Xd to express that the number of rows (3) is known at compile time:
// creating an Eigen::Map has O(1) cost
Eigen::Map<Eigen::Matrix3Xd> sensor_input_mapped(sensor_input_vector[0].data(),3,sensor_input_vector.size());
// use sensor_input_mapped, the same way you used sensor_input before
You need to make sure that the underlying std::vector will not get re-allocated while sensor_input_mapped is used. Also, changing individual elements of the std::vector will change them in the Map and vice versa.
This solution should work:
Eigen::MatrixXd sensor_input = Eigen::MatrixXd::Map(sensor_input_vector[0].data(),
                                                    3, sensor_input_vector.size());
Since your output will be a matrix of 3 x N (N is the number of 3D vectors), you could use the Map function of Eigen::Matrix3Xd too:
Eigen::Matrix3Xd sensor_input = Eigen::Matrix3Xd::Map(sensor_input_vector[0].data(),
                                                      3, sensor_input_vector.size());
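Putting either variant together, a minimal sketch (the collection step is only indicated; the empty check matters because sensor_input_vector[0] would otherwise be invalid):
#include <Eigen/Dense>
#include <vector>

std::vector<Eigen::Vector3d> sensor_input_vector;
// ... push_back all measurements here ...
Eigen::Matrix3Xd sensor_input;
if (!sensor_input_vector.empty()) {
    sensor_input = Eigen::Matrix3Xd::Map(sensor_input_vector[0].data(),
                                         3, sensor_input_vector.size());
}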

Cast dynamic matrix to fixed matrix in Eigen

For flexibility, I'm loading data into dynamic-sized matrices (e.g. Eigen::MatrixXf) using the C++ library Eigen. I've written some functions which require mixed- or fixed-sized matrices as parameters (e.g. Eigen::Matrix<float, 3, Eigen::Dynamic> or Eigen::Matrix4f). Assuming I do the proper assertions for row and column size, how can I convert the dynamic matrix (size set at runtime) to a fixed matrix (size set at compile time)?
The only solution I can think of is to map it, for example:
Eigen::MatrixXf dyn = Eigen::MatrixXf::Random(3, 100);
Eigen::Matrix<float, 3, Eigen::Dynamic> fixed =
    Eigen::Map<Eigen::Matrix<float, 3, Eigen::Dynamic>>(dyn.data(), 3, dyn.cols());
But it's unclear to me whether that will work, because according to the docs the fixed-size Map constructor doesn't accept rows and columns as parameters. Is there a better solution? Simply assigning a dynamic-sized matrix to a fixed-size one doesn't work.
You can use Ref for that purpose; its usage is simpler in your case, and it will do the runtime assertion checks for you, e.g.:
MatrixXf A_dyn(4,4);
Ref<Matrix4f> A_fixed(A_dyn);
You might even require a fixed outer-stride and aligned memory:
Ref<Matrix4f,Aligned16,OuterStride<4> > A_fixed(A_dyn);
In this case, A_fixed is really like a Matrix4f.
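For illustration, a minimal sketch of passing a runtime-sized matrix to a fixed-size interface via Ref (the function name processPose is hypothetical):
#include <Eigen/Dense>
using namespace Eigen;

// Hypothetical function that insists on a compile-time 4x4 matrix.
void processPose(const Ref<const Matrix4f>& pose) {
    // e.g. use pose.inverse(), pose * v, ...
}

int main() {
    MatrixXf A_dyn = MatrixXf::Identity(4, 4); // size only known at runtime
    Ref<Matrix4f> A_fixed(A_dyn);              // no copy; the 4x4 size is asserted at runtime
    processPose(A_fixed);
    processPose(A_dyn);                        // a const Ref parameter also binds directly to the dynamic matrix
    return 0;
}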

Calculating eigenvalues and eigenvectors of a complex dynamic matrix

I have a class that needs to use an Eigen matrix as an instance variable. Unfortunately, I have more than one size, so I think I am forced to use Dynamic as the size parameter.
I declare the instance variable as:
Matrix<std::complex<float>, Dynamic, Dynamic> mCSM;
and in the constructor I fix the size with:
mCSM.resize(40, 40);
I look at the matrix in the debugger and it has the correct size so I think this is good.
I now want to get the eigenvalues and eigenvectors of the matrix, and this is where I get lost.
I try
ComplexEigenSolver<Matrix<std::complex<float>, Dynamic, Dynamic>> ces;
ces.compute(mCSM);
and to get the eigenvalues and eigenvectors I use:
mEva = ces.eigenvalues();
mEve = ces.eigenvectors();
mEva and mEve are both instance variables and are built just like mCSM:
Matrix<std::complex<float>, Dynamic, Dynamic> mEve;
Matrix<std::complex<float>, Dynamic, 1> mEva;
//mEve.resize(40, 40);
//mEva.resize(40);
And after the call mEva = ces.eigenvalues();, mEva has cols = 0 and rows = 0.
Do I somehow need to tell ComplexEigenSolver the matrix size? Is this the correct way when using dynamic matrices?
I now check if the call to compute was successful with:
ces.compute(mCSM);
if (ces.info() == Eigen::Success)
{
    ....
}
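For reference, a minimal self-contained sketch of the ComplexEigenSolver workflow described above, assuming a 40x40 complex float matrix (setRandom stands in for the real data); compute() takes the sizes from the matrix it is given, so the solver does not need to be told the size beforehand:
#include <Eigen/Eigenvalues>
#include <complex>
#include <iostream>

int main() {
    Eigen::Matrix<std::complex<float>, Eigen::Dynamic, Eigen::Dynamic> mCSM;
    mCSM.resize(40, 40);
    mCSM.setRandom(); // stand-in for the real data
    Eigen::ComplexEigenSolver<Eigen::Matrix<std::complex<float>, Eigen::Dynamic, Eigen::Dynamic>> ces;
    ces.compute(mCSM);
    if (ces.info() == Eigen::Success) {
        // For a 40x40 input, eigenvalues() is a 40x1 vector and eigenvectors() a 40x40 matrix.
        std::cout << ces.eigenvalues().rows() << " eigenvalues\n";
        std::cout << ces.eigenvectors().rows() << "x" << ces.eigenvectors().cols() << " eigenvectors\n";
    }
    return 0;
}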