I am creating a class that will be used to run inference on an embedded device (not a Raspberry Pi) in C++ using TensorFlow's TFLite C++ API. TensorFlow doesn't seem to have decent documentation on how to run inference for n samples of image data. My data shape in Python is (n, 5, 40, 1) [n samples, 5 height, 40 width, 1 channel]. What I cannot figure out is how to input the data and receive the inference per sample in the output. I have two classes, so I should receive n two-element outputs. Does anyone know whether you can pass in other data types, such as an Eigen tensor? I am testing with an input of shape (1, 5, 2, 1) to simplify my test.
#include "classifier.h"
#include <iostream>
using namespace tflite;
Classifier::Classifier(std::string modelPath) {
tflite::StderrReporter error_reporter;
model = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str(), &error_reporter);
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter); // private class variable interpreter
std::vector<int> sizes = {1, 5, 2, 1};
interpreter->ResizeInputTensor(0, sizes);
interpreter->AllocateTensors();
}
std::vector<std::vector<float>> Classifier::getDataSamples() {
    std::vector<std::vector<float>> test = {{0.02, 0.02}, {0.02, 0.02}, {0.02, 0.02}, {0.02, 0.02}, {0.02, 0.02}};
    return test;
}
float Classifier::predict() {
    std::vector<std::vector<float>> signatures = getDataSamples();
    for (int i = 0; i < signatures.size(); ++i) {
        interpreter->typed_input_tensor<float>(0)[i]; // reads the buffer but never writes my data; this is where I'm stuck
    }
    // float* input = interpreter->typed_input_tensor<float>(0);
    // *input = 1.0;
    interpreter->Invoke();
    float* output = interpreter->typed_output_tensor<float>(0);
    return *output;
}
From the TensorFlow documentation, it should be noted that:
Tensors are represented by integers, in order to avoid string comparisons (and any fixed dependency on string libraries).
An interpreter must not be accessed from concurrent threads.
Memory allocation for input and output tensors must be triggered by calling AllocateTensors() right after resizing tensors.
You can find more about how to load and run a model in C++ here.
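As a concrete sketch for the question's (1, 5, 2, 1) test case (assuming the model takes float32 input and produces an [n, 2] float output; getDataSamples() is the helper from the question): typed_input_tensor<float>(0) returns a pointer to the input tensor's flat buffer, so the samples just need to be copied into it in row-major order, and the per-sample class scores can be read back the same way. Eigen objects can't be passed in directly, but since you get a raw float*, you can map one over the buffer if you prefer.

// copy the samples into the input tensor, flattened row-major
std::vector<std::vector<float>> signatures = getDataSamples();
float* input = interpreter->typed_input_tensor<float>(0);
size_t offset = 0;
for (const auto& row : signatures) {
    std::copy(row.begin(), row.end(), input + offset); // needs <algorithm>
    offset += row.size();
}

if (interpreter->Invoke() != kTfLiteOk) {
    // handle the failure
}

// with two classes, the output is a contiguous buffer of n * 2 floats
int n = 1; // one sample in the (1, 5, 2, 1) test
float* output = interpreter->typed_output_tensor<float>(0);
for (int i = 0; i < n; ++i) {
    float class0 = output[2 * i];
    float class1 = output[2 * i + 1];
    // ... use the per-sample scores ...
}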
Related
I am working on trying to get some sparse matrix operations working in Tensorflow. The first one I am tackling is a sparse determinant, via a sparse Cholesky decomposition. Eigen has a sparse Cholesky, so my thought is to wrap that.
I have been making some progress, but am now a little bit stuck. I know that SparseTensors in Tensorflow are made up of three parts: indices, values, and shape. Copying similar ops, I went for the following REGISTER_OP declaration:
REGISTER_OP("SparseLogDet")
.Input("a_indices: int64")
.Input("a_values: float32")
.Input("a_shape: int64")
.Output("determinant: float32")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
shape_inference::ShapeHandle h;
c->set_output(0, h);
return Status::OK();
});
This compiles fine, but when I run it using some example code:
import tensorflow as tf

log_det_op = tf.load_op_library('./sparse_log_det_op.so')
with tf.Session(''):
    t = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2],
                        dense_shape=[3, 4])
    print(log_det_op.sparse_log_det(t).eval().shape)
    print(log_det_op.sparse_log_det(t).eval())
It complains, saying:
TypeError: sparse_log_det() missing 2 required positional arguments: 'a_values' and 'a_shape'
This makes sense to me, since it's expecting the other arguments. However, I would really just like to pass the sparse tensor, not break it up into components! Does anyone know how this is handled for other sparse operations?
Thanks!
If you want to pass in the sparse tensor and then determine the indices, values, and shape from it, that should be possible. Just modify your op to take a single Tensor input and produce a single float output, then extract the desired information from the Eigen::Tensor by looping over its elements, as seen below. (With the registration you already have, the usual pattern is instead to pass the components explicitly from Python, e.g. log_det_op.sparse_log_det(t.indices, t.values, t.dense_shape).)
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <Eigen/Dense>
using namespace tensorflow;
REGISTER_OP("SparseDeterminant")
.Input("sparse_tensor: float")
.Output("sparse_determinant: float");
class SparseDeterminantOp : public OpKernel {
public:
explicit SparseDeterminantOp(OpKernelConstruction *context) : OpKernel(context) {}
void Compute(OpKernelContext *context) override {
// get the input tesnorflow tensor
const Tensor& sparse_tensor = context->input(0);
// get shape of input
const TensorShape& sparse_shape = sparse_tensor.shape();
// get Eigen Tensor for input tensor
auto eigen_sparse = sparse_tensor.matrix<float>();
//extract the data you want from the sparse tensor input
auto a_shape = sparse_tensor.shape();
// loop over all elements of the input tensor and add to values and indices
for (int i=0; i<a_shape.dim_size(0); ++i){
for (int j=0; j<a_shape.dim_size(1); ++j){
if(eigen_sparse(i,j) != 0){
/// ***Here add non zero elements to list/tensor of values and their indicies***
std::cout<<eigen_sparse(i,j)<<" at"<<" "<<i<<" "<<j<<" "<<"not zero."<<std::endl;
}
}
}
// create output tensor
Tensor *output_tensor = NULL;
TensorShape output_shape;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &output_tensor));
auto output = output_tensor->scalar<float>();
output(0) = 1.; //**asign return value***;
}
};
REGISTER_KERNEL_BUILDER(Name("SparseDeterminant").Device(DEVICE_CPU), SparseDeterminantOp);
Sadly, when you pass t into your op it becomes a tensorflow::Tensor and loses the values and indices methods associated with tf.SparseTensor, so you can't get at them easily.
Once compiled this code can be run with:
# run.py
import tensorflow as tf
import numpy as np

my_module = tf.load_op_library('./out.so')

# create a sparse matrix
a = np.zeros((10, 10))
for i in range(len(a)):
    a[i, i] = i
print(a)

a_t = tf.convert_to_tensor(a, dtype=float)

with tf.Session() as sess:
    sess.run(my_module.sparse_determinant(a_t))
I have a tensorflow::Tensor batch in C++ with shape [2, 720, 1280, 3] (#images x height x width x #channels).
I want to get another tensor with only the first image, so I would have a tensor of shape [1, 720, 1280, 3]. In other words, I want:
tensorflow::Tensor first = batch[0]
What's the most efficient way to achieve it?
I know how to do this in python, but the C++ api and documentation are not as good as python's.
After spending some time trying to implement through copy, I realised that this operation is supported in the API as Slice:
tensorflow::Tensor first = batch.Slice(0, 1);
Note that, as documented, the returned tensor shares the internal buffer with the sliced one, and the alignment of both tensors may be different, if that is relevant to you.
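If the alignment does matter in your case (the Eigen-mapped accessors such as flat<T>() expect an aligned buffer), a minimal sketch of making a detached copy with the tensor_util helper:

#include "tensorflow/core/framework/tensor_util.h"

tensorflow::Tensor first = batch.Slice(0, 1);
if (!first.IsAligned()) {
    first = tensorflow::tensor::DeepCopy(first); // owns its own aligned buffer
}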
EDIT:
Since I had already done it, here is my attempt at reproducing the same functionality, copy-based. I think it should work (it is pretty similar to what I use in another context).
#include <cassert>
#include <cstring>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow/core/framework/tensor_shape.h>
#include <tensorflow/core/framework/types.h>

template <typename T>
tensorflow::Tensor get_element(const tensorflow::Tensor &data, unsigned int index, bool keepDim = true)
{
    using namespace tensorflow;

    // make sure the requested element type matches the tensor's dtype
    auto dtype = DataTypeToEnum<T>::v();
    assert(dtype == data.dtype());

    // element shape: the batch shape minus the first dimension,
    // optionally kept as a leading dimension of size 1
    const TensorShape &dataShape = data.shape();
    TensorShape elementShape;
    if (keepDim)
    {
        elementShape.AddDim(1);
    }
    for (int iDim = 1; iDim < dataShape.dims(); iDim++) {
        elementShape.AddDim(dataShape.dim_size(iDim));
    }

    // copy the bytes of element `index` into the new tensor
    Tensor element(dtype, elementShape);
    const auto elementCount = elementShape.num_elements();
    std::memcpy(element.flat<T>().data(),
                data.flat<T>().data() + elementCount * index,
                elementCount * sizeof(T));
    return element;
}

int main()
{
    tensorflow::Tensor batch = ...;
    tensorflow::Tensor first = get_element<float>(batch, 0);
    return 0;
}
The code can also be changed if you just want to extract the data to, for example, a vector or something else.
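For instance, a sketch of a variant that copies element index into a std::vector instead (same flat-buffer layout as above; assumes the template type matches the tensor's dtype):

template <typename T>
std::vector<T> get_element_vector(const tensorflow::Tensor &data, unsigned int index)
{
    const tensorflow::TensorShape &dataShape = data.shape();
    int64_t elementCount = 1;
    for (int iDim = 1; iDim < dataShape.dims(); iDim++)
        elementCount *= dataShape.dim_size(iDim);

    const T* begin = data.flat<T>().data() + elementCount * index;
    return std::vector<T>(begin, begin + elementCount);
}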
This works fine
#include "tensorflow/core/framework/tensor_slice.h"
Tensor t2 = t1.Slice(0,1);
I've managed to get VLFeat's SIFT implementation working and I'd like to try matching two sets of image descriptors.
SIFT's feature vectors are 128 element float arrays, I've stored the descriptor lists in std::vectors as shown in the snippet below:
std::vector<std::vector<float> > ldescriptors = leftImage->descriptors;
std::vector<std::vector<float> > rdescriptors = rightImage->descriptors;
/* KD-forest: float data, dimension 128, 1 tree, L1 distance metric */
VlKDForest* forest = vl_kdforest_new(VL_TYPE_FLOAT, 128, 1, VlDistanceL1);
/* Build the tree from the left descriptors */
vl_kdforest_build(forest, ldescriptors.size(), ldescriptors.data());
/* Searcher object */
VlKDForestSearcher* searcher = vl_kdforest_new_searcher(forest);
VlKDForestNeighbor neighbours[2];
/* Query the first ten points for now */
for (int i = 0; i < 10; i++) {
    int nvisited = vl_kdforestsearcher_query(searcher, &neighbours, 2, rdescriptors[i].data());
    cout << nvisited << neighbours[0].distance << neighbours[1].distance;
}
As far as I can tell that should work, but all I get out, for the distances, are NaNs. The lengths of the descriptor arrays check out, so there does seem to be data going into the tree. I've plotted the keypoints and they also look reasonable, so the data is fairly sane.
What am I missing?
Rather sparse documentation here (links to the API): http://www.vlfeat.org/api/kdtree.html
What am I missing?
The 2nd argument of vl_kdforestsearcher_query takes a pointer to VlKDForestNeighbor:
vl_size
vl_kdforestsearcher_query(
    VlKDForestSearcher *self,
    VlKDForestNeighbor *neighbors,
    vl_size numNeighbors,
    void const *query
);
But here you declared VlKDForestNeighbor neighbours[2]; and then passed &neighbours as the 2nd parameter, which is not correct; your compiler probably issued an incompatible pointer types warning.
Since you declared an array, what you must do instead is either pass explicitly a pointer to the 1st neighbor:
int nvisited = vl_kdforestsearcher_query(searcher, &neighbours[0], 2, rdescriptors[i].data());
Or alternatively let the compiler do it for you:
int nvisited = vl_kdforestsearcher_query(searcher, neighbours, 2, rdescriptors[i].data());
EDIT
There is indeed a second (major) problem related to the way you build the kd-tree with ldescriptors.data().
Here you pass a std::vector<float>* pointer where VLFeat expects a contiguous float* array containing all your data points in row-major order. So what you can do is copy your data into that format:
float *data = new float[128 * ldescriptors.size()];
for (unsigned int i = 0; i < ldescriptors.size(); i++)
    std::copy(ldescriptors[i].begin(), ldescriptors[i].end(), data + 128 * i);

vl_kdforest_build(forest, ldescriptors.size(), data);
// ...
// then, right after `vl_kdforest_delete(forest);`, do a `delete[] data;`
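An equivalent sketch that avoids the manual new/delete pair: keep the flattened points in a std::vector that simply outlives the forest.

std::vector<float> data(128 * ldescriptors.size());
for (unsigned int i = 0; i < ldescriptors.size(); i++)
    std::copy(ldescriptors[i].begin(), ldescriptors[i].end(), data.begin() + 128 * i);

vl_kdforest_build(forest, ldescriptors.size(), data.data());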
I am trying to pass a vector of doubles that I generate in my C++ code to a Python numpy array. I am looking to do some downstream processing in Python and want to use some Python facilities once I populate the numpy array. One of the biggest things I want to do is to be able to plot things, and C++ is a bit clumsy when it comes to that. I also want to be able to leverage Python's statistical power.
I am not very clear on how to do it, though. I spent a lot of time going through the Python C API documentation. I came across a function, PyArray_SimpleNewFromData, that can apparently do the trick, but I am still very unclear about the overall setup of the code. I am building certain very simple test cases to help me understand this process. I generated the following code as a standalone empty project in Visual Studio Express 2012. I call this file Project1:
#include <Python.h>
#include "C:/Python27/Lib/site-packages/numpy/core/include/numpy/arrayobject.h"

PyObject * testCreatArray()
{
    float fArray[5] = {0, 1, 2, 3, 4};
    npy_intp m = 5;
    PyObject * c = PyArray_SimpleNewFromData(1, &m, PyArray_FLOAT, fArray);
    return c;
}
My goal is to be able to read the PyObject in Python. I am stuck because I don't know how to reference this module from Python. In particular, how do I import this project from Python? I tried import Project1 from the project path in Python, but that failed. Once I understand this base case, my goal is to figure out a way to pass the vector container that I compute in my main function to Python. I am not sure how to do that either.
Any experts who can help me with this, or maybe post a simple well contained example of some code that reads in and populates a numpy array from a simple c++ vector, I will be grateful. Many thanks in advance.
I'm not a C++ hero, but I wanted to provide my solution with two template functions for 1D and 2D vectors. Usage is a one-liner, and by templating on 1D and 2D vectors, the compiler picks the correct version for your vector's shape. It throws a string in the case of an irregular shape for 2D input. The routine copies the data here, but one can easily modify it to take the address of the first element of the input vector in order to make it just a "view".
Usage looks like this:
// Random data
vector<float> some_vector_1D(3, 1.f); // 3 entries set to 1
vector< vector<float> > some_vector_2D(3, vector<float>(3, 1.f)); // 3 subvectors of 3 ones each

// Convert vectors to numpy arrays
PyObject* np_vec_1D = (PyObject*) vector_to_nparray(some_vector_1D);
PyObject* np_vec_2D = (PyObject*) vector_to_nparray(some_vector_2D);
You may also change the type of the numpy array via the optional argument. The template functions are:
/** Convert a C++ 2D vector into a numpy array
 *
 * @param const vector< vector<T> >& vec : 2D vector data
 * @return PyArrayObject* array : converted numpy array
 *
 * Transforms an arbitrary 2D C++ vector into a numpy array. Throws in case of
 * an irregular shape. The array may contain empty columns or something else, as
 * long as its shape is rectangular.
 *
 * Warning: this routine makes a copy of the memory!
 */
template<typename T>
static PyArrayObject* vector_to_nparray(const vector< vector<T> >& vec, int type_num = NPY_FLOAT){

    // rows not empty
    if( !vec.empty() ){

        // columns not empty
        if( !vec[0].empty() ){

            size_t nRows = vec.size();
            size_t nCols = vec[0].size();
            npy_intp dims[2] = {(npy_intp) nRows, (npy_intp) nCols};
            PyArrayObject* vec_array = (PyArrayObject *) PyArray_SimpleNew(2, dims, type_num);

            T *vec_array_pointer = (T*) PyArray_DATA(vec_array);

            // copy the vector line by line ... maybe this could be done in one go
            for (size_t iRow = 0; iRow < vec.size(); ++iRow){

                if( vec[iRow].size() != nCols){
                    Py_DECREF(vec_array); // delete
                    throw(string("Can not convert vector<vector<T>> to np.array, since the c++ matrix shape is not uniform."));
                }

                copy(vec[iRow].begin(), vec[iRow].end(), vec_array_pointer + iRow * nCols);
            }

            return vec_array;

        // empty columns
        } else {
            npy_intp dims[2] = {(npy_intp) vec.size(), 0};
            return (PyArrayObject*) PyArray_ZEROS(2, dims, type_num, 0);
        }

    // no data at all
    } else {
        npy_intp dims[2] = {0, 0};
        return (PyArrayObject*) PyArray_ZEROS(2, dims, type_num, 0);
    }
}
/** Convert a C++ vector into a numpy array
 *
 * @param const vector<T>& vec : 1D vector data
 * @return PyArrayObject* array : converted numpy array
 *
 * Transforms an arbitrary C++ vector into a numpy array.
 *
 * Warning: this routine makes a copy of the memory!
 */
template<typename T>
static PyArrayObject* vector_to_nparray(const vector<T>& vec, int type_num = NPY_FLOAT){

    // vector not empty
    if( !vec.empty() ){

        size_t nRows = vec.size();
        npy_intp dims[1] = {(npy_intp) nRows};

        PyArrayObject* vec_array = (PyArrayObject *) PyArray_SimpleNew(1, dims, type_num);
        T *vec_array_pointer = (T*) PyArray_DATA(vec_array);

        copy(vec.begin(), vec.end(), vec_array_pointer);
        return vec_array;

    // no data at all
    } else {
        npy_intp dims[1] = {0};
        return (PyArrayObject*) PyArray_ZEROS(1, dims, type_num, 0);
    }
}
Since there is no answer to this that is actually helpful for people who might be looking for this sort of thing, I figured I'd put up an easy solution.
First you will need to create a Python extension module in C++. This is easy enough to do and is all in the Python C API documentation, so I'm not going to go into detail beyond the minimal sketch further below.
Now, converting a C++ std::vector to a numpy array is extremely simple. You first need to import the numpy array header
#include <numpy/arrayobject.h>
and in your initialising function you need to call import_array():
PyMODINIT_FUNC
inittestFunction(void){
    (void) Py_InitModule("testFunction", testFunctionMethods);
    import_array();
}
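For that to compile, testFunctionMethods has to be defined; a minimal sketch of the method table (the function registered here is hypothetical). Note that the string passed to Py_InitModule is the name you import from Python, so this also answers the OP's import problem: build the module as testFunction.pyd (or .so) on sys.path and then import testFunction.

static PyObject* testFunction(PyObject* self, PyObject* args); // your conversion entry point

static PyMethodDef testFunctionMethods[] = {
    {"testFunction", testFunction, METH_VARARGS, "Convert and return a numpy array."},
    {NULL, NULL, 0, NULL} // sentinel
};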
Now you can use the numpy array functions that are provided.
The one that you will want for this is, as the OP said a few years back, PyArray_SimpleNewFromData; it's stupidly simple to use. All you need is an array of type npy_intp holding the shape of the array to be created. Make sure it matches your vector, using testVector.size() (and, for multiple dimensions, testVector[0].size(), testVector[0][0].size(), etc.). A single std::vector's storage is guaranteed to be contiguous (except for std::vector<bool>), but note that a vector of vectors is not one contiguous block, so a 2D vector must be flattened first.
// create a flat vector of width*height elements initialised to 0
// (a vector<vector<uint16_t>> is not contiguous, so flatten first)
std::vector<uint16_t> testVector(width * height, 0);

// create the shape for the numpy array
npy_intp dims[2] = {width, height};

// convert testVector to a numpy array
PyArrayObject* numpyArray = (PyArrayObject*) PyArray_SimpleNewFromData(
    2, dims, NPY_UINT16, (void*) testVector.data());
To go through the parameters: first, you need to cast the result to a PyArrayObject*; otherwise it will be a PyObject* and, when returned to Python, won't be a numpy array.
The 2 is the number of dimensions in the array.
dims is the shape of the array. It has to be of type npy_intp.
NPY_UINT16 is the data type that the array will have in Python.
You then use testVector.data() to get the data of the array; cast this to either void* or a pointer of the same data type as your vector.
Hope this helps anyone else who may need this.
(Also, if you don't need pure speed, I would advise avoiding the C API; it causes quite a few problems, and Cython or SWIG are still probably your best choices. There is also ctypes, which can be quite helpful.)
I came across your post when trying to do something very similar. I was able to cobble together a solution, the entirety of which is on my Github. It makes two C++ vectors, converts them to Python tuples, passes them to Python, converts them to NumPy arrays, then plots them using Matplotlib.
Much of this code is from the Python Documentation.
Here are some of the important bits from the .cpp file :
// Make some vectors containing the data
static const double xarr[] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14};
std::vector<double> xvec (xarr, xarr + sizeof(xarr) / sizeof(xarr[0]) );
static const double yarr[] = {0,0,1,1,0,0,2,2,0,0,1,1,0,0};
std::vector<double> yvec (yarr, yarr + sizeof(yarr) / sizeof(yarr[0]) );

// Transfer the C++ vector to a python tuple
pXVec = PyTuple_New(xvec.size());
for (i = 0; i < xvec.size(); ++i) {
    pValue = PyFloat_FromDouble(xvec[i]);
    if (!pValue) {
        Py_DECREF(pXVec);
        Py_DECREF(pModule);
        fprintf(stderr, "Cannot convert array value\n");
        return 1;
    }
    PyTuple_SetItem(pXVec, i, pValue);
}

// Transfer the other C++ vector to a python tuple
pYVec = PyTuple_New(yvec.size());
for (i = 0; i < yvec.size(); ++i) {
    pValue = PyFloat_FromDouble(yvec[i]);
    if (!pValue) {
        Py_DECREF(pYVec);
        Py_DECREF(pModule);
        fprintf(stderr, "Cannot convert array value\n");
        return 1;
    }
    PyTuple_SetItem(pYVec, i, pValue);
}

// Set the argument tuple to contain the two input tuples
PyTuple_SetItem(pArgTuple, 0, pXVec);
PyTuple_SetItem(pArgTuple, 1, pYVec);

// Call the python function
pValue = PyObject_CallObject(pFunc, pArgTuple);
And the Python code:
def plotStdVectors(x, y):
    import numpy as np
    import matplotlib.pyplot as plt
    print "Printing from Python in plotStdVectors()"
    print x
    print y
    x = np.fromiter(x, dtype=np.float)
    y = np.fromiter(y, dtype=np.float)
    print x
    print y
    plt.plot(x, y)
    plt.show()
    return 0
Which results in the plot that I can't post here due to my reputation, but is posted on my blog post here.
_import_array(); //this is required for numpy to create an array correctly
Note: In NumPy's extension guide they use import_array() to accomplish the same goal that I used _import_array() for. When I tried using import_array() on a Mac I got an error, so you may need to try both commands and see which one works.
By the way, you can use a C++ std::vector in the call to PyArray_SimpleNewFromData.
If your std::vector is my_vector, replace fArray with &my_vector[0]; &my_vector[0] gives you a pointer to the data stored in my_vector.
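A hedged sketch of the OP's testCreatArray() reworked along those lines (the parameter is my assumption; the vector must outlive the returned array, since PyArray_SimpleNewFromData wraps the buffer without copying):

PyObject * testCreatArray(std::vector<float>& my_vector)
{
    npy_intp m = static_cast<npy_intp>(my_vector.size());
    // wraps the vector's buffer; no copy is made
    return PyArray_SimpleNewFromData(1, &m, NPY_FLOAT, &my_vector[0]);
}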
I'm using the following code to add some noise to an image (straight out of the OpenCV reference, page 449 -- explanation of cv::Mat::begin):
void simulate_noise(Mat const &in, double stddev, Mat &out)
{
    cv::Size s = in.size();
    vector<double> noise = generate_noise(s.width * s.height, stddev);

    typedef cv::Vec<unsigned char, 3> V4;
    cv::MatConstIterator_<V4> in_itr = in.begin<V4>();
    cv::MatConstIterator_<V4> in_end = in.end<V4>();
    cv::MatIterator_<V4> out_itr = out.begin<V4>();
    cv::MatIterator_<V4> out_end = out.end<V4>();

    for (; in_itr != in_end && out_itr != out_end; ++in_itr, ++out_itr)
    {
        int noise_index = my_rand(noise.size());
        for (int j = 0; j < 3; ++j)
            (*out_itr)[j] = (*in_itr)[j] + noise[noise_index];
    }
}
Nothing overly complicated:
in and out are allocated cv::Mat objects of the same dimensions and type
iterate over the input image in
at each position, pick a random value from noise (my_rand(int n) returns a random number in [0..n-1])
sum the pixel from in with the random noise value
put the summation result into out
I don't like this code because the following statement seems unavoidable:
typedef cv::Vec<unsigned char, 3> V4;
It has hard-coded two things:
The images have 3 channels
The channel depth is 8bpp
If I get this typedef wrong (e.g. wrong channel depth or wrong number of channels), then my program segfaults. I originally used typedef cv::Vec<unsigned char, 4> V4 to handle images with an arbitrary number of channels (the max OpenCV supports is 4), but this caused a segfault.
Is there any way I can avoid hard-coding the two things above? Ideally, I want something that's as generic as:
typedef cv::Vec<in.type(), in.size()> V4;
I know this comes late. However, the real solution to your problem is to use OpenCV functionality to do what you want to do.
create noise vector as you do already (or use the functions that OpenCV provides hint!)
shuffle noise vector so you don't need individual noise_index for each pixel; or create vector of randomised noise beforehand
build a matrix header around your shuffled/random vector: cv::Mat_<double>(noise);
use matrix operations for computation: out = in + noise; or cv::add(in, noise, out);
PROFIT!
Another advantage of this method is that OpenCV may employ multithreading, SSE, or whatever else to speed up this per-element operation, which your own loop does not. Your code becomes simpler and cleaner, and OpenCV does all the nasty type handling for you; a sketch follows.
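A minimal sketch of those steps, keeping the question's simulate_noise signature (the widening to 64-bit float before the addition is my own assumption, so that negative noise isn't clipped on 8-bit images):

void simulate_noise(cv::Mat const &in, double stddev, cv::Mat &out)
{
    // noise matrix of the same size, one double per channel
    cv::Mat noise(in.size(), CV_64FC(in.channels()));
    cv::randn(noise, 0.0, stddev);     // Gaussian noise, mean 0, sigma stddev

    cv::Mat in64;
    in.convertTo(in64, noise.type());  // widen so the sum isn't clipped early
    cv::Mat sum = in64 + noise;
    sum.convertTo(out, in.type());     // saturate back to the input depth
}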
The problem is that you need to determine the type and number of channels at runtime, but templates need that information at compile time. You can avoid hardcoding the number of channels by either using cv::split and cv::merge, or by changing the iteration to
for (int row = 0; row < in.rows; ++row) {
    const unsigned char* inp = in.ptr<unsigned char>(row);
    unsigned char* outp = out.ptr<unsigned char>(row);
    for (int col = 0; col < in.cols; ++col) {
        for (int c = 0; c < in.channels(); ++c) {
            *outp++ = *inp++ + noise();
        }
    }
}
If you want to get rid of the dependence on the type, I'd suggest putting the above in a templated function and calling that from your function, depending on the type of the matrix.
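A sketch of that dispatch (add_noise_impl and add_noise are hypothetical names; the templated body would be the loop above with T in place of unsigned char):

template <typename T>
void add_noise_impl(cv::Mat const &in, double stddev, cv::Mat &out);

void add_noise(cv::Mat const &in, double stddev, cv::Mat &out)
{
    switch (in.depth()) {
        case CV_8U:  add_noise_impl<unsigned char>(in, stddev, out);  break;
        case CV_16U: add_noise_impl<unsigned short>(in, stddev, out); break;
        case CV_32F: add_noise_impl<float>(in, stddev, out);          break;
        case CV_64F: add_noise_impl<double>(in, stddev, out);         break;
        default: break; // handle unsupported depths as you see fit
    }
}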
They are hardcoded because performance is better that way.
In OpenCV 1.x there is cvGet2D(), which can be used here since a Mat can be cast to an IplImage.
But it's slow, since each time you access a pixel the function has to work out the type, size, etc.; that is especially inefficient in loops.