Adding a custom sparse op (Sparse Determinant) - C++

I am working on trying to get some sparse matrix operations working in Tensorflow. The first one I am tackling is a sparse determinant, via a sparse Cholesky decomposition. Eigen has a sparse Cholesky, so my thought is to wrap that.
I have been making some progress, but am now a little bit stuck. I know that SparseTensors in Tensorflow are made up of three parts: indices, values, and shape. Copying similar ops, I went for the following REGISTER_OP declaration:
REGISTER_OP("SparseLogDet")
.Input("a_indices: int64")
.Input("a_values: float32")
.Input("a_shape: int64")
.Output("determinant: float32")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
shape_inference::ShapeHandle h;
c->set_output(0, h);
return Status::OK();
});
This compiles fine, but when I run it using some example code:
import tensorflow as tf
log_det_op = tf.load_op_library('./sparse_log_det_op.so')
with tf.Session(''):
    t = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2],
                        dense_shape=[3, 4])
    print(log_det_op.sparse_log_det(t).eval().shape)
    print(log_det_op.sparse_log_det(t).eval())
It complains, saying:
TypeError: sparse_log_det() missing 2 required positional arguments: 'a_values' and 'a_shape'
This makes sense to me, since it's expecting the other arguments. However, I would really just like to pass the sparse tensor, not break it up into components! Does anyone know how this is handled for other sparse operations?
Thanks!

If you want to pass in the sparse tensor and then determine indices, values and shape from it, that should be possible. Just modify your op to take a single Tensor input and produce a single float output, then extract the desired information from the Eigen::Tensor by looping over its elements, as seen below:
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <Eigen/Dense>
using namespace tensorflow;
REGISTER_OP("SparseDeterminant")
.Input("sparse_tensor: float")
.Output("sparse_determinant: float");
class SparseDeterminantOp : public OpKernel {
public:
explicit SparseDeterminantOp(OpKernelConstruction *context) : OpKernel(context) {}
void Compute(OpKernelContext *context) override {
// get the input tesnorflow tensor
const Tensor& sparse_tensor = context->input(0);
// get shape of input
const TensorShape& sparse_shape = sparse_tensor.shape();
// get Eigen Tensor for input tensor
auto eigen_sparse = sparse_tensor.matrix<float>();
//extract the data you want from the sparse tensor input
auto a_shape = sparse_tensor.shape();
// loop over all elements of the input tensor and add to values and indices
for (int i=0; i<a_shape.dim_size(0); ++i){
for (int j=0; j<a_shape.dim_size(1); ++j){
if(eigen_sparse(i,j) != 0){
/// ***Here add non zero elements to list/tensor of values and their indicies***
std::cout<<eigen_sparse(i,j)<<" at"<<" "<<i<<" "<<j<<" "<<"not zero."<<std::endl;
}
}
}
// create output tensor
Tensor *output_tensor = NULL;
TensorShape output_shape;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &output_tensor));
auto output = output_tensor->scalar<float>();
output(0) = 1.; //**asign return value***;
}
};
REGISTER_KERNEL_BUILDER(Name("SparseDeterminant").Device(DEVICE_CPU), SparseDeterminantOp);
Sadly, when you pass t into your op it becomes a tensorflow::Tensor and loses the indices and values methods associated with tf.SparseTensor, so you can't get at them easily.
Once compiled, this code can be run with:
# run.py
import tensorflow as tf
import numpy as np

my_module = tf.load_op_library('./out.so')

# create a sparse matrix
a = np.zeros((10, 10))
for i in range(len(a)):
    a[i, i] = i
print(a)

a_t = tf.convert_to_tensor(a, dtype=float)

with tf.Session() as sess:
    sess.run(my_module.sparse_determinant(a_t))
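As for the two *** placeholders above: one option (a minimal sketch, not the op's required implementation) is to push the nonzeros into Eigen triplets and hand them to Eigen's sparse Cholesky, assuming the matrix is symmetric positive definite; sparse_log_det below is an illustrative helper, not an existing TensorFlow or Eigen function:
#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>
#include <limits>
#include <vector>

// Sketch: compute log|det(A)| from collected (row, col, value) triplets.
// Assumes A is symmetric positive definite, as required for Cholesky.
double sparse_log_det(const std::vector<Eigen::Triplet<double>>& triplets, int n) {
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(triplets.begin(), triplets.end());
    // LDL^T factorization: log(det(A)) = sum_i log(D_ii)
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> ldlt(A);
    if (ldlt.info() != Eigen::Success) {
        // factorization failed (e.g. matrix not SPD, or structurally singular)
        return std::numeric_limits<double>::quiet_NaN();
    }
    return ldlt.vectorD().array().log().sum();
}
With SimplicialLDLT the log-determinant falls out as the sum of the logs of the diagonal D, which avoids forming the determinant itself and so avoids overflow for large matrices.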

Related

Tensorflow tflite c++ api inference for matrix data array

I am creating a class that will be used to run inference on an embedded device (not a Raspberry Pi) in C++ using TensorFlow's TFLite C++ API. TensorFlow doesn't seem to have decent documentation on how to run inference for n samples of image data. My data shape in Python is (n, 5, 40, 1) [n samples, 5 height, 40 width, 1 channel]. What I cannot figure out is how to input the data and receive the inference per sample in the output. I have two classes, so I should receive n 2-d array outputs. Does anyone know if you can pass in any data type, such as an Eigen type? I am testing with an input of shape (1, 5, 2, 1) to simplify my test.
#include "classifier.h"
#include <iostream>
using namespace tflite;
Classifier::Classifier(std::string modelPath) {
tflite::StderrReporter error_reporter;
model = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str(), &error_reporter);
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter); // private class variable interpreter
std::vector<int> sizes = {1, 5, 2, 1};
interpreter->ResizeInputTensor(0, sizes);
interpreter->AllocateTensors();
}
std::vector<std::vector<float> Classifier::getDataSamples() {
std::vector<std::vector<float> test = {{0.02, 0.02}, {0.02, 0.02}, {0.02, 0.02},{0.02, 0.02},{0.02, 0.02},};
return test;
}
float Classifier::predict() {
std::vector<float> signatures = getDataSamples();
for (int i = 0; i < signatures.size(); ++i) {
interpreter->typed_input_tensor<float>(0)[i];
}
// float* input = interpreter->typed_input_tensor<float>(0);
// *input = 1.0;
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
return *output;
}
From the TensorFlow documentation, we can find the details below.
It should be noted that:
Tensors are represented by integers, in order to avoid string comparisons (and any fixed dependency on string libraries).
An interpreter must not be accessed from concurrent threads.
Memory allocation for input and output tensors must be triggered by calling AllocateTensors() right after resizing tensors.
You can find more about how to load and run a model in C++ here.
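For the batching question itself, here is a minimal sketch of what the input/output plumbing might look like. The assumptions (not confirmed by your model): input tensor 0 has shape {n, 5, 40, 1}, the output is a flat {n, 2} score tensor, and the samples are already flattened row-major into one float buffer. runBatch is an illustrative helper, not a TFLite API:
#include <cstring>
#include <vector>
#include "tensorflow/lite/interpreter.h" // header path may differ by TF version

std::vector<std::vector<float>> runBatch(tflite::Interpreter* interpreter,
                                         const std::vector<float>& samples, int n) {
    const int kSampleSize = 5 * 40 * 1;
    // resize to the batch size, then (re)allocate before touching the buffers
    interpreter->ResizeInputTensor(0, {n, 5, 40, 1});
    interpreter->AllocateTensors();
    // the input tensor is one flat, row-major float buffer
    float* input = interpreter->typed_input_tensor<float>(0);
    std::memcpy(input, samples.data(), sizeof(float) * kSampleSize * n);
    interpreter->Invoke();
    // the output is also flat: n consecutive pairs of class scores
    const float* output = interpreter->typed_output_tensor<float>(0);
    std::vector<std::vector<float>> scores(n);
    for (int i = 0; i < n; ++i)
        scores[i] = {output[2 * i], output[2 * i + 1]};
    return scores;
}
The key point is that typed_input_tensor and typed_output_tensor expose flat buffers, so per-sample results are recovered by striding through the output by the per-sample output size.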

How to write Multiplicative Update Rules for Matrix Factorization when one doesn't have access to the whole matrix?

So we want to approximate the matrix A, with m rows and n columns, by the product of two matrices P and Q that have dimensions m×k and k×n respectively. Here is an implementation of the multiplicative update rule due to Lee in C++ using the Eigen library.
void multiplicative_update()
{
    Q = Q.cwiseProduct((P.transpose()*matrix).cwiseQuotient(P.transpose()*P*Q));
    P = P.cwiseProduct((matrix*Q.transpose()).cwiseQuotient(P*Q*Q.transpose()));
}
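In element-wise form, these are the updates the Eigen expressions above vectorize:

$$Q_{kj} \leftarrow Q_{kj}\,\frac{(P^{T}A)_{kj}}{(P^{T}PQ)_{kj}}, \qquad P_{ik} \leftarrow P_{ik}\,\frac{(AQ^{T})_{ik}}{(PQQ^{T})_{ik}}$$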
where P, Q, and the matrix (matrix = A) are global variables in the class mat_fac. Thus I train them using the following method,
void train_2() {
    double error_trial = 0;
    for (int count = 0; count < num_iterations; count++)
    {
        multiplicative_update();
        error_trial = (matrix - P*Q).squaredNorm();
        if (error_trial < 0.001)
        {
            break;
        }
    }
}
where num_iterations is also a global variable in the class mat_fac.
The problem is that I am working with very large matrices, and in particular I do not have access to the entire matrix. Given a triple (i, j, matrix[i][j]), I have access to the row vector P[i][:] and the column vector Q[:][j]. So my goal is to rewrite the multiplicative update rule in such a way that I update these two vectors every time I see a non-zero matrix value.
In code, I want to have something like this:
void multiplicative_update(int i, int j, double mat_value)
{
    Eigen::MatrixXd q_vect = get_vector(1, j); // get_vector returns Q[:][j] as a column vector
    Eigen::MatrixXd p_vect = get_vector(0, i); // get_vector returns P[i][:] as a column vector
    // Somehow compute coeff_AQ_t, coeff_PQQ_t, coeff_P_tA and coeff_P_tPQ.
    for (int l = 0; l < k; l++) {
        p_vect(l) = p_vect(l) * coeff_AQ_t / coeff_PQQ_t;
        q_vect(l) = q_vect(l) * coeff_P_tA / coeff_P_tPQ;
    }
}
Thus the problem boils down to computing the required coefficients given the two vectors. Is this a possible thing to do? If not, what more data do I need for the multiplicative update to work in this manner?
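For reference, expanding one of these coefficients shows what data the exact update needs:

$$(P^{T}A)_{kj} = \sum_{i=1}^{m} P_{ik}\,A_{ij}$$

so the exact coefficient for updating Q[:][j] involves the entire column A[:][j] (and, symmetrically, (AQ^{T})_{ik} involves the entire row A[i][:]); a single observed triple can only give a stochastic estimate of these sums.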

Subtensor of a Tensorflow tensor (C++)

I have a tensorflow::Tensor batch in C++ with shape [2, 720, 1280, 3] (#images x height x width x #channels).
I want to get another tensor with only the first image, thus I would have a tensor of shape [1, 720, 1280, 3]. In other words, I want:
tensorflow::Tensor first = batch[0]
What's the most efficient way to achieve it?
I know how to do this in Python, but the C++ API and documentation are not as good as Python's.
After spending some time trying to implement this through a copy, I realised that this operation is supported in the API as Slice:
tensorflow::Tensor first = batch.Slice(0, 1);
Note that, as documented, the returned tensor shares the internal buffer with the sliced one, and the alignment of both tensors may be different, if that is relevant to you.
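If alignment matters downstream (some kernels and Eigen maps expect aligned buffers), a hedged pattern is to check and fall back to a copy; tensorflow::tensor::DeepCopy is used here as one way to materialise an owned, aligned tensor, not something Slice requires:
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow/core/framework/tensor_util.h>

// Slice shares the buffer with `batch`; copy only when alignment is required.
tensorflow::Tensor first = batch.Slice(0, 1);
if (!first.IsAligned()) {
    first = tensorflow::tensor::DeepCopy(first);
}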
EDIT:
Since I had already done it, here is my attempt at reproducing the same functionality, copy-based. I think it should work (it is pretty similar to what I use in another context).
#include <cassert>
#include <cstring>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow/core/framework/tensor_shape.h>

template <typename T>
tensorflow::Tensor get_element(const tensorflow::Tensor& data, unsigned int index, bool keepDim = true)
{
    using namespace tensorflow;

    // check that the requested element type matches the tensor's dtype
    auto dtype = DataTypeToEnum<T>::v();
    assert(dtype == data.dtype());

    // build the element shape: the first dimension is dropped (or kept as 1)
    auto dataShape = data.shape();
    TensorShape elementShape;
    if (keepDim)
    {
        elementShape.AddDim(1);
    }
    for (int iDim = 1; iDim < dataShape.dims(); iDim++) {
        elementShape.AddDim(dataShape.dim_size(iDim));
    }

    // allocate the output tensor and copy the index-th slice into it
    Tensor element(dtype, elementShape);
    auto elementCount = elementShape.num_elements();
    std::memcpy(element.flat<T>().data(),
                data.flat<T>().data() + elementCount * index,
                elementCount * DataTypeSize(dtype));
    return element;
}

int main()
{
    tensorflow::Tensor batch = ...;
    tensorflow::Tensor first = get_element<float>(batch, 0);
    return 0;
}
The code can also be changed if you just want to extract the data to, for example, a vector or something else.
This works fine:
#include "tensorflow/core/framework/tensor_slice.h"
Tensor t2 = t1.Slice(0,1);

Fast way to slice an Eigen SparseMatrix

In finite element analyses it is quite common to apply some prescribed condition(s) to a big sparse matrix and get a reduced one. This can be achieved easily in MATLAB, SciPy and Julia; for instance, in MATLAB:
a=sprand(10000,10000,0.2); % create a random sparse matrix; 20% fill
tic; c=a(1:2:4000,2:3:5000); toc % slice the matrix to get a reduced one
Assuming that one has a similar sparse matrix in Eigen, what is the most efficient way to slice an Eigen matrix? I don't care about getting a copy or a view, but the methodology needs to be able to cope with non-contiguous slicing. The latter requirement makes Eigen's block operations useless in this regard.
I can think of two methodologies that I have tested:
Iterate over the columns and rows using for loops and assign the values to a second sparse matrix (I know this is a truly bad idea).
Create a dummy sparse matrix of zeros and ones and pre- and post-multiply the actual matrix with it: D*A*D.transpose()
I always use setFromTriplets to create sparse matrices in Eigen, and I have been happy with the solvers and with assembling sparse matrices. However, it seems that this slicing is the bottleneck in my code at the moment.
The timing of MATLAB vs Eigen (using -O3 -DNDEBUG -march=native) is:
MATLAB: 0.016 secs
EIGEN LOOP INDEXING: 193 secs
EIGEN PRE-POST MUL: 13.7 secs
The other methodology, which I do not know how to go about, is to manipulate the [I,J,V] triplet storage (outerIndexPtr, innerIndexPtr, valuePtr) directly.
Here is a proof of concept code snippet:
#include <Eigen/Core>
#include <Eigen/Sparse>
#include <random>
#include <vector>

template<typename T>
using spmatrix = Eigen::SparseMatrix<T,Eigen::RowMajor>;

spmatrix<double> sprand(int rows, int cols, double sparsity) {
    std::default_random_engine gen;
    std::uniform_real_distribution<double> dist(0.0,1.0);
    int sparsity_ = sparsity*100;
    typedef Eigen::Triplet<double> T;
    std::vector<T> tripletList;
    tripletList.reserve(rows*cols);
    int counter = 0;
    for (int i=0; i<rows; ++i) {
        for (int j=0; j<cols; ++j) {
            if (counter % sparsity_ == 0) {
                auto v_ij = dist(gen);
                tripletList.push_back(T(i,j,v_ij));
            }
            counter++;
        }
    }
    spmatrix<double> mat(rows,cols);
    mat.setFromTriplets(tripletList.begin(), tripletList.end());
    return mat;
}

int main() {
    int m=1000, n=10000;
    auto a = sprand(n,n,0.05);
    auto b = sprand(m,n,0.1);
    spmatrix<double> c;
    // this is efficient but definitely not the right way to do this
    // c = b*a*b.transpose(); // uncomment to check, much slower than block operation
    c = a.block(0,0,1000,1000); // very fast, faster than MATLAB (I believe this is just a view)
    return 0;
}
So any pointers in this direction would be useful.
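For what it's worth, here is a sketch (not benchmarked here) of one way to go about the triplet idea above: map the kept rows and columns to new indices, walk the nonzeros of the selected rows once with InnerIterator, and rebuild the submatrix with setFromTriplets. The function name and signature are illustrative:
#include <Eigen/Sparse>
#include <vector>

template<typename T>
Eigen::SparseMatrix<T, Eigen::RowMajor>
sparse_slice(const Eigen::SparseMatrix<T, Eigen::RowMajor>& A,
             const std::vector<int>& rows, const std::vector<int>& cols) {
    // -1 marks "column not selected"; otherwise the new column index
    std::vector<int> colMap(A.cols(), -1);
    for (size_t j = 0; j < cols.size(); ++j) colMap[cols[j]] = (int)j;

    std::vector<Eigen::Triplet<T>> triplets;
    for (size_t i = 0; i < rows.size(); ++i) {
        // visit only the nonzeros of the selected row
        for (typename Eigen::SparseMatrix<T, Eigen::RowMajor>::InnerIterator
                 it(A, rows[i]); it; ++it) {
            if (colMap[it.col()] >= 0)
                triplets.emplace_back((int)i, colMap[it.col()], it.value());
        }
    }
    Eigen::SparseMatrix<T, Eigen::RowMajor> out(rows.size(), cols.size());
    out.setFromTriplets(triplets.begin(), triplets.end());
    return out;
}
Because this visits only the nonzeros of the selected rows, the cost scales with nnz rather than with the dense dimensions, which is what the loop-indexing and D*A*D.transpose() approaches give up.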

Passing a C++ std::Vector to numpy array in Python

I am trying to pass a vector of doubles that I generate in my C++ code to a Python numpy array. I am looking to do some downstream processing in Python and want to use some Python facilities once I populate the numpy array. One of the biggest things I want to do is to be able to plot things, and C++ is a bit clumsy when it comes to that. Also, I want to be able to leverage Python's statistical power.
Though I am not very clear as to how to do it. I spent a lot of time going through the Python C API documentation. I came across a function PyArray_SimpleNewFromData that apparently can do the trick. I am still very unclear as far as the overall setup of the code is concerned. I am building certain very simple test cases to help me understand this process. I generated the following code as a standalone Empty project in Visual Studio Express 2012. I call this file Project1:
#include <Python.h>
#include "C:/Python27/Lib/site-packages/numpy/core/include/numpy/arrayobject.h"

PyObject * testCreatArray()
{
    float fArray[5] = {0,1,2,3,4};
    npy_intp m = 5;
    PyObject * c = PyArray_SimpleNewFromData(1,&m,PyArray_FLOAT,fArray);
    return c;
}
My goal is to be able to read the PyObject in Python. I am stuck because I don't know how to reference this module from Python. In particular, how do I import this project from Python? I tried to do import Project1 from the project path in Python, but that failed. Once I understand this base case, my goal is to figure out a way to pass the vector container that I compute in my main function to Python. I am not sure how to do that either.
If any experts can help me with this, or post a simple, self-contained example of code that reads in and populates a numpy array from a simple C++ vector, I will be grateful. Many thanks in advance.
I'm not a cpp-hero, but wanted to provide my solution with two template functions for 1D and 2D vectors. This makes usage a one-liner later, and by templating on 1D and 2D vectors, the compiler can pick the correct version for your vector's shape. It throws a string in the case of an irregular shape in 2D. The routine copies the data here, but one can easily modify it to take the address of the first element of the input vector instead, in order to make it just a "representation".
Usage looks like this:
// Random data
vector<float> some_vector_1D(3,1.f); // 3 entries set to 1
vector< vector<float> > some_vector_2D(3,vector<float>(3,1.f)); // 3 subvectors with 1
// Convert vectors to numpy arrays
PyObject* np_vec_1D = (PyObject*) vector_to_nparray(some_vector_1D);
PyObject* np_vec_2D = (PyObject*) vector_to_nparray(some_vector_2D);
You may also change the type of the numpy array via the optional argument. The template functions are:
/** Convert a C++ 2D vector into a numpy array
 *
 * @param const vector< vector<T> >& vec : 2D vector data
 * @return PyArrayObject* array : converted numpy array
 *
 * Transforms an arbitrary 2D C++ vector into a numpy array. Throws in case of
 * an irregular shape. The array may contain empty columns or something else, as
 * long as its shape is rectangular.
 *
 * Warning: this routine makes a copy of the memory!
 */
template<typename T>
static PyArrayObject* vector_to_nparray(const vector< vector<T> >& vec, int type_num = PyArray_FLOAT){

    // rows not empty
    if( !vec.empty() ){

        // columns not empty
        if( !vec[0].empty() ){

            size_t nRows = vec.size();
            size_t nCols = vec[0].size();
            npy_intp dims[2] = {(npy_intp) nRows, (npy_intp) nCols};
            PyArrayObject* vec_array = (PyArrayObject *) PyArray_SimpleNew(2, dims, type_num);

            T *vec_array_pointer = (T*) PyArray_DATA(vec_array);

            // copy the vector row by row ... maybe this could be done in one shot
            for (size_t iRow=0; iRow < vec.size(); ++iRow){
                if( vec[iRow].size() != nCols){
                    Py_DECREF(vec_array); // delete
                    throw(string("Can not convert vector<vector<T>> to np.array, since c++ matrix shape is not uniform."));
                }
                copy(vec[iRow].begin(), vec[iRow].end(), vec_array_pointer + iRow*nCols);
            }
            return vec_array;

        // empty columns
        } else {
            npy_intp dims[2] = {(npy_intp) vec.size(), 0};
            return (PyArrayObject*) PyArray_ZEROS(2, dims, PyArray_FLOAT, 0);
        }

    // no data at all
    } else {
        npy_intp dims[2] = {0, 0};
        return (PyArrayObject*) PyArray_ZEROS(2, dims, PyArray_FLOAT, 0);
    }
}
/** Convert a C++ vector into a numpy array
 *
 * @param const vector<T>& vec : 1D vector data
 * @return PyArrayObject* array : converted numpy array
 *
 * Transforms an arbitrary C++ vector into a numpy array.
 *
 * Warning: this routine makes a copy of the memory!
 */
template<typename T>
static PyArrayObject* vector_to_nparray(const vector<T>& vec, int type_num = PyArray_FLOAT){

    // vector not empty
    if( !vec.empty() ){
        size_t nRows = vec.size();
        npy_intp dims[1] = {(npy_intp) nRows};

        PyArrayObject* vec_array = (PyArrayObject *) PyArray_SimpleNew(1, dims, type_num);
        T *vec_array_pointer = (T*) PyArray_DATA(vec_array);

        copy(vec.begin(), vec.end(), vec_array_pointer);
        return vec_array;

    // no data at all
    } else {
        npy_intp dims[1] = {0};
        return (PyArrayObject*) PyArray_ZEROS(1, dims, PyArray_FLOAT, 0);
    }
}
Since there is no answer here that is actually helpful for people who might be looking for this sort of thing, I figured I'd put up an easy solution.
First you will need to create a Python extension module in C++; this is easy enough to do and is all in the Python C API documentation, so I'm not going to go into that.
Now, converting a C++ std::vector to a numpy array is extremely simple. You first need to include the numpy array header
#include <numpy/arrayobject.h>
and in your initialising function you need to call import_array()
PyMODINIT_FUNC
inittestFunction(void){
    (void) Py_InitModule("testFunction", testFunctionMethods);
    import_array();
}
Now you can use the numpy array functions that are provided.
The one that you will want for this is, as the OP said a few years back, PyArray_SimpleNewFromData; it's stupidly simple to use. All you need is an array of type npy_intp holding the shape of the array to be created. Make sure it matches your vector, using testVector.size() (and for multiple dimensions testVector[0].size(), testVector[0][0].size(), etc.). One caveat: a single vector's buffer is guaranteed to be contiguous in C++11 (unless it's a bool), but a vector of vectors is not, since each inner vector owns its own buffer, so flatten multi-dimensional data into one vector first.
//create a flat testVector with data initialised to 0
//(a vector<vector<uint16_t>> would NOT be contiguous, so flatten it first)
std::vector<uint16_t> testVector(width * height, 0);
//create the shape for the numpy array
npy_intp dims[2] = {width, height};
//convert testVector to a numpy array
PyArrayObject* numpyArray = (PyArrayObject*)PyArray_SimpleNewFromData(
    2, dims, NPY_UINT16, (void*)testVector.data());
To go through the parameters: first you need to cast the result to a PyArrayObject*, otherwise it will be a PyObject* and, when returned to Python, won't be a numpy array.
The 2 is the number of dimensions in the array.
dims is the shape of the array. This has to be of type npy_intp.
NPY_UINT16 is the data type that the array will have in Python.
You then use testVector.data() to get the data of the array, and cast this to either void* or a pointer of the same data type as your vector.
Hope this helps anyone else who may need this.
(Also, if you don't need pure speed I would advise avoiding the C API; it causes quite a few problems, and Cython or SWIG are probably still your best choices. There is also ctypes, which can be quite helpful.)
I came across your post when trying to do something very similar. I was able to cobble together a solution, the entirety of which is on my GitHub. It makes two C++ vectors, converts them to Python tuples, passes them to Python, converts them to NumPy arrays, then plots them using Matplotlib.
Much of this code is from the Python Documentation.
Here are some of the important bits from the .cpp file :
//Make some vectors containing the data
static const double xarr[] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14};
std::vector<double> xvec (xarr, xarr + sizeof(xarr) / sizeof(xarr[0]) );
static const double yarr[] = {0,0,1,1,0,0,2,2,0,0,1,1,0,0};
std::vector<double> yvec (yarr, yarr + sizeof(yarr) / sizeof(yarr[0]) );
//Transfer the C++ vector to a python tuple
pXVec = PyTuple_New(xvec.size());
for (i = 0; i < xvec.size(); ++i) {
pValue = PyFloat_FromDouble(xvec[i]);
if (!pValue) {
Py_DECREF(pXVec);
Py_DECREF(pModule);
fprintf(stderr, "Cannot convert array value\n");
return 1;
}
PyTuple_SetItem(pXVec, i, pValue);
}
//Transfer the other C++ vector to a python tuple
pYVec = PyTuple_New(yvec.size());
for (i = 0; i < yvec.size(); ++i) {
pValue = PyFloat_FromDouble(yvec[i]);
if (!pValue) {
Py_DECREF(pYVec);
Py_DECREF(pModule);
fprintf(stderr, "Cannot convert array value\n");
return 1;
}
PyTuple_SetItem(pYVec, i, pValue); //
}
//Set the argument tuple to contain the two input tuples
PyTuple_SetItem(pArgTuple, 0, pXVec);
PyTuple_SetItem(pArgTuple, 1, pYVec);
//Call the python function
pValue = PyObject_CallObject(pFunc, pArgTuple);
And the Python code:
def plotStdVectors(x, y):
    import numpy as np
    import matplotlib.pyplot as plt
    print "Printing from Python in plotStdVectors()"
    print x
    print y
    x = np.fromiter(x, dtype=np.float)
    y = np.fromiter(y, dtype=np.float)
    print x
    print y
    plt.plot(x, y)
    plt.show()
    return 0
Which results in the plot that I can't post here due to my reputation, but is posted on my blog post here.
_import_array(); //this is required for numpy to create an array correctly
Note: In NumPy's extension guide they use import_array() to accomplish the same goal that I used _import_array() for. When I tried using import_array() on a Mac, I got an error, so you may need to try both commands and see which one works.
By the way, you can use a C++ std::vector in the call to PyArray_SimpleNewFromData.
If your std::vector is my_vector, replace fArray with &my_vector[0]. &my_vector[0] gives you a pointer to the data stored in my_vector.
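A minimal sketch of that substitution (with one caveat worth stating: an array created with PyArray_SimpleNewFromData does not own its data, so my_vector must outlive the returned array):
#include <vector>

// assumes <numpy/arrayobject.h> is included and import_array() has run
std::vector<float> my_vector = {0, 1, 2, 3, 4};
npy_intp m = (npy_intp) my_vector.size();
// the numpy array wraps my_vector's buffer; it does NOT copy or own it
PyObject* arr = PyArray_SimpleNewFromData(1, &m, NPY_FLOAT, &my_vector[0]);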