What is the TensorFlow C++ equivalent of argmax(axis=-1)?

I am running inference on a frozen .pb graph in TensorFlow C++.
The session->Run call works fine and gives a list of float values as output:
load_graph_status = session->Run(inputs, { output_layer_name }, {}, &outputs);
I have done a similar prediction in Python, where I used
output = outputs.argmax(axis=-1)
I couldn't find the equivalent of this in C++. There is a tensorflow::ops::ArgMax in the TensorFlow C++ documentation, but I couldn't figure out how to use it.

To answer my own question: there is no direct method in the C++ API that does this job on a tensor you already hold.
The workaround is to fetch each output value iteratively and take the index of the maximum of the list.
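For example, here is a minimal sketch of that workaround, assuming the model emits a single output tensor with batch size 1, so argmax over the last axis reduces to a linear scan over the flattened values (for a batched output you would repeat the scan per row):

#include <algorithm>

#include "tensorflow/core/framework/tensor.h"

// Returns the index of the largest value in the tensor's flattened data --
// the equivalent of numpy's argmax(axis=-1) for a batch of one.
int ArgMaxLastAxis(const tensorflow::Tensor& scores) {
  auto flat = scores.flat<float>();  // 1-D view, whatever the original shape
  const float* begin = flat.data();
  const float* end = begin + flat.size();
  return static_cast<int>(std::max_element(begin, end) - begin);
}

// Usage, with the outputs vector filled by session->Run above:
//   int best_class = ArgMaxLastAxis(outputs[0]);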

Related

Understanding tflite examples in documentation

I've been trying to figure out how to use tflite with C++ for a Raspberry Pi project, and I've had trouble understanding the documentation and how to perform inference. I've been confused about the code shown on this page:
https://www.tensorflow.org/lite/api_docs/cc/class/tflite/interpreter
For context, I am inexperienced in C++ and usually work with Python and MATLAB.
Here are the lines of code I'm trying to understand:
auto input = interpreter->typed_tensor<float>(0);
for (int i = 0; i < input_size; i++) {
  input[i] = ...;
}
interpreter->Invoke();
My understanding of these lines is: they set input equal to the result of the interpreter's typed_tensor function, called with 0 as the argument.
Then, they loop over values of i between 0 and input_size. For each i value, they set input[i] equal to some value. They then call interpreter->Invoke(), which has the neural net perform inference, with input[i] as the input?
This is then repeated for each i value, each time calling interpreter->Invoke() with input[i] as the input to the neural net.
Is my understanding of this process correct?
What shape should the input take? For example, if I had a TensorFlow model that takes a 1x100 input and converted it to a tflite model, how would I create the input to feed into the tflite model in C++?
What data type should the input to the tflite model be in?
Thank you,
Simon
I've tried looking at example code of other people's uses of tflite in C++, but I haven't been able to examine the inputs to their models. I would also appreciate help with viewing the input tensors while debugging. I'm using Code::Blocks as an IDE for now.
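Not a full answer, but here is a minimal, untested sketch of one inference pass for the 1x100 float32 case described above (the path model.tflite is a placeholder). Two things worth noting: typed_input_tensor<float>(0) addresses the i-th model input, whereas typed_tensor takes a raw index into the interpreter's full tensor list; and Invoke() runs once per inference, not once per element.

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the converted model ("model.tflite" is a placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  // For a 1x100 float32 input, the input buffer is simply 100 consecutive
  // floats in row-major order.
  float* input = interpreter->typed_input_tensor<float>(0);
  for (int i = 0; i < 100; i++) {
    input[i] = 0.0f;  // your actual feature values go here
  }

  interpreter->Invoke();  // run the whole network once

  float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("first output value: %f\n", output[0]);
  return 0;
}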

Extracting output data from typed_output_tensor in TFLite

Thanks in advance for your support.
I'm trying to get the output of a tensor after inference on a .tflite U-Net neural network. I'm using the TensorFlow Lite image classification code as a baseline.
I need to adapt the code for a segmentation task. My question is how I can access the output of the inferenced model (which is 128x128x1) and write the result into an image.
I already debugged the code and explored many different approaches. Unfortunately, I'm not confident with the C++ language. What I found is that the call interpreter->typed_output_tensor<float>(0) should be what I need, as also referenced here: https://www.tensorflow.org/lite/guide/inference#loading_a_model. However, I cannot access the 128x128 tensor generated by the network.
You can find the code at the address: https://github.com/tensorflow/tensorflow/blob/770481fb3e9126f9a29db5667f528e450d54d719/tensorflow/lite/examples/label_image/label_image.cc
The interesting part is here (lines 217-224):
const float threshold = 0.001f;
std::vector<std::pair<float, int>> top_results;
int output = interpreter->outputs()[0];
TfLiteIntArray* output_dims = interpreter->tensor(output)->dims;
// assume output dims to be something like (1, 1, ... ,size)
auto output_size = output_dims->data[output_dims->size - 1];
I would like to save the values into an image, or to find an alternative way of storing the output tensor.
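As a starting point, here is a rough, untested sketch (assuming output index 0 is the 128x128x1 float mask with values in [0, 1], and that the interpreter was set up as in label_image.cc) of pulling the mask out of typed_output_tensor<float>(0) and writing it as a grayscale image:

#include <cstdio>

#include "tensorflow/lite/interpreter.h"

// The 128x128x1 output tensor is just 128*128 consecutive floats in
// row-major order; scale each one to [0, 255] and write a binary PGM,
// a trivial grayscale format most image viewers can open.
void WriteMaskAsPgm(tflite::Interpreter* interpreter, const char* path) {
  const int width = 128, height = 128;
  const float* mask = interpreter->typed_output_tensor<float>(0);

  std::FILE* f = std::fopen(path, "wb");
  if (!f) return;
  std::fprintf(f, "P5\n%d %d\n255\n", width, height);
  for (int i = 0; i < width * height; ++i) {
    std::fputc(static_cast<unsigned char>(mask[i] * 255.0f), f);
  }
  std::fclose(f);
}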

BitwiseXOR for TensorFlow

My project requires a new layer, which needs a new tensor operator that computes the bitwise XOR between an input x and a constant key k.
E.g. for x = 4 (bit form: 100) and k = 7 (111), bitwiseXOR(x, k) is expected to be 3 (011).
As far as I know, TensorFlow only has a logical XOR operator for the bool type. Luckily, TensorFlow can be extended with a new op. However, although I read the document at https://www.tensorflow.org/extend/adding_an_op and get the basic idea, that is still far from an implementation, maybe because of my lack of C++ knowledge. Any suggestions for implementing the new operator would be helpful. Then I can use that new tensor op to build the new layers.
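For reference, here is a rough, untested sketch of what such an op could look like, following the adding_an_op guide; the op name BitwiseXorConst and its key attribute are invented for this example. It XORs every int64 element of the input with a fixed key:

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

REGISTER_OP("BitwiseXorConst")
    .Attr("key: int")
    .Input("x: int64")
    .Output("y: int64")
    .SetShapeFn([](shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));  // output has the same shape as the input
      return Status::OK();
    });

class BitwiseXorConstOp : public OpKernel {
 public:
  explicit BitwiseXorConstOp(OpKernelConstruction* ctx) : OpKernel(ctx) {
    OP_REQUIRES_OK(ctx, ctx->GetAttr("key", &key_));
  }

  void Compute(OpKernelContext* ctx) override {
    const Tensor& input = ctx->input(0);
    Tensor* output = nullptr;
    OP_REQUIRES_OK(ctx, ctx->allocate_output(0, input.shape(), &output));
    auto in = input.flat<int64>();
    auto out = output->flat<int64>();
    for (int i = 0; i < in.size(); ++i) {
      out(i) = in(i) ^ key_;  // element-wise XOR with the constant key
    }
  }

 private:
  int64 key_;
};

REGISTER_KERNEL_BUILDER(Name("BitwiseXorConst").Device(DEVICE_CPU),
                        BitwiseXorConstOp);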
If you don't want to implement your own C++ op, you can try tf.py_func, which allows you to define a Python function that operates on numpy arrays and is then used as a TensorFlow operation in the graph.
For your problem you can use numpy's bitwise_xor():
import tensorflow as tf
import numpy as np

t1 = tf.constant([2, 4, 6], dtype=tf.int64)
t2 = tf.constant([1, 3, 5], dtype=tf.int64)
t_xor = tf.py_func(np.bitwise_xor, [t1, t2], tf.int64, stateful=False)

with tf.Session() as sess:
    val = sess.run(t_xor)
    print(val)
which prints [3,7,3] as expected.
Please take care of the known limitations of this function (taken from the link above):
N.B. The tf.py_func() operation has the following known limitations:

The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment.

The operation must run in the same address space as the Python program that calls tf.py_func(). If you are using distributed TensorFlow, you must run a tf.train.Server in the same process as the program that calls tf.py_func() and you must pin the created operation to a device in that server (e.g. using with tf.device():).

Custom kernel in e1071

I am currently trying to tune the svm function in the e1071 package for R. My input is genomic data (that is, each attribute takes a value in the set {-1, 0, 1}) and none of the four kernels currently offered in the package is really good for this kind of data; I would like to use the Hamming distance as my kernel instead.
The svm function, it seems, is written in C++. I have downloaded the source via
download.packages(pkgs = "e1071",
destdir = ".",
type = "source")
and found the svm.cpp file containing code for the function and the corresponding kernel portion, where I can potentially add my own custom kernel. Has anyone tried doing this? Is it possible to do this? Once I've finished modifying svm.cpp (provided I figure out how), how do I make the package "see" the modified file?
You can modify one of the existing kernels.
I changed the return statement of the radial kernel to make my changes; you can try that.
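To make that concrete, here is a rough, untested sketch of the kind of change meant. The names follow libsvm's svm.cpp; svm_node arrays there are sparse and terminated by index == -1, so an index missing from one vector counts as an implicit 0, which matters for {-1, 0, 1} data:

// Hamming distance between two sparse libsvm feature vectors: the number
// of attribute positions where the two values differ.
static double hamming_distance(const svm_node* px, const svm_node* py) {
  double dist = 0;
  while (px->index != -1 && py->index != -1) {
    if (px->index == py->index) {
      if (px->value != py->value) dist += 1;  // both stored, values differ
      ++px; ++py;
    } else if (px->index < py->index) {
      if (px->value != 0) dist += 1;  // y is implicitly 0 at this index
      ++px;
    } else {
      if (py->value != 0) dist += 1;  // x is implicitly 0 at this index
      ++py;
    }
  }
  for (; px->index != -1; ++px) if (px->value != 0) dist += 1;
  for (; py->index != -1; ++py) if (py->value != 0) dist += 1;
  return dist;
}

// Then, inside Kernel::kernel_rbf (selected by kernel = "radial" in R),
// replace the return statement with something like:
//   return exp(-param.gamma * hamming_distance(x[i], x[j]));

Once svm.cpp is modified, rebuild and reinstall the package so R picks up the change, e.g. by running R CMD INSTALL on the unpacked source directory.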

Embed a stateful Python script into C++ program using boost::python

I have a C++ program that keeps generating data, and a Python class that processes these data. I want to use this Python class so that each time a data point is generated, I can process it with the Python script. But this Python script must be "stateful", i.e. it should be able to remember what it did before this data point.
One super basic example: my C++ program just generates numbers, and my Python class calculates the cumulative sum of the numbers generated.
Python:
class CumSum:
    def __init__(self):
        self._cumsum = 0

    def addone(self, x):
        self._cumsum += x
        print self._cumsum
C++:
[Somehow construct a CumSum instance, say c]
for (int i = 0; i < 100000; i++) {
    int x = rand() % 1000;
    [Call c.addone(x)]
}
I heard boost::python is a good way to handle this. Can anyone sketch out how to do it? I tried to read the Boost documentation, but it was too huge for me to digest.
I appreciate your help.
For basic information about how to execute your Python script, see:
http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/embedding.html
For details on manipulating Python objects in C++, see:
http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/object.html
Much of Boost.Python is concerned with exporting your C++ classes to Python, but you aren't doing that, so you can ignore it.
You may be better off using a simpler wrapper like SCXX:
http://davidf.sjsoft.com/mirrors/mcmillan-inc/scxx.html
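To sketch the embedding loop itself, here is a minimal, untested example. It assumes the CumSum class above lives in a file cumsum.py somewhere on sys.path, and that the program is linked against the Boost.Python and Python libraries:

#include <cstdlib>

#include <boost/python.hpp>

namespace py = boost::python;

int main() {
  Py_Initialize();
  try {
    // Import the module containing CumSum ("cumsum" is a placeholder name).
    py::object module = py::import("cumsum");
    // Construct one CumSum instance; it keeps its state across calls.
    py::object c = module.attr("CumSum")();
    for (int i = 0; i < 100000; i++) {
      int x = std::rand() % 1000;
      c.attr("addone")(x);  // calls c.addone(x) on the same instance
    }
  } catch (const py::error_already_set&) {
    PyErr_Print();  // surface any Python exception to stderr
  }
  return 0;
}

Because the same py::object is used on every iteration, the Python side accumulates state across calls, which is exactly the "stateful" behavior asked for.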