Can I mock tensorflow import while testing? - unit-testing

Importing TensorFlow can take quite a while. Is there a way to mock or ignore TensorFlow while unit-testing functions in the same file that do not depend on it?
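A minimal sketch of one way to do this, assuming the code under test lives in a hypothetical mymodule.py that does "import tensorflow" at the top: register a stub under the "tensorflow" key in sys.modules before importing the module, so the real (slow) import never runs.

import sys
import unittest
from unittest import mock

class NonTensorFlowTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Put a stub in sys.modules *before* the module under test is imported,
        # so its "import tensorflow" resolves to the mock instead of the real package.
        cls.tf_stub = mock.MagicMock()
        sys.modules["tensorflow"] = cls.tf_stub
        import mymodule  # hypothetical module containing the functions under test
        cls.mymodule = mymodule

    def test_helper_that_does_not_touch_tensorflow(self):
        # add() is a hypothetical pure-Python function in mymodule.
        self.assertEqual(self.mymodule.add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()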

Related

What is Google's recommended method to import sqlite3 (C++) into a Bazel project?

How do I import sqlite3 (C++) into a Bazel project? Since sqlite3 does not support Bazel officially, I have found many third-party sqlite3 encapsulations with Bazel build files, and they worked. How can I import sqlite3, or any other single-source-file project, elegantly? Are there any Google-recommended docs?
Google internally holds everything in one repository. You don't want to do this, because you don't want to maintain every external dependency on your own.
If you want to see how to use Bazel with C++, I recommend the Envoy project; because it is open source, the idioms it uses are more applicable to general usage than Google's internal approach.
I would use an existing rule definition like rockwotj/sqlite-bazel.

Is there a C++ wrapper for the Tensorflow Object Detection API?

We have trained our models and tested them successfully using the provided Python scripts. However, we now want to deploy them on our website and run a web service for a second round of tests.
Is there a C++ wrapper that we can use to run our models the same way we do with the Python scripts?
I think the easiest way is to use cppflow. It is a C++ wrapper for the TensorFlow C API. It is simple and really easy to use, and you do not need to install it or compile it with Bazel. You just have to download the C API and use it like this:
Model model("graph.pb");
model.restore("path/to/checkpoint");
auto input = new Tensor(model, "input");
auto output = new Tensor(model, "output");
model.run(input, output);
You'll find code to run object detection in C++ here. You'll need an exported graph (.pb format), which you can get using the TF Object Detection API.
The compilation used to be tricky (unless you put your project inside the tensorflow directory and compile everything with Bazel, which you might not want to do). I think it's supposed to be easier now, but I don't know how; alternatively, you can follow these instructions to compile TensorFlow on its own and use it in a CMake project. There is another example of running a graph in C++ here.

How to use OpenCV test framework to run specific tests?

OpenCV provides unit tests for its important functions. To do that, it provides a unit test framework built on top of gtest. However, the documentation on this test framework is quite limited, since it is not meant to be used outside OpenCV. I managed to use the OpenCV test framework based on this question, but because of the lack of documentation I cannot make it run only the specific test functions I wrote. Any ideas?
I finally found the solution: use the --gtest_filter= option on the test program's command line, e.g. --gtest_filter=SuiteName.TestName (wildcard patterns with * are supported). This works because the OpenCV unit test framework is an extended version of GTest.

Making predictions with a TensorFlow-trained model from an existing C++ program (not Bazel)

I have a big C++ program built with Automake, and it would be a huge hassle (practically impossible given my time constraints) to convert it to the Bazel build system. Is there any way I can use a TensorFlow-trained model (a deep convolutional net) within my program to make predictions? (I don't need to do learning within the program right now, but it would be really cool if that were also possible.) Essentially, can I use TensorFlow as a library?
Thanks!
TensorFlow has a fairly narrow C API, exported in the c_api.h header file. The library file can be produced by building the libtensorflow.so target. The C API has everything you need to load a pre-trained model and run inference on it to make predictions. You don't need to convert your build system to Bazel; all you need is to use Bazel once to build that target (bazel build //tensorflow:libtensorflow.so) and copy libtensorflow.so and c_api.h to wherever you see fit.

Is there a standard way to test scripts/executables in Jenkins?

We have a project that contains a library of Python and Scala packages, as well as Bourne, Python and Perl executable scripts. Although the library has good test coverage, we don't have any tests on the scripts.
The current testing environment uses Jenkins, Python, virtualenv, nose, Scala, and sbt.
Is there a standard/common way to incorporate testing of scripts in Jenkins?
Edit: I'm hoping for something simple like Python's unittest for shell scripts, like this:
assertEquals expected.txt commandline
assertError commandline --bogus
assertStatus 11 commandline baddata.in
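For illustration, a minimal sketch of how the three assertions above could be written with plain Python unittest and subprocess (keeping the same placeholder names, with "./commandline" standing in for the script under test):

import subprocess
import unittest

class ScriptTests(unittest.TestCase):
    def run_script(self, *args):
        # "./commandline" stands in for the script under test.
        return subprocess.run(["./commandline", *args],
                              capture_output=True, text=True)

    def test_output_matches_expected(self):
        result = self.run_script()
        with open("expected.txt") as f:
            self.assertEqual(result.stdout, f.read())

    def test_bogus_flag_reports_an_error(self):
        result = self.run_script("--bogus")
        self.assertNotEqual(result.returncode, 0)

    def test_bad_data_exits_with_status_11(self):
        result = self.run_script("baddata.in")
        self.assertEqual(result.returncode, 11)

if __name__ == "__main__":
    unittest.main()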
Have you looked at shunit2? https://github.com/kward/shunit2
It allows you to write testable shell scripts in Bourne, bash or ksh.
I'm not sure how you can integrate it into what you're describing, but it generates output similar to other unit test suites.
I do not know how 'standard' this is, but if you truly practice TDD, your scripts should also be developed with TDD. How you then connect your TDD tests to Jenkins depends on the framework you are using: you can generate JUnit reports, for example, which Jenkins can read, or your tests can simply return a failing exit status, etc.
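As a sketch of the JUnit-report route, assuming the third-party unittest-xml-reporting (xmlrunner) package is installed (nose users can get the same effect with nosetests --with-xunit):

import unittest
import xmlrunner  # provided by the unittest-xml-reporting package

class ExampleTest(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

if __name__ == "__main__":
    # Writes JUnit-style XML into ./test-reports, which the Jenkins
    # "Publish JUnit test result report" post-build step can consume.
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output="test-reports"))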
If your script requires another project, then my inclination would be to create a new Jenkins project, say 'system-qa'.
This would be a downstream project of the Python project, and would depend on both the Python project and the in-house project.
If you were using dependency resolution/publishing technology, say Apache Ivy (http://ant.apache.org/ivy/), and if these existing projects published a packaged version of their code (as simple as a .tar.gz, perhaps), then the system-qa project could declare dependencies on both the Python package and the in-house project package (again, using Ivy), download them, extract/install them, run the tests, and exit.
So in summary, the system-qa project's build script is responsible for retrieving the dependencies, running tests against them, and then perhaps publishing a standardized test output format like JUnit XML (but at a minimum returning 0 or non-0 so Jenkins knows how the build went).
I think this is a technically correct solution, but it is also a lot of work; it's a judgement call whether it's worth it.