Mask R-CNN OpenVINO - C++ API

I would like to implement a custom image classifier using Mask R-CNN.
To increase the speed of the network, I would like to optimize the inference.
I have already used the OpenCV DNN library, but I would like to take a step forward with OpenVINO.
I successfully used the OpenVINO Model Optimizer (Python) to build the .xml and .bin files representing my network.
I successfully built the OpenVINO samples directory with Visual Studio 2017 and ran the MaskRCNNDemo project:
mask_rcnn_demo.exe -m .\Release\frozen_inference_graph.xml -i .\Release\input.jpg
InferenceEngine:
API version ............ 1.4
Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] .\Release\input.jpg
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. win_20181005
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (4288, 2848) to (800, 800)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
Average running time of one iteration: 2593.81 ms
[ INFO ] Processing output blobs
[ INFO ] Detected class 16 with probability 0.986519: [2043.3, 1104.9], [2412.87, 1436.52]
[ INFO ] Image out.png created!
[ INFO ] Execution successful
Then I tried to reproduce this project in a separate project...
First I had to check the dependencies:
<MaskRCNNDemo>
//References
<format_reader/>      => Opens images with OpenCV, resizes them, and gets uchar data
<ie_cpu_extension/>   => CPU extension for unsupported layers (?)
//Linker
format_reader.lib     => Format Reader lib (compiled with the VINO samples)
cpu_extension.lib     => CPU extension lib (compiled with the VINO samples)
inference_engined.lib => Inference Engine lib (VINO)
opencv_world401d.lib  => OpenCV lib
libiomp5md.lib        => Dependency
... (other libs)
With this I built a new project with my own classes and my own way of opening images (multi-frame TIFF).
This works without problems, so I will not describe it (I use it with an OpenCV DNN inference engine without issue).
I wanted to build the same kind of project as MaskRCNNDemo: CustomIA
<CustomIA>
//References
None => I use my own libtiff way of opening images, and I resize with OpenCV
None => I just add includes for the cpu_extension source code
//Linker
opencv_world345d.lib  => OpenCV 3.4.5 lib
tiffd.lib             => libtiff lib
cpu_extension.lib     => CPU extension compiled with the samples
inference_engined.lib => Inference Engine lib
I added the following DLLs to the project target directory:
cpu_extension.dll
inference_engined.dll
libiomp5md.dll
mkl_tiny_omp.dll
MKLDNNPlugind.dll
opencv_world345d.dll
tiffd.dll
tiffxxd.dll
It compiled and executed successfully, but I faced two issues.
OLD CODE:
slog::info << "Loading plugin" << slog::endl;
InferencePlugin plugin = PluginDispatcher({ FLAGS_pp, "../../../lib/intel64", "" }).getPluginByDevice(FLAGS_d);

/** Loading default extensions **/
if (FLAGS_d.find("CPU") != std::string::npos) {
    /**
     * cpu_extensions library is compiled from the "extension" folder containing
     * custom MKLDNNPlugin layer implementations. These layers are not supported
     * by mkldnn, but they can be useful for inferring custom topologies.
     **/
    plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
}

/** Printing plugin version **/
printPluginVersion(plugin, std::cout);
OUTPUT:
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. win_20181005
Description ....... MKLDNNPlugin
NEW CODE:
VINOEngine::VINOEngine()
{
    // Loading the plugin
    std::cout << std::endl;
    std::cout << "[INFO] - Loading VINO Plugin..." << std::endl;
    this->plugin = PluginDispatcher({ "", "../../../lib/intel64", "" }).getPluginByDevice("CPU");
    this->plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
    printPluginVersion(this->plugin, std::cout);
OUTPUT:
[INFO] - Loading VINO Plugin...
000001A242280A18 // Looks like a memory address???
Second issue:
When I try to extract my ROI and masks from the new code, whenever I have a "match" I always get:
score =1.0
x1=x2=0.0
y1=y2=1.0
But the mask itself looks correctly extracted...
New code:
float score = box_info[2];
if (score > this->Conf_Threshold)
{
    // Rebuild the box coordinates (normalized output scaled back to image size)...
    float x1 = std::min(std::max(0.0f, box_info[3] * Image.cols), static_cast<float>(Image.cols));
    float y1 = std::min(std::max(0.0f, box_info[4] * Image.rows), static_cast<float>(Image.rows));
    float x2 = std::min(std::max(0.0f, box_info[5] * Image.cols), static_cast<float>(Image.cols));
    float y2 = std::min(std::max(0.0f, box_info[6] * Image.rows), static_cast<float>(Image.rows));
    int box_width  = std::min(static_cast<int>(std::max(0.0f, x2 - x1)), Image.cols);
    int box_height = std::min(static_cast<int>(std::max(0.0f, y2 - y1)), Image.rows);
OUTPUT:
Image is resized from (4288, 2848) to (800, 800)
Detected class 62 with probability 1: [4288, 0], [4288, 0]
So it is impossible for me to place the mask in the image and resize it, since I don't have correct bounding-box coordinates...
Does anybody have an idea of what I am doing wrong?
How do I correctly create and link an OpenVINO project using cpu_extension?
Thanks !

First issue, with the version: look above the printPluginVersion function and you will see overloaded std::ostream operators for the InferenceEngine and plugin version info. If those operators are not in scope, streaming the version info falls back to printing a pointer-like value, which is what you are seeing.
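As a minimal sketch of what such an operator looks like (a guess based on the 1.x InferenceEngine::Version struct; check the samples' common.hpp for the exact definition shipped with your SDK):

#include <inference_engine.hpp>
#include <ostream>

// Streams the plugin version in the same format the samples print.
// Without an overload like this in scope, the version info is printed
// as a raw pointer-like value, as in your output.
inline std::ostream& operator<<(std::ostream& os, const InferenceEngine::Version& version)
{
    os << "API version ............ " << version.apiVersion.major << "." << version.apiVersion.minor << "\n"
       << "Build .................. " << (version.buildNumber ? version.buildNumber : "") << "\n"
       << "Description ....... "      << (version.description ? version.description : "");
    return os;
}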
Second: you can try to debug your model by comparing the output after the very first convolution and at the output layer between the original framework and OpenVINO. Make sure they are equal element by element.
In OpenVINO you can use network.addOutput("layer_name") to add any layer to the outputs. Then read the output using: const Blob::Ptr debug_blob = infer_request.GetBlob("layer_name").
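For example (a sketch against the 1.x API; "firstConvName" is a hypothetical layer name that you would look up in your IR .xml):

// Expose an intermediate layer as an extra network output, run inference,
// then read its blob to compare element by element with TensorFlow.
CNNNetReader reader;
reader.ReadNetwork("frozen_inference_graph.xml");
reader.ReadWeights("frozen_inference_graph.bin");
CNNNetwork network = reader.getNetwork();

network.addOutput("firstConvName");  // hypothetical layer name from the IR

ExecutableNetwork executable = plugin.LoadNetwork(network, {});
InferRequest infer_request = executable.CreateInferRequest();
// ... fill the input blob as usual, then:
infer_request.Infer();

const Blob::Ptr debug_blob = infer_request.GetBlob("firstConvName");
const float* data = debug_blob->buffer().as<float*>();
// Compare data[i] against the corresponding TensorFlow activation.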
Most of the time with issues like this, I find that input pre-processing is missing (mean subtraction, normalization, etc.).
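For instance, if the original pipeline subtracted a per-channel mean and scaled the pixels, the same has to happen before the OpenVINO inference. A sketch (the mean and scale values below are placeholders, not the real Mask R-CNN ones):

// Copies a BGR cv::Mat into an NCHW float input blob, applying the same
// mean/scale pre-processing the original framework used.
void fillInputBlob(const cv::Mat& image, InferenceEngine::Blob::Ptr input)
{
    const float mean[3] = {102.9801f, 115.9465f, 122.7717f};  // placeholder per-channel means
    const float scale   = 1.0f;                               // placeholder scale
    const int H = image.rows, W = image.cols, C = image.channels();
    float* data = input->buffer().as<float*>();
    for (int c = 0; c < C; ++c)
        for (int h = 0; h < H; ++h)
            for (int w = 0; w < W; ++w)
                data[c * H * W + h * W + w] =
                    (static_cast<float>(image.at<cv::Vec3b>(h, w)[c]) - mean[c]) * scale;
}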
cpu_extension is a dynamic library, but you can still change the CMake script to make it static and link it with your application. After that you would need to pass your application path in a call to IExtensionPtr extension_ptr = make_so_pointer<IExtension>(argv[0]).
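A sketch of that call (assuming the cpu_extension CMake target was switched to STATIC and linked into your executable):

// Loads the layer extensions from the application binary itself instead
// of a separate cpu_extension DLL; argv[0] is the path to the executable.
InferenceEngine::IExtensionPtr extension_ptr =
    InferenceEngine::make_so_pointer<InferenceEngine::IExtension>(argv[0]);
plugin.AddExtension(extension_ptr);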

Related

Render to "pdf_document" output format in rmarkdown getting stuck on knitr asis_output function

New to Rmarkdown (and markdown in general). I've inherited some code that works great for the html_document output format but not for pdf_document. It seems to get stuck on the knitr asis_output function in the .Rmd script. When I comment out the chunks containing that function, it writes to PDF with no problem. Here's some troubleshooting I've tried:
xfun::session_info('rmarkdown')
R version 3.6.1 (2019-07-05)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Catalina 10.15.1, RStudio 1.2.1335
Random number generation:
RNG: Mersenne-Twister
Normal: Inversion
Sample: Rounding
Locale: en_CA.UTF-8 / en_CA.UTF-8 / en_CA.UTF-8 / C / en_CA.UTF-8 / en_CA.UTF-8
Package version:
base64enc_0.1.3 digest_0.6.20 evaluate_0.14 glue_1.3.1 graphics_3.6.1 grDevices_3.6.1 highr_0.8
htmltools_0.4.0 jsonlite_1.6 knitr_1.25 magrittr_1.5 markdown_1.1 methods_3.6.1 mime_0.7
Rcpp_1.0.2 rlang_0.4.0 rmarkdown_1.16 stats_3.6.1 stringi_1.4.3 stringr_1.4.0 tinytex_0.17.1
tools_3.6.1 utils_3.6.1 xfun_0.10 yaml_2.2.0
Pandoc version: 2.7.3
Sys.getenv('PATH')
[1] "/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Library/TeX/texbin:/opt/X11/bin"
tinytex::tinytex_root()
[1] "/usr/local/texlive/2019"
(tinytex::tlmgr_path())
tlmgr path add
add_link_dir_dir: /usr/local/share/info/dir exists; not making symlink.
add_link_dir_dir: destination /usr/local/share/man/man5 not writable, no links from /usr/local/texlive/2019/texmf-dist/doc/man/man5.
tlmgr: An error has occurred. See above messages. Exiting.
add of symlinks had 1 error(s), see messages above.
[1] 6
So maybe the problem is a path issue? In which case I have no clue how to fix it. Or should I be using an alternative to the asis_output function? Any help is much appreciated. Here are the relevant bits of my code:
In the R script:
id <- 44
rmarkdown::render('mymarkdown.Rmd',
                  output_format = "pdf_document",
                  output_file = paste("report_", id, ".pdf", sep = ''),
                  output_dir = '/Users/myname/Documents/test')
In the Rmd file:
---
title: "Monitoring Activity Summary Report"
mode: selfcontained
date: "November 2019"
output:
  pdf_document: default
  html_document: default
self_contained: yes
---
[some code chunks...]
[then these code chunks that get stuck only for "pdf_document"...]
``` {r setup_Samp1a, echo=FALSE}
sampling_1 <- !is.na(sampling_unique[1])
```

```{r conditional block, eval = sampling_1}
asis_output("### 3.1 Sampling 1\\n") # header that is only shown if sampling_1 == TRUE
```
The error message:
! Undefined control sequence.
<argument> 3.1 Sampling 1\n
Error: Failed to compile /Users/myname/Documents/test/report_44.tex.
See https://yihui.name/tinytex/r/#debugging for debugging tips. See
report_44.log for more info.

How to fix Cannot find libcocos2dcpp.so when trying to support 64 bit

I have a cocos2d-x game in Android Studio, and when I try to make it support the 64-bit requirement I get the error "couldn't find libcocos2dcpp.so" when I start the project on my phone.
What I've done to support 64-bit:
I searched the cocos2d-x forum and found a solution:
* Modified the Application.mk file: added APP_ABI := armeabi armeabi-v7a arm64-v8a
* gradle.properties: added PROP_APP_ABI=armeabi-v7a:arm64-v8a
* build.gradle: added ndk.abiFilters 'armeabi-v7a', 'arm64-v8a'
java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/com.xxxxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/base.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_dependencies_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_resources_apk.apk", zip file "/data/app/com.xxxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_0_apk.apk", zip file "/data/app/com.xxxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_1_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_2_apk.apk", zip file "/data/app/com.xxxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_3_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_4_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_5_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_6_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_7_apk.apk", zip file "/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_8_apk.apk", zip file "/data/app/com.xxxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/split_lib_slice_9_apk.apk"],nativeLibraryDirectories=[/data/app/com.xxx.kidslearngame-oq27wbETBHeT2MFhWg9cOw==/lib/arm64, /system/lib64, /system/vendor/lib64]]] couldn't find "libcocos2dcpp.so"
That is the solution I found, but when I run the app on my phone it crashes with the error above. When I remove ndk.abiFilters 'armeabi-v7a', 'arm64-v8a' from build.gradle it works fine, but when I upload it to the Play Store they show me the warning "your app does not support the 64-bit requirement".
Perhaps you should clean the project, and in Android Studio select File -> Invalidate Caches / Restart from the menu.
Below I give the settings that work for me:
In Application.mk
APP_ABI := arm64-v8a
In gradle.properties
PROP_APP_ABI=armeabi-v7a:arm64-v8a
In app/build.gradle
android {
    compileSdkVersion PROP_COMPILE_SDK_VERSION.toInteger()
    buildToolsVersion PROP_BUILD_TOOLS_VERSION

    def versionMajor = 0
    def versionMinor = 9
    def versionPatch = 0
    def versionBuild = 0

    defaultConfig {
        applicationId "YOUR APP ID"
        minSdkVersion PROP_MIN_SDK_VERSION
        targetSdkVersion PROP_TARGET_SDK_VERSION
        // versionCode 1
        // versionName "1.0"
        versionCode versionMajor * 10000 + versionMinor * 1000 + versionPatch * 100 + versionBuild
        versionName "${versionMajor}.${versionMinor}.${versionPatch}"

        externalNativeBuild {
            if (PROP_BUILD_TYPE == 'ndk-build') {
                ndkBuild {
                    targets 'MyGame'
                    arguments 'NDK_TOOLCHAIN_VERSION=clang'
                    arguments '-j' + Runtime.runtime.availableProcessors()
                }
            }
            else if (PROP_BUILD_TYPE == 'cmake') {
                cmake {
                    targets 'MyGame'
                    arguments "-DCMAKE_FIND_ROOT_PATH=", "-DANDROID_STL=c++_static", "-DANDROID_TOOLCHAIN=clang", "-DANDROID_ARM_NEON=TRUE", \
                              "-DUSE_CHIPMUNK=TRUE", "-DUSE_BULLET=TRUE"
                    cppFlags "-frtti -fexceptions"
                    // The prebuilt root must be a directory you have the right to access or create if you use prebuilt libs.
                    // Set "-DGEN_COCOS_PREBUILT=ON" and "-DUSE_COCOS_PREBUILT=OFF" to generate the prebuilt cocos2d-x libs.
                    // Set "-DGEN_COCOS_PREBUILT=OFF" and "-DUSE_COCOS_PREBUILT=ON" to use the prebuilt libs instead of building them.
                    //arguments "-DCOCOS_PREBUILT_ROOT=/Users/laptop/cocos-prebuilt"
                    //arguments "-DGEN_COCOS_PREBUILT=OFF", "-DUSE_COCOS_PREBUILT=OFF"
                }
            }
        }

        ndk {
            abiFilters = []
            abiFilters.addAll(PROP_APP_ABI.split(':').collect{it as String})
        }
    }

    splits {
        // Configures multiple APKs based on ABI.
        abi {
            // Enables building multiple APKs per ABI.
            enable true
            //enable gradle.startParameter.taskNames.contains(":app:assembleRelease")
            //enable project.hasProperty('splitApks')

            // By default all ABIs are included, so use reset() and include to specify that we only
            // want APKs for x86, armeabi-v7a, and mips.
            reset()

            // Specifies a list of ABIs that Gradle should create APKs for.
            include "x86", "x86_64", "armeabi-v7a", "arm64-v8a"

            // Specifies that we want to also generate a universal APK that includes all ABIs.
            universalApk true
        }
    }

    // Map for the version code that gives each ABI a value.
    def abiCodes = ['x86':3, 'x86_64':4, 'armeabi-v7a':1, 'arm64-v8a':2]

    // APKs for the same app that all have the same version information.
    android.applicationVariants.all { variant ->
        // Assigns a different version code for each output APK.
        variant.outputs.each { output ->
            def abiName = output.getFilter(OutputFile.ABI)
            output.versionCodeOverride = abiCodes.get(abiName, 0) * 1000000 + android.defaultConfig.versionCode
        }
    }
. . . . . . . . .
. . . . . . . . .
I hope this helps.

Using boost with Bazel under Windows 10 and Visual Studio Community 2019

I have set up a simple C++ program that makes use of the boost filesystem module. To build the program I use Bazel 0.25.0. I am working under Windows 10 x64.
I installed Visual Studio 2019 Community Edition and set BAZEL_VC to E:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC. I have installed the MSYS2 shell.
Here are my files (can be found also on GitHub):
WORKSPACE
workspace(name = "BoostFilesystemDemo")

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Boost
git_repository(
    name = "com_github_nelhage_rules_boost",
    commit = "6681419da0163d097d4e09c0756c0d8b785d2aa8",
    remote = "https://github.com/nelhage/rules_boost",
    shallow_since = "1556401984 -0700",
)

load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps")

boost_deps()
main.cpp
#include <iostream>
#include <boost/filesystem.hpp>

using namespace boost::filesystem;

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        std::cout << "Usage: tut1 path\n";
        return 1;
    }
    std::cout << argv[1] << " " << file_size(argv[1]) << '\n';
    return 0;
}
BUILD
cc_binary(
    name = "FilesystemTest",
    srcs = ["main.cpp"],
    deps = [
        "@boost//:filesystem",
    ],
)
When I try to build I receive the following error message (unfortunately mixed with some German; "kann nicht geöffnet werden" means "cannot be opened"):
PS E:\dev\BazelDemos\BoostFilesystemDemo> bazel build //...
INFO: Analyzed target //:FilesystemTest (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: E:/dev/bazeldemos/boostfilesystemdemo/BUILD:1:1: Linking of rule '//:FilesystemTest' failed (Exit 1104)
LINK : fatal error LNK1104: Datei "libboost_filesystem-vc141-mt-x64-1_68.lib" kann nicht geöffnet werden.
Target //:FilesystemTest failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1.175s, Critical Path: 0.12s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
Does anyone have an idea how to fix this problem (compiling the source using Bazel 0.25.0 or later, Visual Studio 2019 Community Edition, Windows 10 x64; the target should be x64)? On Ubuntu 18.04 everything went fine.
Switching to another git repository that provides Boost is also fine for me.
I also want to use other parts of the Boost library, such as Boost Signals2, Boost Log, Boost Algorithm, and Boost Compute.
Modify BUILD.boost this way:
diff --git a/BUILD.boost b/BUILD.boost
index a3a2195..2cffdda 100644
--- a/BUILD.boost
+++ b/BUILD.boost
@@ -623,6 +623,14 @@ boost_library(
         ":system",
         ":type_traits",
     ],
+    defines = select({
+        ":linux_arm": [],
+        ":linux_x86_64": [],
+        ":osx_x86_64": [],
+        ":windows_x86_64": [
+            "BOOST_ALL_NO_LIB",
+        ],
+    }),
 )
 boost_library(
@@ -1491,6 +1499,14 @@ boost_library(
         ":predef",
         ":utility",
     ],
+    defines = select({
+        ":linux_arm": [],
+        ":linux_x86_64": [],
+        ":osx_x86_64": [],
+        ":windows_x86_64": [
+            "BOOST_ALL_NO_LIB",
+        ],
+    }),
 )
I cloned the rules_boost repo and applied the above changes - the cloned repository can be used directly in the WORKSPACE file:
workspace(name = "BoostFilesystemDemo")

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Boost
git_repository(
    name = "com_github_nelhage_rules_boost",
    commit = "f0d2a15d6dd5f0667cdaa6da9565ccf87b84c468",
    remote = "https://github.com/Vertexwahn/rules_boost",
    shallow_since = "1557766870 +0200",
)

load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps")

boost_deps()
Currently, a pull request is open to merge these changes into the original repository: https://github.com/nelhage/rules_boost/pull/123

Create frozen graph from pretrained model

Hi, I am a newbie to TensorFlow. My aim is to convert a .pb file to .tflite from a pretrained model, for my own understanding. I have downloaded the mobilenet_v1_1.0_224 model. Below is the structure of the model:
mobilenet_v1_1.0_224.ckpt.data-00000-of-00001 - 66312kb
mobilenet_v1_1.0_224.ckpt.index - 20kb
mobilenet_v1_1.0_224.ckpt.meta - 3308kb
mobilenet_v1_1.0_224.tflite - 16505kb
mobilenet_v1_1.0_224_eval.pbtxt - 520kb
mobilenet_v1_1.0_224_frozen.pb - 16685kb
I know the model already ships with a .tflite file, but for my understanding I am trying to convert it myself.
My first step: creating the frozen graph file.
import tensorflow as tf
from tensorflow.python.framework import graph_util  # needed for convert_variables_to_constants

imported_meta = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
graph_ = tf.get_default_graph()

with tf.Session() as sess:
    #saver = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
    imported_meta.restore(sess, base_dir + model_folder_name + checkpoint)
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['MobilenetV1/Predictions/Reshape_1'])
    with tf.gfile.GFile(base_dir + model_folder_name + './my_frozen.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
I have successfully created my_frozen.pb - 16590 kb. But the original file size is 16,685 kb, as is clearly visible in the folder structure above. So this is my first question: why is the file size different? Am I following a wrong path?
My second step: creating the .tflite file using a bazel command.
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/path_to_folder/my_frozen.pb --output_file=/path_to_folder/model.tflite --inference_type=FLOAT --input_shape=1,224,224,3 --input_array=input --output_array=MobilenetV1/Predictions/Reshape_1
This command gives me model.tflite - 0 kb.
Traceback for the bazel command:
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.369s, Critical Path: 0.01s
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/frozengraph.pb' '--output_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/float_model.tflite' '--inference_type=FLOAT' '--input_shape=1,224,224,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1'
2018-04-12 16:36:16.190375: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: FIFOQueueV2
2018-04-12 16:36:16.190707: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: QueueDequeueManyV2
2018-04-12 16:36:16.202293: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211322: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211756: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Python Version - 2.7.6
Tensorflow Version - 1.5.0
Thanks in advance :)
The error Check failed: mean_shape.dims() == multiplier_shape.dims()
was an issue with the resolution of batch normalization and has been resolved in:
https://github.com/tensorflow/tensorflow/commit/460a8b6a5df176412c0d261d91eccdc32e9d39f1#diff-49ed2a40acc30ff6d11b7b326fbe56bc
In my case the error occurred using TensorFlow v1.7.
The solution was to use TensorFlow v1.15 (nightly):
toco --graph_def_file=/path_to_folder/my_frozen.pb \
     --input_format=TENSORFLOW_GRAPHDEF \
     --output_file=/path_to_folder/my_output_model.tflite \
     --input_shape=1,224,224,3 \
     --input_arrays=input \
     --output_format=TFLITE \
     --output_arrays=MobilenetV1/Predictions/Reshape_1 \
     --inference_type=FLOAT

How to add tags automatically before the build starts?

I have set up the build and release definitions for my web application in VSTS. Whenever I commit code, the build process starts automatically; after the build succeeds I manually add tags, as shown in the figure below.
But I want to add the build tags before the build starts. So how can I add tags automatically before the build starts?
It seems you are using a CI build, so if you want to add tags automatically, you can use a pre-push hook in the local git repo.
Or, if it's OK for you to add tags after the build, you can set it in the build definition: in the Get sources step -> show Advanced settings -> select Always for Tag sources -> specify Tag format -> save.
A sample pre-push hook (.git/hooks/pre-push) that adds a tag with an incremented version, where the version format is major.minor and each number is no bigger than 9:
#!/bin/bash
# bash (not plain sh) is required for the <<< here-string below.
# Find the highest existing major.minor tag, then create the next one.
temp1=0
temp2=0
for tag in $(git tag)
do
    IFS=. read -r major minor <<< "$tag"
    if [ "$major" -gt "$temp1" ]
    then
        temp1=$major
        temp2=$minor
    elif [ "$major" -eq "$temp1" ] && [ "$minor" -gt "$temp2" ]
    then
        temp2=$minor
    fi
done
# Increment the minor number, rolling over to the next major at 9.
if [ "$temp2" -ne 9 ]
then
    temp2=$((temp2 + 1))
else
    temp1=$((temp1 + 1))
    temp2=0
fi
nexttag="$temp1.$temp2"
git tag -a "$nexttag" -m "$nexttag"