Error on geotiff coordinate transformation - c++

I get an error and a crash in an application that uses GDAL to extract latitude & longitude from a GeoTiff image when running it on openSUSE, while the same code works fine on Ubuntu for my colleagues. The errors differ between GDAL v3 and GDAL v2, but the problem seems to be in the creation of the OGRCreateCoordinateTransformation object: it returns NULL in both cases. Details below:
Code:
QGeoCoordinate toGeoCoordinate(double* adGeotransform, OGRSpatialReference& srcRef, int x, int y)
{
    double worldX = adGeotransform[0] + x * adGeotransform[1] + y * adGeotransform[2];
    double worldY = adGeotransform[3] + x * adGeotransform[4] + y * adGeotransform[5];

    OGRSpatialReference dstRef;
    dstRef.importFromEPSG(4326);

    QScopedPointer<OGRCoordinateTransformation> coordinateTransform(
        OGRCreateCoordinateTransformation(&srcRef, &dstRef));

    coordinateTransform->Transform(1, &worldX, &worldY); // crashes here when coordinateTransform is NULL

    return QGeoCoordinate(worldY,  // lat
                          worldX); // lon
}
QGeoRectangle extractCoordinate(const QString& path)
{
    GDALAllRegister();

    GDALDataset* poDataset = (GDALDataset*) GDALOpen(path.toStdString().c_str(), GA_ReadOnly);
    _height = GDALGetRasterYSize(poDataset);
    _width = GDALGetRasterXSize(poDataset);

    double adGeotransform[6];
    poDataset->GetGeoTransform(adGeotransform);

    OGRSpatialReference srcRef(poDataset->GetProjectionRef());

    QGeoCoordinate _topLeft = toGeoCoordinate(adGeotransform, srcRef, 0, 0);
    QGeoCoordinate _bottomRight = toGeoCoordinate(adGeotransform, srcRef, _width, _height);

    return QGeoRectangle(_topLeft, _bottomRight);
}
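The pixel-to-world mapping used in toGeoCoordinate is the standard GDAL affine geotransform. As a quick sanity check, the same arithmetic can be sketched in pure Python (the sample geotransform values below are made-up assumptions, not taken from the question):

```python
def pixel_to_world(gt, x, y):
    """Apply a GDAL-style affine geotransform to pixel coordinates.

    gt[0]/gt[3] are the raster origin, gt[1]/gt[5] the pixel sizes,
    and gt[2]/gt[4] the rotation terms (0 for north-up images).
    """
    world_x = gt[0] + x * gt[1] + y * gt[2]
    world_y = gt[3] + x * gt[4] + y * gt[5]
    return world_x, world_y

# Hypothetical north-up UTM geotransform: origin (500000, 4000000), 10 m pixels.
gt = [500000.0, 10.0, 0.0, 4000000.0, 0.0, -10.0]
print(pixel_to_world(gt, 0, 0))      # top-left corner -> (500000.0, 4000000.0)
print(pixel_to_world(gt, 100, 200))  # -> (501000.0, 3998000.0)
```

Note the resulting worldX/worldY are still in the source CRS (here UTM metres); the OGRCoordinateTransformation step is what converts them to EPSG:4326 lat/lon.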
GDAL 3 (openSUSE):
gdal - 3.0.4
libgeotiff5 - 1.5.1
libproj19 - 7.0.0
libgeos - 3.8.0
ERROR 1: PROJ: proj_create_from_database: Cannot find proj.db
ERROR 1: PROJ: proj_create: unrecognized format / unknown name
ERROR 6: Cannot find coordinate operations from PROJCRS["WGS 84 / UTM zone 10N",BASEGEOGCRS["WGS 84",DATUM["World Geodetic System 1984",ELLIPSOID["WGS 84",6378137,298.257223563,LENGTHUNIT["metre",1]]],PRIMEM["Greenwich",0,ANGLEUNIT["degree",0.0174532925199433]],ID["EPSG",4326]],CONVERSION["UTM zone 10N",METHOD["Transverse Mercator",ID["EPSG",9807]],PARAMETER["Latitude of natural origin",0,ANGLEUNIT["degree",0.0174532925199433],ID["EPSG",8801]],PARAMETER["Longitude of natural origin",-123,ANGLEUNIT["degree",0.0174532925199433],ID["EPSG",8802]],PARAMETER["Scale factor at natural origin",0.9996,SCALEUNIT["unity",1],ID["EPSG",8805]],PARAMETER["False easting",500000,LENGTHUNIT["metre",1],ID["EPSG",8806]],PARAMETER["False northing",0,LENGTHUNIT["metre",1],ID["EPSG",8807]]],CS[Cartesian,2],AXIS["easting",east,ORDER[1],LENGTHUNIT["metre",1]],AXIS["northing",north,ORDER[2],LENGTHUNIT["metre",1]],ID["EPSG",32610]]' to'
GDAL 2 (openSUSE):
gdal2 - 2.4.2
libgeotiff5 - 1.5.1
libproj19 - 7.0.0
libgeos - 3.8.0
ERROR 6: Unable to load PROJ.4 library (libproj.so.15), creation of OGRCoordinateTransformation failed.
Ubuntu 18.04 LTS (works fine):
libgdal - 2.2.3
libgeotiff - 1.4.2
libproj12 - 4.9.3
So I am asking for possible solutions. What could be causing the errors:
wrong library versions;
wrong build flags on openSUSE?
Could the GeoTiff coordinates be extracted some other way?

The problem is the PROJ library version used. For GDAL v2 you need to use libproj v6 or older. However, the required libgeotiff5 and libspatialite packages are built against libproj19 (PROJ v7) in openSUSE Tumbleweed. So you need to:
Uninstall all recent versions of libspatialite, geotiff, libproj19, and gdal.
Install libproj15, for example from the repo home:rogeroberholtzer.
Rebuild the libspatialite & geotiff libraries from src.rpm against this installed libproj15 yourself:
rpmbuild --rebuild --clean libspatialite-4.3.0a-15.19.src.rpm
rpmbuild --rebuild --clean geotiff-1.5.1-31.13.src.rpm
These source packages can be taken from the science repo, for example.
Install the rebuilt packages: rpm -Uvh *
Install the gdal2-2.4.2 rpm from the science repo.
And everything works! Enjoy! :)
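Separately, the GDAL 3 message "Cannot find proj.db" usually means PROJ cannot locate its resource directory, which can be pointed at explicitly via the PROJ_LIB environment variable. A pure-Python sketch of that diagnostic (the candidate directories are assumptions; openSUSE may install proj.db elsewhere):

```python
import os

def find_proj_db(candidates=("/usr/share/proj", "/usr/local/share/proj")):
    """Return the first directory containing proj.db, or None if not found."""
    # Honour an explicit PROJ_LIB setting first, as PROJ itself does.
    env_dir = os.environ.get("PROJ_LIB")
    search = ([env_dir] if env_dir else []) + list(candidates)
    for d in search:
        if d and os.path.isfile(os.path.join(d, "proj.db")):
            return d
    return None

proj_dir = find_proj_db()
if proj_dir:
    os.environ["PROJ_LIB"] = proj_dir  # make it visible to GDAL/PROJ in this process
else:
    print("proj.db not found - install the PROJ data package or set PROJ_LIB")
```

If proj.db exists but lives in a non-default location, exporting PROJ_LIB before starting the application is often enough to silence that particular error.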


gst-plugins-base.wrap file not found

I am getting the same error as in the linked question, gstreamer1.0-plugins-bad_1.16.3.bb:do_configure in Yocto.
The difference is that I am using the Yocto dunfell branch at commit 40e448301edf142dc00a0ae6190017adac1e57b2, which is 3.1.3.
Is the issue with the Poky recipes or the openembedded recipes?
I couldn't find a solution anywhere.
Build Configuration:
BB_VERSION = "1.46.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "raspberrypi3"
DISTRO = "poky"
DISTRO_VERSION = "3.1.3"
TUNE_FEATURES = "arm vfp cortexa7 neon vfpv4 thumb callconvention-hard"
TARGET_FPU = "hard"
meta
meta-poky = "HEAD:40e448301edf142dc00a0ae6190017adac1e57b2"
meta-oe
meta-python
meta-multimedia
meta-networking = "HEAD:2a5c534d2b9f01e9c0f39701fccd7fc874945b1c"
meta-raspberrypi = "HEAD:f0c75016f06c0395d1e76fde0ef1beb71eaf404a"
meta-qt5 = "HEAD:1650757f4182435a63985f73e477ed80927f0eac"
| Found CMake: NO
| Run-time dependency gstreamer-gl-1.0 found: NO (tried pkgconfig and cmake)
| Looking for a fallback subproject for the dependency gstreamer-gl-1.0
|
| meson.build:283:0: ERROR: Subproject directory not found and gst-plugins-base.wrap file not found
I didn't run into any issue when I was using the 'zeus' branch.
I appreciate everyone's help.

Mask RCNN OpenVino - C++ API

I would like to implement a custom image classifier using MaskRCNN.
In order to increase the speed of the network, I would like to optimise the inference.
I have already used the OpenCV DNN library, but I would like to take a step forward with OpenVINO.
I successfully used the OpenVINO Model Optimizer (Python) to build the .xml and .bin files representing my network.
I successfully built the OpenVINO Samples directory with Visual Studio 2017 and ran the MaskRCNNDemo project:
mask_rcnn_demo.exe -m .\Release\frozen_inference_graph.xml -i .\Release\input.jpg
InferenceEngine:
API version ............ 1.4
Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] .\Release\input.jpg
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. win_20181005
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (4288, 2848) to (800, 800)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
Average running time of one iteration: 2593.81 ms
[ INFO ] Processing output blobs
[ INFO ] Detected class 16 with probability 0.986519: [2043.3, 1104.9], [2412.87, 1436.52]
[ INFO ] Image out.png created!
[ INFO ] Execution successful
Then I tried to reproduce this project in a separate project...
First I had to check the dependencies...
<MaskRCNNDemo>
//References
<format_reader/> => Open CV Images, resize it and get uchar data
<ie_cpu_extension/> => CPU extension for un-managed layers (?)
//Linker
format_reader.lib => Format Reader Lib (VINO Samples Compiled)
cpu_extension.lib => CPU extension Lib (VINO Samples Compiled)
inference_engined.lib => Inference Engine lib (VINO)
opencv_world401d.lib => OpenCV Lib
libiomp5md.lib => Dependency
... (other libs)
With these I've built a new project, with my own classes and my own way to open images (multi-frame TIFF).
This works without problems, so I will not describe it (I use it with a CV DNN inference engine without problems).
I wanted to build the same kind of project as MaskRCNNDemo: CustomIA
<CustomIA>
//References
None => I use my own libtiff-based code to open images, and I resize with OpenCV
None => I just add includes to the cpu_extension source code.
//Linker
opencv_world345d.lib => OpenCV 3.4.5 library
tiffd.lib => Libtiff Library
cpu_extension.lib => CPU extension compiled with sample
inference_engined.lib => Inference engine lib.
I added the following DLLs to the project target dir:
cpu_extension.dll
inference_engined.dll
libiomp5md.dll
mkl_tiny_omp.dll
MKLDNNPlugind.dll
opencv_world345d.dll
tiffd.dll
tiffxxd.dll
I compiled and executed successfully, but I faced two issues.
First issue:
OLD CODE:
slog::info << "Loading plugin" << slog::endl;
InferencePlugin plugin = PluginDispatcher({ FLAGS_pp, "../../../lib/intel64", "" }).getPluginByDevice(FLAGS_d);

/** Loading default extensions **/
if (FLAGS_d.find("CPU") != std::string::npos) {
    /**
     * cpu_extensions library is compiled from the "extension" folder containing
     * custom MKLDNNPlugin layer implementations. These layers are not supported
     * by mkldnn, but they can be useful for inferring custom topologies.
     **/
    plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
}

/** Printing plugin version **/
printPluginVersion(plugin, std::cout);
OUTPUT :
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. win_20181005
Description ....... MKLDNNPlugin
NEW CODE:
VINOEngine::VINOEngine()
{
    // Loading Plugin
    std::cout << std::endl;
    std::cout << "[INFO] - Loading VINO Plugin..." << std::endl;

    this->plugin = PluginDispatcher({ "", "../../../lib/intel64", "" }).getPluginByDevice("CPU");
    this->plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());

    printPluginVersion(this->plugin, std::cout);
OUTPUT :
[INFO] - Loading VINO Plugin...
000001A242280A18 // looks like a memory address ???
Second issue:
When I try to extract my ROI and masks from the new code, if I have a "match", I always get:
score = 1.0
x1 = x2 = 0.0
y1 = y2 = 1.0
But the mask looks well extracted...
New Code :
float score = box_info[2];
if (score > this->Conf_Threshold)
{
    // Rebuild the box coordinates...
    float x1 = std::min(std::max(0.0f, box_info[3] * Image.cols), static_cast<float>(Image.cols));
    float y1 = std::min(std::max(0.0f, box_info[4] * Image.rows), static_cast<float>(Image.rows));
    float x2 = std::min(std::max(0.0f, box_info[5] * Image.cols), static_cast<float>(Image.cols));
    float y2 = std::min(std::max(0.0f, box_info[6] * Image.rows), static_cast<float>(Image.rows));

    int box_width = std::min(static_cast<int>(std::max(0.0f, x2 - x1)), Image.cols);
    int box_height = std::min(static_cast<int>(std::max(0.0f, y2 - y1)), Image.rows);
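For what it's worth, the clamping formula itself behaves as expected for sane normalized inputs, which suggests the degenerate boxes come from reading the wrong elements of the output blob rather than from this arithmetic. A pure-Python rendering of the same formula (the image size is a made-up example, not from the demo):

```python
def clamp_box(box, cols, rows):
    """box = normalized (x1, y1, x2, y2) in [0, 1]; returns clamped pixel coords."""
    x1 = min(max(0.0, box[0] * cols), float(cols))
    y1 = min(max(0.0, box[1] * rows), float(rows))
    x2 = min(max(0.0, box[2] * cols), float(cols))
    y2 = min(max(0.0, box[3] * rows), float(rows))
    return x1, y1, x2, y2

# A box covering the central quarter of an 800x800 image.
print(clamp_box((0.25, 0.25, 0.75, 0.75), 800, 800))  # -> (200.0, 200.0, 600.0, 600.0)
```

If box_info[3..6] were really normalized coordinates, the output above is what the C++ code would compute; all-zero x values therefore point at a blob-layout or indexing mismatch.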
Image is resized from (4288, 2848) to (800, 800)
Detected class 62 with probability 1: [4288, 0], [4288, 0]
So it is impossible for me to place the mask in the image and resize it while I don't have correct bbox coordinates...
Does anybody have an idea about what I am doing wrong?
How do I create and correctly link an OpenVINO project using cpu_extension?
Thanks!
First issue, with the version: look above the printPluginVersion function; you will see overloaded std::ostream operators for the InferenceEngine and plugin version info.
Second: you can try to debug your model by comparing the output after the very first convolution and the output layer between the original framework and OV. Make sure they are equal element by element.
In OV you can use network.addOutput("layer_name") to add any layer to the output. Then read the output using: const Blob::Ptr debug_blob = infer_request.GetBlob("layer_name").
Most of the time with issues like this I find missing input pre-processing (mean, normalization, etc.).
cpu_extensions is a dynamic library, but you can still change the cmake script to make it static and link it with your application. After that you would need to use your application path in the call IExtensionPtr extension_ptr = make_so_pointer(argv[0])

Create frozen graph from pretrained model

Hi, I am a newbie to TensorFlow. My aim is to convert a .pb file to .tflite from a pretrained model, for my own understanding. I have downloaded the mobilenet_v1_1.0_224 model. Below is the structure of the model:
mobilenet_v1_1.0_224.ckpt.data-00000-of-00001 - 66312kb
mobilenet_v1_1.0_224.ckpt.index - 20kb
mobilenet_v1_1.0_224.ckpt.meta - 3308kb
mobilenet_v1_1.0_224.tflite - 16505kb
mobilenet_v1_1.0_224_eval.pbtxt - 520kb
mobilenet_v1_1.0_224_frozen.pb - 16685kb
I know the model already has a .tflite file, but for my understanding I am trying to convert it myself.
My First Step: Creating the frozen graph file
import tensorflow as tf
from tensorflow.python.framework import graph_util

imported_meta = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
graph_ = tf.get_default_graph()

with tf.Session() as sess:
    #saver = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
    imported_meta.restore(sess, base_dir + model_folder_name + checkpoint)
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['MobilenetV1/Predictions/Reshape_1'])
    with tf.gfile.GFile(base_dir + model_folder_name + './my_frozen.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
I have successfully created my_frozen.pb (16,590 kB). But the original file's size is 16,685 kB, as is clearly visible in the folder structure above. So this is my first question: why is the file size different? Am I following some wrong path?
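One plausible reason for the size difference: convert_variables_to_constants keeps only the nodes reachable from the named output node, so training-only ops (savers, queues, summaries) are pruned away, and the result can legitimately be a bit smaller than a frozen graph produced with different tooling. The pruning idea can be sketched over a toy graph in pure Python (the node names are invented; this is not TensorFlow code):

```python
def prune_reachable(graph, output):
    """graph: node -> list of input nodes; keep only nodes feeding `output`."""
    keep, stack = set(), [output]
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        stack.extend(graph.get(node, []))
    return keep

toy = {
    "predictions": ["conv1"],
    "conv1": ["input"],
    "input": [],
    "train_op": ["conv1"],  # training-only node, dropped by pruning
}
print(prune_reachable(toy, "predictions"))  # train_op is not in the result
```

Under that assumption, a smaller my_frozen.pb is expected behaviour rather than a sign of a broken export.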
My Second Step : Creating tflite file using bazel command
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/path_to_folder/my_frozen.pb --output_file=/path_to_folder/model.tflite --inference_type=FLOAT --input_shape=1,224,224,3 --input_array=input --output_array=MobilenetV1/Predictions/Reshape_1
This command gives me a model.tflite of 0 kb.
Traceback for the bazel command:
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.369s, Critical Path: 0.01s
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/frozengraph.pb' '--output_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/float_model.tflite' '--inference_type=FLOAT' '--input_shape=1,224,224,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1'
2018-04-12 16:36:16.190375: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: FIFOQueueV2
2018-04-12 16:36:16.190707: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: QueueDequeueManyV2
2018-04-12 16:36:16.202293: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211322: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211756: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Python Version - 2.7.6
Tensorflow Version - 1.5.0
Thanks In advance :)
The error Check failed: mean_shape.dims() == multiplier_shape.dims()
was an issue with the resolution of batch normalization and has been resolved in:
https://github.com/tensorflow/tensorflow/commit/460a8b6a5df176412c0d261d91eccdc32e9d39f1#diff-49ed2a40acc30ff6d11b7b326fbe56bc
In my case the error occurred using tensorflow v1.7.
The solution was to use tensorflow v1.15 (nightly) and run:
toco --graph_def_file=/path_to_folder/my_frozen.pb \
--input_format=TENSORFLOW_GRAPHDEF \
--output_file=/path_to_folder/my_output_model.tflite \
--input_shape=1,224,224,3 \
--input_arrays=input \
--output_format=TFLITE \
--output_arrays=MobilenetV1/Predictions/Reshape_1 \
--inference_type=FLOAT

R 3.4 and mclapply strange behavior - is this a bug?

I am not sure if this is a bug, so I prefer to post it here before filing a report.
After upgrading from R 3.3.3 to R 3.4, I encounter the following message with mclapply:
Assertion failure at kmp_runtime.cpp(6480): __kmp_thread_pool == __null.
OMP: Error #13: Assertion failure at kmp_runtime.cpp(6480).
OMP: Hint: Please submit a bug report with this message, compile and run commands used, and machine configuration info including native compiler and operating system versions. Faster response will be obtained by including all program sources. For information on submitting this issue, please see
Note that this behavior was not present in R 3.3.3 on the same machine and all the batch was working without any errors. Also note that I tried this with all possible values for enableJIT(X) with the same result.
The batch to (hopefully) reproduce it is here:
library(data.table)
load(file = "z.RData")

firmnames <- as.list(unique(z[, firm_name]))

f <- function(x, d = z) {
  tmp <- d[dealid %in% unique(d[firm_name %in% x, dealid]),
           .(firm_name, firm_type, dealid, investment_year, investment_yearQ, round_number)][firm_name != x, ]
  tmpY <- tmp[, .N, by = .(firm_type, investment_year, round_number)]
  tmpQ <- tmp[, .N, by = .(firm_type, investment_yearQ, round_number)]
  return(list(
    firm_name = x,
    by_year = tmpY,
    by_quarter = tmpQ,
    allroundsY = tmpY[, sum(N), by = .(firm_type, investment_year)],
    allroundsQ = tmpQ[, sum(N), by = .(firm_type, investment_yearQ)]))
}

r <- mclapply(firmnames, f, mc.cores = detectCores(), mc.preschedule = FALSE)
The data for the reproducible example is here:
https://www.dropbox.com/s/2enoeapu7jgcxwd/z.Rdata?dl=0
The sessionInfo():
R version 3.4.0 (2017-04-21)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.4
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] parallel compiler stats graphics grDevices utils datasets methods base
other attached packages:
[1] data.table_1.10.4 numbers_0.6-6 microbenchmark_1.4-2.1 zoo_1.8-0 doParallel_1.0.10 iterators_1.0.8
[7] foreach_1.4.3 RSclient_0.7-3 stringi_1.1.5 stringr_1.2.0 lubridate_1.6.0 plyr_1.8.4
loaded via a namespace (and not attached):
[1] Rcpp_0.12.10 lattice_0.20-35 codetools_0.2-15 grid_3.4.0 gtable_0.2.0 magrittr_1.5 scales_0.4.1 ggplot2_2.2.1
[9] lazyeval_0.2.0 tools_3.4.0 munsell_0.4.3 colorspace_1.3-2 tibble_1.3.0
Thank you in advance for help/hints,
Yan
EDIT: Slight edit, it turns out that this code cannot really reproduce the issue... However, I leave it here following the advice of data.table developer just in case someone else finds it helpful.

django-videothumbs and "list index out of range" error

I'm using django-videothumbs.
The video field is:
video = videothumbs.VideoThumbField(upload_to='videos', sizes=((125,125),(300,200),))
The video uploads fine, but during thumbnail creation I get this error:
Exception Value: list index out of range
Exception Location: /library/videothumbs.py in generate_thumb, line 51
And line 51:
for c in range(len(histogram[0])):
    ac = 0.0
    for i in range(n):
        ac = ac + (float(histogram[i][c]) / n)
    avg.append(ac)
What is wrong with the video field?
Edit:
With print histogram I get:
sh: ffmpeg: command not found
But in terminal:
FFmpeg version CVS, Copyright (c) 2000-2004 Fabrice Bellard
Mac OSX universal build for ffmpegX
configuration: --enable-memalign-hack --enable-mp3lame --enable-gpl --disable-vhook -- disable-ffplay --disable-ffserver --enable-a52 --enable-xvid --enable-faac --enable-faad --enable-amr_nb --enable-amr_wb --enable-pthreads --enable-x264
libavutil version: 49.0.0
libavcodec version: 51.9.0
libavformat version: 50.4.0
built on Apr 15 2006 04:58:19, gcc: 4.0.1 (Apple Computer, Inc. build 5250)
usage: ffmpeg [[infile options] -i infile]... {[outfile options] outfile}...
Hyper fast Audio and Video encoder
Thanks in advance
Have you checked the value of histogram[0]? Most probably histogram doesn't have any elements: the sh: ffmpeg: command not found line in your edit suggests ffmpeg is not on the PATH of the process running Django, so no frames are ever extracted.
I would change the code to:
if len(histogram) > 0:
    for c in range(len(histogram[0])):
        ac = 0.0
        for i in range(n):
            ac = ac + (float(histogram[i][c]) / n)
        avg.append(ac)
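The guarded loop is just a per-column mean over the first n histogram rows. As a standalone pure-Python sketch (the function name and sample values are illustrative, not from django-videothumbs):

```python
def column_means(histogram, n):
    """Mean of each column over the first n rows; [] for an empty histogram."""
    if len(histogram) == 0:
        return []  # e.g. ffmpeg produced no frames, so there is nothing to average
    avg = []
    for c in range(len(histogram[0])):
        ac = 0.0
        for i in range(n):
            ac = ac + (float(histogram[i][c]) / n)
        avg.append(ac)
    return avg

print(column_means([[2, 4], [4, 8]], 2))  # -> [3.0, 6.0]
print(column_means([], 0))                # -> []
```

The guard keeps the IndexError away, but the underlying fix is still making ffmpeg available to the server process so the histogram is actually populated.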