Kotlin/Native OpenGL interop - opengl

I'm trying to set up a Kotlin/Native project that uses the OpenGL C libraries.
OS: ArchLabs, Linux 5.1.15 (shares repositories with Arch)
Packages installed: glu, glew, freeglut, glfw
In my main() there is only one function call (copied from the samples):
glutInit(argc.ptr, null)
There was no out-of-the-box support for OpenGL in my project, so I decided to create opengl.def:
package = platform.OpenGL
headers = GL/glut.h
compilerOpts = -I/usr/include
$ ls /usr/include/GL
freeglut_ext.h glcorearb.h gl.h glu_mangle.h glxext.h glx_mangle.h glxtokens.h wglew.h
freeglut.h glew.h gl_mangle.h glut.h glx.h glxmd.h internal
freeglut_std.h glext.h glu.h glxew.h glxint.h glxproto.h osmesa.h
And here's my gradle.build.kts:
plugins {
id("org.jetbrains.kotlin.multiplatform") version "1.3.41"
}
repositories {
mavenCentral()
}
kotlin {
linuxX64("opengl") {
val main by compilations.getting
val opengl by main.cinterops.creating
binaries {
executable {
entryPoint = "opengl.main"
}
}
}
}
A .kt file is generated at build/classes/.../OpenGL/OpenGL.kt which contains the definition of the glutInit function (well, more of a declaration, I guess).
And here is the output of runReleaseExecutableOpengl:
> Configure project :
Kotlin Multiplatform Projects are an experimental feature.
> Task :cinteropOpenglOpengl
> Task :linkReleaseExecutableOpengl
/home/Opengl/.konan/dependencies/clang-llvm-6.0.1-linux-x86-64/bin/ld.lld: error: undefined symbol: glutInit
>>> referenced by ld-temp.o
>>> /tmp/konan_temp5065866915785286367/combined.o:(platform_OpenGL_kniBridge520)
e: /home/Opengl/.konan/dependencies/clang-llvm-6.0.1-linux-x86-64/bin/ld.lld invocation reported errors
> Task :linkReleaseExecutableOpengl FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':linkReleaseExecutableOpengl'.
> Process 'command '/usr/lib/jvm/java-11-openjdk/bin/java'' finished with non-zero exit value 1
Is there a way to fix this? My best guess is that I need the mingw-w64-* packages installed, for example mingw-w64-freeglut. Is that the case? It could also be that I'm pointing to the wrong headers (I'm not really into C yet, and it's been a long time since I used C++), so the linker can't find the implementations behind them.
Thanks in advance!

You need linkerOpts = -L/usr/lib -lglut in the .def file to dynamically link against freeglut.
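For reference, a complete opengl.def along those lines could look like the sketch below (based on the question's file plus the linker options; once you call functions from core GL or GLU you will likely also need -lGL and -lGLU):
package = platform.OpenGL
headers = GL/glut.h
compilerOpts = -I/usr/include
linkerOpts = -L/usr/lib -lglut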

Related

VS Code building .exe files in Windows 10, how do I change to build for windows 10 compatibility

So I am having trouble building my .exe files in Visual Studio Code for my current Windows OS. For some reason, when I run a diagnostic on my .exe files, they seem to only be compatible with Windows 8, not 10.
Using:
Processor architecture: AMD x64
System: x64-based PC
VS Code version: 1.74.3
When creating a simple "Hello World" application I tried following this tutorial, and it didn't have any problems. It was when I followed the tutorial for importing external libraries that the problems occurred.
I’ve tried importing an external library, and used msys2 to install the files in the bin/include/lib folders for mingw64.
I set my include path to the include folder, and I've set my compiler to default. My JSON tasks document appears correct, and when I build the .exe file, it builds successfully… but it only builds an executable compatible with Windows 8.
I ran the properties compatibility test, and this is the output I get
What exactly do I need to do in order to change the OS version to make it compatible to run on both the visual studio code terminal, and my system terminal as well?
The following is the output from the build process:
Starting build...
C:\msys64\mingw64\bin\cpp.exe -IC:\msys64\mingw64\include -fdiagnostics-color=always -g "D:\Documents\C++\VS_Code\FMT Import\FMTImport.cpp" -o "D:\Documents\C++\VS_Code\FMT Import\FMTImport.exe" -lfmt
>Build finished successfully.
Edit:
config name:C:/msys64/mingw64/bin/g++.exe
compiler path: C:/msys64/mingw64/bin/g++.exe
intellisense mode: ${default}
Edit 2:
Attempts to build from the terminal:
For g++:
D:\Documents\C++\VS_Code\FMT Import>g++ -o FMTImport FMTImport.cpp
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/12.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\William\AppData\Local\Temp\ccshGhdE.o:FMTImport.cpp:(.text+0x8c): undefined reference to `fmt::v9::vprint(fmt::v9::basic_string_view<char>, fmt::v9::basic_format_args<fmt::v9::basic_format_context<fmt::v9::appender, char> >)'
collect2.exe: error: ld returned 1 exit status
For clang++:
D:\Documents\C++\VS_Code\FMT Import>clang++ FMTImport.cpp -o FMTImport
FMTImport.cpp:1:10: fatal error: 'fmt/format.h' file not found
#include <fmt/format.h>
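For comparison, a link line that resolves fmt's symbols typically has to pass -lfmt after the source file, plus the matching include and library paths when fmt is installed into the MSYS2 mingw64 prefix. This is only a sketch of such an invocation under those assumptions, not the asker's exact setup:
g++ -IC:\msys64\mingw64\include -LC:\msys64\mingw64\lib FMTImport.cpp -o FMTImport.exe -lfmt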

TensorFlow Lite C++ Build

I have already posted a question about an access violation in the TensorFlow Lite C++ API. No one has answered it so far; I believe the error I made is in selecting the wrong header and library files from the Bazel build.
The steps I followed to get the TensorFlow Lite headers and libraries are from a YouTube tutorial and from TensorFlow.
Get Required Python (for me Python 3.9.5)
Install required Packages locally
Install Bazel (for me 3.7.2) and MSYS2 (after installation run pacman -S git patch unzip) and add it to Path
Check VS Build Tools 2019 for C++ (I have VS 19 Community with MSVC v142 & Windows 10 SDK)
Download and Unzip Tensorflow Sources from Github (Release of 2.5.3)
Inside the TensorFlow sources, use python .\configure.py to configure the Bazel build (I only answered Yes for overriding eigen strong inline; the rest is kept at the default values)
Then I opened a Git Bash prompt inside the TensorFlow sources and ran bazel build -c opt //tensorflow/lite:tensorflowlite
After a successful build I get the "bazel-bin", "bazel-out", "bazel-tensorflow-2.5.3" and "bazel-testlogs" folders.
I created the following folders tensorflow/include/tensorflow/lite & core and tensorflow/include/flatbuffers for the headers and finally the tensorflow/lib for the libraries.
I copied the tensorflowlite.dll & tensorflow.dll.if.lib from the build directory (tensorflow-2.5.3\bazel-bin\tensorflow\lite) into the tensorflow/lib directory together with the flatbuffers.lib (from tensorflow-2.5.3\bazel-bin\external\flatbuffers\src)
I copied the tensorflow-2.5.3\bazel-bin\external\flatbuffers\src_virtual_includes\flatbuffers\flatbuffers headers into the tensorflow/include/flatbuffers directory
I copied the tensorflow-2.5.3\tensorflow\lite and tensorflow-2.5.3\tensorflow\core from the original sources into the tensorflow/include/tensorflow/lite & core directory.
After those steps, I could create a new VS project, add the created linker and include information, and create the following short example to read the input layer:
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#define TFLITE_MINIMAL_CHECK(x) \
if (!(x)) \
{ \
fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
exit(1); \
}
int main()
{
std::string filename = "C:/project/tflitetesting/models/classification/mobilenet_v1_1.0_224_quant.tflite";
std::unique_ptr<tflite::FlatBufferModel> model =
tflite::FlatBufferModel::BuildFromFile(filename.c_str());
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
printf("=== Pre-invoke Interpreter State ===\n");
tflite::PrintInterpreterState(interpreter.get());
interpreter->SetAllowFp16PrecisionForFp32(true);
interpreter->SetNumThreads(1);
// Get Input Tensor Dimensions
unsigned char* input = interpreter->typed_input_tensor<unsigned char>(0);
}
But I am still receiving the access violation exception inside interpreter.h at
const Subgraph& primary_subgraph() const {
    return *subgraphs_.front();  // Safe as subgraphs_ always has 1 entry.
}
What am I doing wrong? I don't want to build the shared library since the target (Coral Edge) has direct access to those functions (e.g. interpreter->typed_input_tensor<unsigned char>(0)) too.
The thing is, you cannot debug a Release (optimized) version.
With the command bazel build -c opt //tensorflow/lite:tensorflowlite you create a Release version of the DLLs and libs.
Therefore just use bazel build -c dbg //tensorflow/lite:tensorflowlite to get the debug TFLite C++ version.
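Independent of the debug build, it also helps to fail fast when the model or interpreter could not be created at all; an access violation inside primary_subgraph() is consistent with calling into an interpreter that was never successfully built (for example because the .tflite path is wrong). A minimal sketch of such guards, reusing the TFLITE_MINIMAL_CHECK macro from the question:
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename.c_str());
TFLITE_MINIMAL_CHECK(model != nullptr);        // BuildFromFile returns nullptr on failure
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter != nullptr);  // the builder may leave this null on error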

Cygwin libzint(zint) goes into infinite loop

Cygwin Zint (libzint) gets stuck on the ZBarcode_Create() function call with basic example code that works perfectly on a Linux system:
#include <stdio.h>
#include <zint.h>

int main()
{
    struct zint_symbol *my_symbol;

    my_symbol = ZBarcode_Create();
    if (my_symbol != NULL)
    {
        printf("Symbol successfully created!\n");
    }
    ZBarcode_Delete(my_symbol);
    return 0;
}
Steps to reproduce:
Downloaded and installed Cygwin, zlib, libpng and libzint(zint) packages
In Visual Studio, created a new project, added the include path, added the libzint.a library name, and added the library path in the linker options
Added Cygwin path to PATH variable
Tried to build libzint myself; the result is the same
Can someone help me find out whether this is common behavior for libraries built with Cygwin, or is it only Zint (libzint)?
The code is working fine. After installing the needed libraries with the Cygwin setup program and checking their presence:
$ cygcheck -cd |grep zint
libzint-devel 2.4.3-3
libzint2.4 2.4.3-3
zint 2.4.3-3
the compilation works without any problem, and so does the test:
$ gcc -o prova prova.c -Wall -lzint
$ ./prova.exe
Symbol successfully created!
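If the hang only shows up in your own setup, it may help to check which DLLs your executable actually resolves at run time; cygcheck can list them. A sketch, reusing the test binary name from above (look at whether the zint and Cygwin runtime DLLs come from the expected locations):
$ cygcheck ./prova.exe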

Protobuf version conflict when using OpenCV and TensorFlow C++

I am currently trying to use TensorFlow's shared library in a non-Bazel project, so I created a .so file from TensorFlow using Bazel.
But when I launch a C++ program that uses both OpenCV and TensorFlow, I get the following error:
[libprotobuf FATAL external/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
Abandon (core dumped)
Can you help me?
Thank you
You should rebuild TensorFlow with a linker script to avoid making third-party symbols global in the shared library that Bazel creates. This is how the Android Java/JNI library for TensorFlow is able to coexist with the pre-installed protobuf library on the device (look at the build rules in tensorflow/contrib/android for a working example).
Here's a BUILD file that I adapted from the Android library to do this:
package(default_visibility = ["//visibility:public"])

licenses(["notice"])  # Apache 2.0

exports_files(["LICENSE"])

load(
    "//tensorflow:tensorflow.bzl",
    "tf_copts",
    "if_android",
)

exports_files([
    "version_script.lds",
])

# Build the native .so.
# bazel build //tensorflow/contrib/android_ndk:libtensorflow_cc_inference.so \
#   --crosstool_top=//external:android/crosstool \
#   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
#   --cpu=armeabi-v7a

LINKER_SCRIPT = "//tensorflow/contrib/android:version_script.lds"

cc_binary(
    name = "libtensorflow_cc_inference.so",
    srcs = [],
    copts = tf_copts() + [
        "-ffunction-sections",
        "-fdata-sections",
    ],
    linkopts = if_android([
        "-landroid",
        "-latomic",
        "-ldl",
        "-llog",
        "-lm",
        "-z defs",
        "-s",
        "-Wl,--gc-sections",
        "-Wl,--version-script",  # This line must be directly followed by LINKER_SCRIPT.
        LINKER_SCRIPT,
    ]),
    linkshared = 1,
    linkstatic = 1,
    tags = [
        "manual",
        "notap",
    ],
    deps = [
        "//tensorflow/core:android_tensorflow_lib",
        LINKER_SCRIPT,
    ],
)
And the contents of version_script.lds:
{
  global:
    extern "C++" {
      tensorflow::*;
    };
  local:
    *;
};
This will make everything in the tensorflow namespace global and available through the library, while hiding the rest and preventing it from conflicting with protobuf.
(I wasted a ton of time on this, so I hope it helps!)
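One way to verify the result is to list the dynamic symbol table of the produced .so and confirm that no protobuf symbols are exported any more. A sketch, assuming a binutils nm that can read the target's ELF format and the output path implied by the target name in the comment above:
nm -D bazel-bin/tensorflow/contrib/android_ndk/libtensorflow_cc_inference.so | grep protobuf
With the version script applied, the protobuf symbols are local, so this should print nothing, while the tensorflow:: symbols remain visible.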
The error indicates that the program was compiled using headers (.h files) from protobuf 2.6.1. These headers are typically found in /usr/include/google/protobuf or /usr/local/include/google/protobuf, though they could be in other places depending on your OS and how the program is being built. You need to update these headers to version 3.1.0 and recompile the program.
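A quick way to see which versions are actually involved (a sketch; the header path depends on your distribution) is to compare the protoc on the PATH with the version constant in the installed headers:
$ protoc --version
$ grep GOOGLE_PROTOBUF_VERSION /usr/include/google/protobuf/stubs/common.h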
This is indeed a pretty serious problem! I get the error below, similar to yours:
$./ceres_single_test
[libprotobuf FATAL google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
Aborted
My workaround:
cd /usr/lib/x86_64-linux-gnu
sudo mkdir BACKUP
sudo mv libmirprotobuf.so* ./BACKUP/
Now, the executable under test works, cool. What is not cool, however, is that things like gedit no longer work without running from a shell that has the BACKUP path added to LD_LIBRARY_PATH :-(
Hopefully there's a better fix out there?
The error complains that the Protocol Buffer runtime library is not compatible with the installed version. The error is coming from the GTK3 library: GTK3 uses Protocol Buffers 2.6.1, so if you build OpenCV with GTK3 support you get this error. The easiest way to fix this is to use Qt instead of GTK3.
If you use the CMake GUI to build OpenCV, just select Qt support instead of GTK3. You can install Qt using the following command:
sudo apt install qtbase5-dev
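For a command-line OpenCV build, the equivalent is to toggle the corresponding CMake options (WITH_QT and WITH_GTK are standard OpenCV build flags; the paths below are placeholders for your own checkout and build directory):
cd opencv && mkdir build && cd build
cmake .. -DWITH_QT=ON -DWITH_GTK=OFF
make -j"$(nproc)"
sudo make install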
Rebuild libprotobuf with -Dprotobuf_BUILD_SHARED_LIBS=ON, then make install to overwrite the older version.
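A sketch of that rebuild with CMake, assuming a protobuf source checkout (older releases keep the CMakeLists.txt in the cmake/ subdirectory, so you may have to run this from there):
cmake . -Dprotobuf_BUILD_SHARED_LIBS=ON -Dprotobuf_BUILD_TESTS=OFF
make -j"$(nproc)"
sudo make install
sudo ldconfig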

How to debug Qt DLL issues with cross-compiled builds?

After cross-compiling Qt 5 applications (host: Fedora 19/64 bit, target: Windows 32 bit) I execute the following steps to deploy the executable:
$ DEST=/windows/testdir
$ cp /usr/i686-w64-mingw32/sys-root/mingw/bin/*.dll $DEST
$ mkdir $DEST/platforms
$ cp /usr/i686-w64-mingw32/sys-root/mingw/lib/qt5/plugins/platforms/qwindows.dll\
$DEST/platforms
$ cp release/main.exe $DEST # the cross-compiled Qt5 binary
I test it on Windows like this:
say /windows is mounted on f:
start command prompt window
f:
cd testdir
main
And there I get:
Failed to load platform plugin "windows". Available platforms are:
Microsoft Visual C++ Runtime Library
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
I don't really believe the first message because:
a) the above steps worked in the past (executed on the same Fedora 19 system)
b) the platforms directory is there as documented in the qt docs.
What changed is that now the application includes some PNGs/JPGs in dialogs (read via Qt's resource file system, as QIcons).
Thus, I've also copied some plugins:
$ cp -r /usr/i686-w64-mingw32/sys-root/mingw/lib/qt5/plugins $DEST
This didn't help resolve the above issue.
Conclusion
Is there a way to debug dynamic runtime linker issues like these?
Can I instruct it so that I somehow get output on which DLLs the app/loader tries to load and where it does its lookups (and why they fail)?
For example something like this would be great:
ldd: main.exe -> load of foo.dll in work-dir failed (no such file)
ldd: main.exe -> load of bar.dll in work-dir/platforms failed (wrong file format)
ldd: main.exe -> load of baz.dll in work-dir successful
...
Compile steps
I used following steps for cross-compiling on Fedora 19:
$ mingw32-qmake-qt5 main.pro -o win32.mf
$ mingw32-make -f win32.mf
$ # -> binary is created in release/main.exe
Wine
I've looked at wine for testing purposes. It is helpful because it displays an error message when it can't find a DLL, e.g.:
$ wine $DEST/main.exe
err:module:import_dll Library libEGL.dll (which is needed by L"Z:\\usr\\i686-w64-mingw32\\sys-root\\mingw\\lib\\qt5\\plugins\\platforms\\qwindows.dll") not found
err:module:import_dll Library libjpeg-62.dll (which is needed by L"Z:\\usr\\i686-w64-mingw32\\sys-root\\mingw\\lib\\qt5\\plugins\\imageformats\\qjpeg.dll") not found
Interestingly, it directly finds the platforms library and needed plugin under Z:\\usr\\i686-w64-mingw32\\sys-root\\mingw\\lib\\qt5\\.
But when all needed DLLs from /usr/i686-w64-mingw32/sys-root/mingw/bin/*.dll are copied to $DEST, Wine runs the same main.exe just fine, whereas on native Windows (7) I get the above error boxes.
You can use tools like Dependency Walker to check the dependencies of a single DLL, Wine to quickly check startup on the compile host, and Process Monitor to see which directories/files are accessed while the process runs.
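Qt itself can also print plugin-loading diagnostics: setting the environment variable QT_DEBUG_PLUGINS=1 makes the application log which plugin directories it scans and why a candidate plugin is rejected (subject to the same CONFIG += console caveat mentioned below for seeing qDebug output on Windows). For example, in the Windows command prompt:
set QT_DEBUG_PLUGINS=1
main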
It also makes sense to debug-output the library path from the Qt application, e.g.
int main(int argc, char **argv)
{
    qDebug() << "Library paths: " << QApplication::libraryPaths();
    QApplication app(argc, argv);
    ...
With that I get following output on native windows:
Library paths: ()
(To enable qDebug() statements - even with release binaries - you have to add CONFIG += console to your qmake project file.)
Looking at the Process Monitor output, it seems that the binary does not try to open any plugins (platforms or otherwise) in its current working directory (CWD) or in its base directory.
When I extend the library path, the binary finds all needed plugins in its CWD:
int main(int argc, char **argv)
{
    QApplication::addLibraryPath(QDir::currentPath());
    QApplication app(argc, argv);
    ...
I don't know if this qualifies as a workaround; perhaps one is supposed to do something like this. But the Qt documentation seems to suggest the opposite:
To deploy the application, we must make sure that we copy the relevant Qt DLL (corresponding to the Qt modules used in the application) and the windows platform plugin as well as the executable to the same directory in the release subdirectory.
The complete deploy procedure is now:
$ cp /usr/i686-w64-mingw32/sys-root/mingw/bin/*.dll $DEST
# copying platforms, imageformats etc. plugin directories:
$ cp /usr/i686-w64-mingw32/sys-root/mingw/lib/qt5/plugins/* $DEST -r
$ cp release/main.exe $DEST
(Depending on what packages are installed on your compile host you probably don't need to copy all DLLs; with Wine it is easy to run a fixed-point iteration to find all needed non-plugin DLLs.)
Missing non-platform plugins don't necessarily abort the program's startup; e.g. without a JPEG plugin some icons are just not displayed.