I am trying to use a custom node package that I wrote in an Electron application and I am having trouble getting the resulting DLL/Node package to initialize. When I launch my Electron application, I get the following error:
Uncaught Error: A dynamic link library (DLL) initialization routine failed.
The DLL being linked is a simple library written in C++ that has one function that takes a double as an input and simply adds one to it, returning the result. To build the C++ library, I use SWIG (http://www.swig.org/) and node-gyp with the following commands:
swig -c++ -javascript -node ./src/mace_api.i
node-gyp clean configure build
mace_api is the package I am trying to build. mace_api.i, the binding.gyp file, and the source files for my library are defined as follows:
mace_api.i
%module MaceAPI
%{
#include "./mace_api.cpp"
%}
%include <windows.i>
%include "./mace_api.h"
binding.gyp
{
  "targets": [
    {
      "target_name": "mace-api",
      "sources": [ "./src/mace_api_wrap.cxx" ]
    }
  ]
}
mace_api.h
#ifndef MACE_API_H
#define MACE_API_H
#include <iostream>
#include <functional>
using namespace std;
class MaceAPI
{
public:
    MaceAPI();
    double addOne(double input);
};
#endif // MACE_API_H
mace_api.cpp
#include "mace_api.h"
MaceAPI::MaceAPI()
{
}

double MaceAPI::addOne(double input)
{
    double ret = input + 1.0;
    return ret;
}
SWIG takes the C++ source files and basically writes a wrapper that can be used, in this case, by node-gyp to build a Node package. Has anyone tried to use a custom Node module built in this manner in an Electron application? Am I missing something with how SWIG generates a wrapper for my C++ library, or how Electron handles custom Node packages? I am able to link the library with Node, but not with Electron. Any help would be appreciated.
For completeness, below is how I am trying to include and use my package in my Electron application:
var libMaceTest = require('mace-api/build/Release/mace-api');
var test = new libMaceTest.MaceAPI();
console.log(test.addOne(5));
Have you checked out https://github.com/electron/electron/blob/master/docs/tutorial/using-native-node-modules.md#manually-building-for-electron
Specifically,
Manually building for Electron
If you are a developer developing a native module and want to test it
against Electron, you might want to rebuild the module for Electron
manually. You can use node-gyp directly to build for Electron:
cd /path-to-module/
HOME=~/.electron-gyp node-gyp rebuild --target=1.2.3 --arch=x64 --dist-url=https://atom.io/download/atom-shell
The HOME=~/.electron-gyp changes where to find development headers. The --target=1.2.3 is the version of Electron. The --dist-url=... specifies where to download the headers. The --arch=x64 says the module is built for a 64-bit system.
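Applied to the module from the question, that would look something like the sketch below (untested; replace 1.2.3 with the Electron version you actually run):
cd path/to/mace-api
HOME=~/.electron-gyp node-gyp rebuild --target=1.2.3 --arch=x64 --dist-url=https://atom.io/download/atom-shell
On Windows you would set HOME in the shell first (or use node-gyp's --devdir option) instead of the inline HOME=... assignment, and then require the rebuilt mace-api addon from Electron as before.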
Related
I have already posted a question about an access violation in the TensorFlow Lite C++ API. No one has answered it so far; I believe my mistake is in selecting the wrong header and library files from the Bazel build.
The steps I followed to get the TensorFlow Lite headers and libraries are from a YouTube tutorial and from TensorFlow:
Get Required Python (for me Python 3.9.5)
Install required Packages locally
Install Bazel (for me 3.7.2) and MSYS2 (after installation run pacman -S git patch unzip) and add it to Path
Check VS Build Tools 2019 for C++ (I have VS 19 Community with MSVC v142 & Windows 10 SDK)
Download and Unzip Tensorflow Sources from Github (Release of 2.5.3)
Inside the TensorFlow sources, use python .\configure.py to configure the Bazel build (I only answered Yes for overriding eigen strong inline; the rest was kept at the default values)
Then I opened a Git Bash prompt inside the TensorFlow sources and ran bazel build -c opt //tensorflow/lite:tensorflowlite
After a successful build I get the "bazel-bin", "bazel-out", "bazel-tensorflow-2.5.3" and "bazel-testlogs" folders.
I created the following folders tensorflow/include/tensorflow/lite & core and tensorflow/include/flatbuffers for the headers and finally the tensorflow/lib for the libraries.
I copied the tensorflowlite.dll & tensorflow.dll.if.lib from the build directory (tensorflow-2.5.3\bazel-bin\tensorflow\lite) into the tensorflow/lib directory together with the flatbuffers.lib (from tensorflow-2.5.3\bazel-bin\external\flatbuffers\src)
I copied the tensorflow-2.5.3\bazel-bin\external\flatbuffers\src_virtual_includes\flatbuffers\flatbuffers headers into the tensorflow/include/flatbuffers directory
I copied the tensorflow-2.5.3\tensorflow\lite and tensorflow-2.5.3\tensorflow\core from the original sources into the tensorflow/include/tensorflow/lite & core directory.
After those steps I could create a new VS project and add the corresponding linker and include settings, and I created the following short example to read the input layer.
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#define TFLITE_MINIMAL_CHECK(x) \
    if (!(x)) \
    { \
        fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
        exit(1); \
    }

int main()
{
    std::string filename = "C:/project/tflitetesting/models/classification/mobilenet_v1_1.0_224_quant.tflite";
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(filename.c_str());

    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*model, resolver);
    std::unique_ptr<tflite::Interpreter> interpreter;
    builder(&interpreter);
    TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
    printf("=== Pre-invoke Interpreter State ===\n");
    tflite::PrintInterpreterState(interpreter.get());

    interpreter->SetAllowFp16PrecisionForFp32(true);
    interpreter->SetNumThreads(1);

    // Get Input Tensor Dimensions
    unsigned char* input = interpreter->typed_input_tensor<unsigned char>(0);
}
But I am still receiving the access violation exception inside interpreter.h at
const Subgraph& primary_subgraph() const {
    return *subgraphs_.front(); // Safe as subgraphs_ always has 1 entry.
}
What am I doing wrong? I don't want to build the shared library, since the target (Coral Edge) has direct access to those functions (e.g. interpreter->typed_input_tensor<unsigned char>(0)) too.
The thing is, you cannot debug a Release (optimized) build.
With the command bazel build -c opt //tensorflow/lite:tensorflowlite you create a Release version of the DLLs and LIBs.
Therefore just use bazel build -c dbg //tensorflow/lite:tensorflowlite to get the debug TFLite C++ version.
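Whichever flavor you build, it also helps to guard model and interpreter creation the way TensorFlow's own minimal example does, so a bad path or a mismatched library fails with a clear message instead of an access violation later. A small sketch, reusing the TFLITE_MINIMAL_CHECK macro and filename variable from the question:
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename.c_str());
// BuildFromFile returns nullptr if the file could not be read or parsed.
TFLITE_MINIMAL_CHECK(model != nullptr);

tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
// The builder leaves the interpreter null if construction failed.
TFLITE_MINIMAL_CHECK(interpreter != nullptr);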
I am trying to create an example using SWIG and Node.js on my M1 (arm64) Mac,
but I want to mention this as early as possible:
this issue appears also on an Intel (x64) Mac.
I created my simple example files like this:
example.h
#pragma once

class Die
{
public:
    Die();
    ~Die();
    int foo(int a);
};

Die* getDie();

// to test whether the issue also appears when calling a plain function without any class context
extern "C"
{
    bool getFoo();
}
Here is the implementation:
example.cpp
#include <iostream>
#include "example.h"

int Die::foo(int a)
{
    std::cout << "foo: running fact from simple_ex" << std::endl;
    return 1;
}

Die::Die()
{
}

Die::~Die()
{
}

// out of Class Context
Die* getDie()
{
    return new Die();
}

extern "C"
{
    bool getFoo()
    {
        return true;
    }
}
My SWIG interface is as follows:
example.i
%module example
%{
#include "example.h"
%}
%include "example.h"
Then I generated my example_wrap.cxx file. But SWIG 4.0.2 is not compatible with Node.js v16.0.0 (see "SWIG support for NodeJS v12 #1520" and "Prepare SWIG for Node.js v12 #1746").
Therefore I needed to build SWIG from source using the master branch at the current version (4.1.0). Please keep that in mind.
Swig Command:
swig -Wall -c++ -javascript -node example.i
Here now are the files used to prepare and create the .node file.
package.json
{
  "name": "SwigJS",
  "version": "0.0.1",
  "scripts": {
    "start": "node index.js",
    "install": "node-gyp clean configure build"
  },
  "dependencies": {
    "nan": "^2.16.0",
    "node-gyp": "^9.0.0"
  },
  "devDependencies": {
    "electron-rebuild": "^3.2.7"
  }
}
I got the package.json from a mate as an example and edited it to work with my project, so there may be some lines that I do not really need.
binding.gyp
{
  "targets": [
    {
      "target_name": "SwigJS",
      "sources": [ "example_wrap.cxx" ],
      "include_dirs" : [ "<!(node -e \"require('nan')\")" ]
    }
  ]
}
Now I build my SwigJS.node file using:
node-gyp configure
node-gyp build
It runs through without any errors.
Now I try to access the .node file from my JavaScript, but I always get the error message:
missing symbol called
index.js
const Swigjs = require("./build/Release/SwigJS.node");
console.log("exports :", Swigjs); //show exports
die = Swigjs.getDie(); //try to get the Class
console.log(die.foo(5)); //call a function from the class
The output looks like this:
[Running] node "/Users/rolf/Documents/SwigJS/index.js"
exports : {
  getDie: [Function (anonymous)],
  getFoo: [Function (anonymous)],
  Die: [Function: Die]
}
dyld[49745]: missing symbol called
[Done] exited with code=null in 0.12 seconds
What I have tried in order to find the error:
Built the .node file on an x64 architecture (a mate's Intel x64 Mac with Node.js v16.17) to check whether it is an arm64-specific issue with Node.js v16.
Installed Node.js 16.0.0 (the first version supporting arm64 on macOS).
Since the GitHub issues mention Node.js version 12, tried to build and run this on an Intel x64 Mac with Node.js v12.13.0.
Tried to force the x64 architecture, which led to a different error because of the incompatible (x64) library on an arm64 Mac.
All of it (except the last item) ended with the same result: "missing symbol called".
Help would be greatly appreciated.
Your question is an interesting take on a FAQ on this site: What is an undefined reference/unresolved external symbol error and how do I fix it?
The role of SWIG is indeed to generate glue code between Node.js and C++ code. But it does only that. If you inspect the dynamic library associated with your Node.js module (the built .node file) using the nm command, you will see that it has undefined references to the C++ functions it wraps.
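For example, on macOS something like the following (a hedged illustration; the exact mangled names will differ) lists the addon's undefined symbols and demangles them:
nm -u build/Release/SwigJS.node | c++filt
If Die::foo(int), Die::Die() and friends show up in that list, the wrapper was built without the implementation behind it.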
This is by design. SWIG expects that the code it wraps is somehow already loaded into memory.
There are three approaches to do so:
Compile example.cpp directly into the SWIG wrapper, or compile it to a static library first (example.a) and link that statically into the wrapper. I think it suffices to add example.cpp to the sources section of binding.gyp (see the sketch after this list).
Compile example.cpp into a library (example.dylib) and dynamically link it to the SWIG wrapper. I have not used GYP myself yet, but I think it means adding the following to your targets entry in binding.gyp:
'link_settings': {
  'libraries': [
    '-lexample',
  ],
},
Compile example.cpp into a library (example.dylib) and use dlopen to load it explicitly. This puts a tremendous burden on your users and is very hard to debug. Do not do this.
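For the first approach, a sketch of what the binding.gyp from the question could look like with the implementation compiled straight into the wrapper (untested, but it simply adds example.cpp to the existing sources list):
{
  "targets": [
    {
      "target_name": "SwigJS",
      "sources": [ "example_wrap.cxx", "example.cpp" ],
      "include_dirs" : [ "<!(node -e \"require('nan')\")" ]
    }
  ]
}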
I want to run my C++ code on MicroPython.
For that I referred to this wrapper: https://github.com/stinos/micropython-wrap.
I uploaded the required wrapper files to the board and tried to run:
#include <micropython-wrap-master/functionwrapper.h>
//function we want to call from within a MicroPython script
std::vector< std::string > FunctionToBeCalled ( std::vector< std::string > vec )
{
    for( auto& v : vec )
        v += "TRANSFORM";
    return vec;
}

//function names are declared in structs
struct CppFunction
{
    func_name_def( TransformList )
};

extern "C"
{
    void RegisterMyModule(void)
    {
        //register a module named 'foo'
        auto mod = upywrap::CreateModule( "foo" );

        //register our function with the name 'TransformList'
        //conversion of a MicroPython list of strings is done automatically
        upywrap::FunctionWrapper wrapfunc( mod );
        wrapfunc.Def< CppFunction::TransformList >( FunctionToBeCalled );
    }
}
Run it using
import foo
print(foo.TransformList(['a', 'b'])) # Prints ['aTRANSFORM', 'bTRANSFORM']
But later I found that this alone will not help, because I need to integrate my C++ code into the MicroPython code and rebuild the firmware to get it to run.
I am not able to figure out:
How to integrate my C++ into the existing MicroPython code
How to recompile the firmware (because when I try to use the make command, it does not seem to work)
Any help highly appreciated.
MicroPython offers two options for adding C/C++ code to it.
Note that both require cross-compilation of that code on a PC.
AFAIK there is no option to compile on the microcontroller itself (due to obvious constraints).
1) MicroPython Native Module
One of the main advantages of using native .mpy files is that native machine code can be imported by a script dynamically, without the need to rebuild the main MicroPython firmware.
Essentially you compile the C code to native machine code and package it as an .mpy module.
Then you import that module from Python.
http://docs.micropython.org/en/latest/develop/natmod.html#minimal-example
2) MicroPython C Module
You add you C/C++ code as a custom module, and then compile and link that together into a new firmware image.
http://docs.micropython.org/en/latest/develop/cmodules.html#micropython-external-c-modules
The approach you refer to is based on the 2nd method and requires cross-compilation.
C or C++ support will depend on the compiler/languages supported for your port/hardware.
You don't necessarily need to recompile MicroPython itself; you can build a C++ user module, which has been done for ESP32 already; see the discussion at https://github.com/stinos/micropython-wrap/issues/5#issuecomment-704328111.
Specifically if you run make USER_C_MODULES=../../../micropython-wrap CFLAGS_EXTRA="-DMODULE_UPYWRAPTEST_ENABLED=1" from the ESP32 port directory, this should build a user C++ module containing micropython-wrap's unittests, so that should be a good starting point: copy the relevant files (cmodule.c, module.cpp and micropython.mk from the tests directory) and modify the code.
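For orientation, a generic micropython.mk for a user module usually looks roughly like the following (a hedged sketch based on the external-C-modules documentation, not micropython-wrap's actual file; examplemodule.c is a placeholder name and the exact variable names can differ between MicroPython versions):
EXAMPLE_MOD_DIR := $(USERMOD_DIR)
# Add the module's C sources to the user-module build.
SRC_USERMOD += $(EXAMPLE_MOD_DIR)/examplemodule.c
# Make the module's headers visible to the rest of the build.
CFLAGS_USERMOD += -I$(EXAMPLE_MOD_DIR)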
I'm trying to use Eclipse to do the development for a project that involves Gazebo (a popular robotics simulator). Gazebo provides a plugin system to allow external interaction with the simulator and a series of tutorials on how to write plugins.
Having followed the tutorials successfully, I tried migrating the code to Eclipse, using cmake -G "Eclipse CDT4 - Unix Makefiles" [buildpath] to generate an Eclipse project, then importing it into my Eclipse workspace.
Everything generally went well, but I've run into a problem that is a bit odd:
When I compile my project, Eclipse comes back with a "Member declaration not found" error referring to an SDFormat data type used in the signature of the ModelPush::Load function (see code snippets below). SDFormat, incidentally, is a robotics XML format used to describe how a robot is put together.
Despite this error (which should result in nothing being built), the resulting shared library is built anyway.
I guess I can live with it, but I'd obviously like to resolve this issue, which appears to be internal to Eclipse / CDT...
TO CLARIFY:
I'm trying to determine why Eclipse gives me the error: "Member declaration not found" on the Load() function signature in model_push.cc. The guilty party is the sdf::ElementPtr _sdf parameter. Something's wrong with the SDFormat library or with the way that Eclipse / CDT looks at it. This isn't an include issue. And, even though Eclipse gives me the error, it still builds the .so file. Running make from the command line also generates the file, but without any errors.
Again, I can live with it, but I'd rather not. I just don't know where to start looking for a solution since this isn't a problem finding an include or the sdf library file.
Here's the class declaration (model_push.hh):
#ifndef MODEL_PUSH_HH_
#define MODEL_PUSH_HH_

#include <boost/bind.hpp>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <gazebo/common/common.hh>
#include <stdio.h>
#include <sdf/sdf.hh>

namespace gazebo
{
    class ModelPush : public ModelPlugin
    {
    public:
        void Load (physics::ModelPtr _parent, sdf::ElementPtr _sdf);

        //Called by the world update start event
        void OnUpdate (const common::UpdateInfo & /*_info*/);

        //Pointer to the model
    private:
        physics::ModelPtr model;

        //Pointer to the update event connection
    private:
        event::ConnectionPtr updateConnection;
    };
}
#endif /* MODEL_PUSH_HH_ */
Here's the implementation file (model_push.cc):
#include "model_push.hh"
namespace gazebo
{
    void ModelPush::Load(physics::ModelPtr _parent, sdf::ElementPtr _sdf)
    //void ModelPush::Load (physics::ModelPtr _parent, sdf::ElementPtr /*sdf*/)
    {
        //Store the pointer to the model
        this->model = _parent;

        //Listen to the update event. This event is broadcast every
        //simulation iteration.
        this->updateConnection = event::Events::ConnectWorldUpdateBegin(
            boost::bind(&ModelPush::OnUpdate, this, _1));
    }

    //Called by the world update start event
    void ModelPush::OnUpdate (const common::UpdateInfo & /*_info*/)
    {
        //Apply a small linear velocity to the model.
        this->model->SetLinearVel(math::Vector3(0.03, 0.0, 0.0));
    }

    //Register this plugin with the simulator
    //GZ_REGISTER_MODEL_PLUGIN(ModelPush)
}
I've been struggling with this exact problem. I've found a solution that works, but I still don't think it is ideal. Instead of generating the Eclipse project using cmake (or catkin_make), I'm generating it using the CDT project builder. Here's the process I'm using in Eclipse 2018-09.
Create a New C/C++ Project of type C++ Managed Build (A C++ Project build using the CDT's managed build system.)
Project name: ROSWorkspace
Location: /home/username/eclipse-workspace/ROSWorkspace
Project type: Makefile project / Empty Project
Toolchain: Linux GCC
Finish.
Right click on the project and select Properties.
C/C++ Build / Builder Settings:
Uncheck "Use default build command"
Build command: catkin_make
Build directory: ${workspace_loc:/../catkin_ws}/
C/C++ General / Paths and Symbols / Includes tab
Add /usr/include/gazebo-8
Add /usr/include/sdformat-5.3
C/C++ General / Preprocessor Includes / Providers tab
CDT GCC Built-in Compiler Settings / Command to get compiler specs: ${COMMAND} ${FLAGS} -E -P -v -dD "${INPUTS}" -std=c++11
Click Ok, then from the drop down menu choose:
Project / C/C++ Index / Freshen all files
Ideally I'd make the time to dig in to figure out how to get the preprocessor to properly work with the generated project, but I just don't have the time right now. I hope this helps.
QtWebkit-plugins is a library that provides features to QWebView, e.g. SpellCheck and the Notification Web API.
Read about:
SpellCheck
Notification Web API
I tried to compile the code on Windows, but my QWebView is not working as expected; in other words, SpellCheck and the Notification Web API are not working. It is as if I were not using QtWebkit-plugins at all. What could it be?
The documentation says that to compile I have to run:
$ qmake
$ make && make install
Read more in QtWebkit-plugins repository
I'm using MinGW, so instead of make I used mingw32-make:
I compiled hunspell
Copied hunspell to C:\Qt5.4.0\5.4\mingw491_32\bin and C:\Qt5.4.0\5.4\mingw491_32\lib
I compiled qtwebkit-plugins by running in cmd:
qmake
mingw32-make && mingw32-make install
mingw32-make generated libqtwebkitpluginsd.a and qtwebkitplugins.dll
Copied libqtwebkitpluginsd.a to C:\Qt5.4.0\5.4\mingw491_32\lib
Copied qtwebkitplugins.dll to C:\Qt5.4.0\5.4\mingw491_32\plugins\webkit and C:\Qt5.4.0\5.4\mingw491_32\bin
After that I compiled another simple project that uses QWebView, then tested SpellCheck in a <textarea spellcheck="true"></textarea>, and it did not work.
I tested the Notification Web API and it also did not work.
Note: when running my project with QT_DEBUG_PLUGINS=1 and using the Notification Web API, the application output tab (in Qt Creator) shows:
Found metadata in lib C:/Qt5.4.0/5.4/mingw491_32/plugins/webkit/qtwebkitplugins.dll, metadata=
{
    "IID": "org.qtwebkit.QtWebKit.QtWebKitPlugin",
    "MetaData": {
    },
    "className": "QtWebKitPlugin",
    "debug": false,
    "version": 328704
}
loaded library "C:/Qt5.4.0/5.4/mingw491_32/plugins/webkit/qtwebkitplugins.dll"
QLibraryPrivate::unload succeeded on "C:/Qt5.4.0/5.4/mingw491_32/plugins/webkit/qtwebkitplugins.dll"
QSystemTrayIcon::setVisible: No Icon set
It seems to me that the dll is loaded, it just is not working.
How do I get these features to work in my projects?
For this to work in Qt 5.2+ it is necessary to modify the qwebkitplatformplugin.h file.
Change this:
QT_BEGIN_NAMESPACE
Q_DECLARE_INTERFACE(QWebKitPlatformPlugin, "com.nokia.Qt.WebKit.PlatformPlugin/1.9");
To this:
QT_BEGIN_NAMESPACE
Q_DECLARE_INTERFACE(QWebKitPlatformPlugin,
"org.qt-project.Qt.WebKit.PlatformPlugin/1.9");
If you need compatibility with Qt 4.8, change the code to this:
QT_BEGIN_NAMESPACE
#if QT_VERSION >= 0x050200
Q_DECLARE_INTERFACE(QWebKitPlatformPlugin, "org.qt-project.Qt.WebKit.PlatformPlugin/1.9")
#else
Q_DECLARE_INTERFACE(QWebKitPlatformPlugin, "com.nokia.Qt.WebKit.PlatformPlugin/1.9")
#endif
QT_END_NAMESPACE