Compiling Rust that calls C++ to WASM - c++

I've found How do I use a C library in a Rust library compiled to WebAssembly?, but that approach relies on wasm-merge, which has been discontinued. My problem is the following: I have some C++ code that I would like to call from Rust, so that I have the option to compile the resulting package either to native code for use in mobile apps or to WebAssembly for use in Node.js. At the moment, I have the following setup:
libTest.cpp
extern "C"{
int test_function(int i){
return i;
}
}
lib.rs
use wasm_bindgen::prelude::*;

#[link(name = "Test")]
extern "C" {
    pub fn test_function(i: i32) -> i32;
}

#[wasm_bindgen]
pub fn test_function_js(i: i32) -> i32 {
    let res = unsafe { test_function(i) };
    res
}
build.rs
fn main() {
    cc::Build::new()
        .cpp(true)
        .file("libTest.cpp")
        .compile("libTest.a");
}
This compiles and works when building to native code with a simple cargo build, but it does not work when building to wasm, for which I'm running cargo build --target wasm32-unknown-unknown. There I get these two errors:
= note: rust-lld: error: /[path to my project]/target/wasm32-unknown-unknown/debug/build/rustCpp-cc5e129d4ee03598/out/libTest.a: archive has no index; run ranlib to add one
rust-lld: error: unable to find library -lstdc++
Is this the right way to go about this, and if so, how do I resolve the above errors? If not, how do I best go about calling C++ from Rust and compiling it to wasm?

(This is not really a full answer, but too long for a comment.)
I can compile your example with
cc::Build::new()
    .archiver("llvm-ar")     // Takes care of "archive has no index" - emar might be an alternative
    .cpp_link_stdlib(None)   // Takes care of "unable to find library -lstdc++"
    …                        // rest of your flags
but I'm not sure whether the resulting binary is useful to you. In particular, it contains WASI imports when compiled in debug mode, and you'll probably get linker errors if you start using any interesting functions (e.g. sin).
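To illustrate that last point, here is a purely hypothetical addition to libTest.cpp (test_sin is a made-up function, not part of the original example); pulling in libm or the C++ standard library is typically where the missing-runtime linker errors show up on wasm32-unknown-unknown:
#include <cmath>

extern "C" {
// Using the math library is enough to require a real C/C++ runtime at link time.
double test_sin(double x) {
    return std::sin(x);
}
}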
You could in theory give the C++ compiler a full stdlib to work with through .flag("--sysroot=/usr/share/wasi-sysroot/") (if you have wasi-sdk or wasi-libc++ installed), but
I'm unsure how best to account for differences in where this folder is usually installed (maybe like this)
I think you also have to pass this flag at link time, but I don't know how (it seems to work without it, though)
that would target WASI, and it may not be useful for whatever bindgen-based environment you have in mind.

Related

How to use Swift static library (.a) in C++ project?

I have a pure Swift package that I built with the Swift Package Manager. My Package.swift looks like this:
// File: Package.swift
// swift-tools-version:5.2
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
    name: "SwiftPackage",
    products: [
        .library(
            name: "SwiftPackage",
            type: .static,
            targets: ["SwiftPackage"]),
    ],
    dependencies: [
    ],
    targets: [
        .target(
            name: "SwiftPackage",
            dependencies: []),
        .testTarget(
            name: "SwiftPackageTests",
            dependencies: ["SwiftPackage"]),
    ]
)
This Swift code I'm building contains a public function that I want to call from my C++ code:
// File: SwiftPackage.swift
public func StartWatcher() {
// code ...
}
I created a header file SwiftPackage.hh where I define the StartWatcher function like so:
// File: SwiftPackage.hh
void (*StartWatcher)();
Now I have my main.cc file where I include the SwiftPackage.hh and call the StartWatcher function:
// File: main.cc
#include <SwiftPackage.hh>
int main() {
    StartWatcher();
    return 0;
}
However, when I run the built executable I'm getting the following error
'./swift_package' terminated by signal SIGSEGV (Address boundary error)
Building
My build process is the following:
First, I build the Swift package by running swift build --package-path SwiftPackage. This creates the libSwiftPackage.a library.
Second, I build the C++ project where I link the libSwiftPackage.a library that was created in the previous step:
g++ -std=c++11 -L./SwiftPackage/.build/debug/ main.cc -lSwiftPackage -o swift_package
What am I doing wrong? I suspect that the Swift library isn't properly linked.
Edit
Based on @Acorn's answer I did two things:
Added my StartWatcher declaration in an extern "C" block
Added the attribute @_cdecl("StartWatcher") to my StartWatcher Swift function, which should make sure that the name isn't mangled in the library.
Now I get a different output which is a bunch of messages like this:
Undefined symbols for architecture x86_64:
  "static Foundation.Notification._unconditionallyBridgeFromObjectiveC(__C.NSNotification?) -> Foundation.Notification", referenced from:
      @objc SwiftPackage.AppDelegate.applicationDidFinishLaunching(Foundation.Notification) -> () in libSwiftPackage.a(AppDelegate.swift.o)
      @objc SwiftPackage.AppDelegate.applicationWillTerminate(Foundation.Notification) -> () in libSwiftPackage.a(AppDelegate.swift.o)
It seems to me that there is some kind of problem accessing other libraries that are used in the Swift package?
Summary: this can be made to work (probably), but if this is intended for production code, then the only correct approach is to consult the Swift documentation and, if it is still the case that there is no official support, ask the Swift team how to approach the problem.
You should follow whatever documentation Swift has for exporting functions with the C ABI convention. It is doubtful that doing it blindly will work, and if it does, it is probably by chance and may stop working at any point in the future.
Sadly, there does not seem to be official support for such a thing, at least according to questions like:
Passing a Swift string to C
What is the best way to call into Swift from C?
To make any kind of FFI work unofficially, there are several things to take into account:
Figure out which calling convention Swift follows, including whether it uses hidden parameters or anything like that. The best-case scenario is that Swift uses the usual one on your system.
Check the names of the actual symbols exported by Swift; perhaps they are mangled.
Research whether there are any semantics that the other language's runtime upholds automatically, for instance whether some code needs to be called before/after, or whether something needs to be initialized, etc.
On the C++ side, you should write your declaration in an extern "C" block so that the symbol is not expected to be C++-mangled:
extern "C" {
void StartWatcher();
}
There should be no need to declare it as a function pointer either.
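As an unofficial fallback for the symbol-name point above: if inspecting libSwiftPackage.a (for example with nm) shows the function only under a mangled name, you can bind a C++ declaration to that exact symbol with an asm label. This is only a sketch; the mangled name below is a guess and must be replaced with whatever the archive actually exports:
extern "C" void StartWatcher() __asm__("$s12SwiftPackage12StartWatcheryyF");

int main() {
    StartWatcher();  // resolved against the mangled Swift symbol at link time
    return 0;
}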

llvm JIT add library to module

I am working on a JIT that uses LLVM. The language has a small runtime written in C++, which I compile down to LLVM IR using clang:
clang++ runtime.cu --cuda-gpu-arch=sm_50 -c -emit-llvm
and then load the *.bc files, generate additional IR, and execute on the fly. The reason for the CUDA stuff is that I want to add some GPU acceleration to the runtime. However, this introduces CUDA specific external functions which gives errors such as:
LLVM ERROR: Program used external function 'cudaSetupArgument' which could not be resolved!
As discussed here, this is usually solved by including the appropriate libraries when compiling the program:
g++ main.c cmal.o -L/usr/local/cuda/lib64 -lcudart
However, I am not sure how to include libraries in JITed modules using LLVM. I found this question, which suggested that it used to be possible to add libraries to modules in the JIT like this:
[your module]->addLibrary("m");
Unfortunately, this has been deprecated. Can anyone tell me the best way to do this now? Let me know if I need to provide more information!
Furthermore, I am not really sure if this is the best way to be incorporating GPU offloading into my JIT, so if anyone can point me to a better method then please do :)
Thanks!
EDIT: I am using LLVM 5.0 and the JIT engine I am using is from llvm/ExecutionEngine/ExecutionEngine.h, more specifically I create it like this:
EngineBuilder EB(std::move(module));
ExecutionEngine *EE = EB.create(targetMachine);
You need to teach your JIT engine about other symbols explicitly.
If they are in a dynamic library (dylib, so, dll) then you can just call
sys::DynamicLibrary::LoadLibraryPermanently("path_to_some.dylib")
with a path to the dynamic library.
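For example, to make the CUDA runtime's symbols resolvable, a minimal sketch could look like this (the library path is an assumption based on the -L/usr/local/cuda/lib64 -lcudart flags above; adjust it for your system):
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Support/raw_ostream.h"
#include <string>

std::string err;
// LoadLibraryPermanently returns true on failure.
if (llvm::sys::DynamicLibrary::LoadLibraryPermanently(
        "/usr/local/cuda/lib64/libcudart.so", &err)) {
    llvm::errs() << "Could not load libcudart: " << err << "\n";
}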
If the symbols are in an object file or an archive, then it requires a bit more work: you would need to load them into memory and add to the ExecutionEngine using its APIs.
Here is an example for an object file:
std::string objectFileName("some_object_file.o");

ErrorOr<std::unique_ptr<MemoryBuffer>> buffer =
    MemoryBuffer::getFile(objectFileName.c_str());
if (!buffer) {
    // handle error
}

Expected<std::unique_ptr<ObjectFile>> objectOrError =
    ObjectFile::createObjectFile(buffer.get()->getMemBufferRef());
if (!objectOrError) {
    // handle error
}

std::unique_ptr<ObjectFile> objectFile(std::move(objectOrError.get()));

auto owningObject = OwningBinary<ObjectFile>(std::move(objectFile),
                                             std::move(buffer.get()));
executionEngine.addObjectFile(std::move(owningObject));
For archives, replace the template type ObjectFile with Archive, and call
executionEngine.addArchive(std::move(owningArchive));
at the end.
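A minimal sketch of that archive variant, assuming the same headers and using-declarations as the object-file snippet above (the archive name is just a placeholder), might look like this:
std::string archiveName("some_archive.a");

ErrorOr<std::unique_ptr<MemoryBuffer>> buffer =
    MemoryBuffer::getFile(archiveName.c_str());
if (!buffer) {
    // handle error
}

// Archive::create returns Expected<std::unique_ptr<Archive>>.
Expected<std::unique_ptr<Archive>> archiveOrError =
    Archive::create(buffer.get()->getMemBufferRef());
if (!archiveOrError) {
    // handle error
}

std::unique_ptr<Archive> archive(std::move(archiveOrError.get()));

auto owningArchive = OwningBinary<Archive>(std::move(archive),
                                           std::move(buffer.get()));
executionEngine.addArchive(std::move(owningArchive));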

How to turn off submodule of a C++ library based on preprocessor defined macro

What I'm doing: I'm writing a C++ library that depends on the NetCDF library. For example,
#include <netcdf>

class myLib {
public:
    myLib();
    myLib(const myLib&);
    virtual ~myLib();
    std::string probe_data(std::string & file_path);
...
And the function probe_data uses the functions from NetCDF library.
What is the problem: I have defined a preprocessor macro CANALOGSIO_WITHOUT_NETCDF, because on some systems there is no NetCDF library installed. So I would like to turn off this functionality in my library; for example, the library will still have the probe_data function, but it will simply return "NetCDF not installed".
What would be a good practice for doing that? Thank you!
I'll list two different methods that came to mind for this.
1. Just use #ifdefs in the definition of the function. This keeps the interface uniform and everyone can call it.
class myLib {
...
    std::string probe_data(std::string & file_path) {
#ifdef NETCDF
        return do_real_probe(file_path);
#else
        std::cerr << "Not implemented function" << std::endl;
        return "";
#endif
    }
...
This requires separate compilation for separate systems. You need to supply the definition of the macro on the compilation command line, e.g. with gcc:
g++ -DNETCDF ..
2. The second method would be based on the library approach. You can compile separate implementation libraries for different systems. Then at link time you can choose which static library to use (or at run time for dynamic libs). Most likely you would only deliver the library that works on the target system and nothing else. You might get away without #ifdefs if you choose so; just have different implementations in different files:
sys1.cpp
string probe(string &) { return do_probe(); }
g++ sys1.cpp -o sys1.so -shared -fPIC
sys2.cpp
string probe(string &) { cerr << message; return ""; }
g++ sys2.cpp -o sys2.so -shared -fPIC
Now you just need to deliver the correct library (sys1 or sys2) to the correct system. Or a correct statically linked image of your program.
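To keep the calling code identical for both variants, both files would implement a single shared declaration; here is a minimal sketch of such a header (probe.h is a hypothetical name, not something from the answer above):
// probe.h - common interface implemented by both sys1.cpp and sys2.cpp
#pragma once
#include <string>

std::string probe(std::string &file_path);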
There are multiple ways to use conditional compilation to do this, so you decide.

Is it possible to determine (at runtime) if a function has been implemented?

One of Objective C's primary features is simple introspection. A typical use of this functionality is the ability to check some method (function), to make sure it indeed exists, before calling it.
Whereas the following code will throw an error at runtime (although it compiles just fine (Apple LLVM version 7.0.2 (clang-700.1.81)))...
@import Foundation;

@interface Maybe : NSObject
+ (void) maybeNot;
@end

@implementation Maybe
@end

int main (){ [Maybe maybeNot]; }
By adding one simple condition before the call...
if ([Maybe respondsToSelector:@selector(maybeNot)])
We can wait till runtime to decide whether or not to call the method.
Is there any way to do this with "standard" C (C11) or C++ (C++14)?
i.e....
extern void callMeIfYouDare();
int main() { /* if (...) */ callMeIfYouDare(); }
I guess I should also mention that I am testing/using this in a Darwin runtime environment.
On GNU gcc / MinGW32 / Cygwin you can use weak symbols:
#include <stdio.h>

extern void __attribute__((weak)) callMeIfYouDare();
void (*callMePtr)() = &callMeIfYouDare;

int main() {
    if (callMePtr) {
        printf("Calling...\n");
        callMePtr();
    } else {
        printf("callMeIfYouDare() unresolved\n");
    }
}
Compile and run:
$ g++ test_undef.cpp -o test_undef.exe
$ ./test_undef.exe
callMeIfYouDare() unresolved
If you link it with a library that defines callMeIfYouDare, though, it will call it. Note that going via the pointer is necessary on MinGW32/Cygwin at least; placing a direct call to callMeIfYouDare() will result in a truncated relocation by default, which, unless you want to play with linker scripts, is unavoidable.
Using Visual Studio, you might be able to get __declspec(selectany) to do the same trick: GCC style weak linking in Visual Studio?
Update #1: For Xcode you can use __attribute__((weak_import)) instead, according to: Frameworks and Weak Linking
Update #2: For Xcode based on "Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)", I managed to resolve the issue by compiling with the following command:
g++ test_undef.cpp -undefined dynamic_lookup -o test_undef
and leaving __attribute__((weak)) as it is for the other platforms.
If you can see that a function of an object (not a pointer) is called in the source code and the code compiles successfully, then the function does exist and no checking is needed.
If a function is being called via a pointer, then you assume your pointer is of the type of the class that has that function. To check whether that is the case, you use casting:
auto* p = dynamic_cast<YourClass*>(somepointer);
if (p != nullptr)
    p->execute();
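For context, here is a self-contained sketch of that pattern (Base is a placeholder class added here; dynamic_cast requires the base class to be polymorphic, i.e. to have at least one virtual function):
#include <iostream>

struct Base {
    virtual ~Base() = default;  // makes the hierarchy polymorphic
};

struct YourClass : Base {
    void execute() { std::cout << "executing\n"; }
};

int main() {
    Base *somepointer = new YourClass;  // dynamic type unknown to the caller
    if (auto *p = dynamic_cast<YourClass *>(somepointer)) {
        p->execute();  // only reached when the cast succeeds
    }
    delete somepointer;
}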
C and C++ don't have introspection. You could add some with an additional layer (look at Qt's metaobject system or GTK's GObject introspection for examples); you might consider customizing GCC with MELT to get some introspection... (but that would take weeks). You could have some additional script or tool which emits C or C++ code related to your introspection needs (SWIG could be inspirational).
In your particular case, you might want to use weak symbols (at least on Linux). Perhaps use the relevant function attribute, so code:
extern void perhapshere(void) __attribute__((weak));

if (perhapshere)
    perhapshere();
and you might even make that shorter with some macro.
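For instance, a hypothetical helper macro along those lines (the name is arbitrary) could look like this:
// Call a weakly declared zero-argument function only if the linker resolved it.
#define CALL_IF_PRESENT(fn) do { if (fn) fn(); } while (0)

// usage:
CALL_IF_PRESENT(perhapshere);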
Maybe you just want to load some plugin with dlopen(3) and use dlsym(3) to find symbols in it (or even in the whole program which you would link with -rdynamic, by giving the NULL path to dlopen and using dlsym on the obtained handle); be aware that C++ uses name mangling.
So you might try
void *mainhdl = dlopen(NULL, RTLD_NOW);
if (!mainhdl) {
    fprintf(stderr, "dlopen failed %s\n", dlerror());
    exit(EXIT_FAILURE);
}
then later:
typedef void voidvoidsig_t (void); // the signature of perhapshere
void *ad = dlsym(mainhdl, "perhapshere");
if (ad != NULL) {
    voidvoidsig_t *funptr = (voidvoidsig_t *)ad;
    (*funptr)();
}

Go shared library as C++ plugin

I have a project where I would like to load Go plugins inside a C++ application.
After a lot of research, it is not clear for me whether or not Go supports this.
I encountered a lot of discussions pointing out the bad habits of dynamic linking, advocating IPC instead. Moreover, it is not clear to me whether dynamic linking is intended by the language or not (new Go philosophy?).
cgo provides the ability to call C from Go or Go from C (inside Go), but not from plain old C. Or does it?
gc doesn't seem to support shared libraries (even if https://code.google.com/p/go/issues/detail?id=256 mentions it does)
gccgo supports Go shared libraries, but I couldn't make it work (probably because the main entry point is not in Go ...)
SWIG doesn't seem to help either :(
Apparently something is going on upstream as well (https://codereview.appspot.com/7304104/)
main.c
extern void Print(void) __asm__ ("example.main.Print");
int main() {
    Print();
}
print.go
package main
import "fmt"
func Print() {
    fmt.Printf("hello, world\n")
}
Makefile:
all: print.o main.c
	gcc main.c -L. -lprint -o main

print.o: print.go
	gccgo -fno-split-stack -fgo-prefix=example -fPIC -c print.go -o print.o
	gccgo -shared print.o -o libprint.so
Output :
/usr/lib/libgo.so.3: undefined reference to `main.main'
/usr/lib/libgo.so.3: undefined reference to `__go_init_main'
Is there a solution for that? What is the best approach? Forking + IPC?
References:
cgo - go wiki
callback in cgo
c callbacks and non go threads
cgo - golang
call go function from c
link c code with go code
I don't think you can embed Go into C. However, you can embed C into Go, and with a little stub C program you can call into C first thing, which is the next best thing! Cgo definitely supports linking with shared libraries, so maybe this approach will work for you.
Like this
main.go
// Stub Go program to call cmain() in C
package main

// extern int cmain(void);
import "C"

func main() {
    C.cmain()
}
main.c
#include <stdio.h>

// Defined in Go
extern void Print(void);

// C main program
int cmain() {
    printf("Hello from C\n");
    Print();
    return 0;
}
print.go
package main

import "fmt"
import "C"

//export Print
func Print() {
    fmt.Printf("Hello from Go\n")
}
Compile with go build; it produces this output when you run it:
Hello from C
Hello from Go
AFAIK, you cannot compile a Go package to a shared library with 'gc' ATM; it may change in the future. There might be some chance with 'gccgo', as 'libgo' (the Go runtime, I suppose) is a shared library already. I guess the missing piece is then only to properly initialize the runtime, which normally a Go command handles automatically.
The gccgo expert is I. L. Taylor; he is reachable on the golang-nuts mailing list almost daily. I suggest asking him directly.
PS: Other problems possibly include the interaction of the Go garbage collector and the (C++) process memory etc. Maybe I'm too optimistic and it is not at all feasible.