Dealing with decorated external binaries when building a package with Rcpp - c++

I am using a Windows 32-bit machine to compile an R package developed using Rcpp and built with Rtools 3.4 in RStudio 1.0.28. I keep getting an error about the @ signs within the 32-bit external dll (NYCgeo.dll):
thefile.o:thefile.cpp:(.text+0x913): undefined reference to `_imp__NYCgeo@8'
collect2.exe: error: ld returned 1 exit status
Sure enough, when I opened the 32-bit NYCgeo.dll in a text editor, I found the @8 suffix. This is weird because when I developed the 64-bit version, the 64-bit NYCgeo.dll did not contain the @8 suffix and I did not have any errors. Anyway, I read about the --kill-at option and was wondering where I would include it. I tried RStudio's Configure Build Tools settings as well as my Makevars.win.in file but had no luck.
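If I understand the decoration correctly, on 32-bit Windows a __stdcall function is exported under the name name@N, where N is the number of bytes of arguments. The declaration below is only an illustration (the real NYCgeo prototype is different), but any __stdcall function taking 8 bytes of arguments ends up with the @8 suffix seen in the linker error:
/* Illustration only -- NOT the real NYCgeo prototype.
   Two 4-byte pointer arguments = 8 bytes, so the 32-bit tool chain
   decorates the export as NYCgeo@8, and the corresponding import
   symbol is what shows up in the undefined-reference error above. */
int __stdcall NYCgeo(char *input_record, char *output_record);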
Response to @Dirk
Updated title as requested.
I am compiling the package from within RStudio using Rtools so I assumed it might have something to do with RStudio's Project Options.
I have spent the past week checking existing documentation. This post, this post, and this post describe the issue I am having. My issue is that I do not know where to specify either "--kill-at" or "--add-stdcall-alias".
The whole point of my package is to leverage NYC Dept of City Planning's geocoding software. I did not "just throw" the binary NYCgeo.dll "into the mix." In fact, my 64-bit version of the package works fine. My issue is with developing the 32-bit version... specifically, the presence of an @8 suffix in the NYCgeo.dll binary, which is causing an error.
NYCgeo.dll is a C binary. I am not using Visual Studio.
The previous question you mentioned dealt with creating Makevars files for the 64-bit version of my package (thanks again, @Coatless, for providing useful information). The 64-bit NYCgeo.dll binary did not contain an @8 suffix.
UPDATE:
I tried to create a better title for this question. The question pertains to creating an R package which utilizes functionality from another piece of software... in my case, geocoding software. Specifically, the issue I experienced is that the 32-bit version of the geocoding software has a decorated DLL file while the 64-bit version does not. A decorated binary contains @ symbols which trigger an error during compilation. My task was to devise a way to demangle (not sure if that is a real word) the 32-bit DLL but leave the 64-bit DLL alone.
Many thanks.
Gretchen

The rJava package was incredibly helpful in understanding how to deal with decorated binaries.
I created a def file named NYCgeo.def and saved it in my src directory:
LIBRARY NYCGEO.DLL
EXPORTS
NYCgeo@8
I then updated my Makevars.win.in file which is also in my src directory:
GBAT_PATH = @GBAT_PATH@
GBAT_DLL = @GBAT_DLL@
PKG_LIBS = -L"$(GBAT_PATH)/Bin" -l$(GBAT_DLL)
PKG_CPPFLAGS = -I"$(GBAT_PATH)/Include"
ifeq "${R_ARCH}" "/i386"
$(SHLIB): $(OBJECTS) NYCGEO.a
NYCGEO.a: NYCGEO.def
	$(DLLTOOL) -k -d NYCGEO.def -l NYCGEO.a -D "$(GBAT_PATH)/Bin/$(GBAT_DLL)" $(DT_ARCH)
endif
I am now able to compile the package on both 32-bit and 64-bit machines running Windows.
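For completeness: the @GBAT_PATH@ and @GBAT_DLL@ placeholders in Makevars.win.in get filled in at install time. A minimal sketch of the kind of configure.win substitution that pairs with such a file (the default path and library name below are only examples, not the real install locations):
#!/bin/sh
# configure.win (sketch): generate src/Makevars.win from src/Makevars.win.in
# by substituting the @GBAT_PATH@ / @GBAT_DLL@ placeholders.
: "${GBAT_PATH:=C:/Program Files (x86)/GBAT}"   # example default only
: "${GBAT_DLL:=NYCGEO}"                         # example default only

sed -e "s|@GBAT_PATH@|${GBAT_PATH}|g" \
    -e "s|@GBAT_DLL@|${GBAT_DLL}|g" \
    src/Makevars.win.in > src/Makevars.win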

Related

`cabal repl` causes GHC panic on simple project with C++ files

I've uploaded the project as a zip file so you can try it out.
https://dl.dropboxusercontent.com/u/35032740/ShareX/2015/11/Buggy.zip
I wanted to write a wrapper around the clipper library. The code compiles fine with cabal build and runs with cabal run, but cabal repl produces this error:
Preprocessing executable 'Buggy' for Buggy-0.1.0.0...
GHCi, version 7.10.2: http://www.haskell.org/ghc/ :? for help
GHC runtime linker: fatal error: I found a duplicate definition for symbol
_ZNSt6vectorIN10ClipperLib8IntPointESaIS1_EE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPS1_S3_EERKS1_
whilst processing object file
dist\build\Buggy\Buggy-tmp\wrapper.o
This could be caused by:
* Loading two different object files which export the same symbol
* Specifying the same object file twice on the GHCi command line
* An incorrect `package.conf' entry, causing some object to be
loaded twice.
ghc.exe: panic! (the 'impossible' happened)
(GHC version 7.10.2 for x86_64-unknown-mingw32):
loadObj "dist\\build\\Buggy\\Buggy-tmp\\wrapper.o": failed
Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug
For reference, here's the cabal file:
-- Initial Buggy.cabal generated by cabal init. For further documentation,
-- see http://haskell.org/cabal/users-guide/
name: Buggy
version: 0.1.0.0
-- synopsis:
-- description:
-- license:
license-file: LICENSE
author: Luka Horvat
maintainer: lukahorvat9@gmail.com
-- copyright:
-- category:
build-type: Simple
-- extra-source-files:
cabal-version: >=1.10
executable Buggy
  main-is: Main.hs
  c-sources: clipper.cpp
           , wrapper.cpp
  -- other-modules:
  -- other-extensions:
  build-depends: base >=4.8 && <4.9
  -- hs-source-dirs:
  default-language: Haskell2010
  extra-libraries: stdc++
Any ideas what the cause might be here?
I'm running Windows 10, 64bit.
I don't know the details of object file formats on Windows, so I'm guessing a bit.
Probably clipper.o and wrapper.o both define a weak symbol named _ZNSt6vectorIN10ClipperLib8IntPointESaIS1_EE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPS1_S3_EERKS1_. (I see the same on Linux.) This probably came from a template instantiation (of vector). Weak symbols instruct the system linker to just pick any copy of the symbol if it encounters duplicates.
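If you want to confirm that, nm should show that symbol with a weak (W) binding in both object files; something along these lines, with the paths guessed from the error message above:
# -C demangles the C++ name; a 'W' or 'w' in the symbol-type column marks a weak symbol
nm -C dist/build/Buggy/Buggy-tmp/wrapper.o | grep _M_insert_aux
nm -C dist/build/Buggy/Buggy-tmp/clipper.o | grep _M_insert_aux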
GHCi on Windows doesn't use the system linker, it has its own runtime linker that can load object files into itself while it runs. As a result it is generally not feature compatible with the system linker. Probably the runtime linker does not understand weak symbols, at least on Windows (https://ghc.haskell.org/trac/ghc/ticket/3333). From the error you got, we can assume that it treats them as regular symbols, and two regular symbols are not allowed to have the same name.
As a workaround, you may be able to build your C++ files with -fno-weak as described in https://stackoverflow.com/a/26454930/190376.
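In a .cabal file like the one above, the natural place to try that is probably cc-options in the executable stanza (untested; I am not certain -fno-weak is sufficient here):
-- sketch: pass -fno-weak to the C/C++ compiler used for the c-sources
executable Buggy
  cc-options: -fno-weak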
If that doesn't work, an alternative is to build your C++ files into a DLL, which you can have GHCi load using the system dynamic loader, avoiding this whole issue. On Linux this would look like
g++ wrapper.cpp clipper.cpp -shared -fPIC -o libclipper.so
ghci -L. -lclipper
though I imagine the details are different on Windows.
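My guess at the MinGW/Windows equivalent would be something like the following, but I have not tested it:
# untested guess at the Windows variant: build the C++ sources into a DLL
g++ -shared wrapper.cpp clipper.cpp -o clipper.dll -Wl,--out-implib,libclipper.dll.a
# then let GHCi load it via the system dynamic loader
ghci -L. -lclipper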
The specific error isn't what I'm used to seeing, but those backslashes say you're on Windows, and this otherwise looks like GHC bug #3242 which has been causing pain for years now. Good news: the cause was finally isolated two weeks ago. Bad news: the fix didn't make the deadline for 7.10.3, though at least the 8.0.1 milestone seems secure at this point.
Probably still worth posting your error text to that bug's thread; mine is only an educated guess, someone there will know for sure.

Unable to run Woden Physics Example in Pharo

I am trying to run the Woden Physics Example inside Pharo, which involves getting Bullet properly compiled and the Smalltalk bindings properly installed in Pharo.
I am using Linux Mint 17 x64.
But NativeBoost seems unable to load the compiled libraries. I have been using the sources provided here:
https://github.com/ronsaldo/bullet-pharo
https://github.com/ronsaldo/swig
I built the modified version of swig as well as the bullet libraries and bindings with the provided build scripts.
I have also double-checked that the bullet libraries are 32-bit.
Opening up the Woden physics example returns this error:
failed to get a symbol address:
PharoNB_new_BTDefaultCollisionConfiguration__SWIG_1
When examining the call stack in the debugger, it turns out that the module handle is 0.
I verified this by executing the same message as
BulletCInterface nbLibraryNameOrHandle
executes:
NativeBoost forCurrentPlatform loadModule: 'BulletPharo'
This message returns 0. I tried to specify the full path to libPharoBullet.so in the workspace, like:
NativeBoost forCurrentPlatform loadModule:
'/home/martin/.local/share/Pharo/bullet-pharo/libBulletPharo.so'
with the same result. I also verified it with a 32-bit system library of mine (liblzma), and there NativeBoost was able to load it, as it returned a non-zero handle.
So I suspect something during compilation went wrong...
I also did
readelf -h libPharoBullet.so
and its ABI was "UNIX - GNU" while the ABI of pharo-vm is "UNIX - System V"
Could this be the problem here?
How can I force the ABI to be System V when compiling? I use gcc 4.8.2.
Or what steps could I otherwise perform?
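One lead I am still checking (pure guesswork on my part): the "UNIX - GNU" OS/ABI tag is usually set by the toolchain when an object contains GNU-specific symbol bindings such as STB_GNU_UNIQUE, which g++ emits for certain template and static symbols, and -fno-gnu-unique is the GCC flag that avoids them. The commands below are what I plan to try; I have not confirmed that this is what breaks the NativeBoost load:
# check the OS/ABI field and look for GNU-unique symbol bindings
readelf -h libPharoBullet.so | grep 'OS/ABI'
readelf -s libPharoBullet.so | grep -i 'UNIQUE'
# if UNIQUE bindings show up, rebuild the bindings with the extra flag, e.g.
#   g++ -m32 -fno-gnu-unique ...
# (untested for this project)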

LLVM libc++ not compiling with clang 3.3 on Mac OS

I have just downloaded clang 3.3 (homebrew) from the LLVM web page to my Mac (OS X 10.8.4), but get this compiler error when using -std=c++11 -stdlib=libc++:
In file included from /usr/include/c++/v1/string:434:
In file included from /usr/include/c++/v1/algorithm:594:
In file included from /usr/include/c++/v1/memory:590:
In file included from /usr/include/c++/v1/typeinfo:61:
/usr/include/c++/v1/exception:146:5: error: an attribute list cannot appear here
_LIBCPP_NORETURN friend void rethrow_exception(exception_ptr);
^~~~~~~~~~~~~~~~
/usr/include/c++/v1/__config:190:28: note: expanded from macro '_LIBCPP_NORETURN'
# define _LIBCPP_NORETURN [[noreturn]]
^~~~~~~~~~~~
It seems that I also need another libc++ (even though it was said to be 100% complete on Mac...), but I cannot find any. Any help appreciated. Just for your info:
> clang++ -v
clang version 3.3 (tags/RELEASE_33/final)
Target: x86_64-apple-darwin12.4.0
Thread model: posix
And, yes, I googled it and found this: http://comments.gmane.org/gmane.comp.compilers.llvm.bugs/24138 claiming it's resolved in libc++ trunk ???
Okay, as suggested by Howard, I've downloaded tip-of-the-trunk libc++ into /opt/local/share/libcxx, but have trouble building it. The manual says to cd libcxx/lib, export TRIPLE=-apple-, and run ./buildit. I presume this implies bash (I'm usually a tcsh user, so I moved my .tcshrc, got a new shell and started bash). I did that and the compilations worked, but the library build failed. Apparently ./buildit doesn't see TRIPLE=-apple-, as it picks the wrong LDSHARED_FLAG (not the one on line 81, but the one on line 103, which is to be used if $TRIPLE is not set), even though echo $TRIPLE yields -apple- as it should. When I add the statement echo TRIPLE = $TRIPLE at the top of buildit, it reports nothing. How come? What is wrong here?
The failure was that, because the wrong LDSHARED_FLAG was picked, the loading didn't work (ld complained about the unknown option -soname, which, I think, makes sense under Linux). I don't know why buildit (a #!/bin/sh file) didn't pick up the TRIPLE environment variable (it did pick up several unwanted ones such as CXX and CC). I now simply added TRIPLE=-apple- at the top of that file and it did build the library. However, the loader spat out several warnings, all of which were of the form
ld: warning: direct access in ___cxa_bad_typeid to global weak symbol typeinfo for std::bad_typeid means the weak symbol cannot be overridden at runtime. This was likely caused by different translation units being compiled with different visibility settings.
But most importantly, it works (the compilation at least; I have yet to test the library). I have one final question. The advice was to use -I and -L to tell the compiler about the whereabouts of this version. Is it not possible to put it into the usual place /usr/include/c++/v1/? Note that Xcode has its version somewhere else anyway, and I had put in a symbolic link (/usr/include/c++/v1/) to that one to get my homebrew clang 3.2 working (after some Xcode update). What about the library? Can I also put it in a standard place?
Here is the home page of libc++:
http://libcxx.llvm.org
You can download the tip-of-trunk libc++ from there. You can tell clang to point to your download with -nostdinc++ -I<path-to-libc++>/include. You can also tell clang to link to your tip-of-trunk libc++ with -L<path-to-libc++>/lib and export DYLD_LIBRARY_PATH=<path-to-libcxx>/lib. The directions are all on the libc++ home page.
Xcode is the easiest way to get clang + libc++. But if you want the very latest, this is the place to go.
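For example, with the checkout in /opt/local/share/libcxx mentioned above, the command line might look roughly like this (my_prog.cpp is just a placeholder name):
LIBCXX=/opt/local/share/libcxx
clang++ -std=c++11 -stdlib=libc++ -nostdinc++ -I$LIBCXX/include \
        -L$LIBCXX/lib my_prog.cpp -o my_prog
export DYLD_LIBRARY_PATH=$LIBCXX/lib   # so the tip-of-trunk dylib is used at run time
./my_prog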
Congratulations!
Don't worry about the ld warning. It is a harmless ld bug that will be fixed in a future release. I see it on 10.8.4 too and it doesn't hurt anything.
The libc++ headers no longer live at /usr/include/c++/v1. Xcode has migrated them into itself. Having libc++ headers at /usr/include/c++/v1 from older installs has been a source of confusion and bugs. I regularly use -nostdinc++ -I to point to the libc++ headers I want (I often have several versions going at the same time), and that works well for me.
It is possible for you to replace your /usr/lib/libc++.1.dylib with the one you have built. I do not recommend doing this. I sometimes have to, in order to do a proper test, but I always do so very carefully, because sometimes this causes me to have to reboot onto a backup disk and restore my /usr/lib to its original state. If you do go this route, it is a very good idea to have a backup of the original /usr/lib/libc++.1.dylib very handy.
I recommend instead -L on the command line, and export DYLD_LIBRARY_PATH=<path-to-libcxx>/lib in the shell. More than one person (including myself) has gotten their computer into a really nasty place by not following this advice.
If you run testit (under test/), all you need is DYLD_LIBRARY_PATH in that shell. The testit script is set up to point to the right places without an install.
Also I recommend figuring out why you had to modify buildit. No one else is seeing that behavior. printenv on your command line may help in this endeavor.
libc++ is updated often. We try to keep tip-of-trunk always in a shippable state.

Creating R package containing C++ on Windows

My goal is to create a package in R with C++ code, so my question is: how?
I am following the tutorial http://www.stat.columbia.edu/~gelman/stuff_for_blog/AlanRPackageTutorial.pdf on creating an R package containing C++ code. The specific code I'm trying to compile and package is exactly as described in the tutorial.
R CMD SHLIB seems to be working, creating a .dll file.
I can load it in R using dyn.load() and test it on simulated data (as described in the tutorial).
R CMD INSTALL is where the problem begins. I have tried two things, encountering two different, supposedly related errors:
1) The tutorial says the NAMESPACE file is supposed to contain the code:
useDynLib(XDemo)
export(XDemoAutoC)
When it does, R CMD INSTALL fails with the error:
Error in inDL(x,as.logical(local), as.logical(now),...): unable to
load shared object 'C:/.../libs/i386/XDemo.dll': Loadlibrary failure:
1% is not a valid Win32-program
2) Removing the above-mentioned lines from the NAMESPACE file results in the package installing. I can successfully load it in R, but when I try to use the R function that makes a .C() call to the C++ function, I get another error:
library(newpackage)
ls(package:newpackage)
[[1]] "XDemoAutoC"
Warning message:
In ls(package:newpackage) :
‘package:newpackage’ converted to character string
XDemoAutoC(c(1,2,3,4))
Error in .C("DemoAutoCor", OutVec = as.double(vector("numeric", OutLength)), :
C symbol name "DemoAutoCor" not in load table
I'm running R 2.15.2 on 64-bit Windows and using 64-bit R.
I read the following post with a similar problem:
http://r.789695.n4.nabble.com/Include-C-DLL-error-in-C-symbol-name-not-in-load-table-td3464021.html
Except they mention nothing about the NAMESPACE matter.
Also I read this post:
Problem with loading compiled c code in R x64 using dyn.load
So I am thinking: the fact that I am able to use dyn.load() in 64-bit R means that I have successfully created a 64-bit .dll. Assuming that the NAMESPACE file is supposed to be left as in the tutorial - hopefully fixing the "not in load table" error - this means I should focus on fixing problem one. That problem seems to be caused by something related to 32-bit. I have used Dependency Walker on the .dll file, but I am not sure how to interpret the results.
I really don't have any ideas on how to fix this problem, so any suggestions on what to do would be welcome.
I think you are doing it wrong. Two quick suggestions:
Read the Writing R Extensions manual, which was written to explain just this: writing R extensions, including those with compiled code.
Have a look at Rcpp, which makes R and C++ extensions, including package building, so much easier. Or so we think. Writing a package is as easy as calling Rcpp.package.skeleton(). The documentation in 1) still helps.
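As a quick sketch of that route (the package name here is arbitrary):
# create a skeleton package containing Rcpp example code, then install it
Rscript -e 'Rcpp::Rcpp.package.skeleton("demoPkg")'
R CMD INSTALL demoPkg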
That said, if R CMD INSTALL fails you may have some mixup in your $PATH. Never ever mix MinGW and Cygwin. Make sure no Cygwin DLLs are found when you build or call R. Path order matters greatly. See the manual for details.

DOS-reported error: Bad file number

I have a batch file that tries to compile a static library using Borland C++ Builder 6.0
It is called from Borland make (makefile created with bpr2mak) which is called from a .bat file (used to compile the whole project with Visual Studio and some Borland C++ Builder legacy projects), which is called from a bash shell script running inside Cygwin.
When I run the .bat file directly from a Cygwin shell, it runs OK, but when it is run from a program calling Cygwin with Boost::Process::launcher I get this error:
C:\ARQUIV~1\Borland\CBUILD~1\Bin\..\BIN\TLib /u bclibs.lib @MAKE0000.@@@
DOS-reported error: Bad file number
TLIB 4.5 Copyright (c) 1987, 1999 Inprise Corporation
opening 'MAKE0000.@@@'
** error 1 ** deleting bclibs.lib
It's a complicated scenario, but this program which calls Cygwin is run whenever we need to build our software package, which needs to be built for various Linux distros and for Windows 32- and 64-bit.
Note: it's the only Borland project failing; the others compile just fine. (It's also the only static library using Borland, so it could be some problem with the TLib tool.)
The problem was that TLib does not like having its output redirected (seen here) without having an input pipe as well. Solved by creating an input pipe in the Boost::Process::launcher using set_stdin_behavior.
I'm just guessing here, but this may have to do with long filenames and/or spaces in paths.
1) Modify your makefile so it saves the current environment to a file immediately before executing the failing command (set > d:\env.txt & echo CD=%CD% >> d:\env.txt). Then run it both ways (directly and via the program) and compare the environments of the good run and the bad run.
2) Using filemon from Sysinternals, capture logs of disk access in both cases (these logs are going to be huge, though you can uncheck everything except Open in the filter to reduce the size). Again, compare and check for clues...
3) Try installing everything involved to paths conforming to the 8.3 scheme.
This error is not related to C++ itself. It happens when your build script opens too many files (more than the limit defined in the DOS command processor environment). To resolve this issue, try setting the value of the files variable to 253. For Windows XP this variable is defined in the file %WINDIR%\system32\config.nt.
files=253
It seems to be a known bug in the Borland C++ tools. Here is a description and a possible workaround for this issue:
Problem: Some static Lib projects will not link correctly when compiled. You might see something like this:
J:\Borland\CBUILD~1\bin\..\BIN\TLib /u debug\jpegD.lib @MAKE0000.@@@
DOS-reported error: Bad file number
TLIB 4.5 Copyright (c) 1987, 1999 Inprise Corporation
opening 'MAKE0000.@@@'
** error 1 ** deleting debug\jpegD.lib
MAKE failed, returned : 1
Workaround: In some cases (where the "Bad file number" error is seen) it may be possible to work around this by specifying -tDEFLIB.BMK in the BPR2MAKE Options field and turning off the "Capture Make Output" option.
I have not tested it, but I hope that helps.