Linux linking to any available library version - c++

When I compile my binary (an addon for Node.js) on Ubuntu 13.10, the linker picks libudev.so.1 to link against.
I then copy the binary to an Ubuntu 12.04 machine and run it, and I get an error that libudev.so.1 cannot be found. On Ubuntu 12.04, libudev.so.0 is installed.
I pass gcc the parameter -ludev.
The binary expects libudev.so.1. I checked it with this command:
$> strings bin | grep udev
$> ...
$> libudev.so.1
How can I tell the linker that it should accept whichever libudev version the OS provides, so that the binary would require something like libudev.so*?
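As a side note, strings can match any text in the file; the required soname can be read directly from the binary's dynamic section with the standard binutils tools (bin is the binary from the question, output abbreviated):
$ readelf -d bin | grep NEEDED
 0x0000000000000001 (NEEDED)  Shared library: [libudev.so.1]
$ objdump -p bin | grep NEEDED
  NEEDED   libudev.so.1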

In the Linux world it is difficult to compile and link on one machine and then copy the binary to another machine and run it there. There are many variations between distributions and, as you have experienced, not all of them are compatible with each other, which makes porting binaries difficult, if not outright impossible. It might work with some setups and not with others. You have two options:
I assume your 12.04 machine is a production environment, so you cannot do whatever you want on it. In that case, create an identical (virtual) Ubuntu 12.04 machine and compile and link on it, then copy the executable to the 12.04 machine you need to run it on. Chances are it will work without problems.
If the assumption about the production environment does not hold, install the compiler and the necessary build environment on the target machine and compile the source there. That way you know it will always work on that machine.

On your 12.04 machine, find out where libudev.so.0 is located, then make a symbolic link to that library named libudev.so.1 and see whether the binary runs.
To make the symbolic link (ln -s takes the existing target first, then the name of the new link; adjust the paths to the location you found):
ln -sf /opt/lib/libudev.so.0 /opt/lib/libudev.so.1
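Once the link exists (and its directory is on the loader's search path), ldd should resolve the dependency; a quick check, with output abbreviated and the resolved path being wherever you created the link:
$ ldd ./bin | grep udev
    libudev.so.1 => /opt/lib/libudev.so.1 (0x...)
Keep in mind that the soname was bumped for a reason: if the binary uses an interface that changed between .so.0 and .so.1, it may still misbehave at runtime even though it now starts.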

Related

How to run g++-6 on debian 10, need to compile older buildroot system (Cross compiling)

How to run g++-6 on Debian 10? I need to compile older buildroot files (ncurses (host-ncurses-5.9) is crashing).
I have tried to patch the files in the buildroot, but it is like walking into a swamp:
fixing one problem only to find the next problem.
I tried compiling the 6.3 compiler from source, but that crashes with the latest gcc-8 compiler.
Any suggestions? (I always assumed that older compilers could be compiled with newer compilers.)
My other options are:
* Running a virtual machine (VM or Docker) with Debian Jessie
* Compiling an older compiler with a Docker GCC compiler (no idea if this works)
* Maybe turning off the compilation of the local files in buildroot? (Could not find any info on this)
There is a gcc-6 package available in Debian, so you just need to sudo apt install gcc-6.
There is no reason to compile gcc from source unless you need a very specific version, but even then Docker is the far easier solution, since gcc has an official repository on Docker Hub. I'd also double-check that you have the proper ncurses dev library installed.
The overall best solution, though, is to containerize the correct build environment (compiler, libraries, etc.). That ensures you'll always be able to build the product, especially if a refactor is not viable.
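For example, assuming the gcc:6 tag is still available on Docker Hub, a throwaway build inside that image could look roughly like this (the mount point and make target are placeholders):
# run the containerized g++ 6 against the sources in the current directory
$ docker pull gcc:6
$ docker run --rm -v "$PWD":/work -w /work gcc:6 g++ --version
$ docker run --rm -v "$PWD":/work -w /work gcc:6 make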

Running linux g++ compiled code on Mac

Is it possible to compile C++ code on Linux using g++ and run the code on Mac OS X? I have a few C++ programs that use one .cpp file, a few .h files, and a Makefile altogether that produce a .o file that I typically run through the terminal. However, I'd like to find a way to send only the executable to my partner's home Mac so he may review my program locally. (I've also used a few of these programs for automated math calculations, so it would be very convenient to run them locally.) I understand OS X typically uses .app bundles, but I'm not extremely familiar with how this works. Will the .o file (or ./a.out that's more common around here) simply run on OS X? I'd rather not install Xcode on this machine if I don't have to.
Thanks a ton
It is possible to compile C++ on Linux and produce an executable for OS X. However, you have to compile the code in a special way, called 'cross compilation'. It's not particularly simple to set up cross compilation, and you need certain files from the platform you're targeting.
It's much simpler to just compile directly on the target platform.
If the Mac has a recent version of OS X installed you can easily install the necessary command line tools: just try to run one of them, or run the command xcode-select --install, and OS X will ask to install the command line tools. (This installs just the tools necessary for compilation on the command line, not the entire Xcode application.)
I understand OSX typically uses .app bundles, but I'm not extremely familiar with how this works.
You don't need to worry about .app bundles for simple C++ programs. OS X can run regular executable files just like linux. (Though the executable file format is different: OS X uses the Mach-O format instead of Linux's ELF format.)
Will the .o file (or ./a.out that's more common around here) simply run on OSX?
.o files, called 'object' files, don't run on their own anyway; they have to be 'linked' into an executable file. The default name for executable files created by the gcc toolchain is 'a.out' (as specified in the POSIX standard).
If you set up cross compilation to OS X then, yes, you could produce a.out files that would just run on OS X. The a.out files you produce normally for Linux, i.e. without cross compilation, won't run on OS X.
I'd rather not install xCode on this machine if I don't have to.
Xcode doesn't run on Linux anyway, so you couldn't run it. Instead you'd get a version of gcc that cross-compiles to OS X, or you'd install a different compiler, clang (and linker, lld instead of ld or gold).
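A quick way to see the difference in executable formats is the file utility; the output below is abbreviated, but illustrates ELF versus Mach-O:
$ file a.out            # on Linux
a.out: ELF 64-bit LSB executable, x86-64 ...
$ file a.out            # on OS X / macOS
a.out: Mach-O 64-bit executable x86_64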
A simple answer to your question is no. You cannot compile a program under Linux and expect it to run on Mac OS X.
However, Mac OS X is just another UNIX OS under the hood, and you can build your project there with Make and GCC. If your partner doesn't know how to do this, I would suggest asking him to let you SSH into his machine.
However, if you're building executables on a Mac, you will want to install Xcode, even if you're using GCC from Homebrew.
Another alternative is to have your partner install a Linux VM. He can use Oracle's VirtualBox to install Linux and run your code within Mac OS X.
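If you do go the SSH route suggested above, a minimal round trip might look like the sketch below (the host name and project path are made up):
$ scp -r myproject/ partner@partners-mac.local:~/myproject
$ ssh partner@partners-mac.local 'cd ~/myproject && make && ./a.out'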
I would suggest you sign up at Amazon for a free EC2 (Elastic Compute Cloud) account and take the free basic Linux box. Install and build your software on Linux there, and then let your partner log into it and run it using just ssh, which is already on OS X anyway. So there is no need to install anything on his Mac.
That way there is no need to transfer files between yourselves and update things when you make changes: just one environment to manage and keep up to date that you can both access from anywhere, at any time.

How to compile C++ programs in Code::Blocks for 32-bit computers with the dual-target MinGW compiler [duplicate]

I've downloaded MinGW with mingw-get-inst, and now I've noticed that it cannot compile for x64.
So is there any 32-bit binary version of the MinGW compiler that can both compile for 32-bit Windows and also for 64-bit Windows?
I don't want a 64-bit version that can generate 32-bit code, since I want the compiler to also run on 32-bit Windows, and I'm only looking for precompiled binaries here, not source files, since I've spent countless hours compiling GCC and failing, and I've given up for a while. :(
AFAIK MinGW targets either 32-bit Windows or 64-bit Windows, but not both, so you would need two installs. And the latter is still considered beta.
What you want is either mingw-w64-bin_i686-mingw or mingw-w64-bin_i686-cygwin if you want to compile for 64-bit Windows. For Win32, just use what you get with mingw-get-inst.
See http://sourceforge.net/apps/trac/mingw-w64/wiki/download%20filename%20structure for an explanation of file names.
I realize this is an old question. However, it is linked to from the many times the question has been repeated.
I have found, after lots of research, that by now, years later, both compilers are commonly installed by default when installing MinGW from your repository (e.g. Synaptic).
You can check and verify by running Linux's locate command:
$ locate -r "mingw32.*[cg]++$"
On my Ubuntu (13.10) install I have by default the following compilers to choose from... found by issuing the locate command.
/usr/bin/amd64-mingw32msvc-c++
/usr/bin/amd64-mingw32msvc-g++
/usr/bin/i586-mingw32msvc-c++
/usr/bin/i586-mingw32msvc-g++
/usr/bin/i686-w64-mingw32-c++
/usr/bin/i686-w64-mingw32-g++
/usr/bin/x86_64-w64-mingw32-c++
/usr/bin/x86_64-w64-mingw32-g++
Finally, the least you'd have to do on many systems is run:
$ sudo apt-get install gcc-mingw32
I hope the many links to this page can spare a lot of programmers some search time.
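With the cross compilers listed above installed, picking the target architecture is just a matter of which driver you invoke; a small sketch (file names are examples):
# 32-bit Windows executable
$ i686-w64-mingw32-g++ -O2 hello.cpp -o hello32.exe
# 64-bit Windows executable
$ x86_64-w64-mingw32-g++ -O2 hello.cpp -o hello64.exe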
For your situation, you can download a multilib (includes lib32 and lib64) version of MinGW-w64:
Multilib Toolchains(Targetting Win32 and Win64)
By default it compiles for 64-bit. You can add the -m32 flag to compile a 32-bit program (see the short sketch at the end of this answer).
Sadly, no gdb is provided, so you have to add it manually.
According to mingw-w64's todo list, the gcc multilib version is done, but the gdb
multilib version is still in progress, so you may be able to use it in the future.
Support of multilib build in configure and in gcc. Parts are already present in gcc's 4.5 version by using target triplet -w64-mingw32.
gdb -- Native support is present, but some features like multi-arch support (debugging 32-bit and 64-bit by one gdb) are still missing features.
mingw-64-todo-list
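With that multilib toolchain's g++ on your PATH, the -m32 switch mentioned above would be used roughly as follows (an untested sketch; the exact driver name depends on the download):
# 64-bit build is the default
$ g++ -O2 hello.cpp -o hello64.exe
# the same source built as a 32-bit Windows program via multilib
$ g++ -m32 -O2 hello.cpp -o hello32.exe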

How can I work out why a specific version of a library is in the dependencies?

I'm building a large C++ project using CMake on Ubuntu 12.04 and then taking the resulting binary package and trying to run it on Ubuntu 11.04. However, the program fails, saying it needs glibc version 2.14 but can only find up to version 2.13.
How can I find out exactly why glibc >= 2.14 is required?
Unlike most libraries, glibc versions its symbols. Every symbol is tagged with a value (e.g. "GLIBC_2.3.4") representing the version of the library where its interface was last changed. This allows the library to contain more than one version of a given symbol and support binaries compiled against older versions while preserving the ability to evolve. You can see this detail with objdump -T /lib/libc.so.6.
Basically, something in your app was linked against a symbol that was changed since 11.04. Try objdump -T on your binary and see what tags it's looking for.
But broadly, backwards compatibility doesn't work like that in Linux. If you want something to run on older software, you should build it on older software. It's possible to set up a backwards-compatible toolchain on more recent distros, but it's not the default.
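For example, to see exactly which imports pull in the 2.14 requirement, something along these lines should work (myprogram is a placeholder for your binary; memcpy is a common culprit, since its version tag was bumped in glibc 2.14):
# list the dynamic symbols and keep only those versioned GLIBC_2.14
$ objdump -T ./myprogram | grep GLIBC_2.14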
When you build your C++ project, it will link to the version of the glibc library on your 12.04 installation. What are the linker options in your build command?
Without knowing exactly what you are building, I'd say you might be better off building on 11.04 and then running on 12.04.

Building zlib libz.a for 32 bit

I am trying to compile a 32-bit version (MinGW) of a program I wrote using zlib. Until now, I've never had to compile for 32-bit, so the version of zlib I compiled from source (libz.a) is 64-bit. I tried to rerun the makefile in the zlib-1.2.5 directory, but it only compiles a 64-bit version of libz.a.
I can't seem to find an option to build 32-bit.
Does anyone know how to do this?
Thanks!
Jeffrey Kevin Pry
Checking the configure script, you can see that it honors some environment variables.
On 64-bit Debian, the following command line will build the 32-bit version of libz:
CFLAGS=-m32 ./configure
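If the build succeeds, you can confirm that the archive members are 32-bit objects before linking against it (output abbreviated):
$ objdump -f libz.a | grep 'file format'
adler32.o:     file format elf32-i386
...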
It turns out I had to get the 32-bit version of MinGW and compile it with that. I was using MinGW64.
Using CFLAGS=-m32 doesn't do it for me; the configure script still keeps telling me to use win32/Makefile.gcc instead.
The current version of zlib is 1.2.11, so the difference should be minimal up until today. Without any context on your system, the following might be useful for other users facing this problem these days.
I cross-compile on Linux (Ubuntu 18.04), targeting a 32-bit build of zlib. What I did is as follows (a consolidated command sequence is sketched after the steps).
./configure (this is just to generate the files required for the build process; we will be using a different Makefile, though)
Modify win32/Makefile.gcc so that PREFIX=i686-w64-mingw32- (for 64-bit, change it to PREFIX=x86_64-w64-mingw32-).
make -f win32/Makefile.gcc
Install to your desired location via make install -f win32/Makefile.gcc SHARED_MODE=1 INCLUDE_PATH=/tmp/zlib-win32/include LIBRARY_PATH=/tmp/zlib-win32/lib BINARY_PATH=/tmp/zlib-win32/bin. Notice that you need to specify INCLUDE_PATH, LIBRARY_PATH, and BINARY_PATH. BINARY_PATH will contain the resulting .dll file.
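Put together, the steps above amount to roughly the following command sequence (overriding PREFIX on the make command line should work with GNU make and avoids editing win32/Makefile.gcc, but editing the file as described above is the documented route):
$ ./configure
$ make -f win32/Makefile.gcc PREFIX=i686-w64-mingw32-
$ make install -f win32/Makefile.gcc SHARED_MODE=1 INCLUDE_PATH=/tmp/zlib-win32/include LIBRARY_PATH=/tmp/zlib-win32/lib BINARY_PATH=/tmp/zlib-win32/bin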