One thing I haven't been able to work out when building with Bazel is using the C++ standard library dynamically,
that is, using
libstdc++-6.dll
I have to manually copy this DLL into bazel-bin in order to run the program. I was hoping this is something I can have Bazel manage, so that it happens automatically each time (after a full project clean, for example).
I am able to make it work by specifying linkopts = ["-static"] to statically link against the standard library; however, this is not ideal, since sometimes I might be using a third-party library whose prebuilt binaries require the DLL.
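For reference, the workaround currently looks roughly like this (target and file names are made up):

    cc_binary(
        name = "app",            # hypothetical target name
        srcs = ["main.cc"],
        linkopts = ["-static"],  # statically links libstdc++/libgcc, so no DLL needs copying
    )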
Any help here would be massively appreciated, thanks
I'm developing a C++ program on Ubuntu 16.04 using CMake, compiling with g++-5 and clang++-3.8.
Now I'd like to make this program available for 14.04 too, but as I'm using a lot of C++14 features I can't just recompile it on that system. Instead, I wanted to ask if/how it is possible to package all dependencies (in particular the C++ standard library) in a way that lets me just unpack a folder on the target system and run the app.
Ideally I'm looking for some automated/scripted solution that I can add to my CMake build.
Bonus Question:
For now, this is just a simple command-line program for which I can easily recompile all third-party dependencies (and in fact I do). In the long run, however, I'd also like to port a Qt application. Ideally the solution would also work for that scenario.
The worst part of your situation is the incompatible standard library.
You have to link it statically anyway (see the comments to your answer).
A number of options:
Completely static linking:
I think it's the easiest way for you, but it requires that you can build (or otherwise obtain) all third-party libs as static libraries. If you can't, for some reason, this isn't an option for you.
You just build your app as usual and then link it with all the libs you need statically (see your compiler's documentation). This gives you a completely dependency-free executable; it will work on any ABI-compatible system (you may need to check whether an x86 executable works on x86_64).
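For example, with GCC the final link might look roughly like this (library names and paths are placeholders):

    # fully static executable (everything, including libstdc++ and libgcc)
    g++ -o myapp main.o -L/path/to/libs -lfoo -lbar -static

    # or, if only the standard library needs to be static:
    g++ -o myapp main.o -L/path/to/libs -lfoo -lbar -static-libstdc++ -static-libgcc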
Partially static linking
You link statically everything you can and dynamically the rest. You then distribute all the dynamic libs (*.so) along with your app (in a path/to/app/lib or path/to/app/ folder), so you don't depend on the system's libraries. Create a deb package which puts all the files into /opt or a $HOME/appname folder. You have to make the loader find all those dynamic libs, either "by hand" or by asking the linker to record the search path at link time (see the documentation).
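A sketch of the relevant GCC/ld flags (paths are placeholders): $ORIGIN makes the loader look next to the executable, so the .so files you bundle in path/to/app/lib are found without touching the system paths.

    g++ -o myapp main.o -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'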
Docker container
I don't know much about it, but I do know it requires Docker to be installed on the target system (so probably not your option).
Useful links:
g++ link options
static linking manual
Finding Dynamic or Shared Libraries
There are similar docs for Clang; google them.
If I dynamically link against Boost libraries, I still have to copy the respective Boost DLLs into the folder of the executable for the program to work.
I installed Boost into the recommended path C:\local\boost_1_59_0
Also, taking redistribution into consideration, probably very few people will have Boost installed, and there isn't really a user-friendly redistributable package, like with the Visual C++ libraries.
So does it make more sense to just statically link Boost in order to save some time? I don't really see the benefit of dynamic linking for Boost (on Windows that is!).
Thank you for your tips.
You are correct in saying that you'll have to distribute the dynamic library too, since not many people will have it. But a dynamic library is not made only for this purpose.
It can be useful when multiple applications have to use that library simultaneously.
For example, if your client has multiple applications using the Boost dynamic libraries, it makes sense to ship the dynamic library once, install it into a commonly accessible location, and let all those applications use it. This way, the individual sizes of those applications remain small.
Another use case could be your client simultaneously running multiple instances of a single executable which requires the library.
I'm new to C++ and wonder whether it is good practice to include a library as source code. If it is, what would be the best way to achieve this? Just copying it into a subfolder and using #include?
In my particular case, I have written a small library and I'm going to use it on two different microprocessors. Compiling the library separately, copying all the headers and using this "package" seems like overkill to me.
Compiling the library separately is what should be done.
It's not that much overkill either: you're just compiling the .o files for your library, wrapping them in an archive, and handing that archive around.
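With GCC, for instance, the whole "package" is just a couple of commands (file names are made up):

    g++ -c mylib.cpp -o mylib.o          # compile the library sources to object files
    ar rcs libmylib.a mylib.o            # wrap the objects in a static archive
    g++ main.cpp -L. -lmylib -o app      # link the application against the archive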
Normally libraries are used as libraries because it is much easier and comfortable that way. If you are using dynamic libraries (.dll or .so) things get even better because you can replace libraries on the fly and things should continue to work smoothly.
You decided to use source repositories instead of libraries, which probably means more work for you. If you are happy this way, that's OK, but just make sure you do not break any license; some LGPL packages (like Qt) clearly require their libraries to be linked dynamically.
The best way to do this is hard to say, but in your place I would probably use Git and include the libraries as submodules.
Just #includeing source code is a bad idea, since it means copying the code into your own; things can go wrong that way. For example, if there is a static variable somewhere in the library code and a static variable with the same name in your code, you will have a conflict.
Instead you should probably compile the library separately and link it, probably the same way as you would do anyway (i.e. you build the library and then link against it). The lightweight alternative would be to just compile the additional C++ files and then link the object files together into an executable, as sketched below. The details of how you do that are compiler specific.
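With GCC, the lightweight alternative might look like this (file names are made up):

    g++ -c lib/foo.cpp -o foo.o      # compile the library's source file(s)
    g++ -c main.cpp -o main.o        # compile your own sources
    g++ foo.o main.o -o app          # link all the object files into one executable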
There are valid reasons for including the library source in this way: for example, if your project needs to modify the library during development, that is easier when rebuilding the library is part of the project's own build process. With a well-designed build process, the library shouldn't have to be rebuilt unless there are actual changes to it.
The value of a library is in part that you link it more often than you compile it, leading to a net saving.
If you control all the source, then whatever build process works best for you is fine.
I agree with πάντα ῥεῖ, but I'll also add that the reason it is bad practice is that the compiled library can be stored on your computer in a common location and used by tons of different programs, thereby reducing the amount of data your computer has to store, on disk as well as in RAM (if more than one running program uses the same library). An example is OpenGL, a library that many games use and that is probably already on your system somewhere. If you use Windows, software installers link these libraries up to their programs and add them if you don't have them. If you use Linux, you will be notified if libraries are missing and prompted to install them. All of that aside, you can technically use uncompiled libraries, but that introduces a number of potential licensing problems as well as additional problems with their dependencies.
Copying source code into other projects and "mixing" it with other source code will stop this library from being a "library". Later on you will be tempted to make a small change in one copy (for one CPU, say) or fix a bug, and forget to do the same in the other copy.
There might be additional considerations, but you should try to keep the code in one place. Don't Repeat Yourself (DRY) is a very strong and fundamental principle of software engineering, with many benefits.
I'm a little confused by what I've learned today. I hope somebody can help me out.
I understand the concept of dynamic and static linking, but the problem is as follows. On Windows, or at least in the Windows paradigm, you can have a .lib (which is like a .a) and a .dll (which is like a .so, except... different), and you must statically link in the .lib, which contains code that calls functions from the DLL at runtime. Is this correct? In other words, gcc or g++ must have the .lib files available at compile/link time, and must be able to find the .dll files at runtime. Please correct any wrong assumptions here.
However, I'm splitting a few of the source files in my small application out because I want to make them a library. When I run g++ on my object files with the -shared option, this basically creates a shared library (.so)? This is where the confusion arises. The same .so file is needed both at link time and at runtime? I have trouble understanding how I need it in the -L/-l options at link time, yet the program still needs the file at runtime. Is this actually the norm? Is a DLL fundamentally different?
One final question: take a library like Boost on Windows. I built Boost according to the instructions. In the end, the stage/lib directory contains libraries in a repeating sequence of name.a, name.dll.a, name.dll. What is the purpose of this scheme? I know I need the .dll files at runtime, but when I use the -L/-l options at link time, which files is it using then?
Sorry if this is really scattered, but I hope someone can help clear this up. Thanks a lot!
On Windows, or at least in the Windows paradigm, you can have a .lib (which is like a .a) and a .dll (which is like a .so, except... different), and you must statically link in the .lib, which contains code that calls functions from the DLL at runtime. Is this correct?
Yes and no. That is one way that DLLs work on Windows, but it is not the only way.
You can load a DLL manually, using Win32 API calls. But if you do, you have to get function pointers manually to actually access the DLL. The purpose of the import library (that static library you're talking about) is to do this automatically.
The nice thing about doing it manually is that you can pick and choose which DLLs you want. This is how some applications provide plugin support: users write a DLL that exports a well-defined set of API functions, your program loads them from a directory, and it bundles the function pointers for each DLL into its own object, which represents the interface to that plugin.
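A minimal sketch of the manual route, assuming a hypothetical plugin.dll that exports a plugin_init function:

    // manual loading via the Win32 API; DLL and function names are hypothetical
    #include <windows.h>
    #include <iostream>

    int main() {
        HMODULE plugin = LoadLibraryA("plugin.dll");          // load the DLL at runtime
        if (!plugin) { std::cerr << "failed to load plugin.dll\n"; return 1; }

        // look up an exported function by name and cast it to the expected signature
        using InitFn = int (*)();
        auto init = reinterpret_cast<InitFn>(GetProcAddress(plugin, "plugin_init"));
        if (init) std::cout << "plugin_init returned " << init() << "\n";

        FreeLibrary(plugin);                                  // unload when done
        return 0;
    }

Linking against the import library removes this boilerplate: you call the exported function as if it were a normal function, and the loader resolves it at startup.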
GCC works the same way on Windows. Building a DLL produces the DLL plus an import library. It's a ".a" file instead of a ".lib" because of the compiler, but it does the same job.
On Linux, a .so file is a combination of the .dll and the import library. So you link against the .so when compiling the program in question, and that does the same job as linking against the import library.
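Concretely, the same file does both jobs (names are made up):

    g++ main.o -L. -lfoo -o app      # link time: the linker reads symbol info from libfoo.so
    LD_LIBRARY_PATH=. ./app          # run time: the dynamic loader maps the same libfoo.so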
They are just two ways of giving the linker information about the shared library at link time. Maybe a comparison would explain it better?
On Windows, it's: "You will use a shared library (.dll), and here (.a or .dll.a) is the manual on how to use it."
On Linux, it's: "You will use a shared library (.so), so look at the library itself (.so) beforehand so you'll know how to use it."
I've compiled Boost and it works just fine. I would like to copy specific .dlls and .libs into my project for deployment. The problem is I'm having a hard time finding out which packages contain the libraries I need. I've looked around but haven't seen any documentation on what's actually inside the compiled libraries.
For instance, if I wanted to use boost::asio and boost::ptr_vector in my project, which .dlls/.libs should I copy over?
The entire library folder is over 1.2 GB, so I'd rather not ship the entire thing. I'm using Windows and VS2008.
Any ideas?
Are you deploying your application as an executable or as a project to be compiled by the user? If it is the former, you shouldn't need to ship static libraries, as they're linked into your executable. If you built the Boost libraries as dynamic libraries, you will of course need to ship those.
But if you're deploying your app as something to be compiled, or if you have Boost DLLs, then as martiall said, you should use BCP.
You can use the bcp tool, which is bundled with Boost:
BCP Docs
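For example, something like the following (the destination path is a placeholder, and the module names are matched against the Boost source tree) should copy asio and ptr_container, which provides boost::ptr_vector, together with everything they depend on:

    bcp asio ptr_container C:\boost_subset

Note that bcp extracts headers and library sources from the Boost source tree; the compiled .lib/.dll files for any separately built libraries still have to be built or copied alongside it.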