This is possibly a duplicate of this question, but the answer there does not solve my problem...
I want to use third-party software delivered as C++ headers, libraries (.lib) and DLLs for a project. Because it will run on a BeagleBone, I am writing this project for Linux. Is there any proper way to link DLLs into Linux applications?
Highly unlikely to ever work:
1. Windows uses a different file format for executable files (and DLLs are essentially executable files, just as shared libraries are on Linux). This means that relocation information and symbol linkage are different.
2. Even if you manage to load and relocate the DLL, you will most likely need many other DLLs (such as the compiler runtime and system runtime, kernel32.dll for example) to actually run the application. And of course, those dependent files need to have a correct interface to work on Linux (see #3).
3. There's nothing stopping a Windows DLL from making system calls directly in the shared library; those won't work on Linux.
Your best choices are:
Request Linux versions from the supplier of those libraries. Depending on what the functionality is, it may or may not be an easy task for the supplier to produce alternatives for different architectures.
Run the application under Wine, the compatibility layer that runs Windows binaries on Linux (the name is a recursive acronym for "Wine Is Not an Emulator").
Run a Windows virtual machine, and run the application inside that.
Write your own replacement libraries, or find others already available out there.
Related
I'm struggling to deploy my Qt/C++ application, probably because I have not found a good introduction to this online. In brief, my question is: how do I set up an installation framework that requires only minimal, or preferably no, compilation before shipping to users?
I want to deploy the GUI to users on different platforms, who may or may not have admin rights on their machines. I have found different options:
Statically compile Qt -> statically compile an executable -> distribute the executable. With this setup I have encountered a Windows security warning, which requires admin privileges (I have not yet tried on Linux / macOS). And frankly this approach seems sub-optimal, as my compiler has no idea about how to compile optimally for my users.
Create an installer. But there I start to be confused... Do I need to provide a statically compiled executable of my GUI, or just of the installer, or neither? Or can I avoid pre-compiling on my side altogether by using an installer from Qt with a built-in compiler/libraries?
With this setup I have encountered a Windows security warning
You didn't sign the binaries. This issue has nothing to do with Qt. You'd face it even when distributing a trivial "Hello World".
Ensure that you sign all of the following:
The executables.
All DLLs that you redistribute that are not already signed (verify each one).
The installer.
my compiler has no idea about how to compile optimally for my users.
Since C++ doesn't use just-in-time compilation, this statement is a truism. When you link dynamically, your compiler will also have "no idea" how to compile "optimally for your users", if what you imply is that you need CPU-specific variants of your code. That has to be addressed by having multiple executables, each compiled for a particular CPU, and selecting between them at installation time. I don't think you meant that, though. But then I have no idea what you mean by "compile optimally for my users".
Do I need to provide a statically compiled executable of my GUI
It's up to you. If you don't provide a statically compiled executable, you will need to provide all of the dependencies: the C++ runtime of your compiler, and all the libraries and plugins needed by Qt.
The procedure for producing a statically linked executable on Windows, Linux and OS X is identical. You start with a statically configured copy of Qt (configure -static -static-runtime), then build it, and then use that to build your application. The end product will be statically linked against C++ runtime and Qt libraries.
Do I need to provide a statically compiled executable of [...] the installer
Only if you compile the installer program yourself using a C++ compiler. Most installer generator packages take care of creating an installer that has no additional dependencies, i.e. you can run it on a bare Windows system.
can I avoid pre-compiling on my side all together by using an installer from Qt
Qt provides no pre-built installers for re-use.
You can use e.g. NSIS to deploy the compiler runtime, Qt libraries and plugins, and your application and any data files it needs.
Or you can statically compile your application so that it has no dependencies and is a single .exe file, and ship it as a portable application. It could also self-install: you could bundle the installer within the application, have the application detect on startup whether it's already installed, and, if not, relaunch itself in administrative mode and perform the installation. A minimal sketch of that idea follows.
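Here is a minimal sketch of that startup check, assuming a made-up registry key and a made-up --install flag (neither is a standard mechanism; the Windows API calls themselves are real):

    #include <windows.h>
    #include <shellapi.h>
    #include <cstring>

    static bool alreadyInstalled() {
        // Assumption: we treat the presence of this (made-up) key as "installed".
        HKEY key;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\MyApp", 0, KEY_READ, &key) == ERROR_SUCCESS) {
            RegCloseKey(key);
            return true;
        }
        return false;
    }

    int main(int argc, char* argv[]) {
        bool installing = (argc > 1 && std::strcmp(argv[1], "--install") == 0);
        if (!alreadyInstalled() && !installing) {
            wchar_t self[MAX_PATH];
            GetModuleFileNameW(nullptr, self, MAX_PATH);
            // The "runas" verb relaunches this same exe elevated (UAC prompt).
            ShellExecuteW(nullptr, L"runas", self, L"--install", nullptr, SW_SHOWNORMAL);
            return 0;
        }
        // ... perform the installation when run with --install,
        //     otherwise run the application normally ...
        return 0;
    }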
Obviously you need to build your application on each platform you want to distribute it for. The easiest way is to link all the Qt libraries dynamically to your application. After that, all you need to do is provide your application (the .exe file on Windows, the executable on Linux, etc.) and the Qt libraries you used (DLLs on Windows, .so files I think on Linux, etc.).
For example (on Windows), if your app is called MyApp and uses QtGui, QtWidgets and QtNetwork, then you have the following files to distribute:
MyApp.exe
QtCore.dll (and a few other DLLs it needs, called icu*.dll something, I can't remember)
QtGui.dll
QtWidgets.dll
QtNetwork.dll
and you can zip them all in one zip, create an installer etc.
EDIT: A few notes after the follow-up in the comments.
The standard library (what you called the default library, the one that has the vector class) is part of the C/C++ runtime on Windows, or comes installed on Linux systems, so no, you don't have to worry about it. I can't say for all compilers, but with some you can specify a flag/parameter to link this runtime statically (rarely is there a need to do this).
On Windows there is a tool called Dependency Walker which gives you the list of all DLLs needed for an application to run. On Linux systems I don't know, I never really needed one. But for your own application you do know which libraries you need, since you wrote it :)
Well, I am trying to build a simple exe in Visual Studio 2012, as a C++ Win32 console app, just with a
printf("-----");
After building the release version, it runs OK.
When I transfer it to a clean Windows 7 installation, on running it I get a notice that MSVCP110.DLL is missing...
Isn't it a native app? Why is an external DLL needed?
Back on old Win95 I made many executables with Visual C++ 6, and they ran standalone without any DLLs.
Will I always have to deploy these DLLs with the "native" exe?
When you write a C++ program, you use a few low-level libraries to interface with the machine. The C++ Standard Library is one example. Consider, for example, new. When you call new in your program, you're invoking a piece of code that implements that functionality. Where is that actual code?
It's in a library. That library is deployed in a few different ways. One way is through dynamic linking, where the library is in the form of a DLL that needs to be present on the machine where you run your program. That's what MSVCP110.DLL is: one of the library files your program was compiled against. Another way is static linking, where the code from that library is compiled directly into your program. This results in a significant increase in the size of your application, but the other side of that coin is that you don't need those library files on your target machine. You also need to make sure that other libraries your program uses are built against the same static library. If your program shares data with other programs, you may further need to ensure that those programs use the same static libraries.
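To make this concrete, here is a small illustration using MSVC's /MD and /MT switches (the switches are real; the file name is made up). Only the /MT build runs on a machine that lacks the runtime DLLs:

    // app.cpp
    //   cl /MD app.cpp   -> dynamic CRT: needs MSVCR110.DLL/MSVCP110.DLL at run time
    //   cl /MT app.cpp   -> static CRT: the runtime code is copied into app.exe
    #include <cstdio>
    int main() { std::printf("-----"); return 0; }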
Microsoft and Windows aren't unique in this. The same thing happens under Linux, albeit the libraries have different names.
There are pros and cons to using either shared libraries (i.e. dynamic linking) or static libraries. It's simple and catchy to say "gahrrr I hate shared libraries", but unless you understand why each is appropriate in which situation, you stand to deploy a poorly designed program.
On probably any OS it is possible to link the C/C++ standard library statically or dynamically. On Windows I always prefer static builds, because they help avoid the "DLL hell" problem of different library versions being installed or not installed on a specific Windows version, edition, service pack, etc. Static linking makes software more portable and less dependent on what the end user has done to his operating system. (I have even seen cases where an end user SHIFT+DELeted some DLLs in system32 and couldn't explain why, or where users claimed my app contained a virus because it tried to download dynamically linked prerequisites from the official Microsoft website...) So, on Windows, static linking is usually better than dynamic in my experience.
However, I am new to Linux, so can anyone share their experience? My question is: what kind of linking (dynamic or static) is preferred on Linux, if we ignore the fact that dynamic linking saves memory and hard drive space, and if we plan to distribute the software with an automated installer? (Hard drive space and memory are cheap enough now, so there is no reason to sacrifice the hours of work required to create a really good and portable installer just to save some megabytes of RAM or disk space.) Are there any Linux-specific issues with dynamic/static linking?
On Linux you normally have a package manager that ensures you only have one version of each library installed. So there normally is no DLL hell, and no problem with linking dynamically. Linking dynamically is the standard way on Linux.
I'd say the answer depends on how you distribute the software.
If you package the software for a specific Linux distribution and version, dynamic linking is usually preferred. You know which libraries you will find on the system and you can specify dependencies.
However, if you want to distribute the software as a Linux binary that runs on "any" system (such as various games or software like Matlab, for example), you will end up with the same DLL (or .so) hell problem as on Windows. You don't know which versions of which libraries are on the system. Thus, you will have to provide your own .so files or link statically.
See, the whole point of using dynamic linking is to reduce the size of executables and memory usage. If you neglect that, there is little left to talk about.
On the other hand, you mentioned saving memory and disk space. Saving disk space matters because when you want to distribute your app/program, you can't put a 2 GB app on the internet for download (for example, the OpenCV library is about 2.1 GB). The solution is to link dynamically and load only those modules which are necessary to you; there is a sketch of this after the quoted passage below. This also enables efficient multitasking (just one copy of the module is created, and the whole program uses that same copy).
In particular:
For example, a media player application might originally be shipped with a codec that supports the mp3 file format. If the media player were statically linked it would not be possible to dynamically update it to support a different file format, without replacing the entire application. Dynamic linking means that a new version of the shared library containing a more up-to-date codec, which includes some enhancements and bug fixes, could be dynamically loaded by a dynamic linker into memory at run-time to replace the original shared library.

A shared library can also be shared by more than one application. For example, two different media players could both use the same shared library containing the same codec. This potentially means that the device running the application requires less physical memory, depending on the size of the dynamic linker.
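As a rough sketch of that kind of on-demand loading on Linux (libcodec.so and decode are made-up names; the dlopen/dlsym calls are the real POSIX API; link with -ldl):

    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        // Load the codec module only when it is actually needed.
        void* handle = dlopen("libcodec.so", RTLD_LAZY);
        if (!handle) { std::fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        using decode_fn = int (*)(const char*);
        auto decode = reinterpret_cast<decode_fn>(dlsym(handle, "decode"));
        if (decode) decode("song.mp3");

        dlclose(handle);  // the module can be unloaded or swapped at run time
        return 0;
    }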
Third, on Linux almost everything is dynamically linked; an exception is /bin/ash.static, which also has its dynamic version /bin/ash. But this shouldn't stop you from linking statically on Linux.
When using gcc, linking is dynamic by default. I guess you should use the "-static" flag to link the libraries statically.
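For example, a trivial program built both ways (the -static flag is real; the file names are arbitrary):

    // hello.cpp
    //   g++ hello.cpp -o hello_dynamic          # libstdc++/libc linked dynamically
    //   g++ -static hello.cpp -o hello_static   # everything linked statically
    #include <cstdio>
    int main() { std::puts("hello"); return 0; }

Running ldd on the two binaries shows the difference: the dynamic one lists its shared-object dependencies, while for the static one ldd reports that it is not a dynamic executable.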
@Vitaliy, good that you brought this up. The important thing to note here is that smart linking and the creation of shared (or dynamic) libraries are mutually exclusive; that is, if you turn on smart linking, the creation of shared libraries is turned off.
Smart linking breaks the code into small code blocks, and their dependencies are loaded. So if you call a dependency multiple times, it gets loaded multiple times.
This gives very good execution time but very high compilation time, especially for large units. So there is a certain trade-off.
I am looking to create a C++ library that can be used by both Linux and Windows clients. The OS-specific functionality will be hooked up by the client by implementing the interfaces provided by the library.
Is this possible to achieve? Or do I need to recompile the C++ project again on Linux?
P.S.: I am using the Code::Blocks IDE.
The short answer is no, you still need to compile your library for each targeted platform; however, assuming your code is written such that it is cross-platform, you can set up your build to target both Windows and Linux environments with little fuss. I do this now, using CMake to generate both Visual Studio projects for Windows environments and Makefiles for Linux environments.
I'm pretty confident that Linux will not accept a .dll :) And yes, you will need to recompile. Unless you run Windows as a virtual machine under Linux, which sort of preempts the question.
It certainly cannot be the same binary file: shared objects use the ELF format on Linux, while DLLs use the "PE" format on Windows. And dynamic loading has different semantics on the two systems. See Levine's Linkers and Loaders book for details.
You could, if done carefully, have the same source code giving the two different files (the DLL on Windows, the dynamic shared object on Linux).
But you probably would need some conditional compilation tricks like #ifdef WINDOWS etc...
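For instance, a sketch of that trick (MYLIB_EXPORT and mylib_add are made-up names, but the #ifdef pattern itself is the conventional one; note the real predefined macro is _WIN32):

    // mylib.cpp -- one source, built as mylib.dll on Windows and libmylib.so on Linux
    #if defined(_WIN32)
      #define MYLIB_EXPORT __declspec(dllexport)
    #else
      #define MYLIB_EXPORT __attribute__((visibility("default")))
    #endif

    extern "C" MYLIB_EXPORT int mylib_add(int a, int b) {
        return a + b;
    }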
You might use libraries providing a common abstraction for such things. For instance, both GTK/GLib and Qt have a mechanism giving a common abstraction of dynamically linked (or dynamically loaded, i.e. dlopen-ed) libraries.
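Qt's QLibrary, for instance, wraps LoadLibrary on Windows and dlopen on Linux behind one interface. A rough sketch, reusing the made-up mylib_add symbol from above:

    #include <QLibrary>

    int main() {
        QLibrary lib("mylib");  // finds mylib.dll or libmylib.so as appropriate
        using add_fn = int (*)(int, int);
        auto add = reinterpret_cast<add_fn>(lib.resolve("mylib_add"));  // loads on demand
        return add ? add(1, 2) : -1;
    }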
You probably want to read the Program Library Howto (at least for Linux).
I have written some code that makes use of an open-source library to do some of the heavy lifting. This work was done on Linux, with unit tests and CMake to help with porting it to Windows. There is a requirement to have it run on both platforms.
I like Linux, I like CMake, and I like that I can get Visual Studio files generated automatically. As it is now, on Windows everything compiles, links, and generates the test executables.
However, to get to this point I had to fight with Windows for several days, learning all about manifest files and redistributable packages.
As far as my understanding goes:
With VS 2005, Microsoft introduced side-by-side DLLs. The motivation was that before, multiple applications would install different versions of the same DLL, causing previously installed and working applications to crash (i.e. "DLL hell"). Side-by-side DLLs fix this: there is now a "manifest file" appended to each executable/DLL that specifies which version should be loaded.
This is all well and good. Applications should no longer crash mysteriously. However...
Microsoft seems to release a new set of system DLLs with every release of Visual Studio. Also, as I mentioned earlier, I am a developer trying to link to a third-party library. Often, these things come distributed as a "precompiled dll". Now, what happens when a precompiled DLL built with one version of Visual Studio is linked into an application using another version of Visual Studio?
From what I have read on the internet, bad stuff happens. Luckily, I never got that far - I kept running into the "MSVCR80.dll not found" problem when running the executable and thus began my foray into this whole manifest issue.
I finally came to the conclusion that the only way to get this to work (besides statically linking everything) is that all third-party libraries must be compiled with the same version of Visual Studio; i.e. don't use precompiled DLLs: download the source, build a new DLL and use that instead.
Is this in fact true? Did I miss something?
Furthermore, if this seems to be the case, then I can't help but think that Microsoft did this on purpose for nefarious reasons.
Not only does this break all precompiled binaries, making it unnecessarily difficult to use them; if you happen to work for a software company that uses third-party proprietary libraries, then whenever those vendors upgrade to the latest version of Visual Studio, your company must do the same or the code will no longer run.
As an aside, how does Linux avoid this? Although I said I prefer developing on it and I understand the mechanics of linking, I haven't maintained any application long enough to run into this sort of low-level shared-library versioning problem.
Finally, to sum up: Is it possible to use precompiled binaries with this new manifest scheme? If it is, what was my mistake? If it isn't, does Microsoft honestly think this makes application development easier?
Update - A more concise question: How does Linux avoid the use of Manifest files?
All components in your application must share the same runtime. When this is not the case, you run into strange problems, like assertions firing on delete statements.
This is the same on all platforms. It is not something Microsoft invented.
You may get around this 'only one runtime' problem by being aware where the runtimes may bite back.
This is mostly in cases where you allocate memory in one module, and free it in another.
a.dll:

    #include <cstdlib>
    // allocates from a.dll's copy of the runtime heap
    extern "C" __declspec(dllexport) void* createBla() { return malloc(100); }

b.dll:

    // frees into b.dll's copy of the runtime heap
    void consumeBla() { void* p = createBla(); free(p); }

When a.dll and b.dll are linked against different runtimes, this crashes, because each runtime implements its own heap.
You can easily avoid this problem by providing a destroyBla function which must be called to free the memory.
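Continuing the sketch above, the matching deallocator could look like this:

    // in a.dll -- free with the same runtime that did the allocation:
    extern "C" __declspec(dllexport) void destroyBla(void* p) { free(p); }

    // in b.dll:
    void consumeBla() { void* p = createBla(); destroyBla(p); }  // safe now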
There are several points where you may run into problems with the runtime, but most can be avoided by wrapping these constructs.
For reference:
don't allocate/free memory/objects across module boundaries
don't use complex objects in your DLL interface (e.g. std::string, ...); see the sketch below
don't use elaborate C++ mechanisms across DLL boundaries (typeinfo, C++ exceptions, ...)
...
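As an illustration of the second rule, a hedged sketch (getName is a made-up export): rather than returning a std::string across the boundary, let the caller own a plain char buffer so that no C++ object layout crosses the DLL boundary:

    #include <cstring>

    // risky: std::string's layout differs between runtimes/compiler versions
    //     std::string getName();
    // safer: only plain C types cross the boundary; the caller owns the buffer
    extern "C" __declspec(dllexport) int getName(char* buffer, int bufferSize) {
        const char* name = "Bla";
        if (!buffer || bufferSize <= static_cast<int>(std::strlen(name))) return -1;
        std::strcpy(buffer, name);
        return 0;
    }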
But this is not a problem with manifests.
A manifest contains the version info of the runtime used by the module and gets embedded into the binary (exe/dll) by the linker. When an application is loaded and its dependencies are to be resolved, the loader looks at the manifest information embedded in the exe file and uses the corresponding version of the runtime DLLs from the WinSxS folder. You cannot just copy the runtime or other modules to the WinSxS folder; you have to install the runtime offered by Microsoft. There are MSI packages supplied by Microsoft which can be executed when you install your software on a test or end-user machine.
So install your runtime before using your application, and you won't get a 'missing dependency' error.
(Updated to the "How does Linux avoid the use of Manifest files" question)
What is a manifest file?
Manifest files were introduced to place disambiguation information next to an existing executable/dynamic-link library, or to embed it directly into that file.
This is done by specifying the specific version of the DLLs which are to be loaded when starting the app/resolving its dependencies.
(There are several other things you can do with manifest files, e.g. some meta-data may be put here)
Why is this done?
The version is not part of the DLL name for historical reasons, so "comctl32.dll" is named this way in all its versions (the comctl32 under Win2k is thus different from the one in XP or Vista). To specify which version you really want (and have tested against), you place the version information in the "appname.exe.manifest" file (or embed this file/information).
Why was it done this way?
Many programs installed their DLLs into the system32 directory under the system root. This was done so that bug fixes to shared libraries could be deployed easily for all dependent applications, and in the days of limited memory, shared libraries reduced the memory footprint when several applications used the same libraries.
This concept was abused by many programmers who installed all their DLLs into this directory, sometimes overwriting newer versions of shared libraries with older ones. Sometimes libraries silently changed their behaviour, so that dependent applications crashed.
This led to the approach of "distribute all DLLs in the application directory".
Why was this bad?
When bugs appeared, all the DLLs scattered across several directories had to be updated (gdiplus.dll, for example). In other cases this was not even possible (Windows components).
The manifest approach
This approach solves all problems above. You can install the dlls in a central place, where the programmer may not interfere. Here the dlls can be updated (by updating the dll in the WinSxS folder) and the loader loads the 'right' dll. (version matching is done by the dll-loader).
Why doesn't Linux have this mechanism?
I have several guesses. (This is really just guessing ...)
Most things are open-source, so recompiling for a bugfix is a non-issue for the target audience
Because there is only one 'runtime' (the gcc runtime), the problem with runtime sharing/library boundaries does not occur so often
Many components use C at the interface level, where these problems just don't occur if done right
The version of a library is in most cases embedded in the name of its file (e.g. libfoo.so.2), so no disambiguation is needed.
Most applications are statically bound to their libraries, so no DLL hell can occur.
The GCC runtime was kept very ABI stable so that these problems could not occur.
If a third-party DLL allocates memory that you need to free, you need the same runtime libraries. If the DLL provides both allocate and deallocate functions, it can be OK.
If the third-party DLL uses std containers, such as vector, you could have issues, as the layout of the objects may be completely different between runtimes.
It is possible to get things to work, but there are some limitations. I've run into both of the problems I've listed above.
If a third-party DLL allocates memory that you need to free, then the DLL has broken one of the major rules of shipping precompiled DLLs, exactly for this reason.
If a DLL ships in binary form only, then it should also ship all of the redistributable components that it is linked against and its entry points should isolate the caller from any potential runtime library version issues, such as different allocators. If they follow those rules then you shouldn't suffer. If they don't then you are either going to have pain and suffering or you need to complain to the third party authors.
I finally came to the conclusion that the only way to get this to work (besides statically linking everything) is that all third-party libraries must be compiled with the same version of Visual Studio - i.e. don't use precompiled DLLs - download the source, build a new DLL and use that instead.
Alternatively (and the solution we have to use where I work): if the third-party libraries you need are all built (or available as builds) with the same compiler version, you can "just" use that version. It can be a drag to "have to" use VC6, for example, but if there's a library you must use, its source is not available, and that's how it comes, your options are sadly limited otherwise.
...as I understand it. :)
(My line of work is not on Windows, although we do battle with DLLs there from a user perspective from time to time. However, we do have to use specific compiler versions and obtain versions of third-party software that are all built with the same compiler. Thankfully all of the vendors tend to stay fairly up to date, since they've been doing this sort of support for many years.)