I am looking to release a 'sneak peek' of my project on my website. On Windows I know how to properly distribute the required files (.dlls and such), but on Linux (Ubuntu) I am having trouble running my program on any machine other than my development machine. I have been able to statically link every dependency but one:
-TinyXML
-FreeType2
-SDL
-Lua
-LuaBind
-GLU
These all have their own appropriate .a static libraries. However, OpenGL (linked as -lGL) apparently resolves to a dynamically linked .so. I am unable to find a static library for OpenGL, and I do understand the benefits of dynamically linked libraries. So my question is: what is the proper process for setting up a client computer to run my program? That is, how do I install the dependencies (in this case only libGL.so) on their system? I imagine I will also have to chmod +x the file before it will run for them.
You should create a .deb file with the appropriate dependencies declared, so that a tool like Synaptic or apt can automatically take care of satisfying them.
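For illustration, the dependency declaration lives in the package's control file. A minimal sketch follows; the package names and versions are placeholders, not verified against a specific Ubuntu release:

```text
Package: myproject
Version: 0.1-1
Architecture: amd64
Maintainer: Your Name <you@example.com>
Depends: libgl1, libsdl1.2debian, liblua5.1-0
Description: Sneak peek build of my project
```

With a staging tree such as pkgroot/DEBIAN/control plus pkgroot/usr/bin/myproject, `dpkg-deb --build pkgroot` produces the .deb. Installing a .deb also preserves file permissions, so the manual chmod +x step goes away.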
I have a program which I compile on my host machine and want to run in a Docker container.
It needs a great many dependencies to run.
First I tried to find all the dependencies by running
ldd ./my_prog
and then started copying them into my Ubuntu Docker image.
But the copied dependencies needed further dependencies in turn, so this did not seem promising unless I wanted to copy my whole host system into the container.
Then I read that statically linking my program might cure this.
But since I have a CMake project, I don't know how to make it link my dependencies statically. Is this even a valid approach? And if so, how do I do it?
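For what it's worth, a minimal sketch of what the static-linking approach can look like in CMake (the target and variable names are illustrative, not from the project):

```cmake
# Prefer .a archives when find_library/find_package search for libraries.
set(CMAKE_FIND_LIBRARY_SUFFIXES .a)
# Ask the toolchain for a fully static executable.
set(CMAKE_EXE_LINKER_FLAGS "-static")

add_executable(my_prog main.cpp)
# Link whatever static archives the project needs (illustrative name):
target_link_libraries(my_prog ${SOME_DEPENDENCY_STATIC_LIB})
```

Fully static linking against glibc has known caveats (NSS, dlopen); a common alternative is to keep dynamic linking and base the image on the same distribution as the build host.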
TL;DR: I have a C++ program which I want to run in a Docker container, and I am trying to avoid copying a great many dependencies into the image.
Edit: running ldd on my program gave me a list of dependencies; I extracted all the paths and copied all of those .so files into my container. Yet it still complained about missing files.
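For completeness, a hedged sketch of the copy step (the binary name and destination directory are assumptions from the question). Note that ldd already resolves the transitive closure of dependencies, so remaining failures are often plugins loaded at runtime via dlopen, which ldd cannot see:

```shell
#!/bin/sh
# Hypothetical sketch: stage a binary's shared libraries for a Docker COPY.
# "./my_prog" and "./deps" are assumed names from the question.
BIN=./my_prog
DEST=./deps
mkdir -p "$DEST"
# ldd prints lines like "libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)";
# field 3 is the resolved path.  ldd already follows transitive dependencies.
ldd "$BIN" 2>/dev/null | awk '/=>/ { print $3 }' | while read -r lib; do
    [ -f "$lib" ] && cp -v "$lib" "$DEST/"
done
```

Inside the container the libraries also have to be on the loader's search path (for example, copied into /usr/lib or pointed to via LD_LIBRARY_PATH), and the dynamic loader itself (ld-linux-*.so) must match.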
This question already has an answer here:
Deploying Qt applications in Linux properly
(1 answer)
Closed 4 years ago.
I have created a nice little scientific Qt application and I want to distribute it.
It was very easy to do this in Windows, I simply created a folder, put my executable there and called the windeployqt program, which put all necessary .dll files in the folder like Qt5Core.dll, Qt5Gui.dll, Qt5Charts.dll... and it created some subfolders like iconengines/, imageformats/, platforms/ and many more.
All together this folder now contains 43 files.
When I copy this folder to any other computer with Windows 10 it runs well.
I would like to do the same on Linux, because it is the preferred operating system that we use.
However, I struggle a bit because I do not really know how to start.
Is it possible to do it the same way? Copy all necessary libraries in a folder together with the executable and simply be able to copy it on a different computer with Linux and run it?
(To clarify: When I say Linux I mean Ubuntu 18.04 or 16.04)
Or is there a different way to do it?
I only have a student license, so I think I'm not allowed to statically link the libraries (but I have to read the license terms again to be sure).
In case it works the same way: is there a simple way to copy all the necessary libraries into this folder? Or do I have to hunt down the libraries myself?
I have read the manual but, to be honest, I did not understand everything, nor all the example code in there.
Thank you for your help.
Look at the Creating the Application Package section in the Qt documentation. For Linux, it provides a launcher script as a starting point. Here it is:
#!/bin/sh
appname=`basename $0 | sed s,\.sh$,,`
dirname=`dirname $0`
tmp="${dirname#?}"
if [ "${dirname%$tmp}" != "/" ]; then
dirname=$PWD/$dirname
fi
LD_LIBRARY_PATH=$dirname
export LD_LIBRARY_PATH
$dirname/$appname "$@"
This script must be saved as an .sh file and live in the same directory as your executable.
Make the script executable, then open a terminal and do the following:
$ cd /pathToScript/
$ chmod +x scriptName.sh
Then double-click the script to run it.
As stated in the Qt docs this will make:
...sure that the Qt libraries will be found by the dynamic linker. Note that you only have to rename the script to use it with other applications.
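The parameter-expansion trick in that script is terse; here is a runnable sketch of just the absolute-path check it performs:

```shell
#!/bin/sh
# ${dirname#?} strips the first character of $dirname; ${dirname%$tmp}
# then leaves exactly that first character.  If it is not "/", the path
# is relative and $PWD is prepended -- the same logic as the Qt script.
dirname="some/relative/dir"   # stand-in for `dirname $0`
tmp="${dirname#?}"
if [ "${dirname%$tmp}" != "/" ]; then
    dirname=$PWD/$dirname
fi
echo "$dirname"
```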
If you want to deploy your project statically, you must first have a static version of Qt built from source; see the corresponding section of the Qt docs.
A couple more notes: if you want to distribute this project for use on a Linux system, you can simply package the build folder. But to actually run it, you will need to use the script above (for the easy way, at least). It is not necessary to go hunting for the application's dependencies.
You need to install all the Qt libraries first.
I think using qmake could be a solution if you're creating a Qt-based project; Qt Creator uses it by default.
qmake generates all the files needed to compile the program. Try using it.
I built a simple C++ application using NetBeans on Ubuntu.
In the application I use mysql_connection and curl.
The application works fine on my local system (Ubuntu).
When I try to run the application on my CentOS server I get this message:
error while loading shared libraries: libmysqlcppconn.so.5: cannot open shared object file: No such file or directory.
I checked whether the libmysqlcppconn.so.5 library exists on the server and found the following:
REMOTE (CentOS)
**in [/usr/local/lib]**
libmysqlcppconn-static.a
libmysqlcppconn.so#
libmysqlcppconn.so.7#
libmysqlcppconn.so.7.1.1.3*
LOCAL (Ubuntu)
**in [/usr/lib]**
libmysqlcppconn-static.a
libmysqlcppconn.so#
libmysqlcppconn.so.5#
libmysqlcppconn.so.5.1.1.0*
Why can't the application run? How can I fix it?
You should build and package it for your server.
Your application was linked against a different (incompatible) version of one of the libraries it uses: the binary wants libmysqlcppconn.so.5, but the server provides libmysqlcppconn.so.7, and the soname number encodes the library's ABI version.
IMHO the simplest approach is often to build it on the box it is going to run on.
In general, there is no guarantee that a binary built on one Linux system will work on a different Linux system (either a different distribution or a different version of the same distribution). For some applications it's enough to copy the library files (lib*.so*) or to link statically (gcc -static), but in general distributing programs for multiple Linux systems is more complicated, without an easy solution.
One solution is to recompile your program for each system you want to run it on. For that you need to install the compiler and the library dependencies (including the *-devel packages) on those systems first.
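To make the "build and package it for your server" suggestion concrete, a hedged sketch for a CentOS box; the package names are assumptions, so verify them against your repositories:

```shell
# Install a toolchain plus the development packages, then rebuild on the server.
sudo yum install -y gcc-c++ make libcurl-devel
# The Connector/C++ devel package name varies by repository and version;
# search for it first, e.g.:  yum search mysqlcppconn
sudo yum install -y mysql-connector-c++-devel
g++ -o my_app main.cpp -lmysqlcppconn -lcurl
```

This links against the library version actually present on CentOS (libmysqlcppconn.so.7), so the loader error disappears.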
I have installed the latest ncurses library, which my project uses. Now I want to check the ncurses static libraries into SVN so that I can check out the project on a different machine and compile it without having to install ncurses on that system again.
So the question is what is the difference between libncurses.a, libncurses++.a and libncurses_g.a files? And do I need all of them for my C++ project?
Thanks!
libncurses.a - This is the C compatible library.
libncurses++.a - This is the C++ compatible library.
libncurses_g.a - This is the debug library.
libncurses_p.a - This is the profiling library.
If you want to find out whether you can get by without one of these libraries, rename it and run a build of your application.
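For reference, typical link lines for these variants; the flags are the conventional ones, and libncurses++ builds on libncurses, so C++ programs usually link both:

```shell
cc  app.c   -lncurses                # C program against libncurses.a
g++ app.cpp -lncurses++ -lncurses    # C++ program: wrapper plus core library
g++ -g app.cpp -lncurses_g           # debug build against the _g variant
```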
My answer comes a little late [ :-) ] since you posted your question more than 4 years ago. But:
Archiving the pre-compiled library in your SVN means that your built application may fail if the target machine differs under some critical aspect.
Yes, you can safely run the application on other machines which are configured entirely in the same way (e.g., on a fully homogeneous computation cluster). However, if the machines differ (e.g., because one machine had a system upgrade and the other not), it may stop working. This is not very likely, so the risk may be acceptable for what you'd like to do.
I would suggest another solution: commit a recent, stable version of the libncurses sources (tarball) to your SVN repo and add a little script (or make target) that runs the libncurses build and installs the built library to some project directory (not the system directory, but next to your application build directories, without committing it to SVN). This build step only needs to be repeated if the library is to be upgraded or if you would like to build/run on another machine.
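Such a build step could look roughly like this (the tarball name, version directory, and paths are placeholders):

```shell
#!/bin/sh
set -e
# Unpack the tarball committed to SVN and install into a project-local
# prefix that is not itself committed.
PREFIX="$PWD/third_party/prefix"
mkdir -p build
tar xzf third_party/ncurses.tar.gz -C build
cd build/ncurses-*              # the versioned directory from the tarball
./configure --prefix="$PREFIX"
make
make install
```

The application's build then points its include and library paths at $PREFIX.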
This does not apply specially to the ncurses library; the same goes for any library.
Depending on your project target, consider further reading about
package management
cross-compilation
I'm studying (well, trying to) C right now, but I'm limited to working in Windows XP. I've managed to set up and learn how to use Emacs and can compile simple C programs with gcc (from Emacs no less!), but I'm getting to the point where I'd like to install something like SDL to play around with it.
The thing is that the installation instructions for SDL indicate that, in a Win32 environment using MinGW, I would need to use MSYS to run ./configure and make/make install to install SDL, like one would do on Linux. I noticed that when I unzipped the SDL-dev package (forgot the exact name, sorry) there were folders there that corresponded to a folder in the MinGW directory (SDL/include -> MinGW/include).
Am I right in saying that all the ./configure and make commands do is move these files from one directory to another? Couldn't I just move those files by hand and spare myself the trouble of installing and configuring MSYS (which, to be honest, confuses me greatly)?
The build process usually works like this: the configure script finds the appropriate settings for the compilation (like which features to enable, the paths to the required libraries, which compiler to use etc.) and creates a Makefile accordingly. make then compiles the source code to binaries. make install copies the created binaries, the headers, and the other files that belong to the library to the appropriate places.
You can't just copy the files from the source archive, because the source archive does not contain the binary files (or any other files that are created during the make step), so all you'd copy would be the headers, which aren't enough to use the library.
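In command form, the sequence described above (the --prefix value is only an example):

```shell
./configure --prefix=/usr/local   # detect compiler/libraries, generate the Makefile
make                              # compile the sources into binaries
make install                      # copy binaries, headers, etc. under the prefix
```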
In most cases, configure will discover the compiler/environment of your machine and make will build the suitable binary. Therefore, unfortunately, it will not be as easy as moving/copying header files to new locations.
However, in some cases the library can be a "header-only" library, which means you need only the header files to use it.
I have no experience with MSYS and SDL, but the basics of configure and make are worth learning (especially if you are going to program any C/C++ in a non-Windows environment).