I am making a modified C++ compiler and I have it built and tested locally. However, I would like to be able to package my build for Windows, Linux (Debian), and Mac OSX.
All of the instructions I can find online deal with building GCC but have no regard for making something distributable (or perhaps I am missing something?). I know for Windows I will need to bundle MinGW somehow, but that only further confuses me, and I have no idea how well the Mac works with GCC these days.
Can anyone lay out a set of discrete high-level steps I could try on each system so that people can install my modified compiler easily?
First, make sure your project installs well: executables, headers, and runtime dependencies. If you're using something like CMake, this is a matter of installing things relative to CMAKE_INSTALL_PREFIX, possibly with the help of the GNUInstallDirs module. If you are using plain make, then you need to ensure that something like make install prefix=... works well.
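For example, a minimal sketch of such install rules (the mycc target name and the include/ layout are made up for illustration):

cmake_minimum_required(VERSION 3.5)
project(mycc CXX)
include(GNUInstallDirs)
add_executable(mycc src/main.cpp)
install(TARGETS mycc RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})

After this, make install (or the equivalent for your generator) should put everything under the chosen prefix.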
From here, you can target each platform independently, and treat the packaging as separate from your project itself. Like Chipster mentioned, making rpm files isn't so tough. deb files for Debian-based OSs and pacman packages (tar.xz) for Arch-based OSs are similar. The rules for creating these packages can use your install rules to build the package contents. You mentioned MinGW. If you're targeting an MSYS distribution of MinGW for Windows deployment, then Arch-style pacman packaging works on MSYS as well. You can slowly work on supporting one platform at a time with almost no changes to your actual project.
Typically in the open-source world, people will release a tar.gz file supporting ./configure && make && make install or similar. Then someone associated with a platform (like a Debian developer) will find your project, write packaging rules for it, and release it into their distribution. That means your project can be totally agnostic about where it's being released. It also means you don't really need to worry about MacOS yet; you can wait until someone wants it there, or until you have hardware to test on.
If you really want to control how things are packaged for each platform from inside your project, and you are already using CMake, CPack is a great tool which helps out. After writing CPack rules for your project, you can simply type cpack to generate many types of deployable archives. You won't get the resulting *.deb file into the official Debian or Ubuntu archives, but at least people on those platforms can install your package using those formats.
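As a sketch, assuming your CMakeLists.txt already has working install rules, enabling CPack can be as little as (the package name, version, and contact are placeholders):

set(CPACK_PACKAGE_NAME "mycc")
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_PACKAGE_CONTACT "you@example.com")  # the DEB generator wants a maintainer/contact
set(CPACK_GENERATOR "TGZ;DEB")
include(CPack)

Then, from the build directory, run cpack (or cpack -G DEB for a single format).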
Also, consider releasing one package with the runtime libraries and one with the development content (headers, compiler, static libraries). This way, someone who uses your compiler can redistribute just the runtime libraries, which is probably a much simpler install.
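A sketch of that split using CPack components (the targets and component names are illustrative):

install(TARGETS mycc RUNTIME DESTINATION bin COMPONENT devel)
install(TARGETS mycc_rt LIBRARY DESTINATION lib COMPONENT runtime)
set(CPACK_COMPONENTS_ALL runtime devel)
set(CPACK_DEB_COMPONENT_INSTALL ON)  # one .deb per component
include(CPack)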
I'm trying to use the Quadprog++ library (http://quadprog.sourceforge.net/). I don't understand the instructions though.
To build the library simply go through the ./configure; make; make install cycle.
In order to use it, you will be required to include in your code file the "Array.hh" header, which contains a handy C++ implementation of Vector and Matrices.
There are some "configure" and "Makefile" files, but they have no extension and I have no idea what to do with them. There are also some files with ".am", ".in" and ".ac" extensions in the folder.
Does this look familiar to anyone? What do I do with this?
(Edit: On Windows.)
This package is built using the autotools. The files you mention (*.am, *.in, ...) belong to the tools automake and autoconf.
Autotools is a de facto standard in the GNU/Linux world. Not everybody uses it, but if you do, you ease the work of package and distribution managers. Builds done this way should be portable to any POSIX system.
That said, I'm guessing that you are using a non-Unix machine, such as Windows, so the configure script is not directly runnable on your system. If you insist on keeping Windows, which you probably will, your options are:
Use MinGW and MSYS to get a minimal build environment compatible with the autotools.
Use Cygwin and create a POSIX-like environment on your Windows machine.
Create a VS project, add all the sources of the library there, compile, and fix any errors that arise, as if the code had been written by you.
Search for someone who already did the work and distributes a binary DLL or similar.
(My favourite!) Get a Linux machine, install a cross-compiler environment to build Windows binaries, and do ./configure --host=i686-mingw32; make.
These instructions say how to build a program delivered as a tarball on Linux. For background, take a look at Why always ./configure; make; make install; as 3 separate steps?.
This can be confusing at first, but here you go. Type these in as shown below:
cd <the_directory_with_the_configure_file>
./configure
At this point, a bunch of stuff will roll past on the screen. This is the configure script, generated by Autoconf, probing your system (for more details, see http://www.edwardrosten.com/code/autoconf/index.html).
When it's done, type:
make
This initiates the build process. (To learn more about GNU make, check out Comprehensive gnu make / gcc tutorial). This will cause several build messages to be printed out.
When this is done, type:
sudo make install
You will be asked for your password (sudo requires it to grant superuser access). If this is not your own machine, or you do not have superuser access, contact the person who administers the computer.
If it is your computer, type in your password and the library should install in /usr/local/lib/ or somewhere similar (watch the screen closely to see where it puts the .so file).
The rest of it (including the .hh file) seems self-explanatory.
Hope that helps!
I'm working on a simple command-line tool in C++. Half as a fun learning exercise, and half to distribute to friends, colleagues, etc.
I assume the easiest way to make it distributable is to package the source code with an installation script. Can anyone point me to a good tutorial for setting that up?
In other words, what must a script include to compile the program, put the files in sensible places, and make the program executable from any directory on the command line?
E.g. I know the compiled binary should go in /usr/local/bin/, but if I'm writing to and reading a text file (for instance), where should that go? What about a file that stores settings/configuration parameters?
I'm on Mac OS X, so that would be the starting point, but portability to Windows, Linux, etc. would be great.
You can use CMake to create a cross-platform build system, and you can use its CPack feature (Wiki here) to generate binary-only packages. First you create a build script that builds and installs on each platform (which CMake makes as easy as can be expected). You then run CPack to generate a package which includes just your binaries.
There is a good tutorial that covers the basic CMake process (including install commands) here.
CMake is generally considered simpler than autoconf (and has better Windows support), but each has its own strengths.
Do not assume that the user installing the program has root access. Prompt, or provide a command-line option, like --install-prefix=/home/user/apps, to specify where to install.
I HATE programs that install shit in /usr/local. If you do that, you'd best wrap it up in an .rpm or .deb or whatever the platform's package format is, so that your app can be cleanly uninstalled.
I would suggest checking out autoconf.
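For an autotools-based project, the conventional way to honor that is the --prefix option at configure time, for example:

./configure --prefix=$HOME/apps
make
make install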
I made a little app and built a release version. Now I want to upload it to my site. I have never done this before with Qt, so I'm unsure as to what I should include along with the binary.
How do I figure out which DLLs should be included with my app? And where do I get them? I'm running Windows, but I'd also like to know what I should do in case I want to release a Linux version.
For Windows:
You can use Dependency Walker to see which Qt libraries (or others) you should ship. This is the depends.exe executable that is included with Visual Studio, but you can download it separately from http://www.dependencywalker.com/
Load your app into it and it will list all the modules your app expects at runtime. You might also have to ship a Visual C++ runtime redistributable compatible with the compiler you built the executable with (if it's VC++).
Do note that dependency walker does not account for things like Qt's plugins. An example of this would be the QtAssistant system (for help menu-type functionality), which as of Qt4 relies on Qt's sqlite functionality, which is typically built as a plugin (qtsqlite4.dll if I remember).
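As an aside, newer Qt releases (Qt 5 onward) ship a windeployqt tool that scans an executable and copies the required Qt DLLs and plugins next to it, which covers exactly the plugin cases Dependency Walker misses (the executable path is a placeholder):

windeployqt path\to\myapp.exe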
For Linux:
This is trickier because of the wider disparities among Linux distributions. You can of course use the GNU build system if you want to ship source, but if you're shipping binaries and want to support a variety of distros, you might do best to build packages for each platform you want to release on.
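On Linux, the rough counterpart to Dependency Walker is ldd, which prints the shared libraries a binary will load (the binary name is a placeholder):

ldd ./myapp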
At a company I worked for in the past, we switched to CMake, and after setting up all the project and build files we used it to generate builds and packages for different OSes. On Windows, this meant hooking in with Inno Setup, and for Unix-like systems CMake knows how to generate things like installable shell scripts. It definitely made life much easier.
Our QA department would test our software in virtual machine instances of our supported platforms, completely clean, and see if anything was missing.
If you're talking about DLLs, I assume it is about Windows.
Use Dependency Walker to see the DLL dependencies.
Or... take a clean system with no dev tools installed, copy your executable onto it, try to run it there, and see which DLLs are reported as necessary but missing. Put those DLLs next to the executable.
For a Linux version, you can either create targeted installer packages for each Linux distribution or let people compile from source. If your app is new, the only way to get exposure is to supply people with ready-made, targeted installers; new users loathe compiling packages from source.
You can try Debian (.deb) and Red Hat (.rpm) packages first. These two lines are extremely popular and will give you a taste of things.
I have access to a computer in a public library and I want to try out some C++ and maybe other code. The problem is that there is no g++ installed, and I can't install it using packages because I have no root access. Is there a "smart" way to set up a full programming environment in a home folder?
I have gcc installed (I can compile C code). Also, I have a home folder that persists between sessions. I don't know where to find a precompiled g++; I found only the source, but I don't know what to do with it. I tried asking them to install it, but that didn't work :)
If you want to install it as a local user, GNU GSRC provides an easy way to do so: http://www.gnu.org/software/gsrc/
After configuration, simply specify the following commands:
cd gsrc
make -C pkg/gnu/gcc
make -C pkg/gnu/gcc install
The second step can also be changed as follows to speed things up on an N-core system:
make -C pkg/gnu/gcc MAKE_ARGS_PARALLEL="-jN"
You can run the configure script with the --prefix parameter: ../gcc-4.5.0/configure --prefix=/home/foo/bar. Since it is very likely that the C++ standard library differs from the one on your system, you have to set export LD_LIBRARY_PATH=/home/foo/bar/lib before you can start a program compiled by this compiler.
Once, a long time ago (1992 or so), I went through something similar when I bought a SCO system with no development environment. Bootstrapping it up to a full development environment was a gigantic pain, and not at all easy. Having library header files or gcc on the system would make your job a whole lot easier.
It depends a lot on just how obnoxious the library has been about what kinds of things are installed. If there is no gcc there, your job becomes a bit harder. If there are no header files for glibc there, your job is a LOT harder.
Also, do you get an account on the system so you have a home folder that's consistent from login to login?
If you have no gcc there, you need to find a pre-compiled binary of gcc/g++ and install it somewhere. If you have no header files there, you need to find copies of those and put them on the system.
There is no 'standard' way of installing gcc in your home folder though. All of the solutions are going to have some manner of hand-rolling involved.
Have you asked the librarians if they can change what's installed because you want to learn a bit of programming and only have access to their computers to do it with? That might well be the easiest solution.
From your comment it seems that you do have gcc, and since you can compile C code, you have the library header files. So now it's a matter of actually compiling your own version of g++. You could probably find a way to entice the package manager on the system into installing a binary package somewhere other than a system folder. I think that solution is less fun than compiling your own, and there may also be subtle problems, since the installed package may expect to find things in particular places and not find them there.
The first thing to do is to make sure you've downloaded the right source for the gcc package. The place to find that is the GNU United States mirror page. You want the gcc-4.5.0.tar.bz2 or gcc-4.5.0.tar.gz file on the mirror site you choose. It will likely be in a gcc directory, in a gcc-4.5.0 sub-folder.
After you have that downloaded, untar it. In general you shouldn't build gcc in the folder you untar it into, so create a sibling folder to build in, named gcc-build. Then, from inside gcc-build, the command you want is ../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages='c c++'.
gcc does require some other packages to be installed in order to compile itself. You can use the same --prefix option to install those packages in the same place. The gcc website has a list of prerequisite packages.
$HOME/.local is sort of the standard accepted place for per-user installs of things.
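Putting the steps above together, here is a sketch of the whole sequence (version as above; the paths are one reasonable choice):

tar xjf gcc-4.5.0.tar.bz2
mkdir gcc-build
cd gcc-build
../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages='c c++'
make
make install
export PATH=$HOME/.local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/.local/lib:$LD_LIBRARY_PATH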
If you have fakeroot, you can use it to treat ~/some-path as the root for installing packages into. Alternatively, you can set up a chroot environment to the same effect.
Given this, you can then use dpkg -i package.deb to install the gcc package(s) on your system. You will need to download and install each package individually (e.g. from the Debian website): at least binutils, glibc, linux-headers, and gcc.
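If you only need the files rather than dpkg's bookkeeping, a simpler sketch is to unpack each .deb straight into your home folder with dpkg -x and point your environment at it (the package filename is a placeholder):

dpkg -x some-gcc-package.deb $HOME/local
export PATH=$HOME/local/usr/bin:$PATH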
If you are on another system, you will need to get the right packages for that system and install them using the corresponding package manager.
I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool, which is why we have based our project on CMake too. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share of the autotools so far, and as much as I like the documentation (the autobook is a very good read), I fear the autotools are not meant to be used natively on Windows.
Some of you suggested letting an IDE handle the dependency management. Our team consists of individuals using everything from pure Vim to fully blown Eclipse CDT or Visual Studio, so this is where CMake gives us some flexibility, with its ability to generate native project files.
The latest CMake 2.8 release includes the new ExternalProject module. This allows you to download or check out code, then configure and build it, as part of your main build tree. It should also allow you to set dependencies.
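A minimal sketch of ExternalProject usage (the URL and the target names are placeholders):

include(ExternalProject)
ExternalProject_Add(ogre3d_external
  URL http://example.com/ogre3d-src.tar.gz
  PREFIX ${CMAKE_BINARY_DIR}/deps/ogre3d
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps/install
)
add_dependencies(mygame ogre3d_external)  # build the dependency before our own target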
At my work (a medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK etc.) are built once for each system we support (MSWin32, MSWin64, Linux32 etc.) and are committed as zip files into the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using the GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects, and I think they work beautifully. Truth be told, it takes a little while to get used to the syntax, but I have used them successfully on a project that requires the distribution of Python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole, but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based on the platform you are about to compile on, Automake generates the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain about is any kind of Windows wrapper for the GNU Autotools. I know you can use them inside Cygwin, but as for actually distributing files and dependencies on Windows platforms, you are probably better off using a Windows MSI installer (or something that can package your project inside Visual Studio).
If you want to distribute dependencies, you can set them up in their own subdirectory, for example libzip, with a specific Makefile.am entry which will build that library. When you perform a make install, the library will be installed to the lib folder that the configure script determined it should use.
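A sketch of what that might look like (the directory and library names follow the libzip example, purely for illustration):

# top-level Makefile.am
SUBDIRS = libzip src

# libzip/Makefile.am
lib_LTLIBRARIES = libzip.la
libzip_la_SOURCES = zip.c zip.h

configure.ac then lists each generated makefile, e.g. AC_CONFIG_FILES([Makefile libzip/Makefile src/Makefile]).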
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use a script that scans source files (or has the C++ compiler do it) to find #includes (conditional compilation makes this tricky) and generates part of the makefile.
But you'd need to call this script whenever something might have changed.
The question is if there is a feasible way to get the dependencies set up automatically.
What do you mean by "set up"?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency sources? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: As a side note, CMake has the file command, which can be used to download files when they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
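For instance, a sketch (the URL is a placeholder); STATUS yields a two-element list whose first entry is 0 on success:

file(DOWNLOAD http://example.com/dep.tar.gz ${CMAKE_BINARY_DIR}/dep.tar.gz STATUS dl_status)
list(GET dl_status 0 dl_code)
if(NOT dl_code EQUAL 0)
  message(FATAL_ERROR "download failed: ${dl_status}")
endif()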
Edit 2: CPack is another tool by the CMake folks that can be used to package up files for distribution on various platforms. It can create NSIS installers for Windows and .deb or .tgz files for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows CMake to be run from various locations.
/
    CMakeLists.txt        "install precompiled dependencies and build the project"
    project/
        CMakeLists.txt    "build the project, managing dependencies of subsystems"
        subsystem1/
            CMakeLists.txt    "build subsystem 1, assuming dependencies are already met"
        subsystem2/
            CMakeLists.txt    "build subsystem 2, assuming dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation, but that the top-level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files, but they make the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
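One way to get that call-in-isolation behavior is to give every subdirectory its own project() call, so it can be configured standalone while the parent simply adds it. A sketch, with names following the tree above:

# subsystem1/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(subsystem1)
add_library(subsystem1 src/subsystem1.cpp)

# project/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(wholeproject)
add_subdirectory(subsystem1)
add_subdirectory(subsystem2)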
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source distribution is the GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps, you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc. to download it
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc.) into the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that can break this, depending on the installers for each dep (perhaps an installer is interactive?), but the approach covers the general idea.
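A rough sketch of such a dependency target in GNU make (the URL, names, and layout are invented, and recipe lines must start with a tab):

deps: deps/libfoo/.installed

deps/libfoo/.installed:
	mkdir -p deps
	test -d deps/libfoo || curl -L http://example.com/libfoo.tar.gz | tar xz -C deps
	cd deps/libfoo && ./configure --prefix=$(CURDIR)/deps/install && $(MAKE) && $(MAKE) install
	touch $@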
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app, with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite and cmake, all in the correct order).
The tool, named "C++ Version Manager" (inspired by the excellent Ruby Version Manager), is written in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.