I contribute to an open source project and would like to use install4j to build installers. I further would like to use Travis-CI as my build server, using their free open-source offer.
Travis runs every build in a fresh VM, and install4j requires a command-line compiler to be present.
Thus, I need to provide/install install4j itself in my build script.
A crude approach would be to fetch an archive from the ej-technologies website on every build, unpack it, and then compile. Depending on the system in use, this requires OS-switching logic and a fair amount of archive handling.
I hope there is a better way. How would you go about it?
[Making @IngoKegel's comments into an answer]
Downloading a fresh copy of Install4J from the ej-technologies website is the only way there currently is. This has to be done separately for each OS, since there are no platform-independent installers or archives.
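For a Linux worker this boils down to a few lines in the Travis install phase. A minimal sketch, assuming a hypothetical download URL and archive layout (check the ej-technologies site for the real ones):

    # Sketch of a Travis "install" step. The download URL and the unpacked
    # directory name are assumptions; the real ones are on ej-technologies.com.
    wget -q -O install4j.tar.gz \
        "https://download.ej-technologies.com/install4j/install4j_linux.tar.gz"
    tar xzf install4j.tar.gz
    # Put the command-line compiler (install4jc) on the PATH
    export PATH="$PWD/install4j/bin:$PATH"
    install4jc my_project.install4j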
I tried to follow this link to deploy an application for Windows, but I don't have the configure file and I am unable to run the command.
Any help?
Skip the configure step. What you need is the windeployqt utility.
Afterwards you may need to remove some libraries your project doesn't use. For my projects, windeployqt insists on deploying a huge openglsw.dll (or something like that), even though I don't need it.
I also recommend simply copying the MSVC runtime libraries into your distribution rather than installing them from the supplied installer package. Just make sure you get all of them, in the right version and for the right architecture.
If you use the Qt Network module, you may want to also deploy the OpenSSL binaries - you'll need them if you want to access HTTPS resources.
And my last piece of advice: use Inno Setup for the installer. It is well documented, very well written, very easy to use, and produces no junk.
P. S. You could use one of my open source projects for reference, I have a script to copy all the necessary files into one folder and pack it into an installer using Inno Setup. I try to keep the scripts as simple and short as possible, no junk there.
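To tie these steps together, here is a minimal packaging sketch (run from Git Bash or similar on Windows; myapp.exe, dist/ and installer.iss are placeholder names):

    mkdir -p dist
    cp release/myapp.exe dist/
    # Copy the required Qt DLLs, plugins and translations next to the binary;
    # --no-opengl-sw skips the large software-OpenGL DLL mentioned above.
    windeployqt --release --no-opengl-sw dist/myapp.exe
    # Copy the MSVC runtime DLLs (and the OpenSSL DLLs, if you need HTTPS)
    # into dist/ here as well. Then compile the installer with Inno Setup's
    # command-line compiler:
    iscc installer.iss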
I want to make it easy for others to work on my repository. However, since some of the compiled dependencies are over 100 MB in size, I cannot include them in the repository: GitHub rejects files that large.
What is the best way to handle large binary dependencies? Building the libraries from source is not easy under Windows and takes hours. I don't want every developer to struggle with that process.
I've recently been working on using Ivy (http://ant.apache.org/ivy/) with C++ binaries. The basic idea is that you build the binaries for every build combination. You will then zip each build combination into a file with a name like mypackage-windows-vs12-x86-debug.zip. In your ivy.xml, you will associate each zip file with exactly one configuration (ex: windows-vs12-x86-debug). Then you publish this package of multiple zip files to an Ivy repo. You can either host the repo yourself or you can try to upload to an existing Ivy repo. You would create a package of zip files for each dependency, and the ivy.xml files will describe the dependency chain among all the packages.
Then, your developers must set up Ivy. In their ivy.xml files, they will list your package as a dependency, along with the configuration they need (ex: windows-vs12-x86-debug). They will also need to add an ivy resolve/retrieve step to their build. Ivy will download the zip files for your package and everything that your package depends on. Then they will need to set up unzip & move tasks in their builds to extract the binaries you are providing, and put them in places their build is expecting.
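As a rough illustration of both sides of that workflow (all file, target and path names here are made up; the Ant files would simply wrap the standard ivy:publish and ivy:retrieve tasks):

    # Publisher side: one zip per build combination, then publish via Ivy.
    for config in windows-vs12-x86-debug windows-vs12-x86-release; do
        zip -r "mypackage-$config.zip" "build/$config"
    done
    ant -f publish.xml publish      # Ant target wrapping <ivy:publish/>

    # Consumer side: resolve/retrieve the zips, then unpack them where
    # the build expects the binaries to be.
    ant -f fetch.xml retrieve       # Ant target wrapping <ivy:retrieve/>
    unzip -o lib/mypackage-windows-vs12-x86-debug.zip -d third_party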
Ivy's a cool tool but it is definitely streamlined for Java and not for C++. When it's all set up, it's pretty great. However, in my experience as a person who is not really familiar with DevOps at all, integrating it into a C++ build has been challenging. I found that it was easiest to create simple ant tasks that do the required ivy actions, then use my "regular" build system (make) to call those ant tasks when needed.
I should also mention that the reason I looked into Ivy was that I was implementing this in a corporate environment where I couldn't change system files. If you and your developers can do that, you may be better off with an RPM/APT system. You'd set up a repo and have your developers add it to the appropriate RPM/APT config file. Then they would run commands like sudo apt-get install mypackage, and apt-get would do all the work of downloading and installing the right files in the right places. I don't know how this would work on Windows; maybe someone has created a Windows RPM/APT client.
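For completeness, the consumer side of the APT route would look something like this (the repo URL and package name are placeholders):

    # Add the custom repo, then install; apt-get resolves dependencies itself.
    echo "deb http://packages.example.com/apt stable main" | \
        sudo tee /etc/apt/sources.list.d/mypackage.list
    sudo apt-get update
    sudo apt-get install mypackage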
I want to use google-url in my project as a shared library on Linux/Mac OS, but I cannot figure out the right way to build it...
Question: what is the way you would suggest to build it from scratch from the official sources?
Requirements: be able to stay in sync with the official repo and use standard (make) tools.
As far as I can see, there are currently a few ways to build it:
The official repo itself only includes Visual Studio 2005 build files.
It is used by Chromium, so there is a .gyp file available for it, but that file looks tightly integrated with the Chromium build structure, so there is no easy way to generate a Makefile for a standalone library build. It does contain the comment "TODO(mark): Upstream this file to googleurl.", so at least this is considered possible.
googleurl is also integrated with the PageSpeed project in .gyp form (though not the same file as above), and so it is somehow built there.
Third-party Python bindings are available and also contain some build instructions, but with SCons this time, and AFAIK that is a rather obsolete system to rely on.
It looks like I'm not the only one with this problem; the other people I found both just implemented their own build files using autotools:
https://github.com/artemg/Googleurl-separate-library
https://github.com/commoncrawl/commoncrawl-crawler/blob/master/src/native/src/libGoogleURL/googleurl/README.google
That could work, but the filesystem layout is not the same as in the official repo, and they carry local modifications, so there is no easy way to pull in upstream changes and stay in sync.
The most tempting way would be to use GYP to generate platform-specific build files (make/Xcode/Visual Studio) for the official repo once, then just save and use them later as needed... but I have no idea how to approach this or where to start.
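What I imagine is a one-time generation step roughly like the following (just a sketch, assuming GYP is installed and a suitable googleurl.gyp, e.g. the PageSpeed one, is present):

    # Generate a standalone Makefile once, then build with plain make.
    gyp --depth=. -f make googleurl.gyp    # or: -f xcode / -f msvs
    make BUILDTYPE=Release                 # gyp's make generator uses BUILDTYPE

...but, as said, I don't know whether the Chromium/PageSpeed .gyp files work outside their own trees.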
I'm working on a simple command-line tool in C++, half as a fun learning exercise and half to distribute to friends/colleagues etc.
I assume the easiest way to make it distributable is just packaging the source code with an installation script; can anyone point me to a good tutorial for setting that up?
In other words, what must a script include to compile the program, put the files in good places, and make it runnable from any directory on the command line?
E.g. I know the compiled binary should go in /usr/local/bin/, but if I'm writing to and reading from a text file (for instance), where should that go? What about a file that stores settings/configuration parameters?
I'm on Mac OS X, so that would be the starting point, but portability to Windows, Linux, etc. would be great.
You can use CMake to create a cross-platform build system, and you can use its CPack feature to generate binary-only packages. First you create a build script that configures and installs on each platform (which CMake makes as easy as can be expected). You then run CPack to generate a package which includes just your binaries.
There is a good tutorial that covers the basic CMake process (including install commands) here.
CMake is generally considered simpler than autoconf (and has better Windows support), but each has its own strengths.
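The whole flow, from a clean checkout to a binary package, is then something like this (a sketch; it assumes your CMakeLists.txt contains install() rules and include(CPack)):

    mkdir build && cd build
    cmake ..             # generate the platform's native build files
    cmake --build .      # compile
    cpack -G TGZ         # binary-only package; other generators: ZIP, NSIS, DEB, RPM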
Do not assume that the user installing the program has root access. Prompt, or provide a command-line option, like --install-prefix=/home/user/apps, to specify where to install.
I hate programs that dump files into /usr/local. If you do that, you'd best wrap it up in an .rpm or .deb or whatever the platform's package format is, so that your app can be cleanly uninstalled.
I would suggest checking out autoconf.
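With autoconf, the user-facing flow also addresses the no-root-access point made above (a sketch):

    ./configure --prefix="$HOME/apps"   # install under the user's home, no root needed
    make
    make install                        # puts files under ~/apps/bin, ~/apps/share, ...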
What we need in our firm is a sort of release-management tool for Linux/C++. Our products consist of multiple libraries and config files. Here I will list the basic features we want such a system to have:
Ability to track dependencies and easily bump the major versions of libraries whose dependencies had their major version increased. It should build some sort of dependency graph internally, so it knows who is affected by an update.
Know how to build the products it handles. Either via a specific build file or, even better, by reading and understanding makefiles.
Work with SVN, so it can check for new releases there and do the builds.
Generate some installers, in rpm or tar.gz format. For that purpose it should be able to understand the rpm spec file format.
We are currently working on such a tool, and it is already pretty usable. However, I believe our task is not unique and there should be some tool out there that does the job.
You should look into using a mix of Hudson, Maven (for build management), Ivy (for dependency management) and Archiva (for artifact archival).
Also, if you are looking into cross-compilation, take a look at Make Project Creator (MPC) and Bakefile.
Have fun!!
In the project I'm currently working on, we use CMake and other Kitware tools to handle most of these issues for native (C++) code. Answering point by point:
The CMake scripts handle the dependencies between our different projects. We have a dependency graph, but I don't know whether it is a home-made script or functionality that CMake provides.
CMake generates the makefiles appropriate to the platform. It also generates projects for Eclipse CDT and Visual Studio if asked to, for development.
CMake has a couple of companion tools, CTest and CDash, which we use to run the daily builds and see how the tests are doing.
To create installers, CMake has CPack. From a single script it can generate tar.gz, deb or rpm files on Linux, or an automatically generated NSIS script to build installers on Windows.
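For illustration, the daily-build and packaging steps boil down to something like this (a sketch; it assumes the project is already configured in build/ with CTest and CPack enabled):

    cd build
    ctest -D Nightly    # build, run the tests and submit the results to CDash
    cpack -G RPM        # or -G DEB / -G TGZ on Linux, -G NSIS on Windows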
For Java code we use Maven and Hudson, which have already been mentioned here.
Take a look at this article from DDJ, in which a more robust build-system concept (than make) is presented and implemented. Not sure it will fit your requirements well, but it's the closest I've ever seen. I was looking for the same thing months ago, and then I discovered the article.
http://www.drdobbs.com/architect/218400678
Maven has a native code plugin. I don't think it'll do everything you want, but it's good at tracking version numbers of dependencies, will build artefacts and it'll work with your VCS.
No idea
cmake/scons: I have used CMake and don't exactly love it, but I have heard really good things about SCons. Note that SCons is Python-based, so you need Python installed on the build/dev machines.
I use Hudson, which has a plugin to fetch from SVN. It behaves intelligently in general; in particular, it builds only if some file has changed in an SVN update. Hudson is easy to get started with. It is Java-based and pretty popular in the Java community, which means it is quite cross-platform, but you need a JRE installed on the build machine.
You can probably call some rpm tool from within Hudson.