I've started a new project in git and use ./autogen.sh together with xfce4-dev-tools to generate the configure script and other files.
I'm wondering whether it's a bad idea to only provide the git sources, or whether I also need to create a dist tarball.
A distribution tarball is easier to use but possibly limited to a certain Linux distribution and sometimes even to a certain version of said distribution.
autogen.sh makes things more flexible but at the cost of needing a more complex setup before you can use your project.
My approach to this problem is to have a script which:
installs all dependencies or at least directs people to where they can get them
creates all system specific files
builds the whole project
runs the tests
creates distribution tarballs
I use the same script to build the dist tarballs, so the script is a) useful and b) executed often to keep it healthy.
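A minimal sketch of what such a script might look like, assuming the usual autotools flow (whether your autogen.sh already runs configure depends on your setup, and the listed package names are only examples):
#!/bin/sh
set -e
# dependencies: at the very least, tell people what they need
echo "This build needs: autoconf automake libtool xfce4-dev-tools"
# create all system specific files
./autogen.sh
./configure
# build the whole project
make
# run the tests
make check
# create distribution tarballs
make distcheck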
I have a project that depends on many external libraries like GLFW3, GLEW, GLM, FreeType2, zlib etc. It would be best to store/share the installed dependencies between jobs so it wouldn't have to download and install them every time, which takes about half of the build time. I can see a couple of ideas for handling it:
a) for each job of each build, download the dependencies and install them
b) put the dependencies (sources) inside my repo for a small speedup, because I would no longer have to download them from outside servers (I'd still have to compile and install them)
c) compile them by hand, put them on some server, and just download the right package for each build
a) leaves the least work for me when updating dependencies for building and testing, and lets me build my project against the newest versions, but it takes the most time (both downloading and compiling)
b) bloats the repository with extra code (not mine), gives only a small speedup (downloading is usually not that slow), and adds manual work to update dependencies; I guess it's worse than a)
c) requires the most work from me to constantly keep the built dependencies up to date and upload them to a fast server (a different set per build task, compiler, etc.), but allows for the fastest builds (just download & copy/install)
So, how do you manage your external dependencies and keep them up to date for your Travis builds?
Note that I use the free version of Travis and pretty much need sudo for updating cmake, gcc etc. and installing dependencies... I could somehow trick CMake into using local versions of the dependencies instead of /usr/..., but that bloats the CMake files, which I believe should stay very simple and clear.
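As an aside on that last point: pointing CMake at local copies doesn't have to touch the CMake files at all, since CMAKE_PREFIX_PATH can be passed on the command line. A rough sketch, with made-up paths and GLFW standing in for any dependency:
# build a dependency once into a local prefix (GLFW is just an example)
mkdir -p glfw/build && cd glfw/build
cmake -DCMAKE_INSTALL_PREFIX="$HOME/deps" ..
make install
cd ../..
# then point the project build at that prefix from the command line,
# without adding anything to the project's own CMakeLists.txt
mkdir -p build && cd build
cmake -DCMAKE_PREFIX_PATH="$HOME/deps" ..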
Let's call the entire set of dependencies your build requires at some point in time the dependency "lineup".
I'd separate the use of (a certain version of) the lineup in your builds from the task of updating the lineup version (when a new version of a dependency is needed) - mixing them unnecessarily complicates the picture IMHO (I'm assuming that many of your builds would use the same dependency lineup version).
Let's look at the use of the lineup first.
From your description the installation of the (already built) dependencies is common to all 3 options and executed at every build, so let's put that aside for now.
The difference between your approaches remains only in how you obtain the built dependencies:
a - download them from outside servers - at every build
b - build them from repo sources - at every build
c - download them from a local fast server - at every build
It seems a and c are fundamentally the same except c would be faster due to local access. Most likely b would be the slowest.
So it looks like c is the winner from the build speed perspective in this context.
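As a sketch, the per-build part of c could be as small as this (the server name, bundle name and prefix are made up):
# fetch the prebuilt dependency bundle for this build configuration
curl -O http://deps.example.local/lineup-gcc5-release-v12.tar.gz
# unpack it into the prefix the project build is configured to look at
mkdir -p "$HOME/deps"
tar -xzf lineup-gcc5-release-v12.tar.gz -C "$HOME/deps"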
Now the question would be if/how you can get your built dependencies lineup on the local server faster for the c approach. I see 2 options (unsure if they're both possible in your context):
download dependencies already built (as you would in your a approach) from outside servers (effectively just caching them on the local servers)
building dependencies locally from source, except you neither have to place those sources in the same repo as your project nor build them for every project build (as in your b approach) - you only need to do this when you update your lineup (and only for new versions of the respective dependencies).
You could also look into mixing the two options: some dependencies using option 1, others using option 2, as you see fit.
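A sketch of option 2, run only when the lineup changes rather than at every project build (versions, paths and the server name are assumptions):
# build one dependency from source into a staging prefix
cd glfw-3.2 && mkdir -p build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/tmp/lineup .. && make install
cd ../..
# ...repeat for GLEW, GLM, FreeType2, zlib...
# bundle the whole lineup and publish it on the fast local server
tar -czf lineup-gcc5-release-v13.tar.gz -C /tmp lineup
scp lineup-gcc5-release-v13.tar.gz deps.example.local:/srv/deps/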
Now, if you're using VM (or docker or similar) images for your build machines and you have control over such images, it might be possible to significantly speed up your builds by customizing these VM images - have the dependency lineup installed on them, making it immediately available to any build on a machine running such a customized image.
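With docker, for example, that customization could be as simple as baking the installed lineup into an image (the image tag and package names are only illustrations):
# start from a base image and install the dependency lineup once
docker run --name lineup-base ubuntu:16.04 \
    bash -c "apt-get update && apt-get install -y libglfw3-dev libglew-dev libglm-dev libfreetype6-dev zlib1g-dev"
# snapshot the result as a reusable build image; builds then start from it
docker commit lineup-base myorg/build-image:lineup-v13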
Finally, when time comes to update your dependency lineup (which, BTW, should be present in your a and b approaches, too, not only in the c one) you'd have to just download the new dependencies versions, build them if needed, store the built dependencies on the local fast server and, if the customized VM image solution works for you, update/re-create the customized VM image with the installation of the new dependency lineup.
I have created a WebDriver boilerplate that I use across multiple projects, and on multiple workstations for the same project; since I treat it as a library, I don't commit it to the project(s) repo. For instance:
I have "Project X" running on my desktop, laptop and work computer. I update my boilerplate code on my laptop and commit the changes to the boilerplate repo, and then make some modifications to the Project X test cases and commit those. Later, when I pull from the Project X repo to my laptop, I make some code changes and run my WebDriver tests, which can take about 5-10 minutes. After, say, 10 minutes I realise the tests have all run and some have failed because I forgot to update the library.
Some manual ways of dealing with this might be to have a library version number which is also referenced in the test cases code, however this is also a manual step that could be forgotten.
At the moment I'm leaning towards the library providing a function to generate a hash of itself which the test case code will then need to run first and if the hash mismatches then I know instantly that my test cases should be using a newer library.
What methods are common in this scenario?
Have you considered using the version control commit SHA? Git also has a way of dealing with submodules: http://git-scm.com/book/en/v2/Git-Tools-Submodules.
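A sketch of the submodule route (the boilerplate repository URL is a placeholder):
# pin the boilerplate library to an exact commit inside Project X
git submodule add https://example.com/me/webdriver-boilerplate.git lib/boilerplate
git commit -m "Add boilerplate as a submodule"
# on any other workstation, after pulling Project X, this checks out
# exactly the boilerplate commit the test cases were written against
git submodule update --init --recursive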
Usually languages have a dependency management tool; for Python I believe it's pip, and usually those tools have a way to lock versions and update with one command. See "Install specific git commit with pip".
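With pip, pinning to a specific commit looks roughly like this (the repository URL and SHA are placeholders):
# install the library straight from git, pinned to a known-good commit
pip install "git+https://example.com/me/webdriver-boilerplate.git@1a2b3c4d"
# or keep the pin in requirements.txt so every workstation uses the same version
echo "git+https://example.com/me/webdriver-boilerplate.git@1a2b3c4d" >> requirements.txt
pip install -r requirements.txt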
I want to make it easy for others to work on my repository. However, since some of the compiled dependencies are over 100 MB in size, I cannot include them in the repository; GitHub rejects those files.
What is the best way to handle large binaries of dependencies? Building the libraries from source is not easy under Windows and takes hours. I don't want every developer to struggle with this process.
I've recently been working on using Ivy (http://ant.apache.org/ivy/) with C++ binaries. The basic idea is that you build the binaries for every build combination. You will then zip each build combination into a file with a name like mypackage-windows-vs12-x86-debug.zip. In your ivy.xml, you will associate each zip file with exactly one configuration (ex: windows-vs12-x86-debug). Then you publish this package of multiple zip files to an Ivy repo. You can either host the repo yourself or you can try to upload to an existing Ivy repo. You would create a package of zip files for each dependency, and the ivy.xml files will describe the dependency chain among all the packages.
Then, your developers must set up Ivy. In their ivy.xml files, they will list your package as a dependency, along with the configuration they need (ex: windows-vs12-x86-debug). They will also need to add an ivy resolve/retrieve step to their build. Ivy will download the zip files for your package and everything that your package depends on. Then they will need to set up unzip & move tasks in their builds to extract the binaries you are providing, and put them in places their build is expecting.
Ivy's a cool tool but it is definitely streamlined for Java and not for C++. When it's all set up, it's pretty great. However, in my experience as a person who is not really familiar with DevOps at all, integrating it into a C++ build has been challenging. I found that it was easiest to create simple ant tasks that do the required ivy actions, then use my "regular" build system (make) to call those ant tasks when needed.
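As a rough illustration of that split (the file and target names are whatever you choose; the assumption is only that those Ant targets wrap the standard ivy:resolve and ivy:retrieve tasks):
# let ant/ivy download and unpack the prebuilt binary packages...
ant -f ivy-tasks.xml resolve retrieve
# ...then run the "regular" build, which simply expects the binaries to be in place
make all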
So I should also mention that the reason I looked into using Ivy was that I was implementing this in a corporate environment where I couldn't change system files. If you and your developers can do that, you may be better off with an RPM/APT system. You'd set up a repo and get your developers to add your repo to the appropriate RPM/APT config file. Then they would run commands like sudo apt-get install mypackage and apt-get would do all the work of downloading and installing the right files in the right places. I don't know how this would work on Windows; maybe someone has created a Windows RPM/APT client.
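On the developer side, the APT variant might boil down to something like this (the repository URL and package name are made up):
# add your package repository once...
echo "deb https://packages.example.com/apt stable main" | sudo tee /etc/apt/sources.list.d/mypackage.list
sudo apt-get update
# ...then installing or updating the prebuilt dependencies is a one-liner
sudo apt-get install mypackage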
I have been using ClojureScript on Windows since it first came out and I have noticed that Rich Hickey and others are making occasional updates to it. What is the easiest way to make sure I have the latest changes? Is just copying over the src directory from here enough?
https://github.com/clojure/clojurescript/tree/master/src
The ClojureScript setup on Windows is a little more cumbersome than on Unix-based systems (including Mac OS X). As you said, the best bet is to follow the initial setup instructions from the ClojureScript Wiki and then update the contents of the src directory from time to time. Occasionally you might want to check if the .bat files in the bin and script directories as well as the dependencies listed in the script/bootstrap shell script have changed.
On Unix-based systems the process is easier. Initially, clone the ClojureScript git repository:
git clone https://github.com/clojure/clojurescript.git
From time to time, update the contents of your local clone and re-download the dependencies:
git pull
./script/bootstrap
If the manual process bothers you enough, you might want to consider installing Cygwin to get a Unix environment on your Windows machine, but of course that's a matter of personal preference.
Alternatively, you can try to develop a Windows version of the bootstrap script. I'm sure the ClojureScript team would be happy to include it in the distribution.
If your project requirements for a large application with many 3rd party dependencies included:
1) Maintain a configuration management system capable of reproducing, from source, bit-for-bit identical copies of any build for 25 years after the original build was run, and
2) Use Maven2 as a build tool to compile the build and to resolve dependencies
What process would need to be followed to meet those requirements?
25 years? Let's see, I think I have my old Commodore 64 sitting around here somewhere...
Seriously though - if this is a real question, then you have to consider the possibility that the maven central repository will at some point in the future go away. Maven is heavily reliant on the maven central repository.
You will also need to archive any tools (besides maven) used to create the build. An ideal build process will create an identical binary file at any time, whether it is next week or in 25 years. In practice, there are a lot of things that can prevent you from being able to reliably reproduce your builds.
1) Use a maven repository manager to host all dependencies, and back up the contents of the maven repository (one way to snapshot the dependency set is sketched after this list).
2) Archive any tools used to create the build. Basically maven and the jdk, but if you are using any other maven plugins like NSIS or Ant, then you need to archive those as well. If you are creating any platform specific binaries (like using NSIS), then you need to archive those tools, and probably the OS used to run the tools.
3) Archive your source code repository and make sure the software needed to run it is archived as well.
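As a hedged sketch of point 1, the exact dependency set of a build could be resolved into a dedicated local repository and archived together with the tools (paths and version numbers are arbitrary):
# resolve every dependency and plugin into a standalone local repository
mvn -Dmaven.repo.local=./archived-repo dependency:go-offline
# archive that repository alongside the exact maven and JDK distributions used for the build
tar -czf build-archive-2010-06.tar.gz archived-repo apache-maven-2.2.1 jdk1.6.0_45 src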