I want to make it easy for others to work on my repository. However, since some of the compiled dependencies are over 100 MB in size, I cannot include them in the repository: GitHub rejects files that large.
What is the best way to handle large binary dependencies? Building the libraries from source is not easy under Windows and takes hours. I don't want every developer to struggle with this process.
I've recently been working on using Ivy (http://ant.apache.org/ivy/) with C++ binaries. The basic idea is that you build the binaries for every build combination and zip each combination into a file with a name like mypackage-windows-vs12-x86-debug.zip. In your ivy.xml, you associate each zip file with exactly one configuration (e.g. windows-vs12-x86-debug). Then you publish this package of multiple zip files to an Ivy repo; you can either host the repo yourself or try to upload to an existing Ivy repo. You would create such a package of zip files for each dependency, and the ivy.xml files describe the dependency chain among all the packages.
Then your developers must set up Ivy. In their ivy.xml files, they list your package as a dependency, along with the configuration they need (e.g. windows-vs12-x86-debug). They also need to add an ivy resolve/retrieve step to their build. Ivy will download the zip files for your package and everything your package depends on. Then they need unzip-and-move tasks in their builds to extract the binaries you are providing and put them in the places their build expects.
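For illustration, a stripped-down pair of ivy.xml files might look like the following; the organisation, module names, revisions and configuration names here are placeholders for whatever you actually publish:

    <!-- ivy.xml published alongside the zips (provider side) -->
    <ivy-module version="2.0">
        <info organisation="com.example" module="mypackage" revision="1.0.0"/>
        <configurations>
            <conf name="windows-vs12-x86-debug"/>
            <conf name="windows-vs12-x86-release"/>
        </configurations>
        <publications>
            <artifact name="mypackage-windows-vs12-x86-debug" type="zip" ext="zip"
                      conf="windows-vs12-x86-debug"/>
            <artifact name="mypackage-windows-vs12-x86-release" type="zip" ext="zip"
                      conf="windows-vs12-x86-release"/>
        </publications>
    </ivy-module>

    <!-- ivy.xml on the consumer side: pull exactly one configuration -->
    <ivy-module version="2.0">
        <info organisation="com.example" module="myapp" revision="0.1"/>
        <dependencies>
            <dependency org="com.example" name="mypackage" rev="1.0.+"
                        conf="default->windows-vs12-x86-debug"/>
        </dependencies>
    </ivy-module>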
Ivy's a cool tool but it is definitely streamlined for Java and not for C++. When it's all set up, it's pretty great. However, in my experience as a person who is not really familiar with DevOps at all, integrating it into a C++ build has been challenging. I found that it was easiest to create simple ant tasks that do the required ivy actions, then use my "regular" build system (make) to call those ant tasks when needed.
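As a rough sketch of that glue (the file names, paths and target names are made up), the Ant side can be a tiny build file that runs resolve/retrieve and unzips the result, and make just shells out to it:

    <!-- deps.xml: minimal Ant wrapper around the Ivy tasks -->
    <project name="deps" default="deps" xmlns:ivy="antlib:org.apache.ivy.ant">
        <target name="deps">
            <ivy:resolve/>
            <ivy:retrieve pattern="third_party/[module]/[artifact].[ext]"/>
            <unzip src="third_party/mypackage/mypackage-windows-vs12-x86-debug.zip"
                   dest="third_party/mypackage"/>
        </target>
    </project>

A "deps" target in the Makefile then just runs: ant -f deps.xml deps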
I should also mention that the reason I looked into using Ivy was that I was implementing this in a corporate environment where I couldn't change system files. If you and your developers can do that, you may be better off with an RPM/APT system. You'd set up a repo and have your developers add it to the appropriate RPM/APT config file. Then they would run commands like sudo apt-get install mypackage, and apt-get would do all the work of downloading and installing the right files in the right places. I don't know how this would work on Windows; maybe someone has created a Windows RPM/APT client.
I'm a DevOps engineer creating CI processes for projects. I was wondering what the best way is to deal with the following scenario: let's say I have a C++ project (using CLion + CMake) with several developers working on it. In order to be built, the project has some libraries it depends on, which is reflected in the CMakeLists.txt file that has to know where to find those libraries.
Basically, the problem is that we need to make sure every developer has these libraries in the correct paths on their machine, which is a big hassle.
One approach would be to keep those dependencies in the repository. That's convenient, since all a developer has to do is clone the repo and they have everything they need to compile. But as we know, keeping binaries in SCM is not good practice.
The question is, is there a good method to handle project dependencies in a C++ project?
I know that with C#, for example, we could use NuGet packages to handle this kind of scenario: we'd have a NuGet repository in Artifactory hosting the dependency packages, keep a reference to the required packages in our project, and at build time just download the dependencies and build the project.
Is there something similar for C++ (running on Linux, I mean)?
Hope the question is clear enough lol, had a hard time wording it..
It depends on how those dependencies are delivered and packaged. If whoever maintains your dependencies took CMake into account, you can probably use find_package. If they didn't account for this but they do support pkg-config, you can use CMake's FindPkgConfig module. Then all you need to do is let the developers know which dependencies they need to install. This should work regardless of the OS used for development.
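As a minimal sketch of that (ZLIB stands in for a dependency with CMake support, and "otherlib" is a made-up pkg-config name), the consuming CMakeLists.txt only needs something like:

    cmake_minimum_required(VERSION 3.16)
    project(myapp CXX)

    # Dependency with CMake support (config package or Find module)
    find_package(ZLIB REQUIRED)

    # Dependency that only ships a pkg-config .pc file
    find_package(PkgConfig REQUIRED)
    pkg_check_modules(OTHERLIB REQUIRED IMPORTED_TARGET otherlib)

    add_executable(myapp main.cpp)
    target_link_libraries(myapp PRIVATE ZLIB::ZLIB PkgConfig::OTHERLIB)

The developers only need the dependencies installed somewhere CMake/pkg-config can see them; no machine-specific paths end up in the project.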
Other solutions may involve pulling and building the dependency code when you build your project (for example, by using git submodules if possible, or FetchContent, but this can become a nightmare if you have a lot of dependencies).
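For the FetchContent route, a sketch (fmt is just an example dependency, and the myapp target is assumed from the snippet above) looks like this:

    include(FetchContent)

    # Download and build the dependency as part of the configure step
    FetchContent_Declare(
        fmt
        GIT_REPOSITORY https://github.com/fmtlib/fmt.git
        GIT_TAG        10.2.1
    )
    FetchContent_MakeAvailable(fmt)

    target_link_libraries(myapp PRIVATE fmt::fmt)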
Additionally, you can try using a package manager like vcpkg, or conan (if all your dependencies are available there), or CPM.
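With Conan, for instance, the reference-plus-Artifactory workflow you describe for NuGet maps fairly directly: you keep a small recipe in the repo and resolve packages at build time from ConanCenter or your own remote. A minimal conanfile.txt (Conan 2 style, fmt again as a stand-in) would be:

    [requires]
    fmt/10.2.1

    [generators]
    CMakeDeps
    CMakeToolchain

followed by "conan install . --build=missing" before the usual CMake configure step.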
I want to use google-url in my project as a shared library on Linux/Mac OS, but I cannot figure out the right way to build it...
Question: how would you suggest building it from scratch from the official sources?
Requirements: be able to stay in sync with the official repo and use standard (make) tools.
As far as I can see, right now there are a few ways to build it:
In the official repo itself, only Visual Studio 2005 build files are included.
It is used in Chromium, so there is a .gyp file available for it, but it looks tightly integrated with the Chromium build structure, so there is no easy way to generate a Makefile for a standalone library build.
Although it has a comment inside, "TODO(mark): Upstream this file to googleurl.", so at least this is considered possible.
Googleurl is also integrated with the PageSpeed project in .gyp form (though not the same file as above), so it is somehow built there.
Third-party bindings for Python are available and also contain some build instructions, but with SCons this time, and AFAIK that is a rather obsolete system to rely on.
It looks like I'm not the only one with this problem; the other people I found both just implemented their own build files using autotools:
https://github.com/artemg/Googleurl-separate-library
https://github.com/commoncrawl/commoncrawl-crawler/blob/master/src/native/src/libGoogleURL/googleurl/README.google
It could work, but the filesystem layout is not the same as in the official repo, and they carry local modifications, so there is no easy way to pull in upstream changes and stay in sync.
The most tempting way would be to use GYP to generate platform-specific build files (make/Xcode/Visual Studio) for the official repo once, then just save and reuse them later as needed... but I have no idea how to approach this or where to start.
All over the web, the answer to the question "uuid.h not found" is "install some kind of rpm or deb file". This may be appropriate when building some open-source project that depends on other open-source software, but it does not seem right for building one's own software.
At my company, most of our own code can be built by getting the code from our source control and building it. Dependent headers, libs, etc. are included in the sync. However, whenever someone gets "uuid.h not found", someone always says "do apt-get install uuid-dev" or something like that.
My question: what is so different about uuid.h that it must be installed like this? Our code uses ODBC too, but we don't need to "install" ODBC headers. Ditto XML parsers and much other third-party code.
I don't think there's anything magical about uuid.h that requires a package installation; it's just that installing the package is a simpler step than adding the required files one by one, and it will be easier for you to keep them up to date through your Linux distro's package update utilities.
So installing the package is the simplest way to get a user going, and will keep them up to date without manual intervention. I suspect there is a way to build from source and add the files one-by-one, but that is not as simple.
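For what it's worth, once the dev package is installed, the files land in the standard system locations, so nothing project-specific needs to be synced (package names vary by distro; the paths below are the usual Debian/Ubuntu ones):

    sudo apt-get install uuid-dev        # or: yum install libuuid-devel
    # the header ends up at /usr/include/uuid/uuid.h and the library in the
    # standard lib path, so "#include <uuid/uuid.h>" plus "-luuid" just work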
If your project requirements for a large application with many 3rd-party dependencies include:
1) Maintain a configuration management system capable of reproducing, from source, bit-for-bit identical copies of any build for 25 years after the original build was run, and
2) Use Maven2 as a build tool to compile the build and to resolve dependencies
What process would need to be followed to meet those requirements?
25 years? Let's see, I think I have my old Commodore 64 sitting around here somewhere...
Seriously though: if this is a real question, then you have to consider the possibility that the Maven Central repository will at some point in the future go away. Maven is heavily reliant on Maven Central.
You will also need to archive any tools (besides maven) used to create the build. An ideal build process will create an identical binary file at any time, whether it is next week or in 25 years. In practice, there are a lot of things that can prevent you from being able to reliably reproduce your builds.
1) Use a Maven repository manager to host all dependencies, and back up the contents of that repository (see the settings.xml sketch after this list).
2) Archive any tools used to create the build. Basically Maven and the JDK, but if you are using any other Maven plugins, such as NSIS or Ant integrations, then you need to archive those as well. If you are creating any platform-specific binaries (like with NSIS), then you need to archive those tools, and probably the OS used to run them.
3) Archive your source code repository and make sure the software needed to run it is archived as well.
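For point 1, a sketch of the Maven side (the repository manager URL is a placeholder) is a settings.xml mirror entry, so every lookup goes through, and is cached by, your own archived repository manager rather than Maven Central:

    <settings>
        <mirrors>
            <mirror>
                <id>internal-repo</id>
                <mirrorOf>*</mirrorOf>
                <url>https://repo.example.com/maven-all/</url>
            </mirror>
        </mirrors>
    </settings>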
What we need in our firm is a sort of release management tool for Linux/C++. Our products consist of multiple libraries and config files. Here I will list the basic features we want such a system to have:
Ability to track dependencies and easily bump the major version of libraries whose dependencies had their major version increased. It should build some sort of dependency graph internally so it knows what is affected by an update.
Know how to build the products it handles, either via a specific build file or, even better, by being able to read and understand makefiles.
Work with SVN, so it can check for new releases there and do the build.
Generate installers in rpm or tar.gz format. For that purpose, it should be able to understand the RPM spec file format.
Currently we are working on such a tool, which is already pretty usable. However, I believe our task is not unique and there should be some tool out there that does the job.
You should look into using a mix of Hudson, Maven (for build management), Ivy (for dependency management) and Archiva (for artifact archival).
Also, if you are looking into cross-compilation, take a look at Make Project Creator (MPC) and Bakefile.
Have fun!!
In the project I'm currently working on, we use cmake and other Kitware tools to handle most of these issues for native code (C++). Answering point by point:
The cmake scripts handle the dependencies for our different projects. We have a dependency graph, but I don't know if it is a home-made script or functionality that cmake provides.
cmake generates the makefiles appropriate to the platform. It can also generate projects for Eclipse CDT and Visual Studio if asked to, for development.
cmake comes with a couple of tools, ctest and cdash, that we use to do the daily build and see how the tests are doing.
To create the installer, cmake has cpack. From just one script it can generate tar.gz, deb or rpm files on Linux, or an automatically generated NSIS script to build installers on Windows (see the sketch below).
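A sketch of the cpack part of such a script (the package name and version are made up) is just a handful of variables plus include(CPack):

    # Packaging: "make package" / cpack then produces .tar.gz, .deb and .rpm
    set(CPACK_PACKAGE_NAME "myproduct")
    set(CPACK_PACKAGE_VERSION "1.0.0")
    set(CPACK_GENERATOR "TGZ;DEB;RPM")
    set(CPACK_DEBIAN_PACKAGE_MAINTAINER "build@example.com")  # the DEB generator needs a maintainer
    include(CPack)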
For Java code we use maven and hudson that have been already mentioned here.
Take a look at this article from DDJ, in which a more robust build-system concept (than make) is presented and implemented. I'm not sure it will fit your requirements well, but it's the closest thing I've ever seen. I was looking for the same thing months ago, and then I discovered the article.
http://www.drdobbs.com/architect/218400678
Maven has a native code plugin. I don't think it'll do everything you want, but it's good at tracking version numbers of dependencies, will build artefacts and it'll work with your VCS.
No idea
cmake/scons: I have used cmake and don't exactly love it, but I have heard really good things about scons. However, scons is Python-based, so you need to have Python installed on the build/dev machines.
I use Hudson, which has a plugin to fetch from svn. It performs intelligently in general, and in particular builds only if some file has changed in an svn update. Hudson is easy to get started with. Hudson is java-based and is pretty popular with the Java community. This means it is quite cross-platform, but you need to have JRE installed on the build machine.
You can probably call some rpm tool within Hudson.