My prefab contains no binaries or includes - c++

prefab runs successfully, but my resulting prefab directory only has CMake files in it, no other assets. My goal is to easily share binary assets with colleagues and include them in my Android builds. What am I doing wrong?
I'm attempting to create a prefab of boost (though google might make one if it gets enough votes). I used Boost-for-Android to build boost 1.73.0, copied the relevant includes and libraries out into an assembly directory, populated it with prefab.json, module.json, and abi.json files, and ran
prefab --output "/home/developer/workspace/boost-prefab/out/boost/x86_64" \
--build-system cmake \
--platform android \
--abi "x86_64" \
--os-version "21" \
--ndk-version "21" \
--stl "c++_static" \
"/home/developer/workspace/boost-prefab/assembly/boost"
But the resulting directory only contains two CMake files per abi.
$ find x86_64 -type f
x86_64/lib/x86_64-linux-android/cmake/boost/boostConfig.cmake
x86_64/lib/x86_64-linux-android/cmake/boost/boostConfigVersion.cmake
The assembly directory I created looks like (I followed the prefab structure prescribed in the docs):
boost/
    prefab.json
    modules/filesystem/
        module.json
        include/boost/...
        libs/android.x86_64/
            abi.json
            libboost_filesystem.a
    modules/system/
        module.json
        include/boost/...
        libs/android.x86_64/
            abi.json
            libboost_system.a
prefab.json:
{"schema_version": 1, "name": "boost", "version": "1.73.0", "dependencies": []}
modules/filesystem/module.json:
{"library_name": "libboost_filesystem"}
modules/filesystem/libs/android.x86_64/abi.json:
{"abi": "x86_64", "api": 21, "ndk": "21", "stl": "c++_static"}
I'll admit I'm confused about prefab's os-version, from what I can tell it's simply the NDK version (because it's not the API version or Android major version number), but I don't think that that's the problem.
I also tried to generate a boost prefab via vcpkg, but boost won't build in vcpkg right now (there are some PRs to address this, but I couldn't get it building.) In any case, vcpkg's example shows a slightly different layout as well where an AAR was inserted into the prefab.
My goal is to have a prebuilt archive of boost that's easy for my colleagues to import into their Android projects.
Am I simply misunderstanding what a prefab file is for?
The CMake files that do get generated seem primitive even compared to the default boost ones, i.e. I'd prefer the boost CMake files to the prefab ones being created here
Should I instead be trying to create an AAR for boost first?
At the end of the day, the only way I was able to use Boost in my Android project was to include its build in my CMAKE_FIND_ROOT_PATH, but if it comes down to that, what's the advantage of prefab? I feel I might as well just integrate conan into my gradle build.
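Roughly, that workaround looks like the following in my CMakeLists.txt (the path and the app target name are just placeholders for my local setup):
list(APPEND CMAKE_FIND_ROOT_PATH "/home/developer/workspace/boost-build/${ANDROID_ABI}")  # local Boost-for-Android output; path illustrative
find_package(Boost 1.73.0 REQUIRED COMPONENTS filesystem system)
target_link_libraries(app PRIVATE Boost::filesystem Boost::system)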

Am I simply misunderstanding what a prefab file is for?
Based on your next question, only slightly. It is a system for distributing prebuilt libraries in a build system agnostic manner. Aside from "build system agnostic", that lines up with what you're trying to do.
The CMake files that do get generated seem primitive even compared to the default boost ones, i.e. I'd prefer the boost CMake files to the prefab ones being created here
Prefab was designed to support arbitrary build systems without the need for that package author to care about supporting each individually. This is important for Android because while the plurality of people use CMake, ndk-build is also an officially supported option, and dozens of other build systems are also regularly used.
What features are you missing that would be provided by boost's own CMake files? You can file a feature request at https://github.com/google/prefab/issues if it's something that we can do in a build-system agnostic manner.
If it can't be described in a build system agnostic manner, Prefab isn't a good fit. vcpkg might be a better choice for that.
Should I instead be trying to create an AAR for boost first?
That's the easiest way to use prefab packages from AGP, yeah. vcpkg is generally the easiest path to that given the existing corpus, but as you noted you'll need to send them a patch (or a bug report) to get their build fixed first.
Alternatively, https://android.googlesource.com/platform/tools/ndkports/ is how we build the handful that we currently publish. Should be easy to check out, add your own port file, and build your AAR. If you send us the patch I'll likely merge it when I've gotten the test infrastructure up and running (currently we can't contend with supporting many packages because testing is manual).
End of the day, the only way I was able to use Boost in my Android project was to include its build in my CMAKE_FIND_ROOT_PATH, but if it comes down to this what's the advantage of prefab?
You don't need to do that if you use an AAR; AGP handles the details for you. The intended use case is for build systems to integrate prefab, not for users to have to do the work. AGP already does that, but only when consuming from an AAR.
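For example, once the AAR is a dependency of the module and prefab support is turned on in the Android Gradle plugin (android.buildFeatures.prefab true in the consumer's build.gradle), the consuming CMakeLists.txt only needs something like the following (the package and module names assume a hypothetical boost AAR exposing filesystem and system modules):
find_package(boost REQUIRED CONFIG)    # config files are generated by AGP from the AAR's prefab metadata
target_link_libraries(app boost::filesystem boost::system)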

Related

Why do some Conan packages delete CMake Package information

I'm relatively new to Conan. I'm trying to use packages provided by conan in a very natural cmake way...i.e. I don't want anything conan specific in the consuming library's CMakeLists.txt. I just want to find_package my dependency, target_link_libraries to it, and move on just like I could pre-conan. If the dependency did their cmake correctly, all transitive dependencies are handled automagically. Per this blog article: https://blog.conan.io/2018/06/11/Transparent-CMake-Integration.html it seems the way to do this is using the cmake_paths generator. I can make and consume packages with that generator no problem.
I'm now trying to consume a number of third party libraries, namely grpc, yaml-cpp, and Catch2, however none of those packages work with the cmake_paths generator because as part of their package recipe they explicitly delete the cmake package configuration files.
See
https://github.com/conan-io/conan-center-index/blob/ce2f6b89606cc582ccabbb5420f18a29e705dae3/recipes/grpc/all/conanfile.py#L171
https://github.com/conan-io/conan-center-index/blob/ce2f6b89606cc582ccabbb5420f18a29e705dae3/recipes/catch2/2.x.x/conanfile.py#L97
https://github.com/conan-io/conan-center-index/blob/ce2f6b89606cc582ccabbb5420f18a29e705dae3/recipes/yaml-cpp/all/conanfile.py#L95-L96
I obviously haven't done an exhaustive search to see how many packages do this, I just find it hard to believe it's just a coincidence that the three libraries I want to pull in first are the only three that do this.
Is there a reason this is done? or is this a hold-over from times before the cmake_paths generator was a thing and should now be considered a bug?
The blog article about transparent cmake integration states that one of the downsides of the cmake_paths generator is that transitive dependency information is not propagated, but the only reason I can see for that is that the CMake config modules are deleted as shown above--a major feature of what cmake does (especially modern CMake) is to manage those transitive dependencies. Why does conan seem to want to throw that information away?
The reasons why ConanCenter (not Conan, this is only a requirement of public packages in ConanCenter) is removing the packaged findxxx.cmake or xxxx-config.cmake files from packages are:
Packages from ConanCenter should work with any build system, not only CMake. Even if CMake is now used by a majority of devs (some surveys show around 50-55%), there are still a lot of people using other build systems: MSBuild, Meson, Autotools, etc. Conan defines the information for consumers in its package_info() method, which is an abstraction that will work for any consumer build system. When this was not mandatory in ConanCenter, it resulted in many packages that only worked for CMake.
Some of the packaged cmake files can be problematic: depending on how they are generated, they will not handle transitive dependencies as expected, and they can find transitive dependencies in the system instead of in other Conan packages. This happens when an open source library doesn't have a modern, correct CMakeLists.txt and locates some dependencies in the system directly. Unfortunately the quality of CMakeLists.txt files out there varies, and they do not always follow best practices.
Conan handles binary configurations separately in order to scale to many of them (for example ConanCenter builds around 130 different binaries for each package version), so for example the Debug and Release packages are in separate locations. The native find_package() CMake files cannot handle this situation, so users expecting a multi-config setup (like Visual Studio and Xcode) will not manage to achieve it without the Conan-generated .cmake files.
Again, this only applies to ConanCenter packages, because ConanCenter is trying to be as universal as possible (to support fairly all build systems) and to allow multi-configuration setups, while being as robust as possible given the complexities of the diverse ecosystem.
In any case, the modern CMakeDeps and CMakeToolchain experimental generators will achieve a transparent integration. They are already released in the latest Conan 1.X, and as they are the ones that will survive into 2.0, it is recommended to start trying them soon.
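With those generators the consuming CMakeLists.txt stays Conan-free. As a rough sketch (the Catch2 package and target names are just an example, and the project is assumed to be configured with the Conan-generated toolchain, e.g. cmake -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake after running conan install):
find_package(Catch2 REQUIRED)                        # resolved via the CMakeDeps-generated config files
target_link_libraries(tests PRIVATE Catch2::Catch2)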

How to build and install google-url as a shared library on Mac OS/Linux

I want to use google-url in my project as a shared library on Linux/Mac OS, but cannot figure out the right way to build it...
Question: what is the way you suggest to build it from scratch from the official sources?
Requirements - be able to stay in sync with official repo and use standard(make) tools.
As far as I can see, right now there are a few ways to build it:
the official repo itself includes only Visual Studio 2005 build files
it is in use in Chromium and so there is a .gyp file available for it, but it looks tightly integrated with the Chromium build structure, so there is no easy way to generate a Makefile for a standalone library build.
Although it has a comment inside: "TODO(mark): Upstream this file to googleurl."
So at least this is considered to be possible.
Googleurl is also integrated with the PageSpeed project in .gyp form (though not the same file as above) and so it is somehow built there
third-party bindings for python are available and also contain some build instructions, but with SCons this time, and AFAIK that is a somewhat obsolete system to rely on.
It looks like I'm not the only one with this problem; other people I found have just implemented their own build files using autotools:
https://github.com/artemg/Googleurl-separate-library
https://github.com/commoncrawl/commoncrawl-crawler/blob/master/src/native/src/libGoogleURL/googleurl/README.google
That could work, but the filesystem layout is not the same as in the official repo and they carry local modifications, so there is no easy way to downstream changes and stay in sync.
The most tempting way would be to use GYP to generate platform-specific build files for the official repo once (make/Xcode/Visual Studio), then just save and use them later as needed... but I have no idea how to approach this or where to start.

Is there a build system for C++ which can manage release dependencies?

A little background, we have a fairly large code base, which builds in to a set of libraries - which are then distributed for internal use in various binaries. At the moment, the build process for this is haphazard and everything is built off the trunk.
We would like to explore whether there is a build system which will allow us to manage releases and automatically pull in dependencies. Such a tool exists for Java: Maven. I like its package, repository and dependency mechanism, and I know that with either the maven-native or maven-nar plugin we could get this. However the problem is that we cannot fix the source trees to the "maven way" - and unfortunately the plugins (at least maven-nar) don't seem to like code that is not structured this way...
So my question is, is there a tool which satisfies the following for C++
build
package (for example libraries with all headers, something like the .nar)
upload package to a "repository"
automatically pull in the required dependencies from said repository, extract headers and include them in the build, extract libraries and link them. The dependencies would be described in the "release" for that binary - so if we were to use a CI server to build that "release", the build script has the necessary dependencies listed (like the pom.xml files).
I could roll my own by modifying either make+shell scripts or waf/scons with extra python modules for the packaging and dependency management - however I would have thought that this is a common problem and someone somewhere has a tool for this? Or does everyone roll their own? Or have I missed a significant feature of waf/scons or CMake?
EDIT: I should add, OS is preferred, and non-MS...
Most of the linux distributions, for example, contain dependency tracking for their packages. Of all the things that I've tried to cobble together myself to take on your problem, in the end they all are "not quite perfect". The best thing to do, IMHO, is to create a local yum/deb repository or something (continuing my linux example) and then pull stuff from there as needed.
Many of the source-packages also quickly tell you the minimum components that must be installed to do a self-build (as opposed to installing a binary pre-compiled package).
Unfortunately, none of these methods are that much easier, though it's still better than trying to do it yourself. In the end, to be cross-platform supporting, you need one of these systems per OS as well. Fun!
I am not sure if I understand correctly what you want to do, but I will tell you what we use and hope it helps.
We use cmake for our build. It has to be noted that cmake is quite powerful. Among other things, you can "make install" into custom directories to collect headers and binaries there to build your release. We combine this with some python scripting to build our releases. YMMV, but some things might just be too specific for a generic tool and a custom script may be the simpler solution.
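A minimal sketch of that install pattern (the target, directory, and version names are made up for illustration):
install(TARGETS mylib ARCHIVE DESTINATION lib LIBRARY DESTINATION lib)    # in the library's CMakeLists.txt
install(DIRECTORY include/ DESTINATION include)
# then stage a release into a custom directory:
#   cmake -DCMAKE_INSTALL_PREFIX=/path/to/release-1.2.3 .. && make install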
Our build tool builds releases directly from an svn repository (checkout, build, ...), which I can really recommend to avoid some local state polluting the release in some unforeseen way. It also enforces reproducibility.
It depends a lot on the platforms you're targeting. I can only really speak for Linux, but there it also depends on the distributions you're targeting, packages being a distribution-level concept. To make things a bit simpler, there are families of distributions using similar packaging mechanisms and package names, meaning that the same recipe for making a Debian package will probably make an Ubuntu package too.
I'd definitely say that if you're willing to target a subset of all known Linux distros using a manageable set of packaging mechanisms, you will benefit in the long run from not rolling your own and building packages the way the distribution creators intended. These systems allow you to specify run- and build-time dependencies, and automatic CI environments also exist (like OBS for rpm-based distros).

C++ Buildsystem with ability to compile dependencies beforehand

I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, Cegui, boost, etc.). Furthermore we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool. This is why we based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team members system as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share with autotools so far and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used on Windows natively.
Some of you suggested letting an IDE handle the dependency management. We consist of individuals using all possible technologies to code, from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake allows us some flexibility with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download/check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies.
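A rough sketch of what that looks like (the URL, arguments, and target names are placeholders, not a tested recipe):
include(ExternalProject)
ExternalProject_Add(ogre3d
    URL http://example.com/downloads/ogre3d-src.tar.gz                  # placeholder download location
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
)
add_dependencies(mygame ogre3d)    # make the (hypothetical) game target build after the dependency is installed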
At my work (medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third party libraries (like Boost, Qt, VTK, ITK etc..) are built once for each system we support (MSWin32, MSWin64, Linux32 etc..) and are committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in and I think it works beautifully. Truth be told it does take a little bit to get used to the syntax, but I have used it successfully on a project that requires the distribution of python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain on is any type of Windows wrapper for GNU Autotools. I know you are able to use it inside of Cygwin, but as for actually distributing files and dependencies on Windows platforms you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies you can set them up under a different subdirectory, for example, libzip, with a specific Makefile.am entry which will build that library. When you perform a make install the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans source files (or has C++ compiler do that) and finds #includes (conditional compilation makes this tricky) and generates part of makefile.
But you'd need to call this script whenever something might have changed.
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean set up?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) is all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
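For example (the URL and destination here are placeholders):
file(DOWNLOAD
    http://example.com/thirdparty/somelib-1.0.tar.gz
    ${CMAKE_BINARY_DIR}/downloads/somelib-1.0.tar.gz
    TIMEOUT 60
    STATUS download_status)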
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS for Windows and .deb or .tgz files for *nix.
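A minimal CPack setup might look like this (names and version are purely illustrative); added at the end of the top-level CMakeLists.txt it gives you a "package" target whose output format depends on the chosen generator:
set(CPACK_PACKAGE_NAME "mygame")
set(CPACK_PACKAGE_VERSION "0.1.0")
set(CPACK_GENERATOR "TGZ")    # or "NSIS" on Windows, "DEB" for Debian/Ubuntu
include(CPack)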
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows cmake to be run from various locations.
/
    CMakeLists.txt            "install precompiled dependencies and build project"
    project/
        CMakeLists.txt        "build the project, managing dependencies of subsystems"
        subsystem1/
            CMakeLists.txt    "build subsystem 1, assume dependencies are already met"
        subsystem2/
            CMakeLists.txt    "build subsystem 2, assume dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation but that the top level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
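As a sketch, each subsystem's CMakeLists.txt could look roughly like the following (names and the Boost dependency are invented for illustration), so that it works both via add_subdirectory() from the parent and on its own:
cmake_minimum_required(VERSION 3.10)
project(subsystem1)                                    # harmless when included from the parent build
add_library(subsystem1 STATIC src/subsystem1.cpp)
target_include_directories(subsystem1 PUBLIC include)
# when built in isolation the dependencies are assumed to be installed already,
# so a plain find_package() is enough
find_package(Boost REQUIRED COMPONENTS filesystem)
target_link_libraries(subsystem1 PUBLIC Boost::filesystem)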
I did not set up the system (I helped, but it is not my baby). The author said that the boost cmake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source stuff is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite and cmake, all in the correct order).
The tool, named «C++ Version Manager» (inspired by the excellent ruby version manager), is coded in bash and hosted on github : https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.

Linux programming environment configuration

The other day I set up an Ubuntu installation in a VM and went to gather the tools and libraries I figured I would need for programming mostly in C++.
I had a problem though: where to put things such as 3rd party source libraries, etc. From what I can gather, a lot of source distributions assume that a lot of their dependencies are already installed in a certain location, and assume that a lot of tools are also installed in particular locations.
To give an example of what I currently do on Windows, is I have a directory where I keep all source code. C:\code. In this directory, I have a directory for all 3rd party libraries, c:\code\thirdparty\libs. This way I can easily set up relative paths for all of the dependencies of any projects I write or come across and wish to compile. The reason I am interested in setting up a linux programming environment is that it seems that both the tool and library dependency problems have been solved efficiently making it easy for example to build OpenSSH from source.
So what I was looking for was a decent convention I can use when I am trying to organize my projects and libraries on linux that is easy to maintain and easy to use.
Short answer: don't do a "heaps of code in local dir" thing.
Long answer: don't do a "heaps of code in local dir" thing, because it will be nightmare to keep up-to-date, and if you decide to distribute your code, it will be nightmare to package it for any decent distribution.
Whenever possible, stick to the libraries shipped in the distribution (Ubuntu has 20000+ packages, it ought to have most of what you'll need prepackaged). When there is no package, you can install by hand to /usr/local (but see above about upgrades, and DON'T do that).
Better, use "stow" or "installwatch" (or both) to install to per-library dirs (/usr/local/stow/libA-ver123) and then symlink files from there to /usr/local or /usr/ (stow does the symlinking part). Or just package the lib for your distribution.
For libraries/includes...
/usr/local/lib
/usr/local/include
Where possible code against the system/distro provided libraries. This makes it easiest to ship a product on that distro.
However, if you are building a commercial application, the sheer number of Linux distro flavors can mean you have to maintain a plethora of different application builds, one for each distro. That isn't necessarily a bad thing, as it means you can more cleanly integrate with each distro's package management system.
But in the case where you can't do that, it should be fairly easy to download the source of each 3rd party dependency you have and integrate the building of that dependency into a static lib that is linked to your executable. That way you know exactly what you're linking against, but it has the downside of bloating your executable size. This can also be required if you need a specific library (or version) not provided by the distro.
If you want your code to build on as broad a variety of Unix systems as possible, then you'd be wise to look into GNU autoconf and automake. These help you construct a configure script and makefile for your project so that it will build on practically any Unix system.
Also look into pkg-config which is used quite a bit now on Linux distributions for helping you include and link to the right libraries (for libs that support pkg-config).
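If your own build is CMake-based, hooking into pkg-config looks roughly like this (the glib-2.0 module and the "app" target are just examples):
find_package(PkgConfig REQUIRED)
pkg_check_modules(GLIB REQUIRED IMPORTED_TARGET glib-2.0)    # queries the glib-2.0 .pc file
target_link_libraries(app PRIVATE PkgConfig::GLIB)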
If you're using subversion to manage your source there is a "convention" that most subversion repositories use to manage their own code and "vendor" code.
Most svn repositories have a "vendor" tree (that goes along with the trunk, branches & tags trees). That is the top for all 3rd party vendor code. In that directory you have directories for each library you use. Eg:
branches/
tags/
trunk/
vendor/somelib
vendor/anotherlib
Beneath each of these libs is a directory for each library version and a "current" directory for the most up-to-date version in your repository.
vendor/somelib/1.0
vendor/somelib/1.1
vendor/somelib/current
Then your project's tree should be laid out something like this:
trunk/source # all your code in here
trunk/libs # all vendor code in here
The libs directory should be empty but it will have svn:externals meta data associated with it, via:
svn propedit svn:externals trunk/libs
The contents of this property would be something along the lines of (assumes subversion 1.5):
^/vendor/somelib/current somelib
^/vendor/anotherlib/1.0 anotherlib
This means that when you checkout your code subversion also checks out your vendor libraries into your trunk/libs directory. So that when checked out it looks like this:
trunk/source
trunk/libs/somelib
trunk/libs/anotherlib
This is described (probably a whole lot better) in the Subversion Book. Particularly the section on handling vendor branches and externals.
Ubuntu = Debian = apt-get goodness
Start with Linux Utilities:
%> sudo apt-get install util-linux