Differences between sdist and Hackage build test

I am having trouble getting packages uploaded to Hackage to pass the build test. The packages compile fine with cabal (and with stack), and I can upload the sdist files to Hackage.
I first ran into some confusing error messages, likely related to "single character directories".
Once that was cleaned up, the upload completes, but the packages still fail the check during the build. Most of the issues seem related to the inclusion of C sources and C headers (see the cabal build error: `gcc` failed: cannot find `#include`).
What are the differences in the requirements for a package to build on Hackage? Could sdist check these requirements, to avoid a plethora of futile package versions on Hackage, which can never disappear? Or is there a way to check whether a package builds on Hackage without publishing it (i.e. making it permanent)?
Can somebody point me to a comprehensive explanation of the requirements for Hackage uploads?

Why do some Conan packages delete CMake Package information

I'm relatively new to Conan. I'm trying to use packages provided by Conan in a very natural CMake way... i.e. I don't want anything Conan-specific in the consuming library's CMakeLists.txt. I just want to find_package my dependency, target_link_libraries to it, and move on, just like I could pre-Conan. If the dependency did their CMake correctly, all transitive dependencies are handled automagically. Per this blog article: https://blog.conan.io/2018/06/11/Transparent-CMake-Integration.html it seems the way to do this is using the cmake_paths generator. I can make and consume packages with that generator, no problem.
I'm now trying to consume a number of third-party libraries, namely grpc, yaml-cpp, and Catch2; however, none of those packages work with the cmake_paths generator, because as part of their package recipes they explicitly delete the CMake package configuration files.
See
https://github.com/conan-io/conan-center-index/blob/ce2f6b89606cc582ccabbb5420f18a29e705dae3/recipes/grpc/all/conanfile.py#L171
https://github.com/conan-io/conan-center-index/blob/ce2f6b89606cc582ccabbb5420f18a29e705dae3/recipes/catch2/2.x.x/conanfile.py#L97
https://github.com/conan-io/conan-center-index/blob/ce2f6b89606cc582ccabbb5420f18a29e705dae3/recipes/yaml-cpp/all/conanfile.py#L95-L96
I obviously haven't done an exhaustive search to see how many packages do this; I just find it hard to believe it's a coincidence that the three libraries I want to pull in first all do it.
Is there a reason this is done, or is this a hold-over from before the cmake_paths generator existed, which should now be considered a bug?
The blog article about transparent CMake integration states that one of the downsides of the cmake_paths generator is that transitive dependency information is not propagated, but the only reason I can see for that is that the CMake config modules are deleted as shown above. A major feature of CMake (especially modern CMake) is managing those transitive dependencies. Why does Conan seem to want to throw that information away?
The reasons why ConanCenter (not Conan; this is only a requirement for public packages in ConanCenter) removes the packaged Findxxx.cmake or xxxx-config.cmake files from packages are:
1. Packages from ConanCenter should work with any build system, not only CMake. Even if CMake is now used by a majority of devs (some surveys show around 50-55%), there are still a lot of people using other build systems: MSBuild, Meson, Autotools, etc. Conan defines the information for consumers in its package_info() method, which is an abstraction that works for any consumer build system. When this was not mandatory in ConanCenter, the result was many packages that only worked with CMake.
2. Some of the packaged CMake files can be problematic: depending on how they are generated, they will not handle transitive dependencies as expected, and they can find transitive dependencies in the system instead of in other Conan packages. This happens when an open source library doesn't have a modern and correct CMakeLists.txt and locates some dependencies directly in the system. Unfortunately, the quality of CMakeLists.txt files out there varies, and they do not always follow best practices.
3. Conan handles binary configurations separately, to scale to many different binary configurations (for example, ConanCenter builds around 130 different binaries for each package version), so the Debug and Release packages, for instance, are in separate locations. The native find_package() CMake files cannot handle this situation, so users expecting a multi-config setup (like Visual Studio and Xcode) will not manage to achieve it without the Conan-generated .cmake files.
Again, this only applies to ConanCenter packages, because ConanCenter is trying to be as universal as possible (to support virtually all build systems) and to allow multi-configuration setups, while being as robust as possible given the complexities of the diverse ecosystem.
In any case, the modern experimental CMakeDeps and CMakeToolchain generators do achieve a transparent integration. They are already released in the latest Conan 1.X, and since they are the ones that will survive in the upcoming 2.0, it is recommended to start trying them soon.
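For illustration, here is a minimal sketch of what that transparent consumption can look like on the consumer side. It assumes `conan install . -g CMakeDeps -g CMakeToolchain` has been run and the generated `conan_toolchain.cmake` is passed to CMake; the exact imported target name (`yaml-cpp` here) comes from the generated config files and may differ per package and version:

```cmake
# Hypothetical consumer CMakeLists.txt; note nothing Conan-specific appears here.
# Assumes: conan install . -g CMakeDeps -g CMakeToolchain
# and:     cmake -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake ..
cmake_minimum_required(VERSION 3.15)
project(consumer CXX)

# Resolved by the Conan-generated yaml-cpp config file, not by a system package.
find_package(yaml-cpp REQUIRED)

add_executable(app main.cpp)
# Transitive dependencies are propagated through the generated imported targets.
target_link_libraries(app PRIVATE yaml-cpp)
```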

How to include all 3rdparty runtime dependencies into cmake/cpack-generated package on linux?

I have a C++ project with a couple of executables set up with CMake. The usual workflow is to install all 3rd-party dependencies via the package manager, then build the project and install a package generated via CPack on that same machine. Now I would like to include all runtime dependencies in that package, so I can install it on another machine without first installing the 3rd-party dependencies there like on the build machine.
I did lots of research on the web, without much success. I found something called BundleUtilities for CMake, but couldn't find any beginner-friendly documentation about it. I don't even know whether it does what I need.
I would like to use CMake's benefits and generate such a "bundled" package without any manual intervention. I do not want to assemble and copy 3rd-party dependencies manually. Ideal would be a clean CMake/CPack solution to the problem.
Edit:
To clarify: The target machine in question has no internet connection.
Are you really sure you want to do this? It probably won't turn out to be a great idea: packaging third-party tools effectively means assuming responsibility for third-party software, and as the upstream versions inevitably get ahead of what people find in your tarballs, that can become a real headache. Consider whether you're really OK with the version conflicts that arise when your bundled dependencies meet copies already installed on the target system.
Why not just have CMake call out to the system's package manager at configure time? The execute_process() command will run console commands for you.
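As a minimal sketch of that suggestion (the dependency name `libfoo-dev` is hypothetical, and this assumes an apt-based build host with sufficient privileges; note it only provisions the build machine, not the offline target):

```cmake
# Sketch: install a (hypothetical) third-party dependency via the system
# package manager at configure time.
execute_process(
  COMMAND apt-get install -y libfoo-dev
  RESULT_VARIABLE APT_RESULT
)
if(NOT APT_RESULT EQUAL 0)
  message(FATAL_ERROR "apt-get install libfoo-dev failed with code ${APT_RESULT}")
endif()
```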

What does /usr/lib/rpm/check-buildroot do?

I am building an RPM package for a C++ application. The compilation and installation succeed. Then the following command, /usr/lib/rpm/check-buildroot, fails with this error:
Found '/user/dfsdf/rpmbuild/BUILDROOT/vendor-xerces-c-3.1.3-3.1.3-1.x86_64' in installed files; aborting
I haven't found any documentation about this command. What does check-buildroot do?
Here is a pointer to a copy of the script. Because it is considered an "internal" part of rpmbuild (it lives in /usr/lib/rpm rather than /usr/bin), there is no manual page for it.
However, it is known to people who troubleshoot problems when building rpms.
The script checks for a common problem: when building an rpm, your package compiles and installs into a BUILDROOT staging directory. If the packaging is done properly, no trace of that directory name remains in the final package. Occurrences of the actual installation directories, e.g., /usr/bin, /usr/lib, etc., are okay.
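The question does not say which build system is used, but if it happens to be CMake, one common way the staging path sneaks into installed files is through the RPATH. A sketch of the usual guard, under that assumption:

```cmake
# Sketch (assuming a CMake-based build): keep staging-area paths out of the
# installed binaries. CMake rewrites the build-tree RPATH at install time
# unless told otherwise, so don't force the build RPATH into installed files:
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
# If a runtime search path is needed, point it at the final prefix,
# never at the BUILDROOT staging directory:
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
```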
Further reading:
rpmdevtools-5.3-1.el4 RPM for noarch describes the rpmdevtools and gives its changelog.
pk's Tech Page discusses a change that a developer made based on the check-buildroot message.
check-buildroot failure is another example where it was used
How do I safely remove a path string from a compiled library without corrupting the library? illustrates the real problem: getting good advice.

How to prevent accidentally including old headers?

Build systems frequently have separate build and install steps. Sometimes older versions of headers are already installed on the operating system, and those headers may be picked up instead of the headers in the source tree. This can lead to very subtle and strange behavior that is difficult to diagnose, because the code looks like it does one thing and the binary does another.
In particular, my group uses CMake and C++, but this question is also more broadly relevant.
Are there good techniques to prevent old headers from being picked up in a build?
1. Uninstall
Uninstall the package from CMAKE_INSTALL_PREFIX while hacking on the development version.
Pros: very effective
Cons: not flexible
2. Custom install location
Use a custom location for the installed target, and don't add the custom install prefix to the build.
Pros: very flexible
Cons: if every package uses this technique, tons of -I options get passed to the compiler and tons of <PACKAGE>_ROOT variables to the CMake configure step.
3. Include priority
Use header search priority. See the include_directories command and its AFTER/BEFORE suboptions, and the sketch below.
Pros: flexible enough
Cons: sometimes it's not a trivial task if you have a lot of find_package/add_subdirectory commands; it is error-prone, and errors are not detected by autotesting.
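A minimal sketch of option 3, assuming the in-tree headers live under ${PROJECT_SOURCE_DIR}/include (the target name mylib is hypothetical):

```cmake
# Option 3 sketch: make the in-tree headers win over any installed copy.
# BEFORE prepends to the include search path instead of appending to it.
include_directories(BEFORE ${PROJECT_SOURCE_DIR}/include)

# Or, scoped to a single (hypothetical) target in modern CMake:
target_include_directories(mylib BEFORE PRIVATE ${PROJECT_SOURCE_DIR}/include)
```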
BTW
Conflicts can occur not only between the build and install directories, but also within the install directory itself. For example, version 1.0 installs A.hpp and B.hpp, while version 2.0 installs only A.hpp. If you sequentially install the 1.0 and 2.0 targets, some #include <B.hpp> errors will not be detected locally. This kind of error can easily be detected by autotesting (the clean environment of a CI server doesn't have the old B.hpp file from version 1.0). An uninstall command can also be helpful.
The developers recently fixed the exact same problem in the shogun package. You basically need the source folders containing your header files to be passed with -I to gcc before the system folders. You don't have to pass the system folders as -I to gcc anyway.
Have a look at the search path documentation here. You might need a proper way of including your header files in your source code.
This is the pull request which fixed the problem, I guess.

Why must uuid.h be "installed" on linux systems to be able to build many C++ programs rather than just put in include or lib folders

All over the web, the answer to the question "uuid.h not found" is "install some kind of rpm or deb file". This may be appropriate when building some kind of open source project that has dependencies on other open source, but it does not seem correct for building one's own software.
At my company, most of our own code can be built by getting the code from our source control and building it. Dependent headers, libs, etc. are included in the sync. However, whenever someone gets a "uuid.h not found" error, someone always says "do apt-get install uuid-dev" or something like that.
My question: what is so different about uuid.h that it must be installed like this? Our code uses ODBC too, but we don't need to "install" ODBC headers. Ditto XML parsers and much other third-party code.
I don't think there's anything magical about uuid.h that requires a package installation; it's just that installing the package is a simpler step than adding the required files one by one, and it makes it easier to keep them up to date through your Linux distro's package update utilities.
So installing the package is the simplest way to get a user going, and it keeps them up to date without manual intervention. I suspect there is a way to build from source and add the files one by one, but that is not as simple.
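As a rough sketch of that manual route, assuming a CMake build and a hypothetical third_party/uuid directory checked into the source tree (the sync model the question describes):

```cmake
# Sketch: use a vendored (checked-in) copy of libuuid instead of the system
# package. third_party/uuid is a hypothetical in-tree location.
find_path(UUID_INCLUDE_DIR NAMES uuid/uuid.h
          PATHS ${PROJECT_SOURCE_DIR}/third_party/uuid/include
          NO_DEFAULT_PATH)
find_library(UUID_LIBRARY NAMES uuid
             PATHS ${PROJECT_SOURCE_DIR}/third_party/uuid/lib
             NO_DEFAULT_PATH)

add_executable(app main.cpp)
target_include_directories(app PRIVATE ${UUID_INCLUDE_DIR})
target_link_libraries(app PRIVATE ${UUID_LIBRARY})
```

The trade-off is the one noted above: you now own updating the vendored copy yourself instead of getting updates from the distro's package manager.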