GitHub Continuous Integration CMake/C++

I am trying to set up a CI for my cmake/c++ project hosted on a private repository on GitHub.
The project depends on lots of third-party libraries that must be git-cloned and built. The latter takes a while, hence, I created a Docker image with all dependencies installed and hosted it on Docker Hub. (Ideally, I would like the Docker image to be private as well, but if that is not possible, I can make it public.)
I want to achieve the following:
On pull requests to the master branch, the application is automatically built in the Docker container (because all dependencies are there), all unit tests (gtest) are run, and, if everything is alright, the branch is merged into master.
Ideally, I would like to see the logs and statistics generated by gcovr/lcov.
OS: Ubuntu 18.04
I wonder if this is even achievable, as I have been searching for two days with no luck and a billion possible readings.

My 2 cents (more of a comment) on a controlled build using Docker.
As for the automatic merge, I can't help there; I'd be against it anyway, since code review can't be replaced by CI alone, IMHO...
Take a look at https://github.com/Mizux/cmake-cpp
Introduction
I use a Makefile for orchestration (docker commands can be way too long ;)) and Docker for isolated builds on various distros.
pro:
Be able to test locally (just need a GNU/Linux distro with Docker & Make)
Can migrate easily to various CI runner providers (Travis-CI, GitHub Workflow, gitlab-runner, bitbucket?)
Contributors can test locally before sending a PR
cons:
Less coupled to GitHub -> more complex to maintain.
More difficult to share a cache between workflow runs.
note: the Dockerfiles are stored in the repository in ci/docker, i.e. I rebuild the images in the first steps, but you should be able to replace this step with a simple docker load if your image is located on Docker Hub (not tested).
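For an image hosted on Docker Hub, docker pull would be the usual way to fetch it; a minimal sketch of such a replacement step (the image name is a placeholder):
docker pull yourname/project:env
docker tag yourname/project:env project:env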
Setup
Dockerfile
I split my Dockerfile into several stages (mostly for debugging).
note: you can replace ubuntu:rolling with your own image...
ci/docker/ubuntu/Dockerfile:
# Create a virtual environment with all tools installed
# ref: https://hub.docker.com/_/ubuntu
FROM ubuntu:rolling AS env
# Install system build dependencies
# note: here we use the CMake package provided by Ubuntu
# see: https://repology.org/project/cmake/versions
ENV PATH=/usr/local/bin:$PATH
RUN apt-get update -q && \
    apt-get install -yq git build-essential cmake && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD [ "/bin/sh" ]
# Add the library src to our build env
FROM env AS devel
# Create lib directory
WORKDIR /home/lib
# Bundle lib source
COPY . .
# Build in another stage
FROM devel AS build
# CMake configure
RUN cmake -H. -Bbuild
# CMake build
RUN cmake --build build --target all
# CMake install
RUN cmake --build build --target install
# Create an install image to check cmake install config
FROM env AS install
# Copy lib from build to install
COPY --from=build /usr/local /usr/local/
# Copy sample
WORKDIR /home/sample
COPY ci/sample .
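In the full repository, the sample is then configured and built against the installed library; the final steps presumably look something like this (a sketch, not part of the excerpt above):
# Configure and build the sample against the installed library
RUN cmake -H. -Bbuild
RUN cmake --build build --target all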
Runner jobs
GitHub Actions runners have Docker installed.
note: you can have one badge per yml file, e.g. you could use one file per distro (to get one badge per distro), or one file for Release and one for Debug...
.github/workflows/docker.yml:
name: C++ CI
on: [push, pull_request]
jobs:
  build-docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build env image
        run: docker build --target=env --tag project:env -f ci/docker/ubuntu/Dockerfile .
      - name: Build devel image
        run: docker build --target=devel --tag project:devel -f ci/docker/ubuntu/Dockerfile .
      - name: Build build image
        run: docker build --target=build --tag project:build -f ci/docker/ubuntu/Dockerfile .
For testing, you can add another stage (see the sketch below) or run the tests using the project:build image:
docker run --rm --init -t --name test project:build cmake --build build --target test
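A minimal sketch of such an extra test stage appended to the same Dockerfile (assuming your CMakeLists.txt calls enable_testing() so the test target exists):
# Run the unit tests in a dedicated stage
FROM build AS test
RUN cmake --build build --target test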
Annexes
Faster build
You can add a .dockerignore file to exclude unneeded files (e.g. licences, doc, the local build dir if testing locally...) to reduce the Docker context and the cost of the COPY . . step.
.dockerignore:
# Project Files unneeded by docker
ci/cache
ci/docker
ci/Makefile
.git
.gitignore
.github
.dockerignore
.travis.yml
.appveyor.yml
.clang-format
AUTHORS
CONTRIBUTING.md
CONTRIBUTORS
INSTALL
LICENSE
README.md
doc
# Native CMake build
build/
# Editor directories and files
*.user
*.swp
Custom CMake version install
You can use the following instead of apt-get install -y cmake.
It can take time, since you rebuild CMake from source...
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4.tar.gz" \
    && tar xzf cmake-3.16.4.tar.gz \
    && rm cmake-3.16.4.tar.gz \
    && cd cmake-3.16.4 \
    && ./bootstrap --prefix=/usr/local/ \
    && make \
    && make install \
    && cd .. \
    && rm -rf cmake-3.16.4
so you may prefer to use the prebuilt version instead:
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4-Linux-x86_64.sh" \
    && chmod a+x cmake-3.16.4-Linux-x86_64.sh \
    && ./cmake-3.16.4-Linux-x86_64.sh --prefix=/usr/local/ --skip-license \
    && rm cmake-3.16.4-Linux-x86_64.sh

Related

C++ with Crow, CMake, and Docker

Goal
I would like to compile a Crow project with CMake and deploy it in a Docker container.
Code
So far, I have compiled it in Visual Studio and installed Crow via vcpkg, similar to this tutorial.
example main.cpp from Crow website:
#include "crow.h"
//#include "crow_all.h"

int main()
{
    crow::SimpleApp app; // define your crow application

    // define your endpoint at the root directory
    CROW_ROUTE(app, "/")([](){
        return "Hello world";
    });

    // set the port, set the app to run on multiple threads, and run the app
    app.port(18080).multithreaded().run();
}
I want to build my docker image with docker build -t main_app:1 . and then run a container with docker run -d -it -p 443:18080 --name app main_app:1.
Therefore, I considered something like this:
Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get upgrade -y
# is it necessary to install all of them?
RUN apt-get install -y g++ gcc cmake make git gdb pkg-config
RUN git clone --depth 1 https://github.com/microsoft/vcpkg
RUN ./vcpkg/bootstrap-vcpkg.sh
RUN /vcpkg/vcpkg install crow
CMakeLists.txt:
cmake_minimum_required(VERSION 3.8)
project(project_name)
include(/vcpkg/scripts/buildsystems/vcpkg.cmake)
find_package(Crow CONFIG REQUIRED)
add_executable(exe_name "main.cpp")
target_link_libraries(exe_name PUBLIC Crow::Crow)
Questions
However, this is obviously not complete and thus will not work. Hence, I would like to know what a proper (and simple) Dockerfile and CMakeLists.txt would look like for this main.cpp.
Is it possible to create my image without vcpkg? I am a little bit concerned about my image and container size here.
How would it work with the crow_all.h header-only file?
Is it possible to build an image from an already compiled name.exe as well, so I won't have to compile anything while building the image?
Since this ought to be a minimal example, would there be any conflicts with a file structure like this:
docker_project
|__Dockerfile
|__CMakeLists.txt
|__header.hpp
|__class.cpp
|__main.cpp
Thanks for your help :)
After further research and testing, I was able to solve this issue in two ways:
Crow.h Project compiled with CMake in Docker container
Dockerfile
# get baseimage
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get upgrade -y
# reinstall certificates, otherwise git clone command might result in an error
RUN apt-get install --reinstall ca-certificates -y
# install developer dependencies
RUN apt-get install -y git build-essential cmake --no-install-recommends
# install vcpkg package manager
RUN git clone https://github.com/microsoft/vcpkg
RUN apt-get install -y curl zip
RUN vcpkg/bootstrap-vcpkg.sh
# install crow package
RUN /vcpkg/vcpkg install crow
# copy files from local directory to container
COPY . /project
# define the working directory in the container
WORKDIR /build
# compile with CMake
RUN bash -c "cmake ../project && cmake --build ."
# run executable (name has to match with CMakeLists.txt file)
CMD [ "./app" ]
The directory structure inside the Docker container would look like this:
Docker
|__vcpkg
   |__ ...
|__project
   |__CMakeLists.txt
   |__main.cpp
|__build
   |__app
   |__ ...
CMakeLists.txt
cmake_minimum_required(VERSION 3.8)
project(project)
# full path from root directory
include(/vcpkg/scripts/buildsystems/vcpkg.cmake)
find_package(Crow CONFIG REQUIRED)
add_executable(
    app
    main.cpp
)
target_link_libraries(app PUBLIC Crow::Crow)
Build the Docker image in the local directory
project
|__Dockerfile
|__CMakeLists.txt
|__main.cpp
Navigate to the project folder in a shell and run docker build -t image_name:1 . to build the Docker image, then run the Docker container with docker run -d -it --rm --name container_name -p 443:18080 image_name:1.
Crow project compiled with a g++ command and the header-only library in a Docker container
I created the crow_all.h header-only file from the Crow GitHub repository and downloaded the asio package via vcpkg on my PC, then copied the header files (C:\vcpkg\packages\asio_x64-windows\include) into a subdirectory of my project folder called asio. Hence, my project directory looks like this:
project
|__asio
   |__asio.hpp
   |__asio
      |__ ...
|__crow_all.h
|__Dockerfile
|__main.cpp
I build and run the Docker image/container with the same commands as above.
Dockerfile
(the entire content of the project directory gets copied into the /usr/src/ directory in the Docker container)
# get baseimage
FROM gcc:12
# copy files from local folder to destination
COPY . /usr/src
# define working directory in container
WORKDIR /usr/src
# compile main.cpp (-I/usr/src/asio/ is the include path, given from the root directory)
RUN g++ -I/usr/src/asio/ main.cpp -lpthread -o app
# run executable
CMD [ "./app" ]
To the other questions:
I still do not know whether building an image from an already compiled executable is possible.
With such a (still) simple file structure, no serious conflicts appeared.
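Regarding the image-size concern from the question: a multi-stage build is one common option. A hedged sketch (untested; it assumes the slim runtime base ships a glibc/libstdc++ compatible with the gcc:12 build image):
# build stage: compile with the full toolchain
FROM gcc:12 AS build
COPY . /usr/src
WORKDIR /usr/src
RUN g++ -I/usr/src/asio/ main.cpp -lpthread -o app
# runtime stage: ship only the binary on a small base
FROM debian:bookworm-slim
COPY --from=build /usr/src/app /usr/local/bin/app
CMD [ "/usr/local/bin/app" ]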

Docker throws error while running npm install for node application

FROM node:12-alpine
RUN mkdir /project-api
WORKDIR /project-api
RUN apk add --update-cache python
ENV PYTHON=/usr/local/bin/
COPY ./package.json .
RUN npm cache clean --force
RUN rm -rf ~/.npm
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm install
EXPOSE 3000
I was trying to create a Node container for my project, but it throws an error during npm install (the bcrypt package). I tried installing Python in the image file, but it still shows the error. I'm attaching the error screen.
The bcrypt npm package depends on non-javascript code. This means it needs to be built for the specific architecture it's being run on. The initial "WARNING: Tried to download" indicates a pre-built artifact wasn't available, so it's falling back to building from source.
The specific error I see is Error: not found: make, which indicates make isn't installed on the image you're building on (node:12-alpine). Either install it in a prior step in your dockerfile, or switch to a base image that has it pre-installed (node:12 might).
The bcrypt package has more specific instructions at https://github.com/kelektiv/node.bcrypt.js/wiki/Installation-Instructions#alpine-linux-based-images.
You need the following packages:
build-base
python
apk --no-cache add --virtual builds-deps build-base python
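A hedged sketch of how that fix could look in the Dockerfile (untested; the trailing apk del removes the build toolchain again to keep the image small):
FROM node:12-alpine
WORKDIR /project-api
COPY ./package.json .
# install the toolchain bcrypt needs to build from source, install, then drop it
RUN apk --no-cache add --virtual builds-deps build-base python && \
    npm install && \
    apk del builds-deps
EXPOSE 3000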

Installing c++ packages during Docker build

I have a Dockerfile for running a C++ application. Part of the Dockerfile has the following command:
RUN tar -xvf boost_1_56_0.tar.bz2 && \
    cd boost_1_56_0 && \
    ./bootstrap.sh && \
    ./b2 install
The tar file is part of the Docker image.
The problem is that each time I build the Dockerfile, the entire package gets built and installed again, which takes an awful amount of time. How can I prevent this?
If nothing has changed up to and including a command in a Dockerfile, then Docker will use the cached data from a previous build. So if you have something like this:
ADD ./myfiles /path/in/container # changes each time
RUN tar -xvf boost # etc
Then boost will be rebuilt every time. But if you reorganise your Dockerfile like this:
RUN tar -xvf boost # etc
ADD ./myfiles /path/in/container # changes each time
Then the binary build of boost from your last docker build will be reused from the cache. More generally, the less likely things are to change, the earlier they should go in your Dockerfile.
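Applied to the boost example, a sketch of the reordered Dockerfile (base image and paths are placeholders):
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y build-essential
# rarely changes: these layers are cached after the first build
COPY boost_1_56_0.tar.bz2 /tmp/
RUN cd /tmp && tar -xvf boost_1_56_0.tar.bz2 && \
    cd boost_1_56_0 && \
    ./bootstrap.sh && \
    ./b2 install
# changes often: only the layers from here on are rebuilt
COPY ./myfiles /path/in/container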

Compiling libsass in Docker container

So I have the following Dockerfile:
############################################################
# Based on Ubuntu
############################################################
FROM ubuntu:trusty
MAINTAINER OTIS WRIGHT
# Add extra software sources
RUN apt-get update -qq
# Libsass requirements.
RUN apt-get install -y curl git build-essential automake libtool
# Fetch sources
RUN git clone https://github.com/sass/libsass.git
RUN git clone https://github.com/sass/sassc.git libsass/sassc
# Create configure script
RUN cd libsass
RUN autoreconf --force --install
RUN cd ..
# Create custom makefiles for **shared library**, for more info read:
# 'Difference between static and shared libraries?' before installing libsass http://stackoverflow.com/q/2649334/802365
RUN cd libsass
RUN autoreconf --force --install
RUN ./configure --disable-tests --enable-shared --prefix=/usr
RUN cd ..
# Build the library
RUN make -C libsass -j5
# Install the library
RUN make -C libsass -j5 install
I am trying to build libsass based on this gist:
https://gist.github.com/edouard-lopez/503d40a5c1a49cf8ae87
However when I try to build with docker I get the following error:
Step 11 : RUN git clone https://github.com/sass/libsass.git
---> Using cache
---> d1a4eef78fa5
Step 12 : RUN git clone https://github.com/sass/sassc.git libsass/sassc
---> Using cache
---> 435410579641
Step 13 : RUN cd libsass
---> Using cache
---> f0a4df503d85
Step 14 : RUN autoreconf --force --install
---> Running in a9b0d51d6ee3
autoreconf: 'configure.ac' or 'configure.in' is required
The command '/bin/sh -c autoreconf --force --install' returned a non-zero code: 1
I don't know a lot about compiling from source, so for any solutions please walk me through them.
Docker won't change directories for you: each RUN command starts a fresh shell, so RUN cd libsass has no effect on the commands that follow it. You need to put the full path in your commands or use the WORKDIR directive.
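A minimal sketch of the fix using WORKDIR (same steps as the original Dockerfile, untested):
# WORKDIR persists across RUN commands, unlike a `cd` inside RUN
WORKDIR /libsass
RUN autoreconf --force --install
RUN ./configure --disable-tests --enable-shared --prefix=/usr
WORKDIR /
RUN make -C libsass -j5
RUN make -C libsass -j5 install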

Run Keter without GHC and cabal

I have a server and want to deploy my Yesod applications without installing GHC and Cabal. I am not sure if this is possible: a teacher told me that I must first compile Keter on my machine and, after that, put the keter executable on the server, though I am not sure how to do that.
To build Keter, first you'll need to clone the sources from its GitHub repository. Then you'll need to set up a Haskell build environment and use cabal build or cabal install to build the sources. Personally, I use a Docker container derived from an image based on the following Dockerfile:
FROM haskell:7.10.2
RUN apt-get update && apt-get install -y \
    git
RUN mkdir /src
RUN cd /src && \
    git clone https://github.com/snoyberg/keter && \
    cd keter && \
    git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
RUN cabal install keter
ENTRYPOINT /bin/bash
This is an image containing the Keter sources checked out at a specific revision with the minimum GHC toolchain required to build it all. The cabal command lines pull down all the project's dependencies and compiles the whole thing. Once this has completed, you can grab the keter executable from ~/.cabal/bin/keter.
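If you build inside a container as above, one hedged way to extract the binary (names are placeholders; ~/.cabal resolves to /root for the container's root user):
docker build -t keter-build .
docker create --name keter-tmp keter-build
docker cp keter-tmp:/root/.cabal/bin/keter ./keter
docker rm keter-tmp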
Even if you choose not to use Docker, this file should give you a rough idea how to set up your environment.
Now you have Keter compiled, you can run it inside another Docker container. Here's a rough idea what the Dockerfile for the corresponding image might look like:
FROM debian
RUN apt-get update && apt-get install -y \
    libgmp-dev \
    nano \
    postgresql
COPY keter /opt/keter/bin/
COPY keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This will take a base Debian image and install a minimal set of packages on top of it. It then copies the keter executable and configuration file into the image. If you then run a container from the resulting image, it will start the keter executable.
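Building and running it could then look like this (image name is a placeholder):
docker build -t keter-runtime .
docker run -d -p 80:80 keter-runtime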
Fair warning: This whole process is fairly involved. I'm still working on tweaking the exact details myself. Good luck!