Run Keter without GHC and cabal - yesod

I have a server and want to deploy my Yesod applications without installing GHC and Cabal. I am not sure if this is possible: a teacher told me that I must first compile Keter on my machine and, after that, put the keter executable on the server, but I am not sure how to do that.

To build Keter, first you'll need to clone the sources from its GitHub repository. Then you'll need to set up a Haskell build environment and use cabal build or cabal install to build the sources. Personally, I use a Docker container built from an image based on the following Dockerfile:
FROM haskell:7.10.2
RUN apt-get update && apt-get install -y \
git
RUN mkdir /src
RUN cd /src && \
git clone https://github.com/snoyberg/keter && \
cd keter && \
git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
# Build and install the sources checked out above (the binary ends up in ~/.cabal/bin)
RUN cabal install
ENTRYPOINT /bin/bash
This is an image containing the Keter sources checked out at a specific revision, with the minimum GHC toolchain required to build it all. The cabal command lines pull down all the project's dependencies and compile the whole thing. Once this has completed, you can grab the keter executable from ~/.cabal/bin/keter.
Even if you choose not to use Docker, this file should give you a rough idea of how to set up your environment.
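To get the compiled binary out onto your host, one option is to copy it out of a container created from that image. A rough sketch, assuming the image was built as root so the executable ends up in /root/.cabal/bin (the image and container names are just placeholders):
# Build the image, create a throwaway container, and copy the binary out
docker build -t keter-build .
docker create --name keter-extract keter-build
docker cp keter-extract:/root/.cabal/bin/keter ./keter
docker rm keter-extract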
Now that you have Keter compiled, you can run it inside another Docker container. Here's a rough idea of what the Dockerfile for the corresponding image might look like:
FROM debian
RUN apt-get update && apt-get install -y \
libgmp-dev \
nano \
postgresql
COPY keter /opt/keter/bin/
COPY keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This will take a base Debian image and install a minimal set of packages on top of it. It then copies the keter executable and configuration file into the image. If you then run a container from the resulting image, it will start the keter executable.
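Building and running that image would then look roughly like this (the tag, container name, and port mapping are just examples; the keter binary and keter-config.yaml must be in the build context):
docker build -t keter-server .
docker run -d --name keter -p 80:80 keter-server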
Fair warning: This whole process is fairly involved. I'm still working on tweaking the exact details myself. Good luck!

Related

GitHub Continuous Integration CMake/C++

I am trying to set up CI for my CMake/C++ project hosted in a private repository on GitHub.
The project depends on lots of third-party libraries that should be git-cloned and built. The latter takes a while, hence I created a Docker image with all dependencies installed and hosted it on Docker Hub. (Ideally, I would like the Docker image to be private too, but if that is not possible, I can make it public.)
I want to achieve the following:
On pull requests to the master branch, the application is automatically built in the Docker container (because all dependencies are there), all unit tests (gtest) are run and, if everything is alright, the branch is merged to master.
Ideally, I would like to see the logs and statistics generated by gcovr/lcov.
OS: Ubuntu 18.04
I wonder if this is even achievable, as I have been searching for 2 days with no luck and a billion possible readings.
My two cents (more of a comment) on controlled builds using Docker.
For automatic merging, I don't know; I would be against it anyway, since code review can't be replaced by CI alone, IMHO...
Take a look at https://github.com/Mizux/cmake-cpp
Introduction
I use a Makefile for orchestration (docker command lines can get way too long ;)) and Docker for isolated builds on various distros.
pro:
Able to test locally (you just need a GNU/Linux distro with Docker & Make)
Can migrate easily to various CI runners provider (Travis-CI, GitHub Workflow, gitlab-runner, bitbucket?)
Contributors can test locally before sending a PR
cons:
Less coupled to GitHub -> more complex to maintain.
More difficult to share a cache between workflow runs.
note: the Dockerfiles are stored in the repository under ci/docker, i.e. I rebuild the images in the first steps, but you should be able to replace this step with a simple docker load if your image is located on Docker Hub (not tested).
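For an image hosted on Docker Hub, the fetch would actually be a docker pull (docker load reads a tarball produced by docker save); a sketch of that replacement, with a placeholder image name:
docker pull your-user/project:env
docker tag your-user/project:env project:env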
Setup
Dockerfile
I split my Dockerfile into several stages (mostly for debugging).
note: you can replace ubuntu:rolling with your own image...
ci/docker/ubuntu/Dockerfile:
# Create a virtual environment with all tools installed
# ref: https://hub.docker.com/_/ubuntu
FROM ubuntu:rolling AS env
# Install system build dependencies
# note: here we use the CMake package provided by Ubuntu
# see: https://repology.org/project/cmake/versions
ENV PATH=/usr/local/bin:$PATH
RUN apt-get update -q && \
apt-get install -yq git build-essential cmake && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD [ "/bin/sh" ]
# Add the library src to our build env
FROM env AS devel
# Create lib directory
WORKDIR /home/lib
# Bundle lib source
COPY . .
# Build in an other stage
FROM devel AS build
# CMake configure
RUN cmake -H. -Bbuild
# CMake build
RUN cmake --build build --target all
# CMake install
RUN cmake --build build --target install
# Create an install image to check cmake install config
FROM env AS install
# Copy lib from build to install
COPY --from=build /usr/local /usr/local/
# Copy sample
WORKDIR /home/sample
COPY ci/sample .
Runner jobs
GitHub Actions runners have Docker installed.
note: you can have one badge per yml file, e.g. you could use one workflow file per distro to get one badge per distro, or one file for Release and one for Debug...
.github/workflows/docker.yml:
name: C++ CI
on: [push, pull_request]
jobs:
  build-docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build env image
        run: docker build --target=env --tag project:env -f ci/docker/ubuntu/Dockerfile .
      - name: Build devel image
        run: docker build --target=devel --tag project:devel -f ci/docker/ubuntu/Dockerfile .
      - name: Build build image
        run: docker build --target=build --tag project:build -f ci/docker/ubuntu/Dockerfile .
For testing, you can add another stage or run the tests using the project:build image:
docker run --rm --init -t --name test project:build cmake --build build --target test
Annexes
Faster build
You can add a .dockerignore file to exclude unneeded files (e.g. licences, docs, the local build dir if you test locally...) to reduce the Docker context and the amount copied by COPY . .
.dockerignore:
# Project Files unneeded by docker
ci/cache
ci/docker
ci/Makefile
.git
.gitignore
.github
.dockerignore
.travis.yml
.appveyor.yml
.clang-format
AUTHORS
CONTRIBUTING.md
CONTRIBUTORS
INSTALL
LICENSE
README.md
doc
# Native CMake build
build/
# Editor directories and files
*.user
*.swp
Custom CMake version install
You can use the following instead of apt-get install -y cmake.
It can take time, since you rebuild CMake from source...
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4.tar.gz" \
&& tar xzf cmake-3.16.4.tar.gz \
&& rm cmake-3.16.4.tar.gz \
&& cd cmake-3.16.4 \
&& ./bootstrap --prefix=/usr/local/ \
&& make \
&& make install \
&& cd .. \
&& rm -rf cmake-3.16.4
so you may prefer to use the prebuilt version instead:
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4-Linux-x86_64.sh" \
&& chmod a+x cmake-3.16.4-Linux-x86_64.sh \
&& ./cmake-3.16.4-Linux-x86_64.sh --prefix=/usr/local/ --skip-license \
&& rm cmake-3.16.4-Linux-x86_64.sh

Where should a C++ application be compiled in a GitLab CI Docker workflow?

I’m looking to understand how to properly structure my .gitlab-ci.yml and Dockerfile such that I can build a C++ application into a Docker container.
I’m struggling with where the actual compilation and link of the C++ application should take place within the CI workflow.
What I’ve done:
My current approach is to use Docker-in-Docker with a private GitLab Docker registry.
My gitlab-ci.yml uses a dind Docker image service I created, based on the docker:19.03.1-dind image but including my certificates to talk securely to my private GitLab Docker registry.
I also have a custom base image, referenced by my gitlab-ci.yml and based on docker:19.03.1, that includes what I need for building, e.g. cmake, build-base, mariadb-dev, etc.
I have my build script added to the gitlab-ci.yml to build the application: cmake … && cmake --build .
The Dockerfile then copies the final binary produced in my build step.
Having done all of this, it doesn’t feel quite right to me and I’m wondering if I’m missing the intent. I’ve tried to find a C++ example online to follow, but have been unsuccessful.
What I’m not fully understanding is the role of each player in the docker-in-docker setup: docker image, dind image, and finally the container I’m producing…
What I’d like to know…
Who should perform the build and contain the build environment, the base image specified in my .gitlab-ci.yml or my Dockerfile?
If I build with the Dockerfile, how do I get the contents of the source into the Docker container? Do I copy the /builds dir? Should I mount it?
Where to divide who performs work, gitlab-ci.yml or Docker file?
Reference to a working example of a C++ docker application built with Docker-in-Docker Gitlab CI.
.gitlab-ci.yml
image: $CI_REGISTRY/building-blocks/dev-mysql-cpp:latest
#image: docker:19.03.1
services:
  - name: $CI_REGISTRY/building-blocks/my-dind:latest
    alias: docker
stages:
  - build
  - release
variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
build:
  stage: build
  script:
    - mkdir build
Both approaches are equally valid. If you look at other SO questions, one thing you'll probably notice is that Java/Docker images almost universally build a jar file on their host and then COPY it into an image, but Go/Docker images tend to use a multi-stage Dockerfile starting from sources.
If you already have a fairly mature build system and your developers already have a very consistent setup, it makes sense to do more work in the CI environment (in your .gitlab-ci.yml file). Build your application the same way you already do, then COPY it into a minimal Docker image. This approach is also helpful if you need to ship both Docker and non-Docker artifacts. If you have a make dist style tar file and want to get a Docker image out of it, you could use a very straightforward Dockerfile like
FROM ubuntu
RUN apt-get update && apt-get install ...
# ADD auto-extracts the tar file into /usr/local
ADD dist/myapp.tar.gz /usr/local
EXPOSE 12345
CMD ["myapp"] # /usr/local/bin/myapp
On the other hand, if your developers have a variety of desktop environments and you're really trying to standardize things, and you only need to ship the Docker image, it could make sense to centralize most things in the Dockerfile. This would have the advantage that every developer could run the exact build sequence themselves locally, rather than depending on the CI system to try out even simple changes. Something built around GNU Autoconf might look more like
FROM ubuntu AS build
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
build-essential \
lib...-dev
WORKDIR /app
COPY . .
RUN ./configure --prefix=/usr/local \
&& make \
&& make install
FROM ubuntu
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
lib...
COPY --from=build /usr/local /usr/local
CMD ["myapp"]
If you do the primary build in a Dockerfile, you need to COPY the source code in. Volume mounts don't work at this point in the sequence. CI systems should avoid bind-mounting source code into a container in any case: you want to run tests against the actual artifact you've built, not a built Docker image with all of its source code replaced.
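With this Dockerfile-centric approach, the build job in .gitlab-ci.yml essentially reduces to a docker build whose context is the checked-out repository, something like the following sketch (reusing the predefined GitLab variables already shown in the question):
docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"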

Installing c++ packages during Docker build

I have a Dockerfile for running a C++ application. Part of the Dockerfile has the following command:
RUN tar -xvf boost_1_56_0.tar.bz2 && \
cd boost_1_56_0 && \
./bootstrap.sh && \
./b2 install
The tar file is part of the Docker image.
The problem is that each time I build the Dockerfile, the entire package gets built and installed again, which takes an awful amount of time. How can I prevent this?
If nothing has changed up to and including a command in a Dockerfile, then Docker will use the cached data from a previous build. So if you have something like this:
ADD ./myfiles /path/in/container # changes each time
RUN tar -xvf boost # etc
Then boost will be rebuilt every time. But if you reorganise your Dockerfile like this:
RUN tar -xvf boost # etc
ADD ./myfiles /path/in/container # changes each time
Then the binary build of boost from your last docker build will be reused from the cache. More generally, the less likely things are to change, the earlier they should go in the Dockerfile.
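Applied to the Boost example from the question, the reordered Dockerfile might look roughly like this (paths are illustrative):
# Unpack and build Boost first so this expensive layer stays cached
COPY boost_1_56_0.tar.bz2 /tmp/
RUN cd /tmp && tar -xjf boost_1_56_0.tar.bz2 && \
    cd boost_1_56_0 && ./bootstrap.sh && ./b2 install && \
    cd .. && rm -rf boost_1_56_0 boost_1_56_0.tar.bz2
# Only then add the frequently-changing application sources
ADD ./myfiles /path/in/container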

Compiling a file leaves files in my home directory when doing vagrant provision

I am using Vagrant to set up a Linux box. I need to add a file I compiled to the system, and I do so using the following commands:
sudo git clone https://github.com/thewtex/tmux-mem-cpu-load.git /tmp/tmuxcpu
sudo cmake /tmp/tmuxcpu
sudo make clean /tmp/tmuxcpu
sudo make install clean /tmp/tmuxcpu
However, this leaves tons of files, including a makefile, config files, and other garbage, inside the /home/vagrant/ folder. How do I make and install from /tmp without littering the home directory with garbage?
The above commands work, but they leave tons of files in the /home/vagrant folder that I don't want there. Is it possible to cmake, make, and make install without leaving 'trash'?
I have solved my issue with the help of Etan Reisner and reinierpost.
## CPU LOAD
sudo git clone https://github.com/thewtex/tmux-mem-cpu-load.git /tmp/tmuxcpu
sudo sh -c "cd /tmp/tmuxcpu && sudo cmake . && sudo make && sudo make install";
My install script now uses the above commands, and this has resolved my issue. Because the files stay in /tmp, they are removed the moment I reboot the box. I am now using this technique for several other items during the Vagrant setup and it works perfectly. This has solved numerous issues for me.
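A variation on the same idea (not part of the original fix) is an explicit out-of-source build, which keeps every generated file inside a dedicated build directory under /tmp:
sudo git clone https://github.com/thewtex/tmux-mem-cpu-load.git /tmp/tmuxcpu
sudo sh -c "mkdir -p /tmp/tmuxcpu/build && cd /tmp/tmuxcpu/build && cmake .. && make && make install"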

How to manage system dependencies when using azk?

I'm using azk and my system depends on extra packages. I should be able to install them using (since I'm using an Ubuntu-based image):
apt-get -yq update && apt-get install -y libqtwebkit-dev qt4-qmake
Can I add these steps to provision? In the Azkfile.js, it would look like:
// ...
  provision: [
    "apt-get -yq update",
    "apt-get install -y libqtwebkit-dev qt4-qmake",
    "bundle install --path /azk/bundler",
    "bundle exec rake db:create",
    "bundle exec rake db:migrate",
  ]
Or it's better to create a new Docker image?
Provision steps are run in a separate container, so all the data generated inside it is lost after the provision step unless you persist it. That's why you probably have bundle folders as persistent folders.
Given that, you should use a Dockerfile in this case. It would look like this:
# ...or whatever image you were using previously
FROM azukiapp/ruby:2.2.2
RUN apt-get -yq update && \
apt-get install -y libqtwebkit-dev qt4-qmake && \
apt-get clean -qq && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* # Keeping the image as small as possible
After that, you should edit your Azkfile.js and replace the image property of your main system to use the created Dockerfile (see the azk docs):
image: { dockerfile: './PATH_TO_DOCKERFILE' },
Finally, when you run azk start, azk will build this Dockerfile and use it with all your dependencies installed.
Tip: If you want to force azk to rebuild your Dockerfile, just pass the -B flag to azk start.
As it looks like you're using a Debian-based Linux distribution, you could create (https://wiki.debian.org/Packaging) your own Debian virtual package (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-virtual) that lists all the packages it depends on. If you just do that one thing, you can dpkg -i (or apt-get install if you host a custom debian repository yourself) your custom package and it will install all the dependencies you need via apt.
You can then move on to learning about postinst and prerm scripts in Debian packages (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-maintscripts). This will allow you to run commands like bundle and gem as the last step of the package installation and the first step of package removal.
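As a minimal sketch of that first step, one way (among others) to build such a metapackage is with the equivs tool; the package name below reuses the example from this answer, and the dependency list is just the two packages from the question:
# Describe the metapackage and its dependencies
cat > custom-dependencies-diego.control <<'EOF'
Section: misc
Priority: optional
Package: custom-dependencies-diego
Version: 1.0
Maintainer: Your Name <you@example.com>
Depends: libqtwebkit-dev, qt4-qmake
Description: Meta-package pulling in this app's system dependencies
EOF
# Build the .deb and install it, letting apt resolve the dependencies
apt-get install -y equivs
equivs-build custom-dependencies-diego.control
dpkg -i custom-dependencies-diego_1.0_all.deb || apt-get -f -y install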
There are a few advantages to doing it this way:
1. If you host a package repository somewhere you can use a pull method of dependency installation in a dynamic scaling environment by simply having the host apt-get update && apt-get install custom-dependencies-diego
2. Versioning your dependency list - Using dpkg -l you can tell what version everything is on a given host, including the version of your dependency virtual package.
3. With prerm scripts, you can ensure that removing your virtual package will also have the effect of removing the changes your installation scripts made, so you can get a host back to a "clean" state.
The disadvantage of doing it this way is that it's Debian/APT specific. If you wanted to deploy to Slackware or RHEL you'd have to change things a bit. Changing to a new distro wouldn't be particularly hard, but it's definitely not as portable as using Bash, for example.