I have several containers, and each of them has its own Dockerfile. Every time I build with docker-compose build, each container installs its own requirements, either from a requirements.txt file (RUN pip install -r requirements.txt) or directly in the Dockerfile (RUN pip install Django celery ...). Many of the requirements are common to almost all of the containers.
It works perfectly, but there is a problem with build time: it takes almost 45 minutes to build every container from scratch (say, after I have deleted all images and containers).
Is there a way to install all the requirements in a common place for all containers, so that the common requirements are not installed again each time a new container is built?
I am using docker-compose version 2.
You can define your own base image. Let's say all your containers need django and boto, for instance. You can then create your own Dockerfile:
FROM python:3
RUN pip install django boto
# more docker commands
Then you can build this image as arrt_dtu/envbase and publish it somewhere (Docker Hub, or your company's internal Docker registry). Now you can create your specialized images using this one:
FROM arrt_dtu/envbase
RUN pip install ...
That's exactly the same principle as the official ruby image, for instance: the ruby image builds on a Linux base image, and if you want a rails image you can build it on top of the ruby one. Docker images are totally reusable!
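For example, the base image only needs to be built and pushed once; after that, docker-compose build only has to install each service's specific packages on top of it. A minimal sketch (the arrt_dtu/envbase name comes from above; the ./base directory is an assumption):
# build and publish the shared base image once
docker build -t arrt_dtu/envbase ./base
docker push arrt_dtu/envbase
# each service's Dockerfile starts FROM arrt_dtu/envbase,
# so this only installs the service-specific requirements
docker-compose build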
Related
I'm having problems with an app that uses Django. Everything is in a Docker container, and there is a Pipfile and a Pipfile.lock. So far, so good.
The problem is when I want to install a new dependency. I open the Docker container's shell and install the dependency with pipenv install <package-name>.
After installing the package, pipenv runs a command to update the Pipfile.lock file, and in doing so it updates all packages to their latest versions, bringing with those updates a lot of breaking changes.
I don't understand why this is happening. I have all packages listed in my Pipfile with ~=, which is supposed to avoid updating to versions that can break your app.
I'll give you an example. I have this dependency in my Pipfile: dj-stripe = "~=2.4". But in the Pipfile.lock file, after pipenv runs the lock command, that dependency is updated to its latest version (2.5.1).
What am I doing wrong?
Are you sure you're installing it within Docker? A common cause of Pipfile.lock conflicts is installing a package locally instead of within Docker; when the local environment then syncs with Docker, it overrides your Pipfile.lock.
Assuming you're using docker-compose, this is how I'm installing my packages:
docker-compose exec web pipenv install <package-name>
I discovered what my problem was.
I had been listing the dependencies like this: ~=2.4. I thought that told pipenv not to update to 2.5 or greater, but that's not true; it only tells pipenv not to update to 3 or greater.
In order to stay on version 2.4, I must specify the patch version as well, for example: ~=2.4.0
That way, I'm telling pipenv not to update beyond the 2.4.x series.
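To illustrate the compatible-release (~=) semantics with the dj-stripe entry from the question:
[packages]
# "~=2.4"   means >=2.4, <3.0    -> pipenv may lock 2.5.1
# "~=2.4.0" means >=2.4.0, <2.5.0 -> pipenv stays on 2.4.x
dj-stripe = "~=2.4.0"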
When you have a complicated RUN apt-get install section that you reuse across multiple Docker images, what is the best way to reuse it?
The options I think we have are:
copy-paste the RUN command n times across your Dockerfiles (this is what I do today)
make a Docker image and use it as a build step + COPY --from=builder... (this is what I want, but I don't know how to do it).
I am thinking of something like this:
Dockerfile with reusable apt install command, tagged as my-builder-img:
FROM debian:buster
RUN ... apt-get install ...
Dockerfile that reuses that complicated install:
FROM my-builder-img as builder
#nothing here
FROM debian:buster
COPY --from=builder /usr/bin:/usr/bin # (...???)
TL;DR: how do I reuse an apt-get install from a previous image in a new image?
You just use the image you installed all the packages into directly as your base image.
Multi-stage builds shine when you are creating an artifact and copying it into a new image. If you are just installing packages, those packages already exist in the image you built.
Dockerfile with packages you want:
FROM debian:buster
RUN ... apt-get install ...
Tag it as my-image.
Now, just use that image in other Dockerfiles and the packages installed will be available.
FROM my-image:latest
# other directives...
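A minimal sketch of the workflow (the my-image name comes from above; the ./base and ./app directories and the my-app name are assumptions):
# build and tag the image that contains the shared apt-get packages
docker build -t my-image:latest ./base
# build an image whose Dockerfile starts FROM my-image:latest,
# inheriting all of those packages
docker build -t my-app:latest ./app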
Background
I have a CI pipeline for a C++ library I've been developing. So far, I can distribute this lib to Linux and Windows systems. Since I use GitLab to build, test and package my lib, I'd like my Windows builds to run faster, but I have no clue how to do that.
Currently, I use the following script for my Windows builds:
.windows_template:
  tags:
    - windows
  before_script:
    - choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
    - choco install python --pre -y
    - choco install git -y
    - $env:ChocolateyInstall = Convert-Path "$((Get-Command choco).Path)\..\.."; Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"; refreshenv
    - python -m pip install --upgrade pip
    - pip install conan monotonic
The problem
Any build with the script above can take up to 10 minutes; worse, I have two stages, each one taking the same amount of time. This means my whole CI pipeline takes 20 minutes to finish because of the slowness of the Windows builds.
Ideal solution
Ideally, everything in my before_script would be cached or stored in an image. I only need some hints on how to do that properly.
Additional information
I use the following tools for my builds:
CMake: to support my building process;
Python3: to test and build packages;
Conan (requires Python3): to support the creation of packages with several features, as well as distribute them;
Git: to download Googletest in the CMake configuration step. This is already provided in the cookbooks, so I might just remove this extra installation step from my before_script;
Googletest (requires Python3): testing library;
Visual Studio DEV Tools: to compile the library. This is already in the cookbooks.
Installing packages like this (whether it's OS packages through apt-get install..., or pip, or anything else) is generally against best practices for CI/CD jobs, because every job that runs will have to do the same thing, costing a lot of time as you run more pipelines, as you've already seen.
A few alternatives are to search for an existing image that has everything you need (possible but not likely with more dependencies), split up your job into pieces that might be solved by an image with just one or two dependencies, or create a custom docker image to use in your jobs. I answered a similar question with an example a few weeks ago here: "Unable to locate package git" when running GitLab CI/CD pipeline
But here's an example Dockerfile with Windows:
# Dockerfile
FROM mcr.microsoft.com/windows
RUN ./install_chocolatey.sh
RUN choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
RUN choco install python --pre -y
RUN choco install git -y
...
The FROM line says that our new image extends the mcr.microsoft.com/windows base image. You can extend any image you have access to, even if it already extends another image (in fact, that's how most images work: they start with something small, like a base OS installation, then add the things needed for that package. The official PHP image, for example, starts from a Debian base image, then installs the necessary PHP packages).
The first RUN line is just an example. I'm not a Windows user and don't have experience installing Chocolatey, but you'd do here whatever you'd normally do to install it locally. The rest are for installing whatever else you need.
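For the Chocolatey step specifically, one option (an untested sketch adapted from Chocolatey's documented PowerShell bootstrap one-liner) would be to replace that first RUN with something like:
# use PowerShell for the following RUN instructions
SHELL ["powershell", "-Command"]
# install Chocolatey via its documented bootstrap script
RUN Set-ExecutionPolicy Bypass -Scope Process -Force; \
    [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; \
    iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))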
Then run
docker build /path/to/dockerfile-dir -t mygroup/mytag:version
The path you supply needs to be the directory that contains the Dockerfile, not the Dockerfile itself. The -t flag sets the image's tag after it's built (though you can do that with a separate command, docker tag too).
Next, you'll have to log into whichever registry you're using (Docker Hub (https://docs.docker.com/docker-hub/repos/), the GitLab Container Registry (https://docs.gitlab.com/ee/user/packages/container_registry/), a private registry your employer may support, or any other option).
docker login my.docker.hub.com
Now you can push the image to the registry:
docker push my.docker.hub.com/mygroup/mytag:version
You'll have to review the information in the docs about telling your GitLab runner or pipelines how to authenticate with the registry (unless it's public on Docker Hub or you use the GitLab Container Registry): https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
Once all that's done, you can use your new image in your CI jobs, and everything we put into the image will be ready to use:
.windows_template:
  image: my.docker.hub.com/mygroup/mytag:version
  tags:
    - windows
  ...
I’m looking to understand how to properly structure my .gitlab-ci.yml and Dockerfile such that I can build a C++ application into a Docker container.
I’m struggling with where the actual compilation and link of the C++ application should take place within the CI workflow.
What I’ve done:
My current approach is to use Docker in Docker with a private GitLab Docker registry.
My gitlab-ci.yml uses a dind Docker image service I created, based on the docker:19.03.1-dind image, but with my certificates included so it can talk securely to my private GitLab Docker registry.
I also have a custom base image referenced by my gitlab-ci.yml, based on docker:19.03.1, that includes what I need for building, e.g. cmake, build-base, mariadb-dev, etc.
I have my build script added to the gitlab-ci.yml to build the application: cmake … && cmake --build .
The Dockerfile then copies the final binary produced in my build step.
Having done all of this it doesn’t feel quite right to me and I’m wondering if I’m missing the intent. I’ve tried to find a C++ example online to follow as example but have been unsuccessful.
What I’m not fully understanding is the role of each player in the docker-in-docker setup: docker image, dind image, and finally the container I’m producing…
What I’d like to know…
Who should perform the build and contain the build environment: the base image specified in my .gitlab-ci.yml, or my Dockerfile?
If I build with the Dockerfile, how do I get the source contents into the Docker container? Do I copy the /builds dir? Should I mount it?
Where should I divide the work between the gitlab-ci.yml and the Dockerfile?
A reference to a working example of a C++ Docker application built with Docker-in-Docker GitLab CI.
.gitlab-ci.yml
image: $CI_REGISTRY/building-blocks/dev-mysql-cpp:latest
#image: docker:19.03.1
services:
  - name: $CI_REGISTRY/building-blocks/my-dind:latest
    alias: docker
stages:
  - build
  - release
variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
build:
  stage: build
  script:
    - mkdir build
Both approaches are equally valid. If you look at other SO questions, one thing you'll probably notice is that Java/Docker images almost universally build a jar file on their host and then COPY it into an image, but Go/Docker images tend to use a multi-stage Dockerfile starting from sources.
If you already have a fairly mature build system and your developers already have a very consistent setup, it makes sense to do more work in the CI environment (in your .gitlab.yml file). Build your application the same way you already do, then COPY it into a minimal Docker image. This approach is also helpful if you need to ship both Docker and non-Docker artifacts. If you have a make dist style tar file and want to get a Docker image out of it, you could use a very straightforward Dockerfile like
FROM ubuntu
RUN apt-get update && apt-get install ...
# unpacks the dist tarball into /usr/local
ADD dist/myapp.tar.gz /usr/local
EXPOSE 12345
# runs /usr/local/bin/myapp
CMD ["myapp"]
On the other hand, if your developers have a variety of desktop environments and you're really trying to standardize things, and you only need to ship the Docker image, it could make sense to centralize most things in the Dockerfile. This would have the advantage that every developer could run the exact build sequence themselves locally, rather than depending on the CI system to try simple changes. Something built around GNU Autoconf might look more like
FROM ubuntu AS build
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes \
      build-essential \
      lib...-dev
WORKDIR /app
COPY . .
RUN ./configure --prefix=/usr/local \
 && make \
 && make install

FROM ubuntu
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes \
      lib...
COPY --from=build /usr/local /usr/local
CMD ["myapp"]
If you do the primary build in a Dockerfile, you need to COPY the source code in. Volume mounts don't work at this point in the sequence. CI systems should avoid bind-mounting source code into a container in any case: you want to run tests against the actual artifact you've built, not a hybrid of a built Docker image with all of its source code replaced.
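If you go the Dockerfile route, the image build job itself runs under dind and builds from the checked-out repository, so the COPY . . in the build stage picks up your source automatically. A sketch reusing the CONTAINER_TEST_IMAGE variable from the .gitlab-ci.yml in the question:
build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .   # the Dockerfile COPYs the source from the checkout
    - docker push $CONTAINER_TEST_IMAGE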
I am thinking of using Docker for Django.
Since this Docker image will be exclusive to a particular Django project, is it OK to just pip install everything in Docker, rather than creating a virtualenv and then installing all the required Django and related packages using pip?
What is the best and also safest way if one wants to stick to Docker for Django projects?
You are right that you don't need a virtual environment inside the django container.
If you always use pip and store the requirements in a requirements.txt, you can use that file both to initialize a virtual environment for development without Docker and to set up the Docker container.
To reduce the size of the container, remove the pip cache after installation:
FROM python:3.6.7-alpine3.8
...
# gunicorn could be replaced with uwsgi or whatever server you use
RUN pip3.6 install -U pip setuptools \
 && pip3.6 install -r requirements.txt \
 && pip3.6 install gunicorn \
 && rm -rf /root/.cache
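And for local development without Docker, the same requirements.txt can seed a virtual environment (a sketch; the .venv name is an assumption):
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt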
Docker containers already provide an isolated environment, which serves a goal similar to virtualenv's. So, if only one application runs in the Docker container, it is fine to skip the extra layer that virtualenv would bring. Personally, I don't remember ever seeing a Django app use a virtualenv inside a container.