Passing environment variables to docker from GitLab CI/CD job failing - dockerfile

I am having issues passing variables that are defined in the GitLab CI file to my Dockerfile.
My GitLab CI file looks like this:
variables:
  IMAGE: "openjdk"
  IMAGE_TAG: "11-slim"

docker-image:
  extends: .build
  variables:
    DOCKER_IMAGE_VERSION: ${JDK_IMAGE}:${JDK_IMAGE_TAG}
My Dockerfile looks a bit like this:
# --- STAGE 1 ----------------------------------------------------------------
# Getting ARGS for build
ARG DOCKER_IMAGE_VERSION
# Start with a base image containing Java runtime
FROM ${DOCKER_IMAGE_VERSION} as build
Now I am getting the following error when the pipeline starts the docker build:
Step 1/7 : ARG DOCKER_IMAGE_VERSION
Step 2/7 : FROM ${DOCKER_IMAGE_VERSION} as build
base name (${DOCKER_IMAGE_VERSION}) should not be blank
Can someone point out where I am going wrong?
Thanks

Consider defining global ARGs and overriding them when you build.
For example:
ARG sample_TAG=test

FROM $sample_TAG
ARG sample_TAG
VOLUME /opt
RUN mkdir /opt/sample-test
WORKDIR /opt/sample-test
RUN echo "image tag is ${sample_TAG}"
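For the pipeline in the question, the variable also has to be forwarded to the build as a build argument. Assuming the hidden .build template runs a plain docker build, that would be something like this (the my-image tag is just a placeholder):
docker build --build-arg DOCKER_IMAGE_VERSION="${DOCKER_IMAGE_VERSION}" -t my-image .
Without the --build-arg, the ARG declared before FROM stays empty, which is exactly what produces the "base name (${DOCKER_IMAGE_VERSION}) should not be blank" error.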

Related

nest_container : error TS2307: Cannot find module 'class-validator' or its corresponding type declarations

I'm trying to use the 'class-validator' module in a Nest container. It is not working.
My Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
RUN npm install -g npm@9.3.1 && npm install -g @nestjs/cli
RUN npm i --save validator class-validator class-transformer
COPY nest.sh .
EXPOSE 3000
CMD ["sh", "nest.sh"]
logs from my nest container:
error TS2307: Cannot find module 'class-validator' or its corresponding type declarations.
In my node_modules/@types there is no class-validator folder, and I don't know why.
I have already read similar posts, but none of them solved my problem.
I can't show you my JSON files because the "indent 4 spaces and Ctrl+K to write code" formatting isn't working for me.
What did I forget in my Dockerfile?
I'm trying to use 'class-validator' in a Nest container, but class-validator does not seem to get installed into the node_modules folder.
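For comparison, the usual pattern for a Nest image is to install the dependencies declared in package.json rather than ad-hoc npm i calls; a minimal sketch, assuming package.json and the sources are in the build context:
FROM node:latest
WORKDIR /usr/src/app
# Install the dependencies declared in package.json (class-validator among them)
COPY package*.json ./
RUN npm install
# Then copy the rest of the project, including nest.sh
COPY . .
EXPOSE 3000
CMD ["sh", "nest.sh"]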

Use Docker-Windows for Gitlab-runner

I'm trying to use Docker on Windows to create a GitLab Runner to build a C++ application. It works so far, but I guess there are better approaches. Here's what I did:
Here's my initial Docker Container:
# escape=`
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload, excluding workloads and components with known issues.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
--installPath C:\BuildTools `
--add Microsoft.VisualStudio.Workload.VCTools `
--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
--add Microsoft.VisualStudio.Component.VC.CMake.Project `
--add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
--locale en-US `
|| IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
This works so far and everything builds correctly. The main problem, however, is that if the build fails, the job succeeds anyway. I suspect that my entrypoint is wrong: PowerShell is executed inside of a cmd, and only the exit code of cmd is checked, which always succeeds.
So I tried to use PowerShell directly as the entrypoint. I need to set the environment variables via vcvars64.bat, but that is not trivial to do. I tried to execute the "Developer PowerShell for VS 2019" shortcut, but I can't run that link directly in the entrypoint, and the link looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash also varies from installation to installation. Simply using this as the entrypoint didn't work either.
I then tried to use the Invoke-Environment script from https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1. This allows me to execute the .bat file from PowerShell like this:
Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to "C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1" so that it would be accessible by all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly. Also "Invoke-Environment" is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that executes the Invoke-Environment function with vcvars64.bat:
function Invoke-Environment {
    param
    (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )

    $Command = "`"" + $Command + "`""
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}

Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this into the image via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I need to call it manually to set up the variables:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - C:\Init64.ps1
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
Now everything works as intended: the build works and the job only succeeds if the build succeeds.
However, I would prefer to set up my environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.
OK, after some struggling, here is my entry.bat that correctly loads the environment and exports the error level / return value:
REM Load environment
call C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
REM If there are no parameters call cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters call cmd with /S for it to exit directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%
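Hooked up in the Dockerfile, this would look roughly like the following sketch (the destination path is a placeholder):
# Copy the wrapper into the image and make it the entrypoint,
# so every job command runs with the vcvars64 environment loaded
COPY entry.bat C:\entry.bat
ENTRYPOINT ["cmd", "/S", "/C", "C:\\entry.bat"]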

Conditionals in dockerfile?

Is it possible to do the following
docker build --build-arg myvar=yes
RUN if ["$myvar" == "yes"]; \
then FROM openjdk \
COPY . . \
RUN java -jar myjarfile.jar \
fi
As you can tell from the above, I only want to run that specific section of the Dockerfile if the build argument is set. I've seen similar threads, but they always seem to be running bash commands. If it is possible, I can't seem to get the syntax correct.
As of now, doing conditional execution in Dockerfiles without the help of the shell is severely limited; see https://medium.com/@tonistiigi/advanced-multi-stage-build-patterns-6f741b852fae
The idea behind existing approaches is to use a Docker multistage build and create different stages for the different outcomes of the IF. Then, at one point, a stage to copy data from is selected based on the value of a variable.
This is an example similar to what you wrote:
# docker build -t test --build-arg MYVAR=yes .
# docker build -t test --build-arg MYVAR=no .
ARG MYVAR=no
FROM openjdk:latest as myvar-yes
COPY . /datadir
RUN java -jar /datadir/myjarfile.jar || true
FROM openjdk:latest as myvar-no
RUN mkdir /datadir
FROM myvar-${MYVAR} as myvar-src
FROM debian:10
COPY --from=myvar-src /datadir/ /
RUN ls /
Stage myvar-no is a variant with an empty /datadir. Stage myvar-yes contains the jarfile and runs it once (remove the || true for actual use, it is just that I did not provide a "real" jarfile in my local tests). Then the last stage copies from the stage myvar-${MYVAR} and invokes ls to be able to see the differences between the two variants.
I have not fully understood the part of the question about syntax: if the trouble is with getting the bash syntax right, that is probably easier to fix than trying to conditionally run Dockerfile statements.
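For reference, the plain shell variant that those similar threads use would be along these lines, reusing the names from the question (assuming myjarfile.jar is in the build context):
FROM openjdk
ARG myvar=no
COPY . .
# Only run the jar when the build argument is set to "yes"
RUN if [ "$myvar" = "yes" ]; then java -jar myjarfile.jar; fi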

Name a stage per build-arguments using a Dockerfile with multi-stage-builds

Is there a possibility to name a build stage?
I am searching something like the following example:
ARG NAME
FROM python:3.7-alpine as modul-${NAME}
# ...
If I try this example this error occurs:
Error response from daemon: Dockerfile parse error line 5: invalid name for build stage: "modul-${NAME}", name can't start with a number or contain symbols
I also tried to use the argument as a tag (modul:${NAME}), with the same result.
You can do this with BuildKit, which requires docker 18.09+. (https://docs.docker.com/develop/develop-images/build_enhancements/)
All you have to do is set an env variable before building:
DOCKER_BUILDKIT=1 docker build -t whatever .
I don't think it's possible without BuildKit.
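Combined with the ARG from the question, the invocation would then presumably be (NAME=foo is only a placeholder value):
DOCKER_BUILDKIT=1 docker build --build-arg NAME=foo -t whatever .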

Passing CMD line arguments to docker entry point in Python

So I've got a Dockerfile like so:
FROM frolvlad/alpine-python2
MAINTAINER *REDACTED*
COPY . .
RUN pip install -r requirements.txt
RUN python misc_scripts/gen_hosts.py
RUN python misc_scripts/strip_delims.py
RUN python runparse.py -spc -spi
ENTRYPOINT ["python", "rebuild.py"]
And using the Docker API in Python, I'm trying to run the container like so:
logs = client.containers.run(image_id, name=str(_uuid), entrypoint=['-m ME3400', '-mi NONE'])
However, I get the following error:
500 Server Error: Internal Server Error ("OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"-m ME3400\": executable file not found in $PATH": unknown")
I assume I'm doing it wrong; does anyone know how/why this isn't working?
As the title of your question says, these are command-line arguments, so they belong in the command argument, not entrypoint. See the source code on GitHub (starting at line 484) for the specifics.
Change your code as below:
logs = client.containers.run(image_id, name=str(_uuid), command=['-m ME3400', '-mi NONE'])
Note that CMD is treated as ENTRYPOINT's arguments when both exist. But if you specify another ENTRYPOINT (in your case ['-m ME3400', '-mi NONE']), the original one (["python", "rebuild.py"]) will be overwritten.
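One more detail worth checking, as an assumption on my side since rebuild.py is not shown: every list element is handed to the entrypoint as a single argument, so if rebuild.py uses a standard argument parser, the flags and their values likely need to be separate elements:
import docker

client = docker.from_env()
# image_id and _uuid as in the question; "-m" and "ME3400" as separate elements
# become separate argv entries for the entrypoint ["python", "rebuild.py"]
logs = client.containers.run(image_id, name=str(_uuid), command=['-m', 'ME3400', '-mi', 'NONE'])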