Podman support of target build? - dockerfile

Dockerfile:
FROM scratch AS stage1
RUN 1
FROM scratch AS stage2
RUN 2
FROM scratch AS stage3
RUN 3
If I run
docker build --target stage2 .
stage1 is ignored and stage2 (RUN 2) is executed.
But if I run
podman build --target stage2 .
stage1 (RUN 1) is executed as well.
Why is Podman ignoring the target I specified? Is there a way to specify the target to Podman?

It turned out that the --target option in Podman works differently than I expected.
This is from release 1.7 of Buildah:
The buildah bud command now accepts a --target option which allows the build to only include the stages in the Dockerfile up to and including the specified stage.
So every stage up to and including the specified target will be built. (I wonder what the reason for this implementation is!?)
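If the earlier stages really are independent of each other, one possible workaround (just a sketch, not part of the answer above) is to split the stage you need into its own Containerfile and select it with -f:
# sketch: assumes the stage2 instructions have been moved into their own Containerfile.stage2
podman build -f Containerfile.stage2 -t stage2 .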

Related

How to add trix.css when it is removed after pruning?

After pruning with the command "docker system prune -a", nodejs won't build anymore because trix.css is missing. I am assuming it was deleted while pruning. How can I resolve this error (see below)? Why is it not created again when the container is built again, since the file is in the Dockerfile?
Required path doesn't exist: /code/bower_components/trix/dist/trix.css trix
[13:57:39] 'vendorcss' errored after 1.63 ms
[13:57:39] Error: Promise rejected without Error
at Domain.onError (/usr/local/lib/node_modules/gulp/node_modules/async-done/index.js:49:15)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Domain.emit (events.js:211:7)
at Domain._errorHandler (domain.js:134:21)
at process._fatalException (bootstrap_node.js:375:33)
[13:57:39] 'staging' errored after 41 ms
ERROR: Service 'nodejs' failed to build: The command '/bin/sh -c gulp staging' returned a non-zero code: 1
Usually I use this command when I want to build the container again: "sudo docker-compose -f docker-compose-staging.yml build nodejs". I am very new to this and would be grateful for some help.
In my case, the issue existed because trix.css was removed in the latest version; it has nothing to do with docker system prune as far as I understand.
You can compare the two versions here: https://github.com/basecamp/trix/compare/1.3.1...v2.0.0
Basically, in order to fix this issue, you need to do
yarn install
yarn build
inside bower_components. This is suggested in the official updated README of the trix repository: https://github.com/basecamp/trix.
Once done with that, you will have trix.css and trix.umd.min.js files for your perusal.
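If the generated files keep disappearing, one option (a sketch only; the /code/bower_components/trix path is taken from the error message above, and yarn being available in the image is an assumption) is to build the trix assets in the Dockerfile itself so they are recreated on every build:
# sketch: rebuild the trix assets during the image build
RUN cd /code/bower_components/trix && yarn install && yarn build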

Use Docker-Windows for Gitlab-runner

I'm trying to use Docker on Windows to create a GitLab Runner that builds a C++ application. It works so far, but I guess there are better approaches. Here's what I did.
Here's my initial Dockerfile:
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload, excluding workloads and components with known issues.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.VCTools `
    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
    --add Microsoft.VisualStudio.Component.VC.CMake.Project `
    --add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
    --locale en-US `
    || IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
This works so far and everything builds correctly. The main problem, however, is that if the build fails, the job succeeds anyway. I suspect that my entrypoint is wrong, because PowerShell is executed inside a cmd and only the exit code of cmd is checked, which always succeeds.
So I tried to use PowerShell directly as the entrypoint. I need to set environment variables via vcvars64.bat, but that is not trivial. I tried to execute the "Developer PowerShell for VS 2019" shortcut, but I can't execute the link in the entrypoint directly, and the link looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash also varies from installation to installation. Simply using this as the entrypoint didn't work either.
I then tried to use the Invoke-Environment script taken from https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1. This allows me to execute the .bat file from PowerShell like this:
Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1 so that it would be accessible to all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly, and Invoke-Environment is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that executes the Invoke-Environment function with vcvars64.bat:
function Invoke-Environment {
    param
    (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )
    $Command = "`"" + $Command + "`""
    # Run the batch file in cmd, dump the resulting environment with 'set',
    # and import every variable into the current PowerShell session.
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}

Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this into the image via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I need to call it manually to set up the variables:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - C:\Init64.ps1
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
Now everything works as intended: the build works, and the job only succeeds if the build succeeds.
However, I would prefer to set up my environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.
Ok, after some struggling, here is my entry.bat, which correctly loads the environment and exports the error level / return value:
REM Load environment
call C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
REM If there are no parameters, start an interactive cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters, call cmd with /S so it exits directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%
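In the Dockerfile, entry.bat can then be wired in roughly like this (a sketch, assuming entry.bat sits next to the Dockerfile; the runner's commands arrive as parameters, which entry.bat forwards to cmd /S /C):
# sketch: entry.bat is assumed to be in the build context
COPY entry.bat C:\entry.bat
ENTRYPOINT ["cmd", "/S", "/C", "C:\\entry.bat"]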

Conditionals in dockerfile?

Is it possible to do the following
docker build --build-arg myvar=yes
and in the Dockerfile:
RUN if ["$myvar" == "yes"]; \
then FROM openjdk \
COPY . . \
RUN java -jar myjarfile.jar \
fi
As you can tell from the above, I only want to run that specific section in the Dockerfile if the build argument is set. I've seen similar threads but they always seem to be running bash commands. If it is possible, I can't seem to get the syntax correct.
As of now, doing conditional execution in Dockerfiles without the help of the shell is severely limited, see https://medium.com/@tonistiigi/advanced-multi-stage-build-patterns-6f741b852fae
The idea behind existing approaches is to use a Docker multistage build and create different stages for the different outcomes of the IF. Then, at one point, a stage to copy data from is selected based on the value of a variable.
This is an example similar to what you wrote:
# docker build -t test --build-arg MYVAR=yes .
# docker build -t test --build-arg MYVAR=no .
ARG MYVAR=no
FROM openjdk:latest as myvar-yes
COPY . /datadir
RUN java -jar /datadir/myjarfile.jar || true
FROM openjdk:latest as myvar-no
RUN mkdir /datadir
FROM myvar-${MYVAR} as myvar-src
FROM debian:10
COPY --from=myvar-src /datadir/ /
RUN ls /
Stage myvar-no is a variant with an empty /datadir. Stage myvar-yes contains the jarfile and runs it once (remove the || true for actual use, it is just that I did not provide a "real" jarfile in my local tests). Then the last stage copies from the stage myvar-${MYVAR} and invokes ls to be able to see the differences between the two variants.
I have not understood all of the question about the syntax: if the trouble is with getting the bash syntax right, that is possibly easier than trying to conditionally run Dockerfile statements.
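For completeness, here is a minimal sketch of that shell-based variant (not part of the answer above; the file names are taken from the question), in case getting the bash syntax right was the only obstacle:
# docker build -t test --build-arg MYVAR=yes .
FROM openjdk:latest
ARG MYVAR=no
COPY . /datadir
# plain shell condition, evaluated at build time; note the spaces inside the brackets
RUN if [ "$MYVAR" = "yes" ]; then java -jar /datadir/myjarfile.jar; fi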

How to run nosetests in jenkins

I have a project which has two folders on the same level:
/home/ishan/my_repo/jenkins_test/business_logic
/home/ishan/my_repo/jenkins_test/test_cases
test_cases has two files, test_fib and test_fact.
When I run nosetests --exe at /home/ishan/my_repo/jenkins_test/ it runs correctly, showing
....
----------------------------------------------------------------------
Ran 4 tests in 0.036s
OK
To run these test cases, I created the following script at /home/ishan/my_repo:
#!/bin/bash
source /home/ishan/venv/bin/activate
nosetests --exe /home/ishan/sf_shared/my_repo/jenkin_test/
deactivate
When I run it using
source /home/ishan/my_repo/test_runner.sh
it shows the correct output. So I tried to put it in a Jenkins build step and added the same line, source /home/ishan/my_repo/test_runner.sh, to the command section of the Execute Shell step in Jenkins.
Now, when I trigger the build using Build Now, it fails with:
Started by user anonymous
Building in workspace /var/lib/jenkins/workspace/jenkins_test
[jenkins_test] $ /bin/sh -xe /tmp/hudson5020664150857393715.sh
+ source /home/ishan/sf_shared/test_runner.sh
/tmp/hudson5020664150857393715.sh: 2: /tmp/hudson5020664150857393715.sh: source: not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I think it doesn't even execute any test cases; it fails long before that. Can you suggest what I am doing wrong?
Maybe this will work:
/home/ishan/venv/bin/nosetests --exe /home/ishan/sf_shared/my_repo/jenkin_test/
Resolved it, the issue was with the following line:
source /home/ishan/venv/bin/activate
I replaced source with the standard . (the POSIX equivalent, which the /bin/sh shown in the log understands), and then it worked. So my line became
. /home/ishan/venv/bin/activate
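For reference, the whole script with that change applied (paths exactly as in the script above):
#!/bin/bash
# activate the virtualenv with the POSIX-compatible dot instead of 'source'
. /home/ishan/venv/bin/activate
nosetests --exe /home/ishan/sf_shared/my_repo/jenkin_test/
deactivate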

Running the "exec" command in Jenkins "Execute Shell"

I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. In order to build the application I need to use version 4.7 of g++, which includes support for C++11. In order to use this version of g++, I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which properly build the C++ application at the command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after the "exec /usr/bin/scl enable devtoolset-1.1 bash" command, but instead just runs the "exec" command, terminates and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" of your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is starting a bash shell. I guess that the bash shell returns immediately, since you are in non-interactive mode. exec replaces the current shell with the command instead of creating a new process; that means when the newly opened bash ends, your script ends too, and none of the following commands run. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
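A sketch of run_my_build.sh, with the steps copied from the question; set -e is an addition so that the first failing step fails the build (remember to make the script executable and to reference it by path if it is not on PATH):
#!/bin/bash
# fail the script (and thus the Jenkins build) on the first failing command
set -e
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project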
This kind of thing is normally used with find commands, but it may work here as well. Rather than running two or three processes, you run one sh that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
Or you can put the whole lot into a script and exec that.