Using Docker on Windows for a GitLab Runner - C++

I'm trying to use Docker on Windows to create a GitLab Runner that builds a C++ application. It works so far, but I suspect there are better approaches. Here's what I did:
Here's my initial Dockerfile:
# escape=`
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the Microsoft.VisualStudio.Workload.VCTools workload
# and the components needed for CMake builds.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.VCTools `
    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
    --add Microsoft.VisualStudio.Component.VC.CMake.Project `
    --add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
    --locale en-US `
 || IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
This works so far and everything builds correctly. The main problem, however, is that if the build fails, the job succeeds anyway. I suspect that my entrypoint is wrong: powershell is executed inside of a cmd, and only the exit code of cmd is checked, which always indicates success.
So I tried to use PowerShell directly as the entrypoint. I need to set environment variables via vcvars64.bat, but that is not trivial from PowerShell. I tried to execute the "Developer PowerShell for VS 2019" shortcut, but I can't invoke it from the entrypoint directly, and its target looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash also varies from installation to installation. Simply using this as the entrypoint didn't work either.
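(Note: Enter-VsDevShell can also select the instance by install path instead of by instance id, which would avoid the per-installation hash; a minimal sketch, untested in this setup, with -DevCmdArguments "-arch=x64" assumed for an x64 build:)
Import-Module "C:\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "C:\BuildTools" -DevCmdArguments "-arch=x64"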
I then tried to use the Invoke-Environment script from https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1. This allows me to execute the .bat file from PowerShell like this:
Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1 so that it would be accessible to all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly, and Invoke-Environment is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that defines the Invoke-Environment function and calls it with vcvars64.bat:
function Invoke-Environment {
    param (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )

    # Run the command in cmd, discard its output, dump the resulting
    # environment with 'set', and import each variable into this process.
    $Command = "`"" + $Command + "`""
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}

Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this into the image via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I then need to call it manually to set up the variables:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - C:\Init64.ps1
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
Now everything works as intended: the build runs, and the job only succeeds if the build succeeds.
However, I would prefer to set up the environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.

OK, after some struggling, here is my entry.bat, which correctly loads the environment and propagates the error level/return value:
REM Load environment
call C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
REM If there are no parameters, start an interactive cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters, run them via cmd /S /C so it exits directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%
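Wired into the Dockerfile, that looks roughly like this (the destination path and the cmd /S /C wrapper are my choices; the wrapper ensures the batch file's exit code is what the container returns):
COPY entry.bat C:\entry.bat
ENTRYPOINT ["cmd", "/S", "/C", "C:\\entry.bat"]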

Related

How can I correct this Dockerfile to take arguments properly?

I have this Dockerfile:
FROM python:latest
ARG b=8
ARG r=False
ARG p=1
ENV b=${b}
ENV r=${r}
ENV p=${p}
# many lines of successful setup
ENTRYPOINT python analyze_all.py /analysispath -b $b -p $p -r $r
My intention was to take three arguments at the command line like so:
docker run -it -v c:\analyze containername -e b=16 -e p=22 -e r=False
But unfortunately, I'm misunderstanding something fundamental and simple here instead of something complicated, so I'm helpless :).
If I understood the question correctly, this Dockerfile should do what is required:
FROM python:latest
# test script that prints argv
COPY analyze_all.py /
ENTRYPOINT ["python", "analyze_all.py", "/analysispath"]
Launch:
$ docker run -it test:latest -b 16 -p 22 -r False
sys.argv=['analyze_all.py', '/analysispath', '-b', '16', '-p', '22', '-r', 'False']
Looks like your Dockerfile is designed to build and run a container on Windows. I tested my Dockerfile on Linux; it probably won't be much different to use this approach on Windows.
I think the ARG instructions aren't needed in this case, because ARG defines a variable that users can pass at build time with the docker build command. I would also suggest taking a look at the Dockerfile reference for the ENTRYPOINT instruction:
Command line arguments to docker run will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run -d will pass the -d argument to the entry point.
Also, this question will probably be useful for you: How do I use Docker environment variable in ENTRYPOINT array?
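For completeness: if you do want values that can be overridden with -e at run time, the usual trick from that linked question is to let a shell expand environment variables; a minimal sketch (reusing the script and flags from the question):
FROM python:latest
ENV b=8 r=False p=1
COPY analyze_all.py /
# exec form with an explicit shell so $b/$p/$r are expanded at run time
ENTRYPOINT ["/bin/sh", "-c", "python analyze_all.py /analysispath -b $b -p $p -r $r"]
Launch:
$ docker run -it -e b=16 -e p=22 -e r=False test:latest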

Nix Gradle dist - Failed to load native library 'libnative-platform.so' for Linux amd64

I am trying to build a Freeplane derivation based on Freemind, see: https://github.com/razvan-panda/nixpkgs/blob/freeplane/pkgs/applications/misc/freeplane/default.nix
{ stdenv, fetchurl, jdk, jre, gradle }:

stdenv.mkDerivation rec {
  name = "freeplane-${version}";
  version = "1.6.13";

  src = fetchurl {
    url = "mirror://sourceforge/project/freeplane/freeplane%20stable/freeplane_src-${version}.tar.gz";
    sha256 = "0aabn6lqh2fdgdnfjg3j1rjq0bn4d1947l6ar2fycpj3jy9g3ccp";
  };

  buildInputs = [ jdk gradle ];

  buildPhase = "gradle dist";

  installPhase = ''
    mkdir -p $out/{bin,nix-support}
    cp -r ../bin/dist $out/nix-support
    sed -i 's/which/type -p/' $out/nix-support/dist/freeplane.sh

    cat >$out/bin/freeplane <<EOF
    #! /bin/sh
    JAVA_HOME=${jre} $out/nix-support/dist/freeplane.sh
    EOF

    chmod +x $out/{bin/freeplane,nix-support/dist/freeplane.sh}
  '';

  meta = with stdenv.lib; {
    description = "Mind-mapping software";
    homepage = https://www.freeplane.org/wiki/index.php/Home;
    license = licenses.gpl2Plus;
    platforms = platforms.linux;
  };
}
During the gradle build step it is throwing the following error:
building path(s) ‘/nix/store/9dc1x2aya5p8xj4lq9jl0xjnf08n7g6l-freeplane-1.6.13’
unpacking sources
unpacking source archive /nix/store/c0j5hgpfs0agh3xdnpx4qjy82aqkiidv-freeplane_src-1.6.13.tar.gz
source root is freeplane-1.6.13
setting SOURCE_DATE_EPOCH to timestamp 1517769626 of file freeplane-1.6.13/gitinfo.txt
patching sources
configuring
no configure script, doing nothing
building

FAILURE: Build failed with an exception.

* What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.

* Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

builder for ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed with exit code 1
error: build of ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed
Running gradle dist from a terminal works fine. I'm guessing that maybe one of the globally installed Nix packages provides a fix for the issue and it is not visible during the build.
I searched a lot but couldn't find any working solution. For example, removing the ~/.gradle folders didn't help.
Update
To reproduce the issue just git clone https://github.com/razvan-panda/nixpkgs, checkout the freeplane branch and run nix-build -A freeplane in the root of the repository.
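In commands:
$ git clone https://github.com/razvan-panda/nixpkgs
$ cd nixpkgs
$ git checkout freeplane
$ nix-build -A freeplane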
Link to GitHub issue
Maybe you just don't have permission for the folder/file:
sudo chmod 777 yourFolderPath
You can also run sudo chmod 777 yourFolderPath/* (all the folder's contents).
The folder will then no longer be locked, and you can use it normally.
(At least that is how I succeeded.)
Example:
sudo chmod 777 Ruby/
Now it works.
To fix the error "What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64", do the following:
Check whether your Gradle cache folder (~/.gradle/native) exists at all.
Check whether the file in question, libnative-platform.so, exists in that directory.
Check whether ~/.gradle, ~/.gradle/native and ~/.gradle/native/libnative-platform.so have valid permissions (they should not be read-only; running chmod -R 755 ~/.gradle is enough).
If you don't see a native folder at all, or if your native folder seems corrupted, run your Gradle task (e.g. gradle clean build) with the -g or --gradle-user-home option and pass it a new location.
For example, if I run mkdir /tmp/newG_H_Folder; gradle clean build -g /tmp/newG_H_Folder, Gradle will populate all the required folders/files (that it needs even before running any task) in this new Gradle home folder.
From this folder you can copy just the native folder to your ~/.gradle folder (take a backup of the existing native folder first if you want to), or copy the whole folder to your home directory; the commands after these steps sketch the procedure.
Then rerun your Gradle task and it won't error out anymore.
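A rough command sketch of those steps (paths are illustrative; check whether the regenerated native folder sits directly under the new Gradle home or under a .gradle subdirectory in it):
$ mkdir /tmp/newG_H_Folder
$ gradle clean build -g /tmp/newG_H_Folder
$ mv ~/.gradle/native ~/.gradle/native.bak
$ cp -r /tmp/newG_H_Folder/native ~/.gradle/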
The Gradle docs say:
https://docs.gradle.org/current/userguide/command_line_interface.html
-g, --gradle-user-home
Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.

Golang Build Constraints Random

I have two go files with different build constraints in the header.
constants_production.go:
// +build production,!staging

package main

const (
    URL = "production"
)
constants_staging.go:
// +build staging,!production

package main

const (
    URL = "staging"
)
main.go:
package main

import "fmt"

func main() {
    fmt.Println(URL)
}
When I do a go install -tags "staging", sometimes it prints production; sometimes it prints staging. Similarly, when I do go install -tags "production",...
How do I get a consistent output on every build? How do I make it print staging when I specify staging as a build flag? How do I make it print production when I specify production as a build flag? Am I doing something wrong here?
go build and go install will not rebuild the package (binary) if it looks like nothing has changed -- and they are not sensitive to changes in the command-line build tags.
One way to see this is to add -v to print the packages as they are built:
$ go install -v -tags "staging"
my/server
$ go install -v -tags "production"
(no output)
You can force a rebuild by adding the -a flag, which tends to be overkill:
$ go install -a -v -tags "production"
my/server
...or by touching a server source file before the build:
$ touch main.go
$ go install -tags "staging"
...or manually remove the binary before the build:
$ rm .../bin/server
$ go install -tags "production"

Running the "exec" command in Jenkins "Execute Shell"

I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. In order to build the application I need to use version 4.7 of g++, which includes support for C++11. In order to use this version of g++ I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which properly build the C++ application at the command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after the "exec /usr/bin/scl enable devtoolset-1.1 bash" command; instead it just runs the exec command, terminates, and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" of your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is running a bash shell. I guess that the bash shell returns immediately, since you are in non-interactive mode. exec replaces the current shell with the given command instead of creating a new process, so when the newly opened bash ends, it also ends your shell prematurely and the remaining commands never run. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
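A run_my_build.sh along these lines should do (it simply collects the build steps from the question; the set -e is an addition so the script stops at the first failing step):
#!/bin/bash
set -e
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project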
This kind of thing normally works in "find" commands, but may work here. Rather than running two, or three processes, you run one "sh" that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
Or you can put the whole lot into a script and exec that.
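Adapted to the commands from the question, the combined form could look like this (using the scl enable <collection> '<command string>' form from the docs quoted above; untested):
exec /usr/bin/scl enable devtoolset-1.1 'libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install && cd procs && ./makem.sh /home/tomcat/.jenkins/workspace/project'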

Need to solve "Can't locate VMware/VIRuntime.pm" in cygwin

I have a (maybe) unusual situation. I need to run VMware CLI commands in a Windows box, but via the cygwin CLI inside a shell script. I can NOT change this for now, so any suggestions to "why not do this instead" may be futile, although appreciated. Here's a sample script.
#!/bin/bash
# Paths for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
When I run the above script, I get
Can't locate VMware/VIRuntime.pm in @INC (@INC contains:
/usr/lib/perl5/site_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/site_perl/5.14
/usr/lib/perl5/vendor_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/vendor_perl/5.14
/usr/lib/perl5/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/5.14
/usr/lib/perl5/site_perl/5.10
/usr/lib/perl5/vendor_perl/5.10
/usr/lib/perl5/site_perl/5.8
.) at /cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8.
BEGIN failed--compilation aborted at /cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8.
I can run the same vmware-cmd.pl script from a DOS command prompt:
c:\> vmware-cmd.pl
So I know my installation is correct.
Any clues, please?
This post gave me the idea to fix it, but now I get a core dump:
How is Perl's @INC constructed? (aka What are all the ways of affecting where Perl modules are searched for?)
The added line is the export PERL5LIB line.
#!/bin/bash
# Path for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
export PERL5LIB=$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
I solved it by going through my elbow to get to my a**, as the saying goes.
What I did was:
- Install VMware CLI on my Windows box to the default directory
- Add environment variables for the VMware main directory, the bin directory, the Perl directory and the Perl/bin directory
- Add these environment variables to my PATH variable
Then I created a vmware-cli.bat file that takes parameters and concatenates them into a vmware-cli command with the correct values. For example, I call this to list the VMs on the server:
cygwin:> ./vmware-cli.bat vmware-cmd.pl --server MyServer --username User --password PW -l
Inside the batch file I essentially do:
REM Get first parm as the command, then concatenate the rest of the parms
set VCLI_CMD=%1
shift
:LOOP
if %1x==x goto :EXECUTE
set VCLI_CMD=%VCLI_CMD% %1
shift
goto :LOOP
:EXECUTE
%VCLI_CMD%
This is an alternative to the previous post that allows you to keep everything in the same shell script:
VIMCMD="/cygdrive/C/Program Files (x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl"
VIMCMD_DOS=$(cygpath -d "$VIMCMD")
DOS_VIMCMD="cmd /c $VIMCMD_DOS"
Then you can run:
$ $DOS_VIMCMD --version
vSphere SDK for Perl version: 6.0.0
Script 'vmware-cmd.pl' version: 6.0.0