Run basic linux commands (cd, etc) in GoCD at task level

I have a good amount of experience with GoCD. I previously drove builds with bash scripts, and that worked well. This time, however, I need to run basic Linux commands as individual tasks.
What I have done successfully:
Pipeline1 -> Stage1 -> Job1 -> Task 1: git clone my-project
Pipeline1 -> Stage1 -> Job1 -> Task 2: bash my-project/script.sh
What I want to achieve:
Pipeline1 -> Stage1 -> Job1 -> Task 1: git clone my-project2
Pipeline1 -> Stage1 -> Job1 -> Task 2: cd my-project2
Pipeline1 -> Stage1 -> Job1 -> Task 3: build package (*tar.gz file)
Pipeline1 -> Stage1 -> Job1 -> Task 4: mkdir newDirectory/
Pipeline1 -> Stage1 -> Job1 -> Task 5: mv *tar.gz newDirectory/
I can get tasks 1, 3, and 4 working, but commands like cd and mv fail: cd produces an error telling me to check whether the agent can run cd, and mv reports that it cannot find the file (even though the same mv command works when I run it in a terminal).
I came across the GoCD Command Repository, but I am not sure how it would help me. Can anyone help if you have faced the same situation?

Just in case anyone was looking for the answer!
Okay, so I was playing around with the settings and found the 'Working directory' field while creating a new task. Here you can specify the directory where you want to run your command, relative to where git checks out the materials. This serves the purpose of cd.
Will update this answer when I get something for mv.
Thanks!
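A note on the mv failure: GoCD exec tasks are not run through a shell, so wildcards like *tar.gz are never expanded, which is why the same command works in a terminal but fails on the agent. A minimal sketch of a workaround (the task fields here are assumptions, not the asker's exact configuration): set the task's command to /bin/sh and pass the whole operation as a single -c argument, so a real shell performs the expansion:
/bin/sh -c "mv *tar.gz newDirectory/"
The same trick covers cd, since you can chain commands inside one -c string instead of splitting them across tasks.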

Related

Podman support of target build?

Dockerfile:
FROM scratch as stage1
RUN 1
FROM scratch as stage2
RUN 2
FROM scratch AS stage3
RUN 3
If I run
docker build --target stage2 .
stage1 is skipped, and only stage2 (RUN 2) is built.
But if I run
podman build --target stage2 .
stage1 (RUN 1) is built as well.
Why doesn't podman skip the earlier stages the way docker does? Is there a way to get that behavior with Podman?
It turned out that the --target option in podman works differently from what I expected.
This is from release 1.7 of Buildah:
The buildah bud command now accepts a --target option which allows the build to only include the stages in the Dockerfile up to and including the specified stage.
So everything up to and including the specified target stage is built. (I wonder what the reason for this implementation is!)
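A quick way to observe the difference described above (a sketch, assuming docker and podman are both installed, and substituting a base image that can actually run commands, unlike scratch):
# Dockerfile:
#   FROM alpine AS stage1
#   RUN echo built-stage1
#   FROM alpine AS stage2
#   RUN echo built-stage2
podman build --target stage2 .   # runs stage1's RUN, then stage2's
docker build --target stage2 .   # runs only stage2's RUN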

Passing environment variables to docker from GitLab CI/CD job failing

I am having issues passing variables defined in the GitLab CI file to my Dockerfile.
My GitLab CI file looks like this
variables:
  IMAGE: "openjdk"
  IMAGE_TAG: "11-slim"

docker-image:
  extends: .build
  variables:
    DOCKER_IMAGE_VERSION: ${JDK_IMAGE}:${JDK_IMAGE_TAG}
My Docker file looks a bit like this:
# --- STAGE 1 ----------------------------------------------------------------
# Getting ARGS for build
ARG DOCKER_IMAGE_VERSION
# Start with a base image containing Java runtime
FROM ${DOCKER_IMAGE_VERSION} as build
Now I am getting the following error when the pipeline starts the docker build:
Step 1/7 : ARG DOCKER_IMAGE_VERSION
Step 2/7 : FROM ${DOCKER_IMAGE_VERSION} as build
base name (${DOCKER_IMAGE_VERSION}) should not be blank
Can someone help point out where I am going wrong?
Thanks!
Consider defining a global ARG and overriding it when you build.
Example:
ARG sample_TAG=test
FROM $sample_TAG
# An ARG declared before FROM goes out of scope after it; re-declare it inside the stage
ARG sample_TAG
VOLUME /opt
RUN mkdir /opt/sample-test
WORKDIR /opt/sample-test
RUN echo "image tag is ${sample_TAG}"
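Note that the ARG only receives a value if the build passes one in; with no --build-arg and no default, the FROM line expands to an empty string, which is exactly the "should not be blank" error above. A sketch of the invocation (this script line is an assumption, not taken from the original job):
docker build --build-arg DOCKER_IMAGE_VERSION="${DOCKER_IMAGE_VERSION}" .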

Use Docker-Windows for Gitlab-runner

I'm trying to use Docker on Windows to create a GitLab runner that builds a C++ application. It works so far, but I guess there are better approaches. Here's what I did.
Here's my initial Dockerfile:
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload, excluding workloads and components with known issues.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.VCTools `
    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
    --add Microsoft.VisualStudio.Component.VC.CMake.Project `
    --add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
    --locale en-US `
    || IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
This works so far and everything builds correctly. The main problem, however, is that the job succeeds even when the build fails. I suspect that my entrypoint is wrong: PowerShell is executed inside a cmd, and only the exit code of cmd is checked, which always indicates success.
So I tried to use PowerShell directly as the entrypoint. I need to set the environment variables via vcvars64.bat, but that is not trivial. I tried to execute the "Developer PowerShell for VS 2019" shortcut, but I can't call its link directly in the entrypoint, and the link looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash varies from installation to installation. Simply using this as the entrypoint didn't work either.
I then tried to use the Invoke-Environment script from https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1. This allows me to execute the .bat file from PowerShell like this:
Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to "C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1" so that it would be accessible by all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly. Also "Invoke-Environment" is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that executes the Invoke-Environment function with vcvars64.bat:
function Invoke-Environment {
    param
    (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )
    $Command = "`"" + $Command + "`""
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}
Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this in docker via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I need to manually call it to setup the variables:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - C:\Init64.ps1
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
Now everything works as intended: the build runs and the job only succeeds if the build succeeds.
However, I would prefer to setup my environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.
OK, after some struggling, here is my entry.bat that correctly loads the environment and propagates the error level/return value:
REM Load environment
call C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
REM If there are no parameters call cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters call cmd with /S for it to exit directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%
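For reference, a sketch of how entry.bat could be wired into the image (these two lines are an assumption; the answer above doesn't show its Dockerfile changes):
COPY entry.bat C:\\entry.bat
ENTRYPOINT ["cmd", "/S", "/C", "C:\\entry.bat"]
Any arguments the runner passes are then executed through cmd /S /C, and exit %errorlevel% propagates the build result, so a failed build now fails the job.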

mxnet sagemaker load model

I'm trying to load an already trained model from SageMaker MXNet.
I have the model.tar.gz file; however, when I try to do
%%bash
tar -xzf model.tar.gz
rm model.tar.gz

prefix = 'model_name'
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, 0)
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,480,480))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params)
I keep getting the error Error in operator multibox_target: [09:08:47] src/operator/contrib/./multibox_target-inl.h:225: Check failed: lshape.ndim() == 3 (0 vs. 3) Label should be [batch-num_labels-(>=5)] tensor
Can anyone help me with this?
I believe you have to run deploy.py before you can run predictions.
Check out incubator-mxnet\example\ssd\deploy.py, and note that the model files need to be in a subdirectory of the directory where deploy.py is located.
This worked for my resnet50-based model.
python deploy.py --network resnet50 --prefix model2/model_algo_1 --num-class 2 --data-shape 416
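For context on why this helps: deploy.py converts the trained symbol into an inference-only network, removing training operators such as multibox_target, which is exactly the operator demanding label shapes in the error above. A sketch of loading the converted checkpoint afterwards (the deploy_ output prefix and file names are assumptions based on the upstream ssd example):
import mxnet as mx
# assumed output files: model2/deploy_model_algo_1-symbol.json and -0000.params
sym, arg_params, aux_params = mx.model.load_checkpoint('model2/deploy_model_algo_1', 0)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 416, 416))])
mod.set_params(arg_params, aux_params)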
Thank you #lwebuser for the suggestion. I wrote an end-to-end example in a Jupyter notebook. Here is the link:
You can see the result:

Running the "exec" command in Jenkins "Execute Shell"

I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. To build the application I need to use version 4.7 of g++, which includes support for C++11. To use this version of g++ I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which properly build the C++ application at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after the "exec /usr/bin/scl enable devtoolset-1.1 bash" command, but instead just runs the "exec" command, terminates and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" of your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is starting a bash shell. I guess that the bash shell returns immediately, since you are in non-interactive mode. On top of that, exec replaces the current shell process with the given command instead of spawning a new one, so once that bash instance ends, the job's shell is gone and the remaining lines never run. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way.
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
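A sketch of what run_my_build.sh might contain (the filename and set -e are assumptions; the steps are the ones from the question):
#!/bin/bash
set -e  # stop at the first failing step so Jenkins sees the failure
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
Make it executable (chmod +x run_my_build.sh) and reference it by a path the job can resolve.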
This kind of thing is normally used with "find" commands, but it may work here too. Rather than running two or three processes, you run one sh that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
Or you can put the whole lot into a script and exec that.
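Adapted to this question (a sketch; note the quoting, which makes scl hand the whole string to a single shell):
exec /usr/bin/scl enable devtoolset-1.1 "libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install && cd procs && ./makem.sh /home/tomcat/.jenkins/workspace/project"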