I'm running Jenkins on a Linux host to automate the build of a C++ application. Building the application requires g++ 4.7, which includes support for C++11. To use this version of g++ I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which build the C++ application properly at the command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins does not run any of the commands after "exec /usr/bin/scl enable devtoolset-1.1 bash"; it just runs the exec command, terminates, and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
I needed to look up what scl actually does.
Examples

scl enable example 'less --version'
    runs command 'less --version' in the environment with collection 'example' enabled

scl enable foo bar bash
    runs bash instance with foo and bar Software Collections enabled
So what you are doing is running a bash shell. I suspect that this bash shell returns immediately because it is non-interactive. exec replaces the current shell process with the given command instead of starting a new one, so when the newly opened bash exits, it also ends your build shell prematurely. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
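A minimal sketch of what run_my_build.sh could contain, reusing the build steps from the question (remember to make it executable; the set -e line is an addition so a failing step actually fails the Jenkins build):

#!/bin/bash
# run_my_build.sh -- sketch of the build steps from the question
set -e  # abort on the first failing step so Jenkins marks the build as failed
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project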
This technique is more common with "find" commands, but it may work here too. Rather than running two or three processes, you run one "sh" that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
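For instance, adapted to the build in the question, the chained command string can be handed to scl directly, since scl accepts a quoted command string (a sketch, not tested here):

exec /usr/bin/scl enable devtoolset-1.1 "libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install && cd procs && ./makem.sh /home/tomcat/.jenkins/workspace/project"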
Or you can put the whole lot into a script and exec that.
Related
I'm trying to use Docker on Windows to create a GitLab Runner to build a C++ application. It works so far, but I guess there are better approaches. Here's what I did:
Here's my initial Docker Container:
# escape=`
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the C++ workload and the components needed for CMake builds.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.VCTools `
    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
    --add Microsoft.VisualStudio.Component.VC.CMake.Project `
    --add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
    --locale en-US `
    || IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
This works so far and everything builds correctly. The main problem, however, is that if the build fails, the job succeeds anyway. I suspect that my entrypoint is wrong: powershell is executed inside of a cmd, and only the exit code of cmd is checked, which always indicates success.
So I tried to use powershell directly as the entrypoint. I need to set environment variables via vcvars64.bat, but that is not trivial to do from PowerShell. I tried to execute the "Developer PowerShell for VS 2019" shortcut, but I can't execute the link in the entrypoint directly, and the link's target looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash also varies from installation to installation. Simply using this as the entrypoint didn't work either.
I then tried to use the Invoke-Environment Script taken from "https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1". This allows me to execute the .bat file from powershell like this:
Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to "C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1" so that it would be accessible by all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly. Also "Invoke-Environment" is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that executes the Invoke-Environment function with vcvars64.bat:
function Invoke-Environment {
    param
    (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )

    $Command = "`"" + $Command + "`""
    # Run the batch file silently, then dump the resulting environment with "set"
    # and import each NAME=value pair into the current PowerShell session.
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}
Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this in docker via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I need to call it manually to set up the variables:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - C:\Init64.ps1
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
Now everything works as intended: the build works, and the job only succeeds if the build succeeds.
However, I would prefer to set up my environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.
OK, after some struggling, here is my entry.bat that correctly loads the environment and propagates the error level/return value:
REM Load the environment
call C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
REM If there are no parameters, start an interactive cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters, call cmd with /S so that it exits directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%
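To wire this into the image, the batch file can be copied in and used as the entry point, e.g. (a sketch, assuming entry.bat sits next to the Dockerfile):

COPY entry.bat C:\entry.bat
ENTRYPOINT ["cmd", "/S", "/C", "C:\\entry.bat"]

The cmd /S /C wrapper makes the container exit with entry.bat's exit code, so a failed build fails the job.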
If I want to run a specific command (with arguments) under Software Collections, I can use this command:
scl enable python27 "ls /tmp"
However, if I try to make a shell script that has a similar command as its shebang line, I get errors:
$ cat myscript
#!/usr/bin/scl enable python27 "ls /tmp"
echo hello
$ ./myscript
Unable to open /etc/scl/prefixes/"ls!
What am I doing wrong?
You should try using -- instead of surrounding your command with quotes.
scl enable python27 -- ls /tmp
I was able to make a python script that uses the rh-python35 collection with this shebang:
#!/usr/bin/scl enable rh-python35 -- python
import sys
print(sys.version)
The parsing of arguments in the shebang line is not really well-defined. From man execve:
The semantics of the optional-arg argument of an interpreter script vary across implementations. On Linux, the entire string following the interpreter name is passed as a single argument to the interpreter, and this string can include white space. However, behavior differs on some other systems. Some systems use the first white space to terminate optional-arg. On some systems, an interpreter script can have multiple arguments, and white spaces in optional-arg are used to delimit the arguments.
No matter what, argument splitting based on quotes is not supported. So when you write:
#!/usr/bin/scl enable python27 "ls /tmp"
It's very possible that what gets invoked is (using bash notation):
'/usr/bin/scl' 'enable' 'python27' '"ls' '/tmp"'
This is probably why scl tries to look up a collection named "ls and open /etc/scl/prefixes/"ls.
But it is just as likely that the shebang evaluates to:
'/usr/bin/scl' 'enable python27 "ls /tmp"'
And that would fail as well, since it won't be able to find a command named enable python27 "ls /tmp" for scl to execute.
There are a few workarounds you can use.
You can call your script via scl:
$ cat myscript
#!/bin/bash
echo hello
$ scl enable python27 ./myscript
hello
You can also use the heredoc notation, but it might lead to subtle issues. I personally avoid this:
$ cat ./myscript
#!/bin/bash
scl enable python27 -- <<EOF
echo hi
echo \$X_SCLS
EOF
$ bash -x myscript
+ scl enable python27 --
hi
python27
You can see one of the gotchas already: I had to write \$X_SCLS to access the environment variable instead of just $X_SCLS.
Edit: Another option is to have two scripts: one that has the actual code, and a second that simply does scl enable python27 $FIRST_SCRIPT. Then you won't have to remember to type scl ... manually.
The software collections documentation may also be helpful. In particular, you can try:
cat myscript.sh | scl enable python27 -
As well as permanently enabling a software collection
source scl_source enable python27
./myscript.sh
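scl_source also works from inside the script itself, which keeps the script self-contained (a sketch, assuming the python27 collection is installed):

#!/bin/bash
# Enable the collection for the remainder of this script.
source scl_source enable python27
python --version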
I'm working on CentOS 7 and regular sudo commands (e.g. sudo yum update, etc.) are working fine. However, one of my sudo commands requires preserving environment variables, so I used:
sudo -E ./build/unit-tests
and I get this error:
/var/tmp/sclyZMkcN: line 8: -E: command not found
It appears sudo is not recognizing the -E option on CentOS 7. What can I do in this case? Any alternatives or possible fix?
I've recently come across exactly the same problem. I tried to execute a script with sudo -E, which caused the above-mentioned -E: command not found error.
The reason turned out to be the Red Hat Developer Toolset providing its own, broken sudo. A solution is to use the full path to the system sudo to make sure the good one is used, i.e.
/usr/bin/sudo -E ./some_script.sh
If you know which variables to preserve, you can use env to pass them explicitly on the command line.
sudo env foo="$foo" bar="$bar" ./build/unit-tests
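If the same variables are needed on every run, another option is sudo's env_keep setting (a sketch; FOO and BAR are stand-ins for your variable names, and the file should be edited with visudo):

# hypothetical /etc/sudoers.d/preserve-env snippet, edited via visudo -f
Defaults env_keep += "FOO BAR"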
How do you configure Tox to source a file before it runs your test command?
I tried the obvious:
commands = source /path/to/my/setup.bash; ./mytestcommand
But Tox just reports the ERROR: InvocationError: could not find executable 'source'
I know Tox has a setenv parameter, but I want to use my setup.bash and not have to copy and paste its contents into my tox.ini.
tox uses the exec system call to run commands, not a shell, and of course exec doesn't know how to run source, which is a shell builtin. You need to run the command explicitly with bash, and you need to whitelist bash to avoid warnings from tox. That is, your tox.ini should look somewhat like this:
[testenv]
commands =
    bash -c 'source /path/to/my/setup.bash; ./mytestcommand'
whitelist_externals =
    bash
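If the test command should also receive tox's positional arguments, they can be forwarded through bash (a sketch using tox's {posargs} substitution; the trailing "bash" fills in $0):

[testenv]
commands =
    bash -c 'source /path/to/my/setup.bash; ./mytestcommand "$@"' bash {posargs}
whitelist_externals =
    bash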
Following the accepted answer to Running phpunit tests using HHVM (HipHop), I attempted to run some tests:
unit-tests/ [develop] > hhvm $(which phpunit) --colors -c phpunit.xml --testsuite all .
/usr/bin/env php -d allow_url_fopen=On -d detect_unicode=Off /usr/local/Cellar/phpunit/4.3.4/libexec/phpunit-4.3.4.phar $*
It appears that this is a command to run the tests (which it does), but I'm confused about
why it's printing this command instead of just running the tests
whether executing that command even uses HHVM, since it starts with /usr/bin/env php...
Does anyone have any insight into this? Thanks so much!
What happened here is that you installed PHPUnit using Homebrew, so it created a wrapper script for the actual PHAR file. That wrapper is a Bash script that runs the PHPUnit PHAR, and it is what you're trying to get HHVM to run. Since it's not a PHP or Hack script, HHVM just outputs the Bash script directly.
Instead, you probably want to try to execute $(brew --prefix phpunit)/libexec/phpunit*.phar
e.g.: hhvm $(brew --prefix phpunit)/libexec/phpunit*.phar --colors -c phpunit.xml --testsuite all .
The wildcard is so that you don't need to specify the version of PHPUnit being used.
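If you run this often, a small wrapper script keeps the invocation short (a sketch; hhvm-phpunit is a hypothetical name for a file on your PATH):

#!/bin/bash
# hhvm-phpunit: run the Homebrew PHPUnit PHAR under HHVM, forwarding all arguments
exec hhvm $(brew --prefix phpunit)/libexec/phpunit-*.phar "$@"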