If I want to run a specific command (with arguments) under Software Collections, I can use this command:
scl enable python27 "ls /tmp"
However, if I try to make a shell script that has a similar command as its shebang line, I get errors:
$ cat myscript
#!/usr/bin/scl enable python27 "ls /tmp"
echo hello
$ ./myscript
Unable to open /etc/scl/prefixes/"ls!
What am I doing wrong?
You should try using -- instead of surrounding your command with quotes.
scl enable python27 -- ls /tmp
I was able to make a python script that uses the rh-python35 collection with this shebang:
#!/usr/bin/scl enable rh-python35 -- python
import sys
print(sys.version)
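Making it executable and running it directly then prints the collection's interpreter version, along these lines (assuming the file is saved as myscript.py; the exact output will differ):
$ chmod +x myscript.py
$ ./myscript.py
3.5.x (default, ...)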
The parsing of arguments on the shebang line is not really defined. From man execve:
The semantics of the optional-arg argument of an interpreter script vary across implementations. On Linux, the entire string following the interpreter name is passed as a single argument to the interpreter, and this string can include white space. However, behavior differs on some other systems. Some systems use the first white space to terminate optional-arg. On some systems, an interpreter script can have multiple arguments, and white spaces in optional-arg are used to delimit the arguments.
No matter what, argument splitting based on quotes is not supported. So when you write:
#!/usr/bin/scl enable python27 "ls /tmp"
It's very possible that what gets invoked is (using bash notation):
'/usr/bin/scl' 'enable' 'python27' '"ls' '/tmp"'
This is probably why it tries to open a file literally named "ls under /etc/scl/prefixes/.
But it is just as likely that the shebang evaluates to:
'/usr/bin/scl' 'enable python27 "ls /tmp"'
And that would fail since it won't be able to find a command named enable python27 "ls /tmp" for scl to execute.
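You can see the single-argument behaviour on Linux with a small experiment (a hypothetical demo script; the exact error wording depends on your coreutils version):
$ cat /tmp/demo
#!/usr/bin/env printf [%s]\n
$ chmod +x /tmp/demo
$ /tmp/demo
env: 'printf [%s]\n': No such file or directory
On Linux, env receives the whole string printf [%s]\n as a single argument and tries to execute a program with that literal name, spaces included, which is exactly the kind of failure scl runs into above. On a system that splits the optional argument, the same script would instead run printf and print [/tmp/demo].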
There are a few workarounds you can use.
You can call your script via scl:
$ cat myscript
#!/bin/bash
echo hello
$ scl enable python27 ./myscript
hello
You can also use the heredoc notation, but it might lead to subtle issues. I personally avoid this:
$ cat ./myscript
#!/bin/bash
scl enable python27 -- <<EOF
echo hi
echo \$X_SCLS
EOF
$ bash -x myscript
+ scl enable python27 --
hi
python27
You can see one of the gotchas already: I had to write \$X_SCLS to access the environment variable instead of just $X_SCLS.
Edit: Another option is to have two scripts: one that has the actual code, and a second that simply does scl enable python27 $FIRST_SCRIPT. Then you won't have to remember to type scl ... manually.
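A minimal sketch of that wrapper pair, assuming the real code lives in a file I'm calling myscript.real (a hypothetical name):
$ cat myscript.real
#!/usr/bin/env python
# env picks up the collection's python because scl puts its bin directory first on PATH
import sys
print(sys.version)
$ cat myscript
#!/bin/bash
# thin wrapper: re-runs the real script inside the python27 collection, forwarding any arguments
exec scl enable python27 -- "$(dirname "$0")/myscript.real" "$@"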
The Software Collections documentation may also be helpful. In particular, you can try
cat myscript.sh | scl enable python27 -
as well as enabling a software collection for the remainder of the current shell session:
source scl_source enable python27
./myscript.sh
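If scl_source is available on your system (it ships with reasonably recent scl-utils packages, as far as I know), you can also enable the collection from inside the script itself and keep an ordinary shebang:
#!/bin/bash
# enable the collection for the remainder of this script
source scl_source enable python27
python --version   # now resolves to the collection's python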
Related
I am currently having problems using TinyTeX in a conda environment with Snakemake. I have to install the TinyTeX installation files using the command tinytex::install_tinytex() before running the pipeline. This installs TinyTeX outside of the created environment (which isn't that big of a problem, but not preferred either). The main problem is that every time I execute my Snakemake pipeline, it will try to reinstall TinyTeX, which I don't want. Could anyone tell me what the easiest way is to check whether it's installed already? Should I be using the command Rscript -e \"tinytex:::is_tinytex()\" with an if-statement? And what is the best way to write that if-statement by calling Rscript -e in Snakemake? Or should I just write a boolean text file on the first run which specifies whether TinyTeX has been installed before?
It kinda sucks that the TinyTeX conda dependency doesn't work on its own without additional installation...
Snakemake rule (ignore input/output):
rule assembly_report_rmarkdown:
    input:
        rules.assembly_graph2image_bandage.output,
        rules.assembly_assessment_quast.output,
        rules.coverage_calculator_shortreads.output,
        rules.coverage_calculator_longreads.output
    output:
        config["outdir"] + "Hybrid_assembly_report.pdf"
    conda:
        "envs/r-rmarkdown.yaml"
    shell:
        """
        cp report/RMarkdown/Hybrid_assembly_report.Rmd {config[outdir]}Hybrid_assembly_report.Rmd
        Rscript -e \"tinytex::install_tinytex()\"
        Rscript -e \"rmarkdown::render('{config[outdir]}Hybrid_assembly_report.Rmd')\"
        rm -f {config[outdir]}Hybrid_assembly_report.Rmd {config[outdir]}Hybrid_assembly_report.tex
        """
Conda YAML:
name: r-rmarkdown
channels:
  - conda-forge
  - bioconda
dependencies:
  - r-base=4.0.3
  - r-rmarkdown=2.5
  - r-tinytex=0.27
Thanks in advance.
I think I've solved the issue. Instead of calling Rscript -e, I have put the following if-statement in the setup chunk in R Markdown (which runs before any other code, if I'm correct). I then uninstalled TinyTeX to verify that it would only be installed once, which it was.
knitr::opts_chunk$set(echo = TRUE)
library(knitr)
if (!tinytex:::is_tinytex()) {
  tinytex::install_tinytex()
}
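If you would rather keep the check in the Snakemake shell directive instead, a one-line sketch (untested) that could replace the unconditional install line is:
Rscript -e 'if (!tinytex:::is_tinytex()) tinytex::install_tinytex()'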
How do you configure Tox to source a file before it runs your test command?
I tried the obvious:
commands = source /path/to/my/setup.bash; ./mytestcommand
But Tox just reports the ERROR: InvocationError: could not find executable 'source'
I know Tox has a setenv parameter, but I want to use my setup.bash and not have to copy and paste its contents into my tox.ini.
tox uses the exec system call to run commands, not a shell, and of course exec doesn't know how to run source. You need to explicitly run the command with bash, and you need to whitelist bash to avoid warnings from tox. That is, your tox.ini should look somewhat like this:
[testenv]
commands =
    bash -c 'source /path/to/my/setup.bash; ./mytestcommand'
whitelist_externals =
    bash
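Note that newer tox releases (3.18 and later, if I remember correctly) renamed the option, so the equivalent configuration would be:
[testenv]
commands =
    bash -c 'source /path/to/my/setup.bash; ./mytestcommand'
allowlist_externals =
    bash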
I have a Python script that I want to execute from the terminal, but I don't want to use the command python scriptName.py; instead, I'd like to just type scriptName. Is that possible? If it is, how?
I had a look here and here, but it doesn't work (probably because I'm on a different OS).
I'm using Python 2.7.9 on OS X Yosemite (10.10.3).
Put this as the first line in your Python script:
#!/usr/bin/python
(or wherever your Python interpreter lives).
Then give the script the executable bit:
chmod +x scriptName.py
Now you should be able to execute it like ./scriptName.py. You can then put a symlink without the .py extension somewhere in your path.
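Putting the pieces together, the whole flow looks roughly like this (assuming /usr/local/bin is on your PATH; adjust the interpreter path for your setup):
$ head -1 scriptName.py
#!/usr/bin/python
$ chmod +x scriptName.py
$ ln -s "$PWD/scriptName.py" /usr/local/bin/scriptName
$ scriptName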
I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. In order to build the application I need to use the 4.7 version of g++ which includes support for c++11. In order to use this version of g++ I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which build the C++ application properly at the command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after the "exec /usr/bin/scl enable devtoolset-1.1 bash" command, but instead just runs the "exec" command, terminates and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
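A quick way to confirm that the toolset is really active (devtoolset-1.1 should provide g++ 4.7, as mentioned in the question) is to ask the compiler for its version right after sourcing it:
source /opt/rh/devtoolset-1.1/enable
g++ --version   # should now report the devtoolset g++ (4.7.x) rather than the system compiler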
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is running a bash shell. I guess that the bash shell returns immediately, since you are in non-interactive mode. exec runs the command within the shell without creating a new shell; that means that when the newly opened bash exits, it also ends your shell prematurely. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
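A sketch of what run_my_build.sh could contain, reusing the build steps from the question (remember to make it executable with chmod +x):
#!/bin/bash
# all build steps, executed inside the devtoolset-1.1 environment started by scl
set -e   # abort on the first failing step
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project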
This kind of thing is normally used with "find" commands, but it may work here too. Rather than running two or three processes, you run one "sh" that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
Or you can put the whole lot into a script and exec that.
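Adapted to the steps in the question (and using the -- form of scl shown earlier), that could look something like this (a sketch, untested):
exec /usr/bin/scl enable devtoolset-1.1 -- sh -c "libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install && cd procs && ./makem.sh /home/tomcat/.jenkins/workspace/project"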
I have a (maybe) unusual situation. I need to run VMware CLI commands on a Windows box, but via the Cygwin CLI inside a shell script. I can NOT change this for now, so any suggestions along the lines of "why not do this instead" may be futile, although appreciated. Here's a sample script.
#!/bin/bash
# Paths for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
When I run the above script, I get
Can't locate VMware/VIRuntime.pm in @INC (@INC contains:
/usr/lib/perl5/site_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/site_perl/5.14
/usr/lib/perl5/vendor_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/vendor_perl/5.14
/usr/lib/perl5/5.14/i686-cygwin-threads-64int /usr/lib/perl5/5.14
/usr/lib/perl5/site_perl/5.10 /usr/lib/perl5/vendor_perl/5.10
/usr/lib/perl5/site_perl/5.8 .) at /cygdrive/c/Program Files
(x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8. BEGIN
failed--compilation aborted at /cygdrive/c/Program Files
(x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8.
I can run the same vmware-cmd.pl script from a DOS command prompt
c:> vmware-cmd.pl
So I know my installation is correct.
Any clues please?
This post gave me the idea to fix it. But now I get a core dump.
How is Perl's @INC constructed? (aka What are all the ways of affecting where Perl modules are searched for?)
The added line is the export PERL5LIB line.
#!/bin/bash
# Path for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
export PERL5LIB=$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
I solved it by going through my elbow to get to my a**, as the saying goes.
What I did was:
- Installed the VMware CLI on my Windows box in the default directory
- Added environment variables for the VMware main directory, the bin directory, the Perl directory and the Perl/bin directory
- Added these environment variables to my PATH variable
Then I created a vmware-cli.bat file that takes parameters and concatenates them into a VMware CLI command with the correct values. For example, I call this to list the VMs on the server:
cygwin:> ./vmware-cli.bat vmware-cmd.pl --server MyServer --username User --password PW -l
Inside the batch file I essentially do
REM Get first parm as the command, and then concatenate the rest of the parms
set VCLI_CMD=%1
shift
:LOOP
if %1x==x goto :EXECUTE
set VCLI_CMD=%VCLI_CMD% %1
shift
goto :LOOP
:EXECUTE
%VCLI_CMD%
This is an alternative to the previously posted answer that allows you to keep everything in the same shell script:
VIMCMD="/cygdrive/C/Program Files (x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl"
VIMCMD_DOS=$(cygpath -d "$VIMCMD")
DOS_VIMCMD="cmd /c $VIMCMD_DOS"
Then you can run:
$ $DOS_VIMCMD --version
vSphere SDK for Perl version: 6.0.0
Script 'vmware-cmd.pl' version: 6.0.0