How do you configure Tox to source a file before it runs your test command?
I tried the obvious:
commands = source /path/to/my/setup.bash; ./mytestcommand
But tox just reports: ERROR: InvocationError: could not find executable 'source'
I know Tox has a setenv parameter, but I want to use my setup.bash and not have to copy and paste its contents into my tox.ini.
tox uses the exec system call to run commands rather than a shell, and exec does not know how to run source, which is a shell builtin. You need to run the command explicitly with bash, and you need to whitelist bash to avoid warnings from tox. Your tox.ini should look something like this:
[testenv]
commands =
    bash -c 'source /path/to/my/setup.bash; ./mytestcommand'
whitelist_externals =
    bash
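As a quick sanity check outside tox, you can confirm that source is a shell builtin rather than an executable on disk, which is why an exec-style invocation cannot find it:
$ type source
source is a shell builtin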
Related
I'm following along with Google's Python class, and the person in the videos always runs his scripts from the command line using "./". Whenever I try it, I just get a syntax error. How can I use ./ to run scripts? I'm using Windows 10.
To run a script from the command line you need to use the syntax
python3 script.py
Now on Unix systems, it's possible to add a shebang to the first line of the script, as follows:
#!/usr/bin/env python3
This then allows the shell syntax ./name.py to work. But Windows doesn't have this mechanism. Instead, you need to create an association between the .py extension and the Python executable (right-click, 'Open with'), or just use the full syntax. Both require the Python executable to be on your PATH, and on Windows both Python 2 and 3 generally have the same executable name.
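On a Unix-like system the two pieces fit together like this (a minimal sketch reusing script.py from above; on Windows you would instead run python script.py or rely on the file association):
chmod +x script.py     # Unix only: mark the script executable
./script.py            # the #!/usr/bin/env python3 shebang selects the interpreter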
I'm having trouble running a .sh file in python. When I type in the location of the .sh file (/home/pi/file/script.sh) the script runs perfectly.
I'm trying to run this script from my Python 2 script, and I've tried the following methods:
subprocess.Popen(['bash', 'location of .sh'])
subprocess.call(['location of .sh'])
os.popen(['location of .sh'])
When I run the Python script, I get an error from rclone saying "Command sync needs 2 arguments maximum".
My .sh file just includes:
#!/bin/sh
sudo /usr/local/bin/rclone -v sync /home/pi/some_project_data remote:rclone --delete-before --include *.csv --include *.py
I'm not sure why running the .sh file in a terminal works fine but this error pops up when I run it from Python.
Your script fails whenever it is run from a directory containing two or more .csv or .py files; this is true from a terminal as well as via Python.
To avoid that, quote your patterns so the shell doesn't expand them:
#!/bin/sh
sudo /usr/local/bin/rclone -v sync /home/pi/some_project_data remote:rclone \
--delete-before --include "*.csv" --include "*.py"
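You can watch the expansion happen with a plain echo; in a directory containing, say, a.csv and b.csv (hypothetical names), the unquoted pattern is rewritten before rclone ever sees it:
$ echo --include *.csv        # unquoted: the shell expands the pattern first
--include a.csv b.csv
$ echo --include "*.csv"      # quoted: the literal pattern is passed through
--include *.csv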
Please try:
os.popen('bash locationof.sh')
ex:
os.popen('bash /home/script.sh')
That worked on my machine. If you place square brackets around the string, Python treats it as a list, and os.popen doesn't accept a list; it accepts a single string.
If the script itself doesn't work, this won't fix that, but it will at least run it. If it still doesn't work, try running a script that contains only something like
touch z.txt
and see if z.txt appears in the file explorer. If it does, the call from Python is working and the problem is in your .sh file itself.
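In other words, the temporary diagnostic version of the script can be as small as this (z.txt is just an arbitrary marker file):
#!/bin/sh
# if z.txt appears after the Python call, the invocation works and the
# problem is inside the original rclone command
touch z.txt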
I have a Python script that I want to execute from the terminal, but I don't want to use the command python scriptName.py; instead, I'd like to just type scriptName. Is that possible? If it is, how?
I had a look here and here, but it doesn't work (probably because I'm on a different OS).
I'm using Python 2.7.9 on OS X Yosemite (10.10.3).
Put this as the first line in your Python script:
#!/usr/bin/python
(or wherever your Python interpreter lives).
Then give the script the executable bit:
chmod +x scriptName.py
Now you should be able to execute it like ./scriptName.py. You can then put a symlink without the .py extension somewhere on your PATH.
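Putting it together (a sketch that assumes /usr/local/bin exists and is on your PATH; adjust the target directory to taste):
chmod +x scriptName.py
./scriptName.py                                        # runs via the shebang
ln -s "$PWD/scriptName.py" /usr/local/bin/scriptName   # optional: drop the extension
scriptName                                             # now works from any directory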
I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. In order to build the application I need to use version 4.7 of g++, which includes support for C++11. To use this version of g++ I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which build the C++ application correctly at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after the "exec /usr/bin/scl enable devtoolset-1.1 bash" command, but instead just runs the "exec" command, terminates and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
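If you want to confirm the toolchain is active before the build starts, a quick check (assuming devtoolset-1.1 provides the g++ 4.7 mentioned in the question) is:
source /opt/rh/devtoolset-1.1/enable
g++ --version    # should now report the devtoolset g++ (4.7.x), not the system compiler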
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is starting a bash shell. I guess that this bash returns immediately, since you are in non-interactive mode. exec replaces the current shell with the command instead of running it in a new process, so when that bash exits, your build-step shell is gone too and none of the following commands run. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
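A minimal sketch of what run_my_build.sh might contain, simply collecting the steps from the question (the set -e line is an addition so the script stops at the first failing step):
#!/bin/bash
set -e
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project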
This kind of thing is normally seen in find commands, but it may work here too. Rather than running two or three processes, you run one sh that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I am hoping you can adapt the concept to fit your needs.
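Applied to the build in the question, that could look like the following (a guess that relies on scl handing the quoted string to a shell, as its 'less --version' example suggests; the line is broken only for readability):
exec /usr/bin/scl enable devtoolset-1.1 \
    'libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install && cd procs && ./makem.sh /home/tomcat/.jenkins/workspace/project'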
Or you can put the whole lot into a script and exec that.
I have a (maybe) unusual situation. I need to run VMware CLI commands in a Windows box, but via the cygwin CLI inside a shell script. I can NOT change this for now, so any suggestions to "why not do this instead" may be futile, although appreciated. Here's a sample script.
#!/bin/bash
# Paths for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
When I run the above script, I get
Can't locate VMware/VIRuntime.pm in @INC (@INC contains:
/usr/lib/perl5/site_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/site_perl/5.14
/usr/lib/perl5/vendor_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/vendor_perl/5.14
/usr/lib/perl5/5.14/i686-cygwin-threads-64int /usr/lib/perl5/5.14
/usr/lib/perl5/site_perl/5.10 /usr/lib/perl5/vendor_perl/5.10
/usr/lib/perl5/site_perl/5.8 .) at /cygdrive/c/Program Files
(x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8. BEGIN
failed--compilation aborted at /cygdrive/c/Program Files
(x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8.
I can run the same vmware-cmd.pl script from a DOS command prompt
c:> vmware-cmd.pl
So I know my installation is correct.
Any clues please?
This post gave me the idea to fix it. But now I get a core dump.
How is Perl's @INC constructed? (aka What are all the ways of affecting where Perl modules are searched for?)
The added line is the export PERL5LIB line.
#!/bin/bash
# Path for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
export PERL5LIB=$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
I solved it by going through my elbow to get to my a**, as the saying goes.
What I did was
- Install vmware cli on my Windows box to the default directory
- Added environment variables for the VMware main directory, the bin directory, the Perl directory and the Perl/bin directory
- Added these environment variables to my PATH variable.
Then I created a vmware-cli.bat file that takes parameters and concatenates them into a vSphere CLI command with the correct values. For example, I call this to list the VMs on the server:
cygwin:> ./vmware-cli.bat vmware-cmd.pl --server MyServer --username User --password PW -l
Inside the batch file I essentially do:
REM Get first parm as the command, and then concatenate the rest of the parms
set VCLI_CMD=%1
shift
:LOOP
if %1x==x goto :EXECUTE
set VCLI_CMD=%VCLI_CMD% %1
shift
goto LOOP
:EXECUTE
%VCLI_CMD%
This is an alternative to the previous post that lets you keep everything in the same shell script:
VIMCMD="/cygdrive/C/Program Files (x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl"
VIMCMD_DOS=$(cygpath -d "$VIMCMD")
DOS_VIMCMD="cmd /c $VIMCMD_DOS"
Then you can run:
$ $DOS_VIMCMD --version
vSphere SDK for Perl version: 6.0.0
Script 'vmware-cmd.pl' version: 6.0.0
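With that wrapper in place, the vmware-cmd.pl call shown earlier in the question can be run the same way (server name and credentials are placeholders, as before):
$ $DOS_VIMCMD --server MyServer --username User --password PW -l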