Why doesn't Fabric see my .bash_profile?

In Fabric, when I try to use any aliases or functions from my .bash_profile file, they are not recognized. For instance, my .bash_profile contains alias c='workon django-canada', so when I type c in iTerm or Terminal, workon django-canada is executed.
My fabfile.py contains
def test():
    local('c')
But when I try fab test it throws this at me:
[localhost] local: c
/bin/sh: c: command not found
Fatal error: local() encountered an error (return code 127) while executing 'c'
Aborting.
Other Fabric functions work fine. Do I have to specify my bash profile somewhere in fabric?

EDIT - As it turns out, this was fixed in Fabric 1.4.4. From the changelog:
[Feature] #725: Updated local to allow override of which local shell is used. Thanks to Mustafa Khattab.
So the original question would be fixed like this:
def test():
    local('c', shell='/bin/bash')
I've left my original answer below, which only relates to Fabric version < 1.4.4.
Because local doesn't use bash. You can see it clearly in your output
/bin/sh: c: command not found
See? It's using /bin/sh instead of /bin/bash. This is because Fabric's local command behaves a little differently internally than run. The local command is essentially a wrapper around the subprocess.Popen Python class.
http://docs.python.org/library/subprocess.html#popen-constructor
And here's your problem. Popen defaults to /bin/sh. It's possible to specify a different shell if you are calling the Popen constructor yourself, but you're using it through Fabric. And unfortunately for you, Fabric gives you no means to pass in a shell, like /bin/bash.
Sorry that doesn't offer you a solution, but it should answer your question.
EDIT
Here is the code in question, pulled directly from fabric's local function defined in the operations.py file:
p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream,
                     stderr=err_stream)
(stdout, stderr) = p.communicate()
As you can see, it does NOT pass in anything for the executable keyword. This causes it to use the default, which is /bin/sh. If it used bash, it'd look like this:
p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream,
                     stderr=err_stream, executable="/bin/bash")
(stdout, stderr) = p.communicate()
But it doesn't. Which is why they say the following in the documentation for local:
local is simply a convenience wrapper around the use of the builtin Python subprocess module with shell=True activated. If you need to do anything special, consider using the subprocess module directly.
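For example, a minimal sketch of dropping down to subprocess directly, per that suggestion (the command string is the one used in the answers below; note that a login shell reads ~/.bash_profile, but bash still skips alias expansion in non-interactive shells, so calling workon directly is more reliable than calling the alias c):
import subprocess

# Run the command through a login bash (-l) so ~/.bash_profile is sourced.
# Calling workon directly sidesteps alias expansion, which bash disables in
# non-interactive shells.
subprocess.check_call(
    ['/bin/bash', '-l', '-c', 'workon django-canada && python manage.py runserver']
)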

One workaround is simply to wrap whatever command you have in a bash invocation:
@task
def do_something_local():
    local("/bin/bash -l -c 'run my command'")
If you need to do a lot of these, consider creating a custom context manager.
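A minimal sketch of such a context manager, assuming Fabric 1.x's fabric.api.local (the helper name login_bash is mine, not part of Fabric):
from contextlib import contextmanager
from fabric.api import local, task  # Fabric 1.x

@contextmanager
def login_bash():
    """Yield a helper that runs commands through a login bash shell,
    so ~/.bash_profile is sourced before each command."""
    def run(command):
        # -l: login shell (reads .bash_profile); -c: execute the command string.
        # Quoting is naive here: commands containing double quotes need escaping.
        return local('/bin/bash -l -c "%s"' % command)
    yield run

@task
def do_something_local():
    with login_bash() as bash:
        bash('run my command')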

It looks like you're trying to use virtualenvwrapper locally. You'll need to make your local command string look like this:
local("/bin/bash -l -c 'workon django-canada && python manage.py runserver'")
Here's an example by yours truly that does that for you in a context manager.

Related

Why can't I run Python scripts from the command-line interactive session using "./name.py"?

I'm following along with Google's Python class, and the person in the videos always runs his scripts from the interactive session in command line using "./". Whenever I try it, I just get a syntax error. How can I use ./ to run scripts? I'm using Windows 10
To run a script from the command line you need to use the syntax
python3 script.py
Now on Unix systems, it's possible to add a shebang to the first line of the script, as follows:
#!/usr/bin/env python3
This then allows the shell syntax ./name.py to work. But Windows doesn't have this mechanism. Instead, you need to create an 'association' between the .py extension and the Python executable ('right click', 'open with'), or just use the full syntax. Both require the Python executable to be in your path, and generally on Windows both Python 2 and 3 will have the same executable name.
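As a concrete illustration (the filename is arbitrary), a script like the one below runs as ./hello.py on Unix once it has been marked executable, while on Windows you invoke the interpreter explicitly:
#!/usr/bin/env python3
# hello.py
# Unix:    chmod +x hello.py && ./hello.py
# Windows: python hello.py   (or: py hello.py, via the py launcher)
print("hello from the command line")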

Termux says "Bad interpreter: No such file or directory"

I have a problem and hope someone can help me. I am currently trying to write a script for Termux or Termux:Task. My script currently looks like this:
#!/data/data/com.termux/files/usr/bin/bash
cd /./sdcard/www/public/
wp post list
sleep 5
Every time I load the script I get the following error message:
/data/data/com.termux/files/usr/bin/wp: /usr/bin/env: bad interpreter: No such file or directory.
I've been looking for a solution to my problem for hours, unfortunately without success.
I am using an extension for Termux called "WordPress CLI". When I start Termux and enter the commands individually, everything works. But as soon as I write the commands into a sh script and run it, it doesn't work anymore. :(
Can anyone help me?
Thanks a lot
This is a simple error: you can fix it by replacing !/data/data/com.termux/files/usr/bin/bash with #!/data/data/com.termux/files/usr/bin/bash (note the leading #).
Please tell me if you get the error again.
Try with #!/usr/bin/env bash in the shebang line.
Termux-exec allows you to execute scripts with shebangs for traditional Unix file structures. So shebangs like #!/bin/sh and #!/usr/bin/env python should be able to run without termux-fix-shebang.
From https://wiki.termux.com/wiki/Termux-exec
According to doc:
Why do I keep getting a '/bin/sh bad interpreter' error?
This error is thrown due to accessing the script interpreter at a nonexistent location.
Termux does not have common directories like /bin, /sbin, /usr/bin at their standard place. There is an exception for certain devices where /bin is a symbolic link to /system/bin, but that does not make a difference.
Interpreters should be accessed at this directory only:
/data/data/com.termux/files/usr/bin
There are three ways to fix this:
1) Install termux-exec by using pkg install termux-exec. It won't affect the current session, but after a restart it should work without any setup. Not needed if your Termux is up to date. If it's still not working, try the next workaround.
2) Use the command termux-fix-shebang to fix the shebang line of the specified file.
3) Use termux-chroot from the package proot to set up a chroot environment mimicking a normal Linux file system in Termux.
termux-fix-shebang my_script.py from the second method worked for me; it modifies the shebang (the first line of my_script.py) from #!/usr/bin/env python to #!/data/data/com.termux/files/usr/bin/env python. Since /usr/bin/ does not exist on Android, that is why it throws the error /usr/bin/env: bad interpreter: No such file or directory. Another workaround is to run the script as python my_script.py, rather than my_script.py or ./my_script.py.
In my test, termux-exec from the first method only worked after I added the correct shebang to the main script (child scripts, or children of child scripts, don't need it) and ran export LD_PRELOAD=/data/data/com.termux/files/usr/lib/libtermux-exec.so.
As for the issue in this question: the error shows /usr/bin/env in the middle, together with /data/data/com.termux/files/usr/bin/wp, even though the script's own shebang #!/data/data/com.termux/files/usr/bin/bash looks fine. That means the wp command used inside the script (located at /data/data/com.termux/files/usr/bin/wp) itself contains the shebang #!/usr/bin/env wp and should be changed to #!/data/data/com.termux/files/usr/bin/env wp as well. termux-exec from the first method should fix this specific case too, since the main script already has a correct shebang.
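To make the shebang fix concrete, a Termux-friendly Python script carries the prefixed interpreter path instead of the usual /usr/bin/env (the script body is just an illustration):
#!/data/data/com.termux/files/usr/bin/env python
# /usr/bin/env does not exist on Android, so the shebang must point inside
# Termux's prefix (or termux-exec must be installed to remap it).
print("running inside Termux")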

Calling Python package from command line / PyCharm

I am creating a Python package, I anticipate that it will be called both by command line and from other scripts. Here is a simplified version of my file structure:
GREProject/
    __init__.py
    __main__.py
    Parsing.py
Parsing.py contains a function, parse(); it takes two arguments, an input file and an output file. I am trying to figure out the proper code for __main__.py so that when the following is called from the command line or terminal, the arguments will be passed to parse():
Python GREProject -i input.file -o output.file
I have tried this numerous ways but they have all met with failure. I do believe I need the -m flag for the interpreter, but beyond that I don't know. Example with the flag:
Python -m GREProject -i input.file -o output.file
When running the latter command I receive the following error:
Import by filename is not supported.
Presumably from this line:
from . import Parsing
OK, it turns out this was a problem with my IDE, PyCharm. No idea why I received this error, but here are the settings that fixed it:
Import by filename is not supported.
For the record, here are the options I set in my PyCharm project.
Script:
GREProject
Script parameters:
-i .\GREProject\pr2.nyc1 -o .\GREProject\Test.pkl
Environment variables:
PYTHONUNBUFFERED=1
Python interpreter:
Python 2.7.11 (c:\Python27\python.exe)
Interpreter options:
-m
Working directory:
C:\Users\probert.dan\PycharmProjects
Here is an explanation of the options:
Script: This is the script to run. By default PyCharm will only insert absolute references to .py files, but nothing prevents you from manually typing in a relative reference; in this case it is the GREProject folder.
Script parameters: These are passed on to the script itself. In this case I am telling my script that the input file is .\GREProject\pr2.nyc1, which means: look for the file pr2.nyc1 in the GREProject directory below the current working directory.
Environment variables: This was set by PyCharm and left unchanged.
Python interpreter: My active interpreter.
Interpreter options: The -m option here tells Python that we are calling a module; Python then knows to run the __main__.py file.
Working directory: The directory the script is run from; I chose the directory above GREProject.
For reference, here are the contents of my __main__.py file:
from . import Parsing
import argparse
parser = argparse.ArgumentParser(description='Parse flags.')
parser.add_argument('-i', help='Import file.')
parser.add_argument('-o', help='(Optional) Output file.')
arguments = parser.parse_args()
Parsing.parse(arguments.i, arguments.o)
It is also important to note that debugging in PyCharm is not possible like this. Here is the solution to debugging: Intellij/Pycharm can't debug Python modules
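For completeness, a minimal Parsing.py that would satisfy this __main__.py could look like the stub below (purely illustrative; the real module presumably does actual parsing):
# GREProject/Parsing.py -- illustrative stub matching the interface used above
def parse(input_path, output_path=None):
    """Read input_path and, if an output path was given, write the data there."""
    with open(input_path) as src:
        data = src.read()
    if output_path is not None:
        with open(output_path, 'w') as dst:
            dst.write(data)
    return data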

How can I load module packages of Linux using a Python script

I am new to python and am learning things by writing scripts.
I have tried the following and none of them seem to be working.
1) commands.getoutput('module load xxx')
2) subprocess.check_output(['module load', 'xxx'])
None of these change the environment as a side effect of the module call. Can someone tell me what is wrong?
I found a different approach to solve this problem.
I have written a shell script which loads the environment modules, and I am calling that in the Python script.
Something like this:
import subprocess
subprocess.call(['./module_load.sh'])
and the script has something like this:
module load ***
This seems to be working.
Have to say, it's pretty simple in Perl: it provides an environment modules package to handle this.
Both commands create a subshell where the module is loaded - the problem is that this subshell is destroyed the moment the Python function terminates.
In case it helps anyone, I managed to do it by prefixing all commands that depend on the module with module load xxx &&. For example,
module load mpi/mpich && mpirun ./myprogram -n 4 -options
Not an elegant solution, but it works for me. There's also this answer, but I couldn't make it work on the system I have access to.
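If you need the module's environment inside the Python process itself (rather than prefixing every command), one possible approach, sketched below and untested, is to dump the environment from a shell in which module load succeeded and merge it back into os.environ. It assumes the module command is available in a login bash on your system:
import json
import os
import subprocess
import sys

def load_module_env(name):
    """Run `module load` in a login bash, dump the resulting environment as
    JSON, and merge it into this process's os.environ."""
    dump = '%s -c "import os, json; print(json.dumps(dict(os.environ)))"' % sys.executable
    output = subprocess.check_output(
        ['bash', '-l', '-c', 'module load %s && %s' % (name, dump)])
    os.environ.update(json.loads(output.decode()))

load_module_env('mpi/mpich')              # module name borrowed from the answer above
subprocess.call(['mpirun', '-n', '4', './myprogram'])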

Python interactive interpreter in hashbang line

For my Django projects, I wanted to write a simple replacement for manage.py shell to take advantage of bpython. Essentially all it does is run setup_environ(settings) and then import some common models (User, etc.).
In any case, everything works fine when I run bpython -i bshell.py (my script is named bshell.py). Then I thought I'd get clever and set the hashbang line to #!/usr/bin/env bpython -i to make it even simpler; this worked on OS X but is not working now in Ubuntu (10.10).
#!/usr/bin/env python -i also does not work, but #!/usr/bin/env bpython works (but obviously doesn't drop into the interactive prompt).
It's a small point, but over the course of my life it will save me hundreds of "bpython -i"s if I can just run my script as ./bshell.py (really I'm just curious). Any ideas why it's not working on Ubuntu?
I should note I'm in a virtualenv, and I already double checked that line endings are *nix style.
From Wikipedia:
Another portability problem is the interpretation of the command arguments. Some systems, including Linux, do not split up the arguments; for example, when running the script with a first line like
#!/usr/bin/env python -c
That is, python -c will be passed as one argument to /usr/bin/env, rather than two arguments.
If it's no big deal, you're probably better off using the actual path to bpython instead of going through /usr/bin/env.
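A hedged workaround along those lines: point the shebang at the bpython binary itself (the path below is hypothetical; substitute the one inside your virtualenv). Linux then hands the single trailing -i to that interpreter, which is enough here because only one option is needed:
#!/home/me/.virtualenvs/myenv/bin/bpython -i
# bshell.py (sketch) -- hypothetical interpreter path; on Linux everything
# after the interpreter is passed as one argument, which works for a lone -i.
from django.core.management import setup_environ  # pre-1.4 Django, as in the question
import settings

setup_environ(settings)
from django.contrib.auth.models import User  # one of the "common models" mentioned above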