One way to create a virtual environment for a Django project is to apply this command:
virtualenv -p python3 .
I have typed this command more than 30 times, but the meaning of -p is still a mystery to me. I have searched for it several times and failed to find an explanation; I wonder if it is simply beyond my current concepts and vocabulary.
-p sets which Python interpreter virtualenv uses for the new environment.
See the virtualenv reference here: https://virtualenv.pypa.io/en/stable/reference/
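For example, -p accepts either an interpreter name on your PATH or an absolute path to a specific binary; a couple of illustrative invocations (the paths are examples, not from the question):
virtualenv -p python3 .
virtualenv -p /usr/bin/python2.7 legacy-env
The first creates the environment in the current directory around whichever python3 is first on your PATH; the second pins an explicit interpreter for an environment named legacy-env.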
I have this Dockerfile content:
FROM python:latest
ARG b=8
ARG r=False
ARG p=1
ENV b=${b}
ENV r=${r}
ENV p=${p}
# many lines of successful setup
ENTRYPOINT python analyze_all.py /analysispath -b $b -p $p -r $r
My intention was to take three arguments at the command line like so:
docker run -it -v c:\analyze containername -e b=16 -e p=22 -e r=False
But unfortunately I'm misunderstanding something fundamental and simple here, rather than something complicated, so I'm stuck :).
If I understood the question correctly, this Dockerfile should do what is required:
FROM python:latest
# test script that prints argv
COPY analyze_all.py /
ENTRYPOINT ["python", "analyze_all.py", "/analysispath"]
Launch:
$ docker run -it test:latest -b 16 -p 22 -r False
sys.argv=['analyze_all.py', '/analysispath', '-b', '16', '-p', '22', '-r', 'False']
It looks like your Dockerfile is designed to build and run a container on Windows. I tested my Dockerfile on Linux, but this approach shouldn't differ much on Windows.
I don't think the ARG instructions are needed in this case, because ARG defines a variable that users pass at build time with the docker build command, not at run time. I would also suggest that you take a look at the Dockerfile reference for the ENTRYPOINT instruction:
Command line arguments to docker run will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run -d will pass the -d argument to the entry point.
Also, this question will probably be useful for you: How do I use Docker environment variable in ENTRYPOINT array?
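If you do want the environment-variable route from the question, two details matter: a shell must expand $b, $p and $r (the exec form of ENTRYPOINT performs no expansion), and the -e flags must come before the image name, because everything after the image name is passed as arguments to the entrypoint. A rough, untested sketch reusing the question's names:
FROM python:latest
ENV b=8 p=1 r=False
COPY analyze_all.py /
# shell form, so the variables are expanded at run time
ENTRYPOINT python /analyze_all.py /analysispath -b $b -p $p -r $r
Launched, for example, as:
docker run -it -e b=16 -e p=22 -e r=False containername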
Is it possible to do the following?
docker build --build-arg myvar=yes
Dockerfile:
RUN if ["$myvar" == "yes"]; \
then FROM openjdk \
COPY . . \
RUN java -jar myjarfile.jar \
fi
As you can tell from the above, I only want to run that specific section of the Dockerfile if the build argument is set. I've seen similar threads, but they always seem to be running bash commands. If it's possible, I can't seem to get the syntax correct.
As of now, doing conditional execution in Dockerfiles without the help of the shell is severely limited; see https://medium.com/@tonistiigi/advanced-multi-stage-build-patterns-6f741b852fae
The idea behind existing approaches is to use a Docker multistage build and create different stages for the different outcomes of the IF. Then, at one point, a stage to copy data from is selected based on the value of a variable.
This is an example similar to what you wrote:
# docker build -t test --build-arg MYVAR=yes .
# docker build -t test --build-arg MYVAR=no .
ARG MYVAR=no
FROM openjdk:latest as myvar-yes
COPY . /datadir
RUN java -jar /datadir/myjarfile.jar || true
FROM openjdk:latest as myvar-no
RUN mkdir /datadir
FROM myvar-${MYVAR} as myvar-src
FROM debian:10
COPY --from=myvar-src /datadir/ /
RUN ls /
Stage myvar-no is a variant with an empty /datadir. Stage myvar-yes contains the jarfile and runs it once (remove the || true for actual use, it is just that I did not provide a "real" jarfile in my local tests). Then the last stage copies from the stage myvar-${MYVAR} and invokes ls to be able to see the differences between the two variants.
I have not entirely understood the part of the question about syntax: if the trouble is getting the bash syntax right, that is probably easier to fix than trying to run Dockerfile statements conditionally.
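For comparison, the shell-based variant those other threads use would look roughly like this; note the spaces inside the brackets and the single =, which is where the syntax in the question goes wrong. It can only make the RUN step conditional; FROM and COPY cannot be skipped this way, which is exactly why the multi-stage trick above exists:
FROM openjdk
ARG myvar=no
COPY . .
RUN if [ "$myvar" = "yes" ]; then \
      java -jar myjarfile.jar; \
    fi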
I've been writing in Python 3 for a while, and I came across this library that I really need:
https://github.com/Yelp/python-gearman
but I want to try to port it to Python 3. However, I don't know how one runs the tests in a Python module. I tried python -m unittest discover, but it didn't discover any tests. And once I actually do change something for Python 3, how would I test it? Is the testing mechanism the same in Python 3 as in 2?
First figure out how to run the tests under 2.7, assuming that the test directory is installed along with the gearman directory. (It should be if you have git and clone the GitHub repository, but I have not used git, just hg.) In the test/ directory, add and run a shell/console script like the following. (You did not specify your OS.)
python -m admin_client_tests
python -m client_tests
python -m protocol_tests
python -m worker_tests
where python invokes 2.7 on your system. Each of these modules imports _core_testing.py, which should not be run directly. There should be a way to run all the tests at once, and it should be documented, but the authors may not expect users to run them. Anyway, run 2to3 to produce a new package directory, look at the messages, change python to run 3.x, and test. The difficulty may be anything from 'no-brainer' to 'give it up'. (I did one conversion that was about a 1 or 2 on a 0-to-10 difficulty scale.) If successful, ask the authors whether they want to make the code 2-and-3 compatible or keep a separate 3.x version.
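The 2to3 step can be as simple as copying the package directory and rewriting the copy in place (the directory names are my choice, and this assumes you run it from the repository root):
cp -r gearman gearman3
2to3 -w -n gearman3
Here -w writes the changes back and -n skips the .bak backup files, which are redundant since the original directory is untouched.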
You should specify the pattern explicitly (the default, test*.py, doesn't match in this case):
$ python -m unittest discover -p \*_tests.py
.............................................................
----------------------------------------------------------------------
Ran 61 tests in 0.040s
OK
To run individual test files:
$ python -m tests.client_tests
I created a fabfile that contains several commands, from short to long and complex. I need an autocomplete feature so that when the user types fab[tab][tab], all available fab commands are shown, just like in bash.
i.e.
user#someone-ubuntu:~/path/to/fabfile$ fab[tab][tab]
command1 command2 command3 ..and so on
How can I do this ?
There are instructions you can follow here: http://evans.io/legacy/posts/bash-tab-completion-fabric-ubuntu/
Basically you run a script that calls fab --shortlist; the output gets fed into complete, a bash builtin that you can read more about here: https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html
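The core of what those instructions set up is small; a minimal, untested version for Fabric 1.x (where --shortlist exists) would be:
_fab_complete()
{
    # offer Fabric's task names as completions for the current word
    COMPREPLY=( $(compgen -W "$(fab --shortlist 2>/dev/null)" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -o default -F _fab_complete fab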
I did this for my new fabfile using fabric 2.5 & Python 3:
~/.config/fabfile
#!/usr/bin/env zsh
_fab()
{
local cur
COMPREPLY=()
# Variable to hold the current word
cur="${COMP_WORDS[COMP_CWORD]}"
# Build a list of the available tasks from: `python3 -m fabric --complete`
local cmds=$(python3 -m fabric --complete 2>/dev/null)
# Generate possible matches and store them in the
# array variable COMPREPLY
COMPREPLY=($(compgen -W "${cmds}" $cur))
}
# Assign the auto-completion function for our command.
complete -F _fab fab
And in my ~/.zshrc:
source ~/.config/fabfile
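Note that complete and compgen are bash builtins; in zsh this only works after loading zsh's bash-compatibility layer, e.g. in ~/.zshrc before the source line:
autoload -U +X bashcompinit && bashcompinit
source ~/.config/fabfile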
I updated the fabric 1.X version here: gregorynichola's gist
In Fabric, when I try to use any aliases or functions from my .bash_profile file, they are not recognized. For instance, my .bash_profile contains alias c='workon django-canada', so when I type c in iTerm or Terminal, workon django-canada is executed.
My fabfile.py contains
def test():
local('c')
But when I try fab test it throws this at me:
[localhost] local: c
/bin/sh: c: command not found
Fatal error: local() encountered an error (return code 127) while executing 'c'
Aborting.
Other Fabric functions work fine. Do I have to specify my bash profile somewhere in fabric?
EDIT - As it turns out, this was fixed in Fabric 1.4.4. From the changelog:
[Feature] #725: Updated local to allow override of which local shell is used. Thanks to Mustafa Khattab.
So the original question would be fixed like this:
def test():
local('c', shell='/bin/bash')
I've left my original answer below, which only relates to Fabric version < 1.4.4.
Because local doesn't use bash. You can see it clearly in your output
/bin/sh: c: command not found
See? It's using /bin/sh instead of /bin/bash. This is because Fabric's local command behaves a little differently internally than run does. The local command is essentially a wrapper around the Python subprocess.Popen class.
http://docs.python.org/library/subprocess.html#popen-constructor
And here's your problem. Popen defaults to /bin/sh. It's possible to specify a different shell if you are calling the Popen constructor yourself, but you're using it through Fabric. And unfortunately for you, Fabric gives you no means to pass in a shell, like /bin/bash.
Sorry that doesn't offer you a solution, but it should answer your question.
EDIT
Here is the code in question, pulled directly from fabric's local function defined in the operations.py file:
p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream,
stderr=err_stream)
(stdout, stderr) = p.communicate()
As you can see, it does NOT pass in anything for the executable keyword. This causes it to use the default, which is /bin/sh. If it used bash, it'd look like this:
p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream,
stderr=err_stream, executable="/bin/bash")
(stdout, stderr) = p.communicate()
But it doesn't. Which is why they say the following in the documentation for local:
local is simply a convenience wrapper around the use of the builtin Python subprocess module with shell=True activated. If you need to do anything special, consider using the subprocess module directly.
One workaround is simply to wrap whatever command you have around a bash command:
from fabric.api import local, task

@task
def do_something_local():
    local("/bin/bash -l -c 'run my command'")
If you need to do a lot of these, consider creating a custom context manager.
It looks like you're trying to use virtualenvwrapper locally. You'll need to make your local command string look like this:
local("/bin/bash -l -c 'workon django-canada && python manage.py runserver'")
Here's an example by yours truly that does that for you in a context manager.
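In case that link goes stale: the core idea is just a small helper that routes every command through a login bash shell so .bash_profile gets loaded. A function-based sketch of the same idea (the helper and task names here are mine, not Fabric API):
from fabric.api import local, task

def bash_local(command):
    # run through a login shell so .bash_profile (and therefore
    # virtualenvwrapper's workon function) is loaded first
    local("/bin/bash -l -c '%s'" % command)

@task
def runserver():
    bash_local('workon django-canada && python manage.py runserver')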