Possible Duplicate:
Activate a virtualenv via fabric as deploy user
I've been advised to try Fabric for deploying Django to a production server and for automating tasks with Python instead of Bash.
I wanted to start simple and just automate activating my virtualenv and starting the Django development server inside it.
I've created a file named fabfile.py:
from fabric.api import local

def activate_env():
    local("source /.../expofit_env/bin/activate")

def run_local_server():
    local("/.../server/manage.py runserver")

def start():
    activate_env()
    run_local_server()
However, when I run
fab start
I get the following message:
[localhost] local: source /.../expofit_env/bin/activate
/bin/sh: 1: source: not found
Fatal error: local() encountered an error (return code 127) while executing
'source /.../expofit_env/bin/activate'
What am I doing wrong?
Update
Based on Burhan Khalid's proposal, I tried the following:
....
def activate_env():
    local("/bin/bash /.../expofit_env/bin/activate")
....
Running just
fab activate_env
results:
[localhost] local: /bin/bash /.../expofit_env/bin/activate
Done.
However, after execution the virtualenv isn't activated.
For the following code:
def start_env():
    with prefix('/bin/bash /.../expofit_env/bin/activate'):
        local("yolk -l")
I still get an error, as if virtualenv wasn't activated.
alan@linux ~/Desktop/expofit $ fab start_env
[localhost] local: yolk -l
/bin/sh: 1: yolk: not found
When I manually activate the virtualenv, yolk works fine:
alan@linux ~/.../expofit_env $ source bin/activate
(expofit_env)alan@linux ~/.../expofit_env $ yolk -l
DateUtils - 0.5.2 - active
Django - 1.4.1 - active
Python - 2.7.3rc2 - active development (/usr/lib/python2.7/lib-dynload)
....
Update
Tried a new approach from this question.
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager
env.activate = 'source /.../expofit_env/bin/activate'

@_contextmanager
def virtualenv():
    with prefix(env.activate):
        yield

def deploy():
    with virtualenv():
        local('yolk -l')
Gives the same error:
[localhost] local: yolk -l
/bin/sh: 1: source: not found
Fatal error: local() encountered an error (return code 127) while executing 'yolk -l'
Aborting.
Even though the first command passes without errors:
alan@linux ~/.../expofit_env/bin $ fab virtualenv
[servername] Executing task 'virtualenv'
Done.
Update
It is possible to run local with a custom shell.
from fabric.api import local

def start_env():
    local('source env/bin/activate', shell='/bin/bash')
However, that didn't leave the virtualenv activated the way a manual source does.
To enable a virtualenv from the fabfile, you need to run your commands as follows:
def task():
    # do some things outside the env if needed
    with prefix('source bin/activate'):
        # do some stuff inside the env
        run('pip install django-audiofield')
All the commands within the with block will be executed inside the virtualenv.
By default you are using the sh shell, and the source command is a bashism (that is, something that only works in bash).
To activate your environment, you need to execute it with bash directly: /bin/bash /path/to/bin/activate.
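Putting the two answers together, a minimal sketch of what the fabfile could look like in Fabric 1.x: wrap the command in prefix() so activation happens in the same shell, and force bash because the default /bin/sh has no source builtin (the truncated path is the one from the question; substitute your own):
from fabric.api import local, prefix

def start_env():
    # Truncated placeholder path from the question; use your real virtualenv.
    with prefix('source /.../expofit_env/bin/activate'):
        # Force bash so `source` is available; /bin/sh would fail with "source: not found".
        local('yolk -l', shell='/bin/bash')
Each local() call spawns its own shell, which is why activating in one call and running the server in another never works: the activation dies with its shell.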
Related
I want to use pycurl in order to measure TTFB and TTLB, but I am unable to call pycurl from an AWS Lambda.
To focus on the issue, let's say I call this simple lambda function:
import json
import pycurl
import certifi

def lambda_handler(event, context):
    client_curl = pycurl.Curl()
    client_curl.setopt(pycurl.CAINFO, certifi.where())
    client_curl.setopt(pycurl.URL, "https://www.arolla.fr/blog/author/edouard-gomez-vaez/")  # set url
    client_curl.setopt(pycurl.FOLLOWLOCATION, 1)
    client_curl.setopt(pycurl.WRITEFUNCTION, lambda x: None)
    content = client_curl.perform()
    dns_time = client_curl.getinfo(pycurl.NAMELOOKUP_TIME)  # DNS time
    conn_time = client_curl.getinfo(pycurl.CONNECT_TIME)  # TCP/IP 3-way handshaking time
    starttransfer_time = client_curl.getinfo(pycurl.STARTTRANSFER_TIME)  # time-to-first-byte time
    total_time = client_curl.getinfo(pycurl.TOTAL_TIME)  # last request time
    client_curl.close()
    data = json.dumps({'dns_time': dns_time,
                       'conn_time': conn_time,
                       'starttransfer_time': starttransfer_time,
                       'total_time': total_time,
                       })
    return {
        'statusCode': 200,
        'body': data
    }
I have the following error, which is understandable:
Unable to import module 'lambda_function': No module named 'pycurl'
I followed the tutorial https://aws.amazon.com/fr/premiumsupport/knowledge-center/lambda-layer-simulated-docker/ in order to create a layer, but then got the following error while generating the layer with Docker (I extracted the interesting part):
Could not run curl-config: [Errno 2] No such file or directory: 'curl-config': 'curl-config'
I even tried to generate the layer by running directly on my own machine:
pip install -r requirements.txt -t python/lib/python3.6/site-packages/
zip -r mypythonlibs.zip python > /dev/null
and then uploading the zip as a layer in AWS, but I then got another error when launching the lambda:
Unable to import module 'lambda_function': libssl.so.1.0.0: cannot open shared object file: No such file or directory
It seems that the layer has to be built in an environment that matches the Lambda runtime but is extended with additional libraries.
After a couple of hours scratching my head, I managed to resolve this issue.
TL;DR: build the layer using a Docker image inherited from the AWS one, but with the needed libraries installed, for instance libcurl-devel, openssl-devel, python36-devel. Have a look at the trick in Note 3 :).
The detailed way:
Prerequisite: having Docker installed
In an empty directory, copy your requirements.txt containing pycurl (in my case: pycurl~=7.43.0.5)
In this same directory, create the following Dockerfile (cf Note 3):
FROM public.ecr.aws/sam/build-python3.6
RUN yum install libcurl-devel python36-devel -y
RUN yum install openssl-devel -y
ENV PYCURL_SSL_LIBRARY=openssl
RUN ln -s /usr/include /var/lang/include
Build the docker image:
docker build -t build-python3.6-pycurl .
Build the layer using this image (cf Note 2) by running:
docker run -v "$PWD":/var/task "build-python3.6-pycurl" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.6/site-packages/; exit"
Zip the layer by running:
zip -r mylayer.zip python > /dev/null
Send the file mylayer.zip to AWS as a layer and make your lambda point to it, using the console or following the tutorial https://aws.amazon.com/fr/premiumsupport/knowledge-center/lambda-layer-simulated-docker/ (a CLI sketch follows below).
Test your lambda and celebrate!
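If you prefer the CLI over the console for the publishing step, it might look roughly like this (the layer name is a placeholder; adjust the runtime to match your build):
# Publish the zipped layer; adapt --layer-name and --compatible-runtimes to your setup.
aws lambda publish-layer-version \
    --layer-name pycurl-layer \
    --zip-file fileb://mylayer.zip \
    --compatible-runtimes python3.6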
Note 1. If you want to use Python 3.8, just replace 3.6 or 36 with 3.8 or 38.
Note 2. Do not forget to remove the python folder when regenerating the layer, using admin rights if necessary.
Note 3. Mind the symlink in the last line of the Dockerfile. Without it, gcc won't be able to find some header files, such as Python.h.
Note 4. Compile pycurl with the openssl backend, since that is the SSL backend used in the Lambda execution environment. Otherwise you'll get a "libcurl link-time ssl backend (openssl) is different from compile-time ssl backend" error when executing the lambda.
I used git clone https://github.com/django-oscar/django-oscar, then I ran
pipenv install
and I got
AttributeError: module 'os' has no attribute 'uname'
as well as this:
pipenv.patched.notpip._internal.exceptions.InstallationError:
Command errored out with exit status 1:
python setup.py egg_info Check the logs for full command output.
I am using Windows 10.
I guess this does not have a lot to do with Docker but more with the host OS (Windows 10 in this case).
This question describes it in more detail, but it comes down to the fact that uname is not available on Windows. Since Docker containers use the kernel of the host OS they run on, in your case this will error out.
The same error will appear if you run these commands on Windows 10 under the Python IDLE environment.
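A quick sketch of the underlying problem, as a small check you could run yourself on Windows (platform.uname() is the portable alternative):
import os
import platform

try:
    print(os.uname())        # POSIX-only; raises AttributeError on Windows
except AttributeError:
    print(platform.uname())  # works on every platform, including Windows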
I am trying to launch a Django service using Docker; the service uses the nltk library.
In the Dockerfile I call a setup.py which calls nltk.download. According to the logs I see while building the Docker image, this step runs successfully.
But when I run the Docker image and try to connect to my Django service, I get an error saying that the nltk.download hasn't happened.
Dockerfile code -
RUN . ${PYTHON_VIRTUAL_ENV_FOLDER}/bin/activate && python ${PYTHON_APP_FOLDER}/setup.py
setup.py code -
import nltk
import os
nltk.download('stopwords', download_dir=os.getcwd() + '/nltk_data/')
nltk.download('wordnet', download_dir=os.getcwd() + '/nltk_data/')
Error:
**********************************************************************
Resource stopwords not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('stopwords')
Searched in:
- '/root/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/src/venv/nltk_data'
- '/usr/src/venv/share/nltk_data'
- '/usr/src/venv/lib/nltk_data'
**********************************************************************
Any idea what is wrong here?
Also, the same code works when I run it without docker.
Having faced that same problem before and having done almost the same thing you did, I'd assume what you're missing here is configuring nltk.data.path by adding to it the directory your os.getcwd() pointed to at download time.
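A minimal sketch of that idea, assuming the data ended up in an nltk_data folder under the working directory (match the path to the download_dir used in your setup.py):
import os
import nltk

# Point NLTK at the directory setup.py downloaded into; the exact
# location is an assumption, adjust it to your download_dir.
nltk.data.path.append(os.path.join(os.getcwd(), 'nltk_data'))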
Thanks for the post, it fixed my issue as well!
I got the same issue: punkt does exist in the Docker image:
/root/nltk_data/tokenizers/punkt
But when my app tried to reach it, Docker kept complaining the resource couldn't be found.
Inspired by your post, I added:
ENV NLTK_DATA /root/nltk_data/
ADD . $NLTK_DATA
But still got the same error message. So I tried this:
ENV NLTK_DATA /nltk_data/
ADD . $NLTK_DATA
I don't know why removing /root from the path helped, but it worked!
My app is using Flask and uWSGI, so I guess maybe this is an issue for Django and Flask? Thanks anyway!
I'm setting up celery to run daemonized, using the variables from my virtual environment. But when I run $ sudo /etc/init.d/celeryd start, I get Unknown command: 'celeryd_multi' Type 'manage.py help' for usage.
I have set the following:
CELERYD_CHDIR="/home/myuser/projects/myproject"
ENV_PYTHON="/home/myuser/.virtualenvs/myproject/bin/python"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
When I run $ /home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi from the command line, it works fine.
Any ideas? I will gladly post any other code you need :)
Thank you!
Maybe you just set the wrong DJANGO_SETTINGS_MODULE:
try DJANGO_SETTINGS_MODULE="project.settings" instead of DJANGO_SETTINGS_MODULE="settings" (or vice versa).
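For example, in the same file where you set CELERYD_CHDIR and CELERYD_MULTI, something like this ("myproject" is a placeholder for whichever package actually contains settings.py):
# Placeholder project name; use the package containing settings.py
export DJANGO_SETTINGS_MODULE="myproject.settings"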
The problem here is that when you run it as your user, virtualenv already has proper environment activated for your user "myuser" and it pulls packages from /home/myuser/.virtualenvs/myproject/...
When you do sudo /etc/init.d/celeryd start, you are starting celery as root, which probably doesn't have a virtualenv activated in /root/.virtualenvs/ (if such a thing even exists), and thus it looks for Python packages in /usr/lib/..., where your default Python is and consequently where your celery is not installed.
Your options are to either:
1. Replicate the same virtualenv under the root user and start it like you tried with sudo.
2. Keep the virtualenv where it is and start celery as your user "myuser" (no sudo) without using init scripts.
3. Write a script that runs su - myuser -c '/bin/sh /home/myuser/.virtualenvs/myproject/bin/celeryd' to invoke it from init.d as myuser.
4. Install supervisor outside of the virtualenv and let it do the dirty work for you.
Thoughts on each:
1. Avoid using root for anything you don't have to.
2. If you don't need celery to start on boot then this is fine, possibly wrapped in a script.
3. Plain hackish to me, but it works if you don't want to invest an additional 30 minutes in something else.
4. Probably the best way to handle ALL of your Python startup needs, highly recommended; a sketch follows below.
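A minimal supervisord program sketch for option 4, assuming the paths from the question and the old django-celery manage.py celeryd worker command; all names and paths are placeholders to adapt:
; Hypothetical /etc/supervisor/conf.d/celery.conf
[program:celery]
; Use the virtualenv's Python directly so no activation step is needed.
command=/home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd --loglevel=INFO
directory=/home/myuser/projects/myproject
user=myuser
autostart=true
autorestart=true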
I want my custom-made Django command to be executed every minute. However, python /path/to/project/myapp/manage.py mycommand doesn't seem to work, while python manage.py mycommand run from the project directory works perfectly.
How can I achieve this? I use /etc/crontab with:
* * * * * root python /path/to/project/myapp/manage.py mycommand
I think the problem is that cron is going to run your scripts in a "bare" environment, so your DJANGO_SETTINGS_MODULE is likely undefined. You may want to wrap this up in a shell script that first defines DJANGO_SETTINGS_MODULE.
Something like this:
#!/bin/bash
export DJANGO_SETTINGS_MODULE=myproject.settings
./manage.py mycommand
Make it executable (chmod +x) and then set up cron to run the script instead.
Edit
I also wanted to say that you can "modularize" this concept a little bit and make it such that your script accepts the manage commands as arguments.
#!/bin/bash
export DJANGO_SETTINGS_MODULE=myproject.settings
./manage.py ${*}
Now, your cron job can simply pass "mycommand" or any other manage.py command you want to run from a cron job.
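For example, the /etc/crontab entry could then call the wrapper (the script name and path are placeholders):
# Run the custom command every minute via the hypothetical wrapper script.
* * * * * root /path/to/project/run_manage.sh mycommand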
cd /path/to/project/myapp && python manage.py mycommand
By chaining your commands like this, python will not be executed unless cd correctly changes the directory.
If you want to make your Django life a lot simpler, use django-command-extensions within your project:
http://code.google.com/p/django-command-extensions/
You'll find a command named "runscript" so you simply add the command to your crontab line:
* * * * * root python /path/to/project/myapp/manage.py runscript mycommand
Such a script will execute within the Django environment.
This is what I have done recently in one of my projects (I maintain venvs for every project I work on, so I am assuming you have venvs):
* * * * * /path/to/venvs/bin/python /path/to/app/manage.py command_name
This worked perfectly for me.
How to schedule Django custom commands on an AWS EC2 instance?
Step 1
First, you need to write a .cron file
Step 2
Write your script in the .cron file.
MyScript.cron
* * * * * /home/ubuntu/kuzo1/venv/bin/python3 /home/ubuntu/Myproject/manage.py transfer_funds >> /home/ubuntu/Myproject/cron.log 2>&1
Where * * * * * means that the script will be run every minute; you can change it according to your needs (https://crontab.guru/#*_*_*_*_*). /home/ubuntu/kuzo1/venv/bin/python3 is the Python virtual environment path, /home/ubuntu/Myproject/manage.py transfer_funds is the Django custom command path, and /home/ubuntu/Myproject/cron.log 2>&1 is a log file where you can check your running cron output.
Step 3
Run this script
$ crontab MyScript.cron
Step 4
Some useful commands:
1. $ crontab -l (Check current running cron job)
2. $ crontab -r (Remove cron job)
The runscript extension isn't well documented. Unlike a Django management command, this one can live in a scripts folder anywhere in your project, and the .py file must define a run() function.
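A minimal sketch of such a script, assuming a hypothetical scripts/mycommand.py inside one of your apps:
# scripts/mycommand.py -- placeholder name; runscript imports the module and calls run()
def run():
    # Invoked with: python manage.py runscript mycommand
    print("running my scheduled task")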
If it's a standalone script, you need to do this:
from django.conf import settings
from django.core.management import setup_environ
setup_environ(settings)
#your code here which uses django code, like django model
If it's a Django command, it's easier: https://coderwall.com/p/k5p6ag
In (management/commands/exporter.py)
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    args = ''
    help = 'Export data to remote server'

    def handle(self, *args, **options):
        # do something here
        pass
And then, in the command line:
$ python manage.py exporter
Now, it's easy to add a new cron task to a Linux system, using crontab:
$ crontab -e
or $ sudo crontab -e if you need root privileges
In the crontab file, for example to run this command every 15 minutes, add something like this:
# m h dom mon dow command
*/15 * * * * python /var/www/myapp/manage.py exporter