I am trying to launch a Django service with Docker; the service uses the nltk library.
In the Dockerfile I call a setup.py that runs nltk.download. According to the logs I see while building the Docker image, this step runs successfully.
But when I run the Docker image and try to connect to my Django service, I get an error saying that the nltk data hasn't been downloaded.
Dockerfile code -
RUN . ${PYTHON_VIRTUAL_ENV_FOLDER}/bin/activate && python ${PYTHON_APP_FOLDER}/setup.py
setup.py code -
import nltk
import os
nltk.download('stopwords', download_dir=os.getcwd() + '/nltk_data/')
nltk.download('wordnet', download_dir=os.getcwd() + '/nltk_data/')
Error:
**********************************************************************
Resource stopwords not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('stopwords')
Searched in:
- '/root/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/src/venv/nltk_data'
- '/usr/src/venv/share/nltk_data'
- '/usr/src/venv/lib/nltk_data'
**********************************************************************
Any idea what is wrong here?
Also, the same code works when I run it outside Docker.
Having faced the same problem before, and having done almost the same thing you did, I'd assume what you're missing here is adding the download directory (wherever your os.getcwd() points) to nltk.data.path.
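One way to wire that up without touching the application code is to point NLTK's standard NLTK_DATA environment variable at the build-time download directory. A minimal Dockerfile sketch, assuming setup.py runs with ${PYTHON_APP_FOLDER} as its working directory (that path is an assumption based on the question's Dockerfile):

```dockerfile
# Assumption: setup.py downloaded into ${PYTHON_APP_FOLDER}/nltk_data,
# matching the os.getcwd() + '/nltk_data/' call in the question.
# NLTK consults NLTK_DATA in addition to its built-in search paths.
ENV NLTK_DATA=${PYTHON_APP_FOLDER}/nltk_data
```

Alternatively, append the directory to nltk.data.path at application startup (e.g. in Django settings) before any corpus lookup happens.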
Thanks for the post, it fixed my issue as well!
I got the same issue: punkt does exist in the Docker image, at
/root/nltk_data/tokenizers/punkt
But when my app tried to reach it, NLTK kept complaining that the resource couldn't be found.
Inspired by your post, I added:
ENV NLTK_DATA /root/nltk_data/
ADD . $NLTK_DATA
But still got the same error message. So I tried this:
ENV NLTK_DATA /nltk_data/
ADD . $NLTK_DATA
I don't know why removing /root from the path worked, but it did!
My app uses Flask and uWSGI, so I guess maybe this is an issue for both Django and Flask? Thanks anyway!
When I try to use the project creation template from GitHub, even after changing the appropriate values in config.yaml, I get the following error.
location: /deployments/projectcreation000/manifests/manifest-1534790908361
message: 'Manifest expansion encountered the following errors: Error compiling Python code: No module named apis Resource: project.py Resource: config'
You can find the repo here: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/project_creation
Please help, as I need this for a production workflow. I have tried "sudo pip install apis" in Cloud Shell, but even after a successful installation of the apis module, the error persists.
You either need to fix the import or move the file, so that apis.py will be found.
The apis module in this context refers to the apis.py file that ships alongside project.py in the sample, not a pip package. Ensure you keep all the files in the same relative paths to each other when deploying these samples.
We've been using appcfg.py request_logs to download GAE logs; every once in a while it throws the error:
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
But after a few tries it works; sometimes it also works after updating gcloud via gcloud components update. We thought it might be some kind of network throttling and didn't give it much thought. Lately, though, we've been trying to figure out what is causing this.
The full command we use is:
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append --no_cookies
It seems the error is related to the httplib2 library, but since it is used internally by appcfg.py, we're not sure we should tamper with its calls.
Versions:
Python 2.7.13
Google Cloud SDK 196.0.0
app-engine-python 1.9.67
This has become more persistent now, and I haven't been able to download logs for a few days, no matter how many times I try.
Looking at the download-logs command, I tried the same command again, but without the --no_cookies flag, to see what would happen.
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append
I got the error:
Error 403: --- begin server output ---
You do not have permission to modify this app (app_id=u'e~testapp').
--- end server output ---
Which led me to the answer provided here https://stackoverflow.com/a/34694577/1394228 by @ninjahoahong. This worked for me, and logs were downloaded on the first try, in case someone faces the same issue.
There's also this Google Group post which I didn't try but seems like it does the same thing.
Not sure if removing the file ~/.appcfg_oauth2_tokens would have other effects, yet to find out.
Update:
I also found out that my httplib2, located at /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2, was version = "0.7.5". I upgraded it to version = '0.11.3' by installing into that target directory:
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
I want to use google.appengine.api on my local machine. I have installed the Google Cloud SDK and started it; the authentication was successful. I executed $ dev_appserver.py app.yaml at the project path, which started a Google App Engine server at localhost:8000.
When I try to execute the program, it gives the error message "ImportError: No module named appengine.api".
I appreciate your help.
My first thought is that there is an error in the code you are deploying. Are you able to get the Hello World app to work?
https://cloud.google.com/appengine/docs/standard/python/quickstart#download_the_hello_world_app
I tried executing the project locally in PyCharm, which is why I got the above error (google.appengine.api import error). Basically, it has to be executed on a server, which can be started from your terminal:
1) Go to the project path (the root folder of all files in the project, where the app.yaml file is located, e.g. appengine).
2) Start the server with $ dev_appserver.py app.yaml. By default it serves at localhost port 8000.
3) Once the server has started, depending on the handlers and paths specified (like '/' or '/testjob'), try localhost:8000/ or localhost:8000/testjob.
4) All the logs written in the program are shown in the terminal. For logging, use the 'logging' module, and make sure to set the logging level, otherwise lower-level logs are not shown.
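For step 4, a minimal sketch of the logging setup (the testjob handler function is hypothetical; only the logging calls matter):

```python
import logging

# Set an explicit level: without it, the default WARNING level hides
# INFO and DEBUG messages in the dev_appserver terminal output.
logging.basicConfig(level=logging.INFO)

def testjob():
    # A hypothetical handler body; this log line shows up in the
    # terminal where dev_appserver.py is running.
    logging.info("handling /testjob")
    return "job done"
```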
I'm trying to run a Go script as part of the build process. The script imports a 'custom' package, but the import fails.
The repository name is bis. The script I run is configbis.go, and the package imported by configbis.go is mymodule.
The project structure is as follows:
bisrepo
├── mymodule
└── configbis.go
go run configbis.go
configbis.go:16:2: cannot find package "bisrepo/mymodule" in any of:
/home/travis/.gvm/gos/go1.1.2/src/pkg/bisrepo/mymodule (from $GOROOT)
/home/travis/.gvm/pkgsets/go1.1.2/global/src/bisrepo/mymodule (from $GOPATH)
I've tried to import mymodule in configbis.go as follows:
import "mymodule"
import "bisrepo/mymodule"
import "github.com/user/bisrepo/mymodule"
None of them works. I've run out of ideas/options.
I read the travis-ci documentation and found it unhelpful.
You could try adding something like this to your .travis.yml:
install:
- go get github.com/user/bisrepo/mymodule
In order to use private repos you must provide a GitHub API auth token (similarly to when deploying Go projects which reference private repos on Heroku). You can try adding something like this to your .travis.yml:
before_install:
- echo "machine github.com login $GITHUB_AUTH_TOKEN" > ~/.netrc
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Activate a virtualenv via fabric as deploy user
I've been advised to try Fabric for deploying Django to a production server, and to automate tasks with Python instead of bash.
I wanted to start simple and just automate the activation of my virtualenv, then start the Django development server inside it.
I've created a file named fabfile.py:
from fabric.api import local

def activate_env():
    local("source /.../expofit_env/bin/activate")

def run_local_server():
    local("/.../server/manage.py runserver")

def start():
    activate_env()
    run_local_server()
However, when I run
fab start
I get the following message:
[localhost] local: source /.../expofit_env/bin/activate
/bin/sh: 1: source: not found
Fatal error: local() encountered an error (return code 127) while executing
'source /.../expofit_env/bin/activate'
What am I doing wrong?
Update
Based on Burhan Khalid's proposal, I tried the following:
....
def activate_env():
    local("/bin/bash /.../expofit_env/bin/activate")
....
Running just
fab activate_env
results in:
[localhost] local: /bin/bash /.../expofit_env/bin/activate
Done.
However, after execution, the virtualenv isn't activated.
For the following code:
def start_env():
    with prefix('/bin/bash /.../expofit_env/bin/activate'):
        local("yolk -l")
I still get an error, as if the virtualenv wasn't activated.
alan@linux ~/Desktop/expofit $ fab start_env
[localhost] local: yolk -l
/bin/sh: 1: yolk: not found
When I manually activate the virtualenv, yolk works fine:
alan@linux ~/.../expofit_env $ source bin/activate
(expofit_env)alan@linux ~/.../expofit_env $ yolk -l
DateUtils - 0.5.2 - active
Django - 1.4.1 - active
Python - 2.7.3rc2 - active development (/usr/lib/python2.7/lib-dynload)
....
Update
Tried a new approach from this question.
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager

env.activate = 'source /.../expofit_env/bin/activate'

@_contextmanager
def virtualenv():
    with prefix(env.activate):
        yield

def deploy():
    with virtualenv():
        local('yolk -l')
Gives the same error:
[localhost] local: yolk -l
/bin/sh: 1: source: not found
Fatal error: local() encountered an error (return code 127) while executing 'yolk -l'
Aborting.
Even though the first command passes without errors:
alan@linux ~/.../expofit_env/bin $ fab virtualenv
[servername] Executing task 'virtualenv'
Done.
Update
It is possible to run local() with a custom shell:

from fabric.api import local

def start_env():
    local('source env/bin/activate', shell='/bin/bash')

However, that didn't leave the virtualenv activated as it would be when done manually.
To enable a virtualenv from the fabfile you need to run your commands as follows:

def task():
    # do some things outside the env if needed
    with prefix('source bin/activate'):
        # do some stuff inside the env
        local('pip install django-audiofield')

All the commands within the with block will be executed inside the virtualenv.
By default you are using the sh shell, and the source command is a bashism (that is, something that only works in bash).
To activate your environment, you need to execute it with bash directly: /bin/bash /path/to/bin/activate.
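The difference is easy to demonstrate outside Fabric with the standard subprocess module. This sketch only shows that bash accepts 'source' while a non-bash /bin/sh (dash on many Linux systems) rejects it, which is exactly the "source: not found" error above:

```python
import subprocess

# 'source' is a bash builtin; under bash, sourcing an empty file succeeds.
bash = subprocess.run(["/bin/bash", "-c", "source /dev/null"])
print(bash.returncode)  # 0

# Under a POSIX /bin/sh that is not bash, the same command fails with
# "source: not found", which is what Fabric's default shell runs into.
sh = subprocess.run(["/bin/sh", "-c", "source /dev/null"])
print(sh.returncode)  # non-zero when /bin/sh is dash; 0 if /bin/sh is bash
```

In Fabric terms, this is why passing shell='/bin/bash' to local(), as in the last update above, makes the source line itself succeed.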