I'm trying to create a Cloud Build trigger that tests a function before deploying it as a Cloud Function. So far I have managed to install requirements.txt and run pytest, but I get the following error:
/usr/local/lib/python3.7/site-packages/ghostscript/__init__.py:35: in <module>
    from . import _gsprint as gs
/usr/local/lib/python3.7/site-packages/ghostscript/_gsprint.py:515: in <module>
    raise RuntimeError('Can not find Ghostscript library (libgs)')
E   RuntimeError: Can not find Ghostscript library (libgs)
I have ghostscript in my requirements.txt file:
[...]
ghostscript==0.6
[...]
pytest==6.0.1
pytest-mock==3.3.1
Here is my deploy.yaml
steps:
- name: 'docker.io/library/python:3.7'
  id: Test
  entrypoint: /bin/sh
  dir: 'My_Project/'
  args:
    - -c
    - 'pip install -r requirements.txt && pytest pytest/test_mainpytest.py -v'
From the traceback, I understand that the Ghostscript library isn't installed in the Cloud Build environment, which is true.
Is there a way to install Ghostscript in a step of my deploy.yaml?
Edit-1:
So I tried to install Ghostscript using commands in a step. I tried apt-get gs and apt-get ghostscript, but unfortunately neither worked.
The real problem is that you are missing a C library; the Python package itself is installed by pip just fine. You should install that library with your system package manager. Here is an example for Ubuntu-based containers:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      apt update
      apt install ghostscript -y
      pip install -r requirements.txt
      pytest pytest/test_mainpytest.py -v
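To see why pip succeeds while the import fails: the ghostscript package is only a ctypes binding, and at import time it has to locate the shared C library. A minimal sketch of that lookup (not the package's exact code):

```python
from ctypes.util import find_library

# The ghostscript package binds to the Ghostscript C library (libgs)
# via ctypes at import time. find_library returns a library name such
# as "libgs.so.9" when the C library is installed, and None when it is
# missing -- which is what raises the RuntimeError in the traceback.
libgs = find_library("gs")
print(libgs)
```

So pip only installs the Python binding; the C library itself has to come from the container's package manager, as in the step above.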
I am building a Django project, and I am using GitHub Actions to run python manage.py test whenever I push. The problem is that the project uses the graphene-django package, which is available via pip install graphene-django. For some reason, this doesn't seem to work in the Action (it outputs an error). I have tried everything:
pip install graphene-django
pip install "graphene-django>=2.0"
pip install --user graphene-django
pip install --user "graphene-django>=2.0"
pip3 install graphene-django
pip3 install "graphene-django>=2.0"
pip3 install --user graphene-django
pip3 install --user "graphene-django>=2.0"
Some of these commands display a different error, but the most common is this:
Collecting promise>=2.1 (from graphene-django>=2.0)
Downloading https://files.pythonhosted.org/packages/cf/9c/fb5d48abfe5d791cd496e4242ebcf87a4bb2e0c3dcd6e0ae68c11426a528/promise-2.3.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'setuptools'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-5vr1pems/promise/
Here is my YAML file for the Action (with the last install attempt):
name: Testing
on: push
jobs:
  test_vote:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Django unit tests
        run: |
          pip3 install --user django
          pip3 install --user "graphene-django>=2.0"
          python3 manage.py test
        env:
          # Random key
          SECRET_KEY: '!nj1v)#-y)e21t^u#-6tk+%+#vyzn30dp+)xof4q*y8y&%=h9l'
Any help would be really appreciated, since I've been at this for about an hour, when in the course the teacher took about 5 minutes.
Thanks!
Install the setuptools module before installing graphene-django. The traceback shows that building the promise dependency runs python setup.py egg_info, which fails because setuptools is not available.
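Applied to the workflow from the question, that could look like this (a sketch keeping the question's pip3 --user style; the setuptools line is the only addition):

```yaml
- name: Run Django unit tests
  run: |
    pip3 install --user setuptools
    pip3 install --user django
    pip3 install --user "graphene-django>=2.0"
    python3 manage.py test
```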
I want to set up automatic checking of my code with cpplint in GitHub Actions.
I try to install it in the workflow file like this:
- name: Install cpplint
  working-directory: ${{runner.workspace}}/uast
  shell: bash
  run: |
    pip install wheel
    pip install cpplint
After this code block I try to run cpplint:
- name: cpplint
  working-directory: ${{runner.workspace}}/uast
  shell: bash
  run: cpplint --recursive --exclude=source/catch.hpp --filter=-legal/copyright,-build/include_subdir source/*
But after the successful installation (in the first block), I get "line 1: cpplint: command not found" in the second one.
Please try python -m cpplint:
- name: cpplint
  working-directory: ${{runner.workspace}}/uast
  shell: bash
  run: python -m cpplint --recursive --exclude=source/catch.hpp --filter=-legal/copyright,-build/include_subdir source/*
Scripts installed by pip are not necessarily placed on the system PATH, so the cpplint command is not found; invoking the module through python -m avoids this.
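An alternative sketch, in case you want to keep calling cpplint directly: with a user-level pip install the cpplint script typically lands in ~/.local/bin, and GitHub Actions lets you append to PATH for later steps via the $GITHUB_PATH file (the ~/.local/bin location is an assumption about where pip put the script on this runner):

```yaml
- name: Install cpplint
  working-directory: ${{runner.workspace}}/uast
  shell: bash
  run: |
    pip install wheel
    pip install cpplint
    # Make pip's user script directory visible to later steps
    echo "$HOME/.local/bin" >> "$GITHUB_PATH"
```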
I'm trying to get NLTK and WordNet working on a Lambda via CodeBuild.
It looks like it installs fine in CloudFormation, but I get the following error in the Lambda:
START RequestId: c660c446-e1c4-11e8-8047-15f59f1e002c Version: $LATEST
Unable to import module 'index': No module named 'nltk'
END RequestId: c660c446-e1c4-11e8-8047-15f59f1e002c
REPORT RequestId: c660c446-e1c4-11e8-8047-15f59f1e002c Duration: 2.10 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
However, when I check, the CodeBuild logs show it installing fine:
[Container] 2018/11/06 12:45:06 Running command pip install -U nltk
Collecting nltk
Downloading https://files.pythonhosted.org/packages/50/09/3b1755d528ad9156ee7243d52aa5cd2b809ef053a0f31b53d92853dd653a/nltk-3.3.0.zip (1.4MB)
Requirement already up-to-date: six in /usr/local/lib/python2.7/site-packages (from nltk)
Building wheels for collected packages: nltk
Running setup.py bdist_wheel for nltk: started
Running setup.py bdist_wheel for nltk: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/d1/ab/40/3bceea46922767e42986aef7606a600538ca80de6062dc266c
Successfully built nltk
Installing collected packages: nltk
Successfully installed nltk-3.3
Here is the actual python code:
import json
import datetime
import nltk
from nltk.corpus import wordnet as wn
And here is the YML file:
version: 0.2
phases:
  install:
    commands:
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Install nltk & WordNet
      - pip install -U nltk
      - python -m nltk.downloader wordnet
  pre_build:
    commands:
      # Discover and run unit tests in the 'tests' directory. For more information, see https://docs.python.org/3/library/unittest.html#test-discovery
      # - python -m unittest discover tests
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
Any idea why NLTK installs fine in CodeBuild but can't be imported in the Lambda? For reference, the code runs fine in the Lambda if you just remove NLTK.
I have a feeling this is a YML file issue, but I'm not sure what, given that NLTK installs fine.
NLTK was installed only locally, on the machine where the CodeBuild job was running. You need to copy NLTK into the CloudFormation deployment package. Your buildspec.yml will then look something like this:
install:
  commands:
    # Upgrade AWS CLI to the latest version
    - pip install --upgrade awscli
pre_build:
  commands:
    - virtualenv /venv
    # Activate the virtualenv so the packages below install into /venv
    - . /venv/bin/activate
    # Install nltk & WordNet
    - pip install -U nltk
    - python -m nltk.downloader wordnet
build:
  commands:
    - cp -r /venv/lib/python3.6/site-packages/. ./
    # Use AWS SAM to package the application by using AWS CloudFormation
    - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
Additional reading:
Create Deployment Package Using a Python Environment Created with Virtualenv
Ok, so thanks to laika for pointing me in the right direction.
This is a working deployment of NLTK & Wordnet to Lambda via CodeStar / CodeBuild. Some things to keep in mind:
1) You cannot use source venv/bin/activate, as it is not POSIX compliant; use . venv/bin/activate instead, as shown below.
2) You must set the path for NLTK as shown in the define directories section.
buildspec.yml
version: 0.2
phases:
  install:
    commands:
      # Upgrade AWS CLI & PIP to the latest version
      - pip install --upgrade awscli
      - pip install --upgrade pip
      # Define Directories
      - export HOME_DIR=`pwd`
      - export NLTK_DATA=$HOME_DIR/nltk_data
  pre_build:
    commands:
      - cd $HOME_DIR
      # Create VirtualEnv to package for lambda
      - virtualenv venv
      - . venv/bin/activate
      # Install Supporting Libraries
      - pip install -U requests
      # Install WordNet
      - pip install -U nltk
      - python -m nltk.downloader -d $NLTK_DATA wordnet
      # Output Requirements
      - pip freeze > requirements.txt
      # Unit Tests
      # - python -m unittest discover tests
  build:
    commands:
      - cd $HOME_DIR
      - mv $VIRTUAL_ENV/lib/python3.6/site-packages/* .
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
If anyone has any improvements, let me know. It's working for me.
I tried the pysaml2 and python-saml libraries on Google Cloud Platform, but both internally use libraries with C extensions or Python wrappers around C libraries, which are incompatible with App Engine, since App Engine blocks C-implemented libraries in its ecosystem.
Has anyone implemented the SAML2 protocol on App Engine using Python?
The pysaml2 documentation suggests that it's a pure Python implementation, but it also uses libraries like pycryptodome which need the _ctypes library.
Below is the error:
File "/home/***/anaconda2/lib/python2.7/ctypes/__init__.py", line 10, in <module>
    from _ctypes import Union, Structure, Array
File "/home/***/sdks/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 963, in load_module
    raise ImportError('No module named %s' % fullname)
ImportError: No module named _ctypes
Please suggest some other approaches if possible.
I figured out what to do if you want to use C libraries in the App Engine environment.
First of all, you have to use the App Engine flexible environment instead of the standard environment, with a custom runtime. A sample yaml file is posted below.
app.yaml
runtime: custom
env: flex
api_version: 1
handlers:
- url: /.*
  script: main.app
The second thing you need to do is choose a proper base image to build from and install the necessary libraries.
Example Dockerfile:
FROM gcr.io/google_appengine/python-compat-multicore
RUN apt-get update -y
RUN apt-get install -y python-pip build-essential libssl-dev libffi-dev python-dev libxml2-dev libxslt1-dev xmlsec1
RUN apt-get install -y curl unzip
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
RUN curl https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.40.zip > /tmp/google_appengine_1.9.40.zip
RUN unzip /tmp/google_appengine_1.9.40.zip -d /usr/local/gae
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
ENV PATH $PATH:/usr/local/gae/google_appengine/
COPY . /app
WORKDIR /app
ENV MODULE_YAML_PATH app.yaml
RUN pip install -r requirements.txt
I'm trying to set up a Django app on Docker with Nginx, uWSGI and Postgres. I found this great guide on setting up Compose for Django and Postgres: https://docs.docker.com/v1.5/compose/django/
However, now I need to add Nginx and uWSGI. I've tried using the files of this repo (https://github.com/baxeico/django-uwsgi-nginx) with the Compose setup from the Docker docs, but without success, sadly.
This is what happens when I run docker-compose run web:
Step 17 : RUN pip install -r /home/docker/code/app/requirements.txt
---> Running in e1ec89e80d9c
Collecting Django==1.9.1 (from -r /home/docker/code/app/requirements.txt (line 1))
/usr/local/lib/python2.7/dist-packages/pip-7.1.2-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading Django-1.9.1-py2.py3-none-any.whl (6.6MB)
Collecting psycopg2 (from -r /home/docker/code/app/requirements.txt (line 2))
Downloading psycopg2-2.6.1.tar.gz (371kB)
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-XRgbSA/psycopg2
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r /home/docker/code/app/requirements.txt' returned a non-zero code: 1
This is my Dockerfile:
FROM ubuntu:precise
RUN echo "deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted" | tee -a /etc/apt/sources.list.d/precise-updates.list
# update packages
RUN apt-get update
# install required packages
RUN apt-get install -y python python-dev python-setuptools python-software-properties
RUN apt-get install -y sqlite3
RUN apt-get install -y supervisor
# add nginx stable ppa
RUN add-apt-repository -y ppa:nginx/stable
# update packages after adding nginx repository
RUN apt-get update
# install latest stable nginx
RUN apt-get install -y nginx
# install pip
RUN easy_install pip
# install uwsgi now because it takes a little while
RUN pip install uwsgi
# install our code
ADD . /home/docker/code/
# setup all the configfiles
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /home/docker/code/nginx-app.conf /etc/nginx/sites-enabled/
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
# run pip install
RUN pip install -r /home/docker/code/app/requirements.txt
RUN cd /home/docker/code/app && ./manage.py syncdb --noinput
EXPOSE 80
CMD ["supervisord", "-n"]
And the docker-compose.yml:
db:
  image: postgres
web:
  build: .
  command: python vms/manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
There are also files named nginx-app.conf, supervisor-app.conf, uwsgi_params and uwsgi.ini, all from the aforementioned repo. requirements.txt contains Django 1.9.1, psycopg2 and requests.
If there is a better alternative to this Frankenstein project, I'd love to hear it.
On Ubuntu, make sure that python-dev and libpq-dev have been installed using apt-get before trying to install psycopg2 using pip.
See the installation docs for more info.
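Applied to the Dockerfile from the question, that means adding the build dependencies before the pip install step, roughly like this (python-dev is already installed there; libpq-dev is the addition, providing the pg_config executable the error complains about):

```dockerfile
# build dependencies for psycopg2: Python headers plus the PostgreSQL
# client headers and pg_config
RUN apt-get install -y python-dev libpq-dev
# the existing pip install step can then build psycopg2
RUN pip install -r /home/docker/code/app/requirements.txt
```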