How to add Travis environment variables to Tox - django

My project uses environment variables and I am trying to use them in Tox. According to https://stackoverflow.com/a/37522926/3782963 I have to set passenv in tox.ini, but when I do that, I get an error:
Collecting django<1.10,>=1.9
Using cached Django-1.9.13-py2.py3-none-any.whl
Collecting AUTHY_API
Could not find a version that satisfies the requirement AUTHY_API (from versions: )
No matching distribution found for AUTHY_API
It looks like Tox thinks that AUTHY_API is a distribution, whereas it is actually an environment variable.
My configurations are:
.travis.yml:
language: python
python:
  - 3.5
  - 3.6
services: postgresql
addons:
  postgresql: "9.4"
before_script:
  - psql -c "CREATE DATABASE mydb;" -U postgres
branches:
  only:
    - master
    - v3
install:
  - pip install tox-travis
script:
  - tox
env:
  - TOXENV=django19
  - TOXENV=django110
  - TOXENV=coverage
notifications:
  email: false
tox.ini:
[tox]
envlist = django19, django110
skipsdist = True

[testenv]
commands = pytest
setenv =
    DJANGO_SETTINGS_MODULE=gollahalli_com.settings
    PYTHONPATH={toxinidir}

[base]
deps =
    -r{toxinidir}/requirements-testing.txt
passenv =
    AUTHY_API
    cloudinary_api
    cloudinary_api_secret
    DEBUG
    SECRET_KEY
    GITHUB_KEY

[testenv:django19]
deps =
    django>=1.9, <1.10
    {[base]deps}
    {[base]passenv}

[testenv:django110]
deps =
    django>=1.10, <1.11
    {[base]deps}
    {[base]passenv}

[testenv:coverage]
commands =
    ; coverage run --branch --omit={envdir}/*,test_app/*.py,*/migrations/*.py {envbindir}/manage.py test
    pytest --cov=./
    codecov
deps =
    {[testenv:django110]deps}
    {[base]passenv}
I am not sure what is wrong here. Help!

Here is the bug:
deps =
    …
    {[base]passenv}
You are passing the list of environment variables as dependencies, so pip tries to install packages named after them. Move passenv to [testenv] and remove {[base]passenv} from all environments, as in the corrected file below.
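A corrected tox.ini along those lines (a sketch assembled only from the file above; the django19/django110/coverage environments inherit passenv from [testenv] automatically):

[tox]
envlist = django19, django110
skipsdist = True

[testenv]
commands = pytest
setenv =
    DJANGO_SETTINGS_MODULE=gollahalli_com.settings
    PYTHONPATH={toxinidir}
passenv =
    AUTHY_API
    cloudinary_api
    cloudinary_api_secret
    DEBUG
    SECRET_KEY
    GITHUB_KEY

[base]
deps =
    -r{toxinidir}/requirements-testing.txt

[testenv:django19]
deps =
    django>=1.9, <1.10
    {[base]deps}

[testenv:django110]
deps =
    django>=1.10, <1.11
    {[base]deps}

[testenv:coverage]
commands =
    pytest --cov=./
    codecov
deps =
    {[testenv:django110]deps}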

Related

Error response from daemon: failed to parse Dockerfile: Syntax error - can't find = in "\"\"". Must be of the form: name=value

I'm trying to build a Docker image and it keeps returning the error in the title:
Dockerfile:
FROM java:8
ADD build/libs/selfservingportal.jar selfservingportal.jar
ENV SSL_CERT= ""
ENV SSL_KEY=""
ENV DEVSECOPS_PLATFORM_TERRAFORM_API_TOKEN=""
ENV DEVSECOPS_PLATFORM_TERRAFORM_USER=""
ENV DEVSECOPS_PLATFORM_TERRAFORM_BASEURL=""
ENV DEVSECOPS_PLATFORM_CREATED_TERRAFORM_API_TOKEN=""
ENV GCP_CAPTCHA_TOKEN=""
ENV JENKINS_BASE=""
ENV JENKINS_EXTENSION=""
ENV JENKINS_APPLICATIONJOB=""
ENV JENKINS_INFRASTRUCTUREJOB=""
ENV DEVSECOPS_PLATFORM_CREATED_TERRAFORM_USER=""
ENV ACCESS_KEY=""
ENV SECRET_ACCESS_KEY=""
ENV AWS_REGION=""
ENV AWS_ACCESSKEY=""
ENV AWS_SECRETKEY=""
ENV AZURE_CLIENT_ID=""
ENV AZURE_CLIENT_SECRET_KEY=""
ENV AZURE_SUBSCRIPTION_ID=""
ENV AZURE_TENANT_ID=""
ENTRYPOINT ["java","-jar","selfservingportal.jar"]
Not sure what the problem with the syntax is here. Is it because the values are empty? Very confused. Any help is appreciated.
Finally noticed the stray space after the = in:
ENV SSL_CERT= ""
Removing this solved the issue.
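The reason: when an ENV argument contains =, Docker parses the whole line as whitespace-separated name=value pairs. The stray space splits SSL_CERT= "" into two tokens, and the second token, "", has no = in it, which is exactly what the error message complains about. The fixed line:
ENV SSL_CERT=""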

Pytest not working with Django and Docker - AssertionError: local('/dev/console') is not a file

I'm running a Django application in Docker and everything works fine, but when I try to run tests it fails with a quite ambiguous error.
running docker-compose run djangoapp coverage run -m pytest
result:
Creating djangoapp_run ... done
================================================= test session starts ==================================================
platform linux -- Python 3.8.5, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /
collected 0 items / 1 error
======================================================== ERRORS ========================================================
____________________________________________ ERROR collecting test session _____________________________________________
usr/local/lib/python3.8/dist-packages/_pytest/runner.py:311: in from_call
result: Optional[TResult] = func()
usr/local/lib/python3.8/dist-packages/_pytest/runner.py:341: in <lambda>
call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
usr/local/lib/python3.8/dist-packages/_pytest/main.py:710: in collect
for x in self._collectfile(path):
usr/local/lib/python3.8/dist-packages/_pytest/main.py:546: in _collectfile
assert (
E AssertionError: local('/dev/console') is not a file (isdir=False, exists=True, islink=False)
=============================================== short test summary info ================================================
ERROR - AssertionError: local('/dev/console') is not a file (isdir=False, exists=True, islink=False)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=================================================== 1 error in 0.33s ===================================================
docker-compose:
version: '3'
services:
  djangoapp:
    build: .
    container_name: djangoapp
    ports:
      - '8000:80'
      - '1433:1433'
    volumes:
      - ./djangoapp:/var/www/html/djangoapp
    environment:
      - PYTHONUNBUFFERED=0
pytest collects tests recursively, and the default working directory in the container is / (note rootdir: / in the output). Combining the two, pytest tries to collect the entire filesystem, including special files like /dev/console. Set the working directory correctly:
...
    environment:
      - PYTHONUNBUFFERED=0
    working_dir: /var/www/html/djangoapp
...
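Alternatively, you can override the working directory for a single run via the --workdir flag of docker-compose run:
docker-compose run --workdir="/var/www/html/djangoapp" djangoapp coverage run -m pytest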

Deploy shiny with shinyproxy - no app showing

I've developed a Shiny app and I'm trying to do a first lightweight deploy using ShinyProxy.
The installation seems fine: I've installed Docker and Java.
I thought that building a package that wraps the app and other functions would be a good idea, so I developed a package (CI); CI::launch_application is basically a wrapper around the runApp function of the shiny package. This is the code:
launch_application <- function(launch.browser = interactive(), ...) {
  runApp(
    appDir = system.file("app", package = "CI"),
    launch.browser = launch.browser,
    ...
  )
}
I successfully built the Docker image with this Dockerfile:
FROM rocker/r-base:latest

## Install required dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ## for R package 'curl'
        libcurl4-gnutls-dev \
        apt-utils \
        ## for R package 'xml2'
        libxml2-dev \
        ## for R package 'openssl'
        libssl-dev \
        zlib1g-dev \
        default-jdk \
        default-jre \
    && apt-get clean \
    && R CMD javareconf \
    && rm -rf /var/lib/apt/lists/

## Install major fixed R dependencies
# - they will always be needed and we want them in a dedicated layer,
#   as opposed to getting them dynamically via `remotes::install_local()`
RUN install2.r --error \
    shiny \
    dplyr \
    devtools \
    rJava \
    RJDBC

# copy the app to the image
RUN mkdir /root/CI
COPY . /root/CI

# Install CI
RUN install2.r --error remotes \
    && R -e "remotes::install_local('/root/CI')"

# Set host and port
RUN echo "options(shiny.port = 80, shiny.host = '0.0.0.0')" >> /usr/local/lib/R/Rprofile.site

EXPOSE 80

CMD ["R", "-e", "CI::launch_application()"]
This is my application.yml file
proxy:
  title:
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8080
  admin-groups: scientists
  users:
    - name: jack
      password: password
      groups: scientists
    - name: jeff
      password: password
      groups: mathematicians
  authentication: simple
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
  specs:
    - id: home
      display-name: Customer Intelligence
      description: Segment your customer
      container-cmd: ["R", "-e", "CI::launch_application()"]
      container-image: company/image
      access-groups: scientist
logging:
  file:
    shinyproxy.log
When I launch java -jar shinyproxy.jar and visit the URL with the port I exposed, I see a login mask.
I log in with simple authentication (the login is successful according to shinyproxy.log), but no app and no list of apps is shown.
When I launch the app locally, everything is fine.
Thanks
There is a misprint in the allowed user group in application.yml (it should be scientists, not scientist):
access-groups: scientists
Dzimitry is right. It was a typo: scientists, not scientist.
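For reference, the corrected spec entry (only the last line changes):
specs:
  - id: home
    display-name: Customer Intelligence
    description: Segment your customer
    container-cmd: ["R", "-e", "CI::launch_application()"]
    container-image: company/image
    access-groups: scientists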

uWSGI error every time I use the requests module in Django app on Docker

I'm running a Django app with uWSGI in Docker with docker-compose. I get the same error every time I:
1. send a POST request with AJAX, and
2. use Python's requests module while handling said request in my view, i.e. r = requests.get(some_url)
uWSGI says the following:
!!! uWSGI process 13 got Segmentation Fault !!!
DAMN ! worker 1 (pid: 13) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 24)
spawned 4 offload threads for uWSGI worker 1
The console in the browser says net::ERR_EMPTY_RESPONSE
I've tried using the requests module in different places, and wherever I put it I get the same segmentation fault. I'm also able to run everything fine outside of Docker with no errors, so I've narrowed it down to: Docker + requests module = error.
Is there something that could be blocking the requests sent with the requests module from within the Docker container? Thanks in advance for your help.
Here's my uwsgi.ini file:
[uwsgi]
chdir = %d
module = my_project.wsgi:application
master = true
processes = 2
http = 0.0.0.0:8000
vacuum = true
pidfile = /tmp/my_project.pid
daemonize = %d/my_project.log
check-static = %d
static-expires = /* 7776000
offload-threads = %k
uid = 1000
gid = 1000
# there is no /etc/mime.types on the docker Arch Linux image
mime-file = %d/mime.types
Dockerfile:
FROM alpine:3.8
ENV PYTHONUNBUFFERED 1
RUN mkdir /my_project
WORKDIR /my_project
RUN apk add build-base python3-dev py3-pip python3
# deps for python cryptography
RUN apk add libffi-dev musl-dev openssl-dev
# dep for uwsgi
RUN apk add linux-headers
ADD requirements.txt /my_project/
RUN pip3 install -r requirements.txt
ADD . /my_project/
ENTRYPOINT ./start.sh
docker-compose.yml:
version: '3'
services:
  web:
    build: .
    entrypoint: ./start.sh
    volumes:
      - .:/my_project
    ports:
      - "8000:8000"
    environment:
      - DEBUG_LEVEL=INFO
    network_mode: "host"
start.sh:
#!/bin/sh
echo '' > logfile.log
uwsgi --ini uwsgi.ini
tail -f logfile.log
Solution: change the base image from Alpine to Ubuntu 16.04; everything works fine now.
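A likely culprit is Alpine's musl libc: uWSGI workers and some C extensions are known to hit musl's small default thread stack size, which can show up as exactly this kind of segfault; a glibc-based image sidesteps that class of problem. A minimal sketch of the switched Dockerfile, where the Ubuntu package names are my assumed equivalents of the Alpine ones:

FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
RUN mkdir /my_project
WORKDIR /my_project
# build tools, Python, and the cryptography build deps
# (assumed equivalents of build-base, python3-dev, py3-pip,
# libffi-dev, musl-dev, openssl-dev, linux-headers)
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential python3-dev python3-pip \
    libffi-dev libssl-dev \
    && rm -rf /var/lib/apt/lists/*
ADD requirements.txt /my_project/
RUN pip3 install -r requirements.txt
ADD . /my_project/
ENTRYPOINT ./start.sh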

Superset in production

I've been trying to work out how best to productionise Superset, or at least get it running as a daemon. I created a systemd service with the following:
[Unit]
Description=Superset
[Service]
Type=simple
WorkingDirectory=/home/XXXX/Documents/superset/venv
ExecStart=/home/XXXX/Documents/superset/venv/bin/superset runserver
[Install]
WantedBy=multi-user.target
The last error I got was that gunicorn cannot be found. I don't know what else I am missing; is there another way to set it up?
I was able to set it up, after a bunch of searching and trial and error, with Supervisor, which is a Python 2 program but can run any command (including other Python versions in other virtual environments).
I'm running it on an Ubuntu 16 VPS. After creating an environment and installing Supervisor, you create a configuration file; mine looks like this:
[supervisord]
logfile = %(ENV_HOME)s/sdaprod/supervisor/supervisord.log
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
pidfile = %(ENV_HOME)s/sdaprod/supervisor/supervisord.pid
nodaemon = false
minfds = 1024
minprocs = 200
umask = 022
identifier = supervisor
directory = %(ENV_HOME)s/sdaprod/supervisor
nocleanup = true
childlogdir = %(ENV_HOME)s/sdaprod/supervisor
strip_ansi = false
[unix_http_server]
file=/tmp/supervisor.sock
chmod = 0777
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:superset]
command = %(ENV_HOME)s/miniconda3/envs/superset/bin/superset runserver
directory = %(ENV_HOME)s/sdaprod
environment = PATH='%(ENV_PATH)s:%(ENV_HOME)s/miniconda3/envs/superset/bin',PYTHONPATH='%(ENV_PYTHONPATH)s:%(ENV_HOME)s/sdacore:%(ENV_HOME)s/sdaprod'
And then you just run supervisord from an environment that has it installed.
The %(ENV_<>)s entries are environment variables. This is my first time doing this, so I absolutely cannot vouch for this approach's efficiency, but it does work.
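For example, assuming the file above is saved as supervisord.conf (the file name is my assumption):
supervisord -c supervisord.conf
supervisorctl -c supervisord.conf status superset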