How do I run redis on Travis CI? - django

I am practicing unit testing with Django.
In items/tests.py:
class NewBookSaleTest(SetUpLogInMixin):
    def test_client_post_books(self):
        send_post_data_post = self.client.post(
            '/booksale/',
            data={
                'title': 'Book_A',
            }
        )
        new_post = ItemPost.objects.first()
        self.assertEqual(new_post.title, 'Book_A')
In views/booksale.py:
class BookSale(LoginRequiredMixin, View):
    login_url = '/login/'

    def get(self, request):
        [...]

    def post(self, request):
        title = request.POST.get('title')
        saler = request.user
        created_bookpost = ItemPost.objects.create(
            user=saler,
            title=title,
        )
        # redis + celery task queue
        auto_indexing = UpdateIndexTask()
        auto_indexing.delay()
        return redirect(
            [...]
        )
When I run the unit tests, a redis connection error is raised:
redis.exceptions.ConnectionError
I know the error goes away when redis-server and celery are running, but I can't run redis-server and celery in Travis CI.
So I found this link and tried inserting this code into .travis.yml:
language: python
python:
  - 3.5.1
addons:
  postgresql: "9.5.1"
install:
  - pip install -r requirement/development.txt
service:
  - redis-server
# command to run tests
script:
  - pep8
  - python wef/manage.py makemigrations users items
  - python wef/manage.py migrate
  - python wef/manage.py collectstatic --settings=wef.settings.development --noinput
  - python wef/manage.py test users items --settings=wef.settings.development
but it shows the same error.
So I found the next link and tried adding:
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'
but it shows the same error...
How can I run redis-server in Travis CI?

If you have not solved the problem yet, here is a solution.
Remove the service line. Redis is provided by the test environment as a default component, so
service:
  - redis-server
will be translated as:
service redis start
In this case we want to customize redis to add password auth, so we don't need Travis CI to start the redis service itself. Just use before_script.
After all that, your .travis.yml should be:
language: python
python:
  - 3.5.1
addons:
  postgresql: "9.5.1"
install:
  - pip install -r requirement/development.txt
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'
# command to run tests
script:
  - pep8
  - python wef/manage.py makemigrations users items
  - python wef/manage.py migrate
  - python wef/manage.py collectstatic --settings=wef.settings.development --noinput
  - python wef/manage.py test users items --settings=wef.settings.development
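If the same error still appears, it can help to confirm in the build log that redis actually came up and accepts the password before the tests run. One way (an assumption on my part, not from the original post; it relies on redis-cli being available, which it normally is once redis-server is installed) is an extra before_script line:

```yaml
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'
  # should print PONG if the server is up and the password is accepted
  - redis-cli -p 6379 -a 'secret' ping
```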

Related

Error at setup on Travis with django-pytest, docker

I got errors running pytest on Travis CI but have no idea how to fix them. When I run the command docker-compose run --rm api sh -c "pytest && flake8" on Docker locally, all the tests pass. Could anyone give me a hint? I have some fixtures in conftest.py as well.
A part of the error information
api/tests/test_order_items.py EEEEE [ 45%]
api/tests/test_orders.py EEEEEE [100%]
==================================== ERRORS ====================================
__________ ERROR at setup of TestOrderItemModel.test_list_order_items __________
self = <django.db.backends.utils.CursorWrapper object at 0x7efe0e94d3d0>
sql = 'SELECT "orders_orderitem"."id", "orders_orderitem"."order_id", "orders_orderitem"."pizza_type", "orders_orderitem"."pizza_size", "orders_orderitem"."quantity" FROM "orders_orderitem" ORDER BY "orders_orderitem"."id" ASC'
params = ()
ignored_wrapper_args = (False, {'connection': <django.db.backends.postgresql.base.DatabaseWrapper object at 0x7efe0fbb4fd0>, 'cursor': <django.db.backends.utils.CursorWrapper object at 0x7efe0e94d3d0>})
    def _execute(self, sql, params, *ignored_wrapper_args):
        self.db.validate_no_broken_transaction()
        with self.db.wrap_database_errors:
            if params is None:
                return self.cursor.execute(sql)
            else:
>               return self.cursor.execute(sql, params)
E psycopg2.errors.UndefinedColumn: column orders_orderitem.order_id does not exist
E LINE 1: ...SOR WITH HOLD FOR SELECT "orders_orderitem"."id", "orders_or...
.travis.yml
language: python
python:
  - "3.7"
services:
  - docker
before_script: pip install docker-compose
script:
  - docker-compose run --rm api sh -c "pytest && flake8"
Pipfile
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]
flake8 = "==3.7.9"
autopep8 = "==1.4.4"
pytest = "==5.2.1"
pytest-django = "==3.6.0"

[packages]
django = "==2.2.7"
djangorestframework = "==3.10.3"
psycopg2 = "==2.8.4"

[requires]
python_version = "3.7"

Dockerized Django: how to manage sql scripts in migrations?

Fairly new to Docker, I am trying to add the execution of a custom SQL script (triggers and functions) to Django's migration process, and I am starting to feel a bit lost. Overall, what I am trying to achieve follows this pretty clear tutorial. In this tutorial, migrations are achieved by the execution of an entrypoint script. In the Dockerfile:
# run entrypoint.sh
ENTRYPOINT ["/usr/src/my_app/entrypoint.sh"]
Here is the entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

# tried several with and without combinations
python manage.py flush --no-input
python manage.py makemigrations my_app
python manage.py migrate

exec "$@"
So far so good. Turning to the question of integrating the execution of custom SQL scripts in the migration process, most articles I read (this one for instance) recommend creating an empty migration to add the execution of SQL statements. Here is what I have in
my_app/migrations/0001_initial_data.py
import os
from django.db import migrations, connection

def load_data_from_sql(filename):
    file_path = os.path.join(os.path.dirname(__file__), '../sql/', filename)
    sql_statement = open(file_path).read()
    with connection.cursor() as cursor:
        cursor.execute(sql_statement)

class Migration(migrations.Migration):
    dependencies = [
        ('my_app', '0001_initial'),
    ]
    operations = [
        migrations.RunPython(load_data_from_sql('my_app_base.sql'))
    ]
As stated by its dependencies, this step depends on the initial one (0001_initial.py):
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion

class Migration(migrations.Migration):
    initial = True
    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]
    operations = [
        migrations.CreateModel(
            name='Unpayed',
            fields=[
                etc etc
[The Issue] However, even when I try to manually migrate (docker-compose exec web python manage.py makemigrations my_app), I get the following error because the db in the postgresql container is empty:
  File "/usr/src/my_app/my_app/migrations/0001_initial_data.py", line 21, in Migration
    migrations.RunPython(load_data_from_sql('my_app_base.sql'))
  File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 82, in _execute
    ....
    return self.cursor.execute(sql)
django.db.utils.ProgrammingError: relation "auth_user" does not exist
[What I do not understand] However, when I log into the container, remove 0001_initial_data.py and run ./entrypoint.sh, everything works like a charm and the tables are created. I can then add 0001_initial_data.py back, run entrypoint.sh again and get my functions. Same when I remove this file before running docker-compose up -d --build: the tables are created.
I feel like I am missing an obvious and easier way around trying to integrate SQL script migrations in this canonical way. All I need is for this script to run after the 0001_initial migration is over. How would you do it?
[edit] docker-compose.yml:
version: '3.7'

services:
  web:
    build:
      context: ./my_app
      dockerfile: Dockerfile
    command: python /usr/src/my_app/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./my_app/:/usr/src/my_app/
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY='o##xO=jrd=p0^17svmYpw!22-bnm3zz*%y(7=j+p*t%ei-4pi!'
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=N0tTh3D3favlTpAssw0rd
      - SQL_HOST=db
      - SQL_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:
django:2.2
python:3.7
I believe the issue has to do with you naming the migration file, and manually setting your dependencies, with the same prefix "0001". The reason I say this is that when you do reverse migrations, you can simply reference the prefix; i.e. if you wanted to go from your 7th migration to your 6th migration, the command looks like this: python manage.py migrate my_app 0006. Either way, I would try deleting the file and creating a new migration via python manage.py makemigrations my_app --empty, then moving your code into that file. This should also write the dependencies for you.
The error message, alongside the test you ran by adding the migration file afterwards, is indicative of the issue though: somehow the initial migrations aren't running before the other one. I would also try dropping your DB, as it may have persisted some bad state: ./manage.py sqlflush
[The easiest way I could find] I simply disentangled Django migrations from the creation of custom functions in the DB. Migrations are run first so that the tables exist when creating the functions. Here is the entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py flush --no-input
python manage.py migrate
# add custom sql functions to db
cat my_app/sql/my_app_base.sql | python manage.py dbshell
python manage.py collectstatic --no-input

exec "$@"
Keep in mind that manage.py dbshell requires a postgresql-client to run. I just needed to add it in the Dockerfile:
# pull official base image
FROM python:3.7-alpine
...........
# install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev postgresql-client \
    && pip install psycopg2 \
    && apk del build-deps

Flask and neomodel: ModelDefinitionMismatch

I am getting a neomodel.exceptions.ModelDefinitionMismatch while trying to build a simple Flask app connected to the bolt port of Neo4j.
import json

from flask import Flask, jsonify, request
from neomodel import StringProperty, StructuredNode, config

class Item(StructuredNode):
    __primarykey__ = "name"
    name = StringProperty(unique_index=True)

def create_app():
    app = Flask(__name__)
    config.DATABASE_URL = 'bolt://neo4j:test@db:7687'

    @app.route('/', methods=['GET'])
    def get_all_items():
        return jsonify({'items': [item.name for item in Item.nodes]})

    @app.route('/', methods=['POST'])
    def create_item():
        item = Item()
        item.name = json.loads(request.data)['name']
        item.save()
        return jsonify({'item': item.__dict__})

    return app
Doing a POST request works; I can check in the database that the items actually are added! But the GET request returns:
neomodel.exceptions.ModelDefinitionMismatch: Node with labels Item does not resolve to any of the known objects
I'm using Python 3.7.0, Flask 1.0.2 and neomodel 3.3.0.
update
To give the full problem: I run the application in a Docker container with docker-compose in DEBUG mode.
The Dockerfile:
FROM continuumio/miniconda3
COPY . /app
RUN pip install Flask gunicorn neomodel==3.3.0
EXPOSE 8000
CMD gunicorn --bind=0.0.0.0:8000 - "app.app:create_app()"
The docker-compose file:
# for local development
version: '3'

services:
  db:
    image: neo4j:latest
    environment:
      NEO4J_AUTH: neo4j/test
    networks:
      - neo4j_db
    ports:
      - '7474:7474'
      - '7687:7687'
  flask_svc:
    build: .
    depends_on:
      - 'db'
    entrypoint:
      - flask
      - run
      - --host=0.0.0.0
    environment:
      FLASK_DEBUG: 1
      FLASK_APP: app.app.py
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'
    networks:
      - neo4j_db

networks:
  neo4j_db:
    driver: bridge
And I run it with:
docker-compose up --build -d
Try using neomodel==3.2.9. I had a similar issue and rolled back the version to get it to work.
Here's the commit that broke things.
It looks like they introduced _NODE_CLASS_REGISTRY under neomodel.Database, an object which is meant to be a singleton. But with Flask it's not necessarily a singleton, because Flask keeps instantiating new instances of Database with an empty _NODE_CLASS_REGISTRY.
I am not sure how to get this to work with 3.3.0.
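A purely illustrative sketch of the failure mode described above (this is a toy model, not neomodel's actual code): each Database instance carries its own registry, so labels registered against one instance cannot be resolved by a freshly created one:

```python
class Database:
    """Toy stand-in for neomodel.Database with a per-instance class registry."""

    def __init__(self):
        self._NODE_CLASS_REGISTRY = {}

    def register(self, label, cls):
        self._NODE_CLASS_REGISTRY[label] = cls

    def resolve(self, label):
        # mirrors ModelDefinitionMismatch: an unknown label cannot resolve
        if label not in self._NODE_CLASS_REGISTRY:
            raise LookupError(
                f"Node with labels {label} does not resolve to any known object"
            )
        return self._NODE_CLASS_REGISTRY[label]

db1 = Database()
db1.register("Item", object)   # label known to db1
db2 = Database()               # fresh instance, empty registry
# db2.resolve("Item") raises LookupError, analogous to the error above
```

If the registry lived on the class (a true singleton), the second instance would still see "Item"; that is the behaviour the 3.3.0 change apparently broke under Flask's reloader.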

Not able to create django superuser in aws beanstalk either through container command or manage.py

I have been stuck at a bottleneck which I have tried to resolve using the official docs and other answers here on Stack Overflow, but I am still not able to create a Django superuser programmatically in the beanstalk environment.
Current state -
a. The application is getting deployed smoothly and I am able to access the database from my UI application; basically, entries are getting made in the other tables I have in the application.
How I have tried to create the superuser -
a. By passing container commands -
Option 1 -
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
commands:
  super_user:
    command: "source /opt/python/run/venv/bin/activate && python <appname>/createuser.py"
    leader_only: true
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "<Appname>.settings"
    PYTHONPATH: "/opt/python/current/app:$PYTHONPATH"
In the logs, I didn't see it trying to run the custom command.
Option 2 -
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
  03_createsuperuser:
    command: "source /opt/python/run/venv/bin/activate && django-admin.py createsuperuser"
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "<appname>.settings"
    PYTHONPATH: "/opt/python/current/app:$PYTHONPATH"
For this, I created a createsuperuser.py file under /management/commands/, with an __init__.py in both folders and the createsuperuser.py under commands -
from django.core.management.base import BaseCommand
from django.contrib.auth.models import User

class Command(BaseCommand):
    def handle(self, *args, **options):
        if not User.objects.filter(username="admin").exists():
            User.objects.create_superuser("admin", "admin@gmail.com", "admin")
On this, I got the following message from the logs -
Superuser creation skipped due to not running in a TTY. You can run `manage.py createsuperuser` in your project to create one manually.
My queries are -
Why am I not able to create a superuser from the command line of my virtual env? There I am getting a message like this -
raise ImproperlyConfigured("settings.DATABASES is improperly configured. "
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
A bit weird, considering the makemigrations command is working fine. And when I echo $DJANGO_SETTINGS_MODULE, I get the right setting:
appname.settings
Let me know where I am going wrong with the superuser creation.
I solved this problem recently with one of my sample app deployments in beanstalk. I mostly followed the official documentation from this link.
In your django app folder, create a python package 'management'.
Create another package inside the management package: 'commands'.
Create a python file in the commands package: mysuperuser.py
import os
from django.core.management.base import BaseCommand
from django.contrib.auth.models import User

class Command(BaseCommand):
    def handle(self, *args, **options):
        if not User.objects.filter(username='myuser').exists():
            User.objects.create_superuser('myuser',
                                          'myuser@myemail.com',
                                          'mypassword')
In your django-migrate.config file, add a second command:
02_create_superuser_for_django_admin:
  command: "python manage.py mysuperuser"
  leader_only: true
Then run python manage.py collectstatic and eb deploy. Doing this created the superuser for me. I didn't have to add any PYTHONPATH as described in some answers available online.
Your custom file is named "createsuperuser.py", which is the same as the Django command, and that collision is what's causing the issue. Use "createsu.py" for the file name, then be sure to change the config file to also use "createsu".
I spent ages working out how to do this, and this is by far the simplest and most secure way. Create the file .platform > hooks > postdeploy > 01_migrate.sh and input the below:
#!/bin/bash
source /var/app/venv/*/bin/activate && { python manage.py createsuperuser --noinput; }
You can then add DJANGO_SUPERUSER_PASSWORD, DJANGO_SUPERUSER_USERNAME and DJANGO_SUPERUSER_EMAIL to the configuration section of the application environment, and it will know these are to be used since we have specified --noinput.
Then add the below to the folder .ebextensions > django.config. This just gets round permission issues in running 01_migrate.sh:
container_commands:
  01_chmod1:
    command: "chmod +x .platform/hooks/postdeploy/01_migrate.sh"
That will create your superuser in a secure way; with the same logic you can also run migrations and collectstatic by adding them to the 01_migrate.sh file.
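As a sketch of that last point (an assumption on my part: manage.py sits at the app root, as in a standard Elastic Beanstalk Python deployment), the same hook extended with migrations and static collection could look like:

```shell
#!/bin/bash
# .platform/hooks/postdeploy/01_migrate.sh (sketch)
source /var/app/venv/*/bin/activate
python manage.py migrate --noinput
python manage.py collectstatic --noinput
# uses the DJANGO_SUPERUSER_* environment variables; createsuperuser exits
# non-zero if the user already exists, so || true keeps redeploys from failing
python manage.py createsuperuser --noinput || true
```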
I have a slightly simpler version of @jimbo's answer. Inside .ebextensions/db-migrate.config I have the following:
container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
    leader_only: true
  02_createsuperuser:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py createsuperuser --noinput"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: <appname>.settings
The key line there is the 02_createsuperuser container command. Once you've got that, you can set the DJANGO_SUPERUSER_PASSWORD, DJANGO_SUPERUSER_USERNAME and DJANGO_SUPERUSER_EMAIL environment variables in the environment, deploy, and you'll be good to go. Once you've got the user created, remove that container command so it's not run again with the next deployment.
Deepesh's and jimbo's combined solutions did it for me. This is particularly useful if you have a custom User. I will write down the steps.
1. Create the command file under management/commands. Don't name it createsuperuser.py, to avoid the name conflict.
└-- App_dir
    └-- management
        |-- __init__.py
        └-- commands
            |-- __init__.py
            └-- createsu.py
2. The command file should look like this.
import os

from django.contrib.auth.models import User
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Creates a superuser."

    def handle(self, *args, **options):
        if not User.objects.filter(username="username").exists():
            password = os.environ.get("SUPERUSER_PASSWORD")
            if password is None:
                raise ValueError("Password not found")
            User.objects.create_superuser(
                username="username",
                email="email",
                password=password,
            )
            print("Superuser has been created.")
        else:
            print("Superuser exists")
3. Add the command to the config file (inside .ebextensions):
container_commands:
  ...
  03_superuser:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py createsu"
    leader_only: true
4. Add SUPERUSER_PASSWORD under environment > configuration > Software > Environment properties.
5. Commit and eb deploy.
We are still storing raw passwords, which isn't the most secure thing in the world, but it's much safer than hardcoding the password in the command file.
You can't use createsuperuser in a situation where the user can't input the info. See https://realpython.com/blog/python/deploying-a-django-app-to-aws-elastic-beanstalk/#Create.the.Admin.User for a different approach.

How to run management command in test file on Travis?

In one of my test files I call a Django management command:
def setUpModule():
    management.call_command('loaddata', 'frontend/fixtures/chemicals.json',
                            verbosity=0)
    management.call_command('create_indexes_and_matviews',
                            db_name, db_user, db_pass,
                            verbosity=2)
This test runs fine when I run it locally with manage.py test.
However, on Travis I get this error:
======================================================================
ERROR: setUpModule (frontend.tests.test_api_views)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/build/.../frontend/tests/test_api_views.py", line 35, in setUpModule
    verbosity=2)
  File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/django/core/management/__init__.py", line 95, in call_command
    raise CommandError("Unknown command: %r" % name)
CommandError: Unknown command: 'create_indexes_and_matviews'
How can I let Travis know about the command?
This is my Travis file:
language: python
python:
  - "2.7"
addons:
  postgresql: "9.3"
env:
  - SECRET_KEY=test DB_NAME=dbtest DB_USER=test DB_PASS=test
before_install:
  - export DJANGO_SETTINGS_MODULE=....settings.local
  - export PYTHONPATH=$HOME/builds/...
install:
  - pip install -r requirements.txt
  - pip install -r requirements/local.txt
before_script:
  - psql -U postgres -c 'CREATE DATABASE dbtest;'
  - psql -U postgres -c "CREATE EXTENSION postgis" -d dbtest
  - psql -U postgres -c "CREATE EXTENSION postgis_topology" -d dbtest
  - psql -U postgres -c "CREATE USER test WITH CREATEUSER PASSWORD 'test';"
  - psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE dbtest to test;"
  - psql -U postgres -c "ALTER USER test CREATEDB;"
  - cd frontend && python manage.py migrate
script:
  - python manage.py test
Is there something I should add so that it knows where to find management commands?
From my practice, I know two reasons for such a problem.
A. The app providing create_indexes_and_matviews is not listed in settings.INSTALLED_APPS (it could be missing, or excluded by if/else or try/except magic). To check the actual settings, try adding the following command to the Travis file:
echo "from django.conf import settings;print(settings.INSTALLED_APPS)" | python manage.py shell
B. Missing app dependencies. Try to get the actual error on Travis with the following command:
echo "from the_app.management.commands.create_indexes_and_matviews import Command" | python manage.py shell
Usually, the real import error is descriptive enough to find the fix.
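As a quick reference for case A, Django only discovers a command when the app defining it is in INSTALLED_APPS and the files follow this layout (using the command name from the question; the_app is a placeholder for the actual app name):

```
the_app/
    __init__.py
    management/
        __init__.py
        commands/
            __init__.py
            create_indexes_and_matviews.py
```

A missing __init__.py at any of these levels is enough to produce the same "Unknown command" error.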