Docker Compose - adding volume using Amazon S3 - django

I'm using docker-compose v2 to build my containers (Django and Nginx).
I'm wondering how to store the static and media files. At the beginning I stored them as a volume on the machine, but the machine crashed and I lost the data (or at least, I didn't know how to recover it).
I think it would be better to store them on Amazon S3, but there are no guides for that (maybe that means something :) ).
This is my docker-compose file:
I tried to add the needed fields (name, key, secret, ...) but no success so far.
Is it the right way?
Thanks!
version: '2'
services:
  web:
    build:
      context: ./web/
      dockerfile: Dockerfile
    expose:
      - "8000"
    volumes:
      - ./web:/code
      - static-data:/www/static
      - media-data:/www/media
    env_file: devEnv
  nginx:
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - static-data:/www/static
      - media-data:/www/media
    volumes_from:
      - web
    links:
      - web:web
volumes:
  static-data:
    driver: local
  media-data:
    driver: s3

Here is an example of how to upload files to S3 (for backup) from a container; it would work from the host OS too, since you have the container's volume mounted on the host.
In this script I first download the media from S3 to the local container/server. After that, I use pyinotify to watch the static/media directory for modifications. If any change occurs, the file is uploaded to S3 using subprocess.Popen(upload_command.split(" ")).
I think you can adapt this script to your problem as well.
Before you test this script, you should set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables in the OS.
For more details, see the s4cmd documentation.
# -*- coding: utf-8 -*-
import pyinotify
import os
import subprocess
from single_process import single_process

# You must have set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# in environment variables
PROJECT_DIR = os.getcwd()
MEDIA_DIR = os.path.join(PROJECT_DIR, "static/media")
AWS_BUCKET_NAME = os.environ.get("AWS_BUCKET_NAME", '')

S4CMD_DOWNLOAD_MEDIA = "s4cmd get --sync-check --recursive s3://%s/static/media/ static/" % (AWS_BUCKET_NAME)
UPLOAD_FILE_TO_S3 = "s4cmd sync --sync-check %(absolute_file_dir)s s3://" + AWS_BUCKET_NAME + "/%(relative_file_dir)s"

# Download all media from S3
subprocess.Popen(S4CMD_DOWNLOAD_MEDIA.split(" ")).wait()


class ModificationsHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        try:
            dir = event.path
            file_name = event.name
            absolute_file_dir = os.path.join(dir, file_name)
            relative_dir = dir.replace(PROJECT_DIR, "")
            relative_file_dir = os.path.join(relative_dir, file_name)
            if relative_file_dir.startswith("/"):
                relative_file_dir = relative_file_dir[1:]

            print("\nSending file %s to S3" % absolute_file_dir)
            param = {}
            param.update(absolute_file_dir=absolute_file_dir)
            param.update(relative_file_dir=relative_file_dir)
            upload_command = UPLOAD_FILE_TO_S3 % param
            print(upload_command)
            subprocess.Popen(upload_command.split(" "))
        except Exception as e:
            # log exceptions
            print("Some problem:", e)


@single_process
def main():
    handler = ModificationsHandler()
    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, handler)
    print("\nListening for changes in: " + MEDIA_DIR)
    if MEDIA_DIR:
        wdd = wm.add_watch(MEDIA_DIR, pyinotify.IN_CLOSE_WRITE, auto_add=True, rec=True)
        notifier.loop()


if __name__ == "__main__":
    main()
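As a side note (not from the original answer), a common Django-specific alternative is to let django-storages write media (and optionally static) files directly to S3, so nothing important lives in a host volume at all. A minimal sketch of the relevant settings, assuming django-storages and boto3 are installed and the bucket name is a placeholder:

# settings.py -- hedged sketch; assumes `pip install django-storages boto3`
# and that the AWS credentials are supplied as environment variables, as in
# the answer above. The bucket name below is a placeholder.
import os

INSTALLED_APPS += ['storages']  # enables django-storages

AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_BUCKET_NAME', 'your-bucket-name')

# Media uploads go straight to S3 instead of a container volume.
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# Optionally, `collectstatic` can push static files to S3 as well.
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

With media on S3, the media-data volume in the compose file above is no longer needed.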

Related

ConnectionError: HTTPConnectionPool(host='172.30.0.3', port=5051): Max retries exceeded with url for flask docker

I'm a beginner in Docker and Flask and am trying to build an arithmetic microservice. I'm trying to build separate Flask applications for addition, subtraction, etc. and communicate with them from my landing service. For now I am only trying to implement the addition service.
Landing service code:
@app.route('/', methods=['POST', 'GET'])
def index():
    # Only process the code if the Submit button has been clicked.
    if request.form.get("submit_button") != None:
        number_1 = int(request.form.get("first"))
        number_2 = int(request.form.get('second'))
        operation = request.form.get('operation')
        result = 0
        if operation == 'add':
            response = requests.get(f"http://localhost:5051/?operator_1={number_1}&operator_2={number_2}")
            result = response.text
        flash(f'The result of operation {operation} on {number_1} and {number_2} is {result}')
    return render_template('index.html')

if __name__ == '__main__':
    app.run(
        debug=True,
        port=5050,
        host="0.0.0.0"
    )
Addition service code:
class Addition(Resource):
    def get(self):
        if request.args.get('operator_1') != None and request.args.get('operator_2') != None:
            operator_1 = int(request.args.get('operator_1'))
            operator_2 = int(request.args.get('operator_2'))
            return operator_1 + operator_2

api.add_resource(Addition, '/')

if __name__ == '__main__':
    app.run(debug=True,
            port=5051,
            host="0.0.0.0"
            )
Docker compose file
version: '3.3' # version of compose format
services:
  landing-service:
    build: ./landing # path is relative to docker-compose.yml location
    hostname: landing-service
    ports:
      - 5050:5050 # host:container
    networks:
      sample:
        aliases:
          - landing-service
  addition-service:
    build: ./addition # path is relative to docker-compose.yml location
    hostname: addition-service
    ports:
      - 5051:5051 # host:container
    networks:
      sample:
        aliases:
          - addition-service
networks:
  sample:
I am getting ConnectionError: HTTPConnectionPool(host='172.30.0.3', port=5051): Max retries exceeded with url when I use docker-compose up. The applications work when I run them from the command line, and my landing service is able to communicate with localhost:5051, but I am unable to get them to communicate with each other when I use docker-compose.
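A note on a likely cause (not part of the original question): inside the landing container, localhost refers to that container itself, so the request needs to target the addition service by its Compose service name/alias on the shared sample network. A minimal sketch of the change in the landing service:

# Hypothetical fix in the landing service's index() view: replace "localhost"
# with the addition service's name on the shared "sample" network defined in
# the compose file above; Docker's embedded DNS resolves "addition-service".
response = requests.get(
    f"http://addition-service:5051/?operator_1={number_1}&operator_2={number_2}"
)
result = response.text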

Add Ansible variable to a python file

I am trying to deploy a Lambda function using an Ansible playbook.
Lambda code
import boto3
import os

ecs = boto3.client('ecs')

LAMBDA_ENV = ''
if 'stack_name' in os.environ:
    LAMBDA_ENV = os.environ['stack_name']

def task(event, context):
    get_task_arn = ecs.list_tasks(
        cluster=LAMBDA_ENV,
        family=LAMBDA_ENV + '-Wallet-Scheduler',
        desiredStatus='RUNNING'
    )
    # print(get_task_arn)
    task = ''.join(get_task_arn['taskArns'])
    print(task)
    stop_task = ecs.stop_task(
        cluster=LAMBDA_ENV,
        task=task,
        reason='test'
    )
The command I use to deploy the Lambda function is:
ansible-playbook -e stack_name=DEV playbook.yaml
How do I make sure the LAMBDA_ENV variable in the Python file changes to DEV, STAGE, or PRD based on the environment when it gets deployed?
Ansible Playbook
- name: package python code to a zip file
  shell: |
    cd files/
    rm allet-restart.py
    zip file.zip file.py

- name: Create lambda function
  lambda:
    name: '{{ stack_name | lower }}-lambda-function'
    state: present
    zip_file: 'files/file.zip'
    runtime: python2.7
    role: '{{ role_arn }}'
    timeout: 60
    handler: file.task
  with_items:
    - env_vars:
        stack_name: 'test'
  register: wallet-restart
Deploying it from MacOS
AWS Lambda supports environment variables, and they can be accessed from the Lambda code.
With this, you can avoid hardcoding parameters inside the code.
environment_variables is the parameter of the Ansible lambda module with which you can set environment variables for the Lambda function.
(Ref: https://docs.ansible.com/ansible/latest/modules/lambda_module.html)
If you are using Python, you can access the Lambda environment variables using the os module:
import os

LAMBDA_ENV = ''
if 'ENV' in os.environ:
    LAMBDA_ENV = os.environ['ENV']
Hope this helps !!!
You can use the Ansible template module to substitute the environment variables into the Python code for the Lambda, then zip all the files using the shell module, and then invoke the lambda module.
- name: template module
  template:
    src:
    dest:

- name: zip the templated python code inside the zip
  shell: zip ...

- name: invoke lambda
  lambda:
    ....
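For illustration only (not from the original answer), the templated source rendered by the template module could look like this, assuming a hypothetical files/file.py.j2 and the stack_name extra variable shown in the question:

# files/file.py.j2 (hypothetical name) -- the Ansible template module renders
# this with the playbook variables, so `ansible-playbook -e stack_name=DEV ...`
# produces a file.py containing LAMBDA_ENV = 'DEV' before it is zipped.
import boto3

ecs = boto3.client('ecs')

LAMBDA_ENV = '{{ stack_name }}'  # substituted by Ansible at deploy time

def task(event, context):
    tasks = ecs.list_tasks(
        cluster=LAMBDA_ENV,
        family=LAMBDA_ENV + '-Wallet-Scheduler',
        desiredStatus='RUNNING'
    )
    print(tasks['taskArns'])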

Flask and neomodel: ModelDefinitionMismatch

I am getting a neomodel.exceptions.ModelDefinitionMismatch while trying to build a simple Flask app connected to the Bolt port of Neo4j.
import json
from flask import Flask, jsonify, request
from neomodel import StringProperty, StructuredNode, config

class Item(StructuredNode):
    __primarykey__ = "name"
    name = StringProperty(unique_index=True)

def create_app():
    app = Flask(__name__)
    config.DATABASE_URL = 'bolt://neo4j:test@db:7687'

    @app.route('/', methods=['GET'])
    def get_all_items():
        return jsonify({'items': [item.name for item in Item.nodes]})

    @app.route('/', methods=['POST'])
    def create_item():
        item = Item()
        item.name = json.loads(request.data)['name']
        item.save()
        return jsonify({'item': item.__dict__})

    return app
Doing a POST request works; I can check in the database that the items are actually added! But the GET request returns:
neomodel.exceptions.ModelDefinitionMismatch: Node with labels Item does not resolve to any of the known objects
I'm using Python 3.7.0, Flask 1.0.2 and neomodel 3.0.3
update
To give the full problem: I run the application in a Docker container with docker-compose in DEBUG mode.
The Dockerfile:
FROM continuumio/miniconda3
COPY . /app
RUN pip install Flask gunicorn neomodel==3.3.0
EXPOSE 8000
CMD gunicorn --bind=0.0.0.0:8000 - "app.app:create_app()"
The docker-compose file:
# for local development
version: '3'
services:
  db:
    image: neo4j:latest
    environment:
      NEO4J_AUTH: neo4j/test
    networks:
      - neo4j_db
    ports:
      - '7474:7474'
      - '7687:7687'
  flask_svc:
    build: .
    depends_on:
      - 'db'
    entrypoint:
      - flask
      - run
      - --host=0.0.0.0
    environment:
      FLASK_DEBUG: 1
      FLASK_APP: app.app.py
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'
    networks:
      - neo4j_db
networks:
  neo4j_db:
    driver: bridge
And I run it with:
docker-compose up --build -d
Try using neomodel==3.2.9
I had a similar issue and rolled back the version to get it to work.
Here's the commit that broke things.
It looks like they introduced _NODE_CLASS_REGISTRY under neomodel.Database, an object which is meant to be a singleton. But with Flask it's not necessarily a singleton, because Flask keeps instantiating new instances of Database with an empty _NODE_CLASS_REGISTRY.
I am not sure how to get this to work with 3.3.0.

How do I run redis on Travis CI?

I am practicing unit testing using Django.
In items/tests.py
class NewBookSaleTest(SetUpLogInMixin):
    def test_client_post_books(self):
        send_post_data_post = self.client.post(
            '/booksale/',
            data={
                'title': 'Book_A',
            }
        )
        new_post = ItemPost.objects.first()
        self.assertEqual(new_post.title, 'Book_A')
In views/booksale.py
class BookSale(LoginRequiredMixin, View):
    login_url = '/login/'

    def get(self, request):
        [...]

    def post(self, request):
        title = request.POST.get('title')
        saler = request.user
        created_bookpost = ItemPost.objects.create(
            user=saler,
            title=title,
        )
        # redis + celery task queue
        auto_indexing = UpdateIndexTask()
        auto_indexing.delay()
        return redirect(
            [...]
        )
When I run the unit test, it raises a Redis connection error:
redis.exceptions.ConnectionError
I know that when I run redis-server and Celery the error will be solved,
but when I run the unit tests in Travis CI, I can't run redis-server and Celery there.
So, I found this link
I tried inserting this code into .travis.yml:
language:
  python
python:
  - 3.5.1
addons:
  postgresql: "9.5.1"
install:
  - pip install -r requirement/development.txt
service:
  - redis-server
# # command to run tests
script:
  - pep8
  - python wef/manage.py makemigrations users items
  - python wef/manage.py migrate
  - python wef/manage.py collectstatic --settings=wef.settings.development --noinput
  - python wef/manage.py test users items --settings=wef.settings.development
but it shows the same error,
so I found this next link:
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'
but... it shows the same error...
How can I run redis-server in Travis CI?
If you have not solved the problem yet, here is a solution.
Remove the service line.
Redis is provided by the test environment as a default component, so
service:
  - redis-server
will be translated as:
service redis start
In this problem, we want to customize Redis to add password auth, so we don't need Travis CI to start the Redis service. Just use before_script.
After all that, your .travis.yml should be:
language:
  python
python:
  - 3.5.1
addons:
  postgresql: "9.5.1"
install:
  - pip install -r requirement/development.txt
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'
# # command to run tests
script:
  - pep8
  - python wef/manage.py makemigrations users items
  - python wef/manage.py migrate
  - python wef/manage.py collectstatic --settings=wef.settings.development --noinput
  - python wef/manage.py test users items --settings=wef.settings.development
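As an aside (not from the original answer), if the Django/Celery test settings need to point at that password-protected Redis instance, they could reference it like this; a hedged sketch, assuming the broker settings live in wef/settings/development.py and Celery reads them from Django settings:

# Hypothetical snippet for wef/settings/development.py, matching the
# `--port 6379 --requirepass 'secret'` flags used in before_script above.
REDIS_URL = 'redis://:secret@localhost:6379/0'

CELERY_BROKER_URL = REDIS_URL      # assumption: Celery uses Redis as its broker
CELERY_RESULT_BACKEND = REDIS_URL  # assumption: results are stored in Redis too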

uwsgi and flask - cannot import name "appl"

I created several servers, without any issue, with the stack nginx - uwsgi - flask using virtualenv.
With the current one, uwsgi is throwing the error cannot import name "appl".
Here is the myapp directory structure:
/srv/www/myapp
  + run.py
  + venv/        # virtualenv
  + myapp/
    + __init__.py
    + other modules/
  + logs/
Here is the /etc/uwsgi/apps-avaliable/myapp.ini:
[uwsgi]
# Variables
base = /srv/www/myapp
app = run
# Generic Config
# plugins = http, python
# plugins = python
home = %(base)/venv
pythonpath = %(base)
socket = /tmp/%n.sock
module = %(app)
callable = appl
logto = %(base)/logs/uwsgi_%n.log
And this is run.py:
#!/usr/bin/env python
from myapp import appl

if __name__ == '__main__':
    DEBUG = True if appl.config['DEBUG'] else False
    appl.run(debug=DEBUG)
appl is defined in myapp/__init__.py as an instance of Flask().
I checked the Python code carefully, and indeed if I manually activate the virtualenv and execute run.py, everything works like a charm, but uwsgi keeps throwing the import error.
Any suggestions on what else I should look into?
Fixed it: it was just a read permissions issue. The whole Python app was readable by my user but not by the group, so uwsgi could not find it.
This was a bit tricky because I had deployed successfully many times with the same script and never had permission issues.