Docker-compose + Django on AWS C9

I'm trying to use a Docker container to run a Django application on the cloud-based IDE from AWS (AWS Cloud9).
The application starts correctly and the development server runs on http://127.0.0.1:8080/
However, when browsing to the public URL of the Cloud9 application I get the error 'no application seems to be running'.
When creating a Django application without Docker, the preview on AWS Cloud9 works fine.
Are there any additional settings required to get the Cloud9 preview to work?
This is my docker-compose file.
services:
  web:
    build: .
    command: python manage.py runserver $IP:$PORT
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

Related

Docker Compose to Cloud Run

I created a docker-compose file containing a Django app and PostgreSQL, and it runs perfectly. Now I'm confused about whether I can deploy this docker-compose setup to the Google Container Registry and run it on Cloud Run.
version: "3.8"
services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - 8000:8000
    image: django-app
    container_name: django_container
    command: >
      bash -c "python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=nukacola
      - POSTGRES_PASSWORD=as938899
    container_name: postgres_db
Thank you for answering my question.
You cannot run a docker-compose configuration on Cloud Run. Cloud Run only supports individual containers.
To run your Django app on Cloud Run, you can do the following.
Build your docker image for Django locally using the docker build command.
Push the image to GCR using the docker push command.
Create a new Cloud Run service and use the newly pushed Docker image.
Create a Cloud SQL Postgres instance and use its credentials as environment variables in your Cloud Run service (see the settings sketch below).
You can also host your own Compute Engine instance and run docker-compose on it but I would not recommend that.
You can also create a GKE cluster and run Django and Postgres in it, but that requires knowledge of Kubernetes (deployments, statefulsets, services, etc.).
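For the Cloud SQL step above, the Django settings only need to read those credentials from the environment. A minimal sketch of what that could look like in settings.py, assuming Cloud SQL is reached over the unix socket Cloud Run mounts under /cloudsql/ (the variable names and the instance connection name below are placeholders, not something from the question):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
        # On Cloud Run, a Cloud SQL instance is typically reached through a unix
        # socket mounted at /cloudsql/<PROJECT:REGION:INSTANCE>; Django/psycopg2
        # accept that directory path as HOST.
        'HOST': os.environ.get('POSTGRES_HOST', '/cloudsql/my-project:us-central1:my-instance'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}

Note that the migrate step from the compose command would then need to run some other way (for example as a one-off job or release step), since only the single web container is deployed.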

How to connect a django app to a dockerized postgres db, both from a dockerized django and non-dockerized django using same DATABASE_HOST

I have a postgres container, docker-compose.yml:
services:
  db:
    container_name: db
    expose:
      - "5432"
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
And a django project with settings.py:
DATABASES = {
    'default': {
        'HOST': os.environ.get('POSTGRES_HOST', '127.0.0.1')
        # etc
    }
}
.env
POSTGRES_HOST_DJANGO=db
When I run my django app locally with manage.py runserver 0.0.0.0:8000 it connects fine, using the default POSTGRES_HOST=127.0.0.1, because .env isn't loaded.
I also run my django app in a container sometimes:
docker-compose.yml:
web:
  #restart: unless-stopped
  build: .
  env_file: .env
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  volumes:
    - .:/app
  ports:
    - 8000:8000
  links:
    - db:db
However, when run in the container, it uses the .env file and connects with POSTGRES_HOST=db.
If I try to connect the locally run django app with POSTGRES_HOST=db it fails:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
And if I try to run the django app in a container with POSTGRES_HOST=127.0.0.1, it fails in the same way.
How can I get them to use the same POSTGRES_HOST?
The problem seems to be in the network config; I don't see one.
By default, docker-compose creates a network for every compose file, named after the project folder with a '_default' suffix.
So the Django app ends up on one network and Postgres on another.
If your Django app and your Postgres containers are defined in different docker-compose files, using container names to resolve hosts will not work by default (it can be made to work with a custom network config), because they are on two different networks.
Since you have published the port, you can reach Postgres directly from outside its network using the host machine's private IP and port 5432; that way communication goes through the host network.
If you need the containers to talk to each other directly, make sure they are on the same docker network.
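As a concrete illustration of the host-network route described above, here is a minimal connectivity check in Python, assuming psycopg2 is installed and the db service publishes 5432 as in the compose file; the IP address and credentials are placeholders:

import psycopg2

# From a container on a different compose network, the name "db" will not resolve.
# Connect through the published port on the host instead (use the Docker host's
# private IP, or host.docker.internal on Docker Desktop).
conn = psycopg2.connect(
    host='192.168.1.10',   # placeholder: the Docker host's private IP
    port=5432,
    dbname='postgres',
    user='postgres',
    password='postgres',
)
cur = conn.cursor()
cur.execute('SELECT version()')
print(cur.fetchone())
conn.close()

If both containers are attached to the same named network instead, host='db' works directly.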
I figured out how to do it. The trick wasn't getting both setups to use the same value; it was getting POSTGRES_HOST to resolve differently depending on how the app is run. So:
From docker-compose.yml:
web:
  build: .
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  env_file: .env
  environment:
    POSTGRES_HOST: db  # takes precedence over the .env file
And in .env:
POSTGRES_HOST=127.0.0.1
Now, when I run locally with ./manage.py runserver, it uses the .env file and connects to the db container properly at 127.0.0.1:5432.
But if I run docker-compose up web, even though it also reads the .env file, the environment variable provided in the compose file takes precedence, so it uses POSTGRES_HOST: db and connects to the db container as well!
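For completeness, a sketch of the settings.py side this precedence trick relies on; Django only sees whatever value ends up in the process environment (the engine name and placeholder keys are assumptions, the question only showed HOST):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # "db" when started via docker-compose (environment: wins over env_file:),
        # "127.0.0.1" when run locally with the .env value or the default.
        'HOST': os.environ.get('POSTGRES_HOST', '127.0.0.1'),
        # NAME, USER, PASSWORD, PORT as in your existing settings
    }
}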

Want to connect a MongoDB docker service to my Django application

I want to create two docker services: one is a MongoDB service and the other is a web service built with Django. The web service (the Django app) needs to connect to that MongoDB service.
However, I don't know how to connect to the MongoDB docker service from my Django application, which is also a service running in the same docker swarm. This is my docker-compose.yml:
version: '3'
services:
  mongo:
    image: mongo:latest
    command: mongod --storageEngine wiredTiger
    ports:
      - "27017:27017"
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - mongo
    depends_on:
      - mongo
Here is what I tried with mongoengine in the Django application's settings.py, but it failed:
MONGO_DATABASE_NAME = "reg_task21"
MONGO_HOST = "mongo"
mongoengine.connect(db=MONGO_DATABASE_NAME, host=MONGO_HOST, port=27017)
You should add the username and password to the connect statement:
mongoengine.connect(db=MONGO_DATABASE_NAME, username='root', password='example', host=MONGO_HOST, port=27017)
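A slightly fuller sketch of the same call in settings.py: because the user created by MONGO_INITDB_ROOT_USERNAME is a root user stored in the admin database, you will usually also need to point authentication_source at 'admin' (an assumption based on the stock mongo image defaults, not something stated in the answer):

import mongoengine

MONGO_DATABASE_NAME = "reg_task21"
MONGO_HOST = "mongo"  # the compose/swarm service name resolves on the shared network

mongoengine.connect(
    db=MONGO_DATABASE_NAME,
    host=MONGO_HOST,
    port=27017,
    username="root",
    password="example",
    authentication_source="admin",
)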

Can't access the Django server at http://127.0.0.1:8000/

At first it worked totally fine, but it is showing this issue now:
app$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
September 03, 2017 - 17:28:43
Django version 1.11.4, using settings 'aggregator.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
It says it's running, but I can't access it at that address. What could possibly be wrong?
My docker-compose file:
version: "2"
services:
  postgres:
    image: postgres:9.6
    volumes:
      - pgdata:/var/lib/data/postgres
  backend:
    build: .
    command: gosu app bash
    volumes:
      - .:/app
      - pyenv:/python
    links:
      - postgres:postgres
    ports:
      - 8000:8000
volumes:
  pyenv:
  pgdata:
For development, you can try setting this in settings.py:
ALLOWED_HOSTS = ['*']
and then start your server locally by running:
python manage.py runserver 0.0.0.0:8000
To answer your question of
What could have been possibly wrong?
A number of things:
The mapping of your ports is incorrect: your application is mapped to a port other than 8000. Check which port your application is actually mapped to, not by checking the output of python manage.py runserver (that is not the command you run in your container), but by checking the logs of your container.
When your command gosu app bash is run, there may be an error. Check the output of docker logs backend.
You could be running the docker-compose up -d command incorrectly.
Post the output of the logs if you want more information.

Django, Postgres, Docker - Cannot run migrations on database on Ubuntu 14.04, Python 2.7.11

I'm using the same docker-compose file to run an environment with django and postgresql.
When I run the environment on my local virtual machine (using docker-machine virtualbox driver and boot2docker distribution) and run migrations or tests it all works.
Deploying the exact same environment to DigitalOcean running Ubuntu 14.04, I can't run migrations or tests.
Both environments run Django 1.8 and Postgres 9.4.5.
The portal container on the Ubuntu host runs Python 2.7.11, and the portal container on boot2docker runs Python 2.7.10.
I can connect to the DB from Django manually, but for some reason running the migrations or tests just stops after some period of time.
The Postgres logs show no errors, and the Python process throws no exceptions.
Every time I try to run the migrations, the process stops at the following line:
Creating test database for alias 'default'...
And then just exits.
I'm not sure what I'm missing.
My docker-compose file:
portal:
  build: ./portal/
  ports:
    - "80"
  container_name: portal
  env_file: ./portal/.env
  links:
    - postgres:postgres
  command: waitress-serve --port=80 --threads=10 ltgss.wsgi:application
postgres:
  restart: always
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: 12345
    POSTGRES_USER: test
  container_name: postgres
  ports:
    - "5432:5432"
Thanks.
Try increasing your VM's memory. I had a similar issue, and after increasing my VM's memory everything worked fine.