Docker Compose to Cloud Run - django

I created a Docker Compose file containing a Django app and PostgreSQL, and it runs perfectly. Now I'm wondering whether I can deploy this Docker Compose setup to Google Container Registry and run it on Cloud Run.
version: "3.8"
services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - 8000:8000
    image: django-app
    container_name: django_container
    command: >
      bash -c "python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=nukacola
      - POSTGRES_PASSWORD=as938899
    container_name: postgres_db
Thank you for answering my question.

You cannot run a docker-compose configuration on Cloud Run. Cloud Run only supports individual containers.
To run your Django app on Cloud Run, you can do the following (a rough command sketch follows these steps).
Build your Docker image for Django locally using the docker build command.
Push the image to GCR using the docker push command.
Create a new Cloud Run service and use the newly pushed Docker image.
Create a Cloud SQL Postgres instance and use its credentials as environment variables in your Cloud Run service.
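As a rough sketch of those steps, assuming the gcloud CLI is configured and using placeholder names (PROJECT_ID, the service name django-app, the us-central1 region and the DB_* variable names are illustrative, not taken from your question):

# build the Django image locally and push it to Google Container Registry
# (pushing to gcr.io assumes you have run gcloud auth configure-docker once)
docker build -t gcr.io/PROJECT_ID/django-app .
docker push gcr.io/PROJECT_ID/django-app
# deploy the pushed image as a Cloud Run service, passing the Cloud SQL credentials as environment variables
gcloud run deploy django-app \
  --image gcr.io/PROJECT_ID/django-app \
  --region us-central1 \
  --set-env-vars "DB_HOST=<cloud-sql-ip>,DB_USER=<user>,DB_PASSWORD=<password>"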
You can also host your own Compute Engine instance and run docker-compose on it, but I would not recommend that.
You can also create a GKE cluster and run Django and Postgres in it, but that requires knowledge of Kubernetes (Deployments, StatefulSets, Services, etc.).

Related

Docker-compose + Django on AWS C9

I'm trying to use a Docker container to run a Django application on the cloud-based IDE from AWS (AWS Cloud9).
The application starts correctly and the development server runs on http://127.0.0.1:8080/.
However, when browsing to the public URL of the Cloud9 application, I get the error 'no application seems to be running'.
When creating a Django application without Docker, the preview on AWS Cloud9 works fine.
Are there any additional settings required to get the Cloud9 preview to work?
This is my docker-compose file.
services:
  web:
    build: .
    command: python manage.py runserver $IP:$PORT
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

How to dump a Postgres database in Django?

I have an application running in a Docker container and a PostgreSQL database running in another Docker container. I want to dump the database while inside the django container. I know Django has dumpdata, but that command takes a long time. I also tried docker exec pg_dump, but that command doesn't work from inside the django container.
services:
  db_postgres:
    image: postgres:10.5-alpine
    restart: always
    volumes:
      - pgdata_invivo:/var/lib/postgresql/data/
    env_file:
      - .env
  django:
    build: .
    restart: always
    volumes:
      - ./static:/static
      - ./media:/media
    ports:
      - 8000:8000
    depends_on:
      - db_postgres
    env_file:
      - .env
Is there any way to run pg_dump from inside the django container without using docker exec?
While your container is running, type:
docker-compose down -v
This will remove the volumes, and thus all the data stored in the container's database will be removed.
Now run
docker-compose up --build
docker-compose exec django python manage.py migrate
to create your tables again.
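Separately, if what you need is an actual dump rather than a reset, a minimal sketch is to run pg_dump over the Compose network from inside the django container. This assumes a Debian-based image and that the usual POSTGRES_* variables are available from your .env file:

# inside the django container, e.g. after: docker-compose exec django bash
apt-get update && apt-get install -y postgresql-client
# the Compose service name doubles as the database hostname on the shared network;
# POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB are assumed to come from .env
PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -h db_postgres -U "$POSTGRES_USER" "$POSTGRES_DB" > /tmp/dump.sql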

Docker Compose Up works locally, fails to deploy to AWS

I am trying to deploy my Docker setup (two images running as containers) to AWS. I can successfully run docker compose up locally, which builds and runs the containers on my local Docker.
However, after setting up a new Docker context for ECS and switching to it, when I run docker compose up (which I believe should now deploy to AWS), I get the error docker.io/xxxx/concordejs_backend:latest: not found.
My docker-compose.yml file looks like this:
version: '3'
services:
  backend:
    image: xxxx/concordejs_backend
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    container_name: concorde-backend
    ports:
      - "5000:5000"
  frontend:
    image: xxxx/concordejs_frontend
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    container_name: concorde-frontend
    ports:
      - "3001:3000"
The image has been built on your local machine and is subsequently retrieved from there each time you launch docker-compose locally.
The AWS service is trying to retrieve the image from the public registry docker.io (Docker Hub), since it doesn't have the image you built locally.
One solution is to push your local image to Docker Hub so it is accessible to ECS, or you can use AWS's own registry service, ECR. https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_ECS.html
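As a rough sketch, pushing the backend image to ECR could look like the following (the us-east-1 region and ACCOUNT_ID are placeholders, and the ECR repository must already exist):

# authenticate Docker against your ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com
# retag the locally built image with the ECR repository URI and push it
docker tag xxxx/concordejs_backend:latest ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/concordejs_backend:latest
docker push ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/concordejs_backend:latest

The image entries in docker-compose.yml would then point at those ECR URIs so the ECS context can pull them.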

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But, after pushing the image to docker hub https://hub.docker.com/repository/docker/vivanks/firsttry
I am pulling the image to another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it's not getting started and showing this error:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As #larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time I do not understand the insistence on using the Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker Compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without docker-compose, you need to do manually everything it does for you automatically (the commands below assume you have added CMD... to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above three commands create a new bridged network, create and start a detached (background) container with a properly configured database attached to that network, and finally create and start an attached (foreground) container based on your image, also attached to that network. Since both containers are on the same non-default bridged network, your application can resolve the hostname db to the internal IP address of the database container and start properly.
Once you shut it down with Ctrl+C, the container with your application will delete itself (since it was started with the --rm option), but you also need to clean up the rest manually. To do so, run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first command stops the database container, the second removes it along with its anonymous volume, and the third removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (it will actually try to start an interactive Python shell, but since you're not allocating a terminal with -t, the shell just exits, successfully). In your docker-compose.yml, you're passing an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000

Best practices for working with docker-compose on AWS ECS for Continuous Deploy

I'm new to ECS and I'm somewhat confused about how to deploy to AWS ECS Fargate automatically with a docker-compose file that has multiple services.
I was able to perform an end-to-end run, from a git push to the deployment of a single container, with the following steps:
Create an AWS ECR
Tag the docker image
Create CodeCommit
Create CodeBuild
Create CodeDeploy
Create a Cluster with a Task Definition
Create a Pipeline to tie everything above together and automate it end to end.
Done
But what happens when you have multiple services?
Do I have to modify the docker-compose file to be compatible with ECS? If so, how can I separate the repository if the entire project is in one folder (pydanny cookiecutter structure)?
Do I have to create an ECR repository for each service of my docker-compose?
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
How can I modify the volumes of the docker-compose to work on ECS?
I use the following docker-compose file generated by the pydanny cookiecutter and it has 7 services:
Django + Postgres + Redis + Celery + Celeryworker + Celerybeat + Flower
docker-compose.yml
version: '3'

volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: test_cd_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: test_cd_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres

  redis:
    image: redis:3.2

  celeryworker:
    <<: *django
    image: test_cd_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: test_cd_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat

  flower:
    <<: *django
    image: test_cd_flower
    ports:
      - "5555:5555"
    command: /start-flower
Thank you very much for any help.
It depends on whether you want to use your docker-compose file to perform all the operations. If you want to build, push and pull using docker-compose, you'll need the image entries in docker-compose.yml to match the ECR addresses.
e.g.
image: ${ID}.dkr.ecr.${region}.amazonaws.com/${image_name}:${image_tag:-latest}
Do I have to create an ECR repository for each service of my docker-compose?
You don't have to create an ECR repository for each service but for each image you build. In your case, you don't have to create a repo for redis but you'll have to do it for django and postgres since you're building them using your Dockerfiles. celeryworker and celerybeat are using the django image to start so you won't need to create an extra repo for them.
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
Here I can only provide some suggestions, it all depends on your setup. I tend to remain as cloud service agnostic as possible.
You can have the images in docker-compose.yml defined as follows:
services:
  postgres:
    image: ${ID}.dkr.ecr.${region}.amazonaws.com/my_postgres:${image_tag:-latest}
  django:
    image: <theID>.dkr.ecr.<theRegion>.amazonaws.com/my_django:${image_tag:-latest}
and then simply prepare a .env file on the fly during the build containing the info you need, e.g.
image_tag=1.2.0
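A CI step (for example in CodeBuild) could then build and push everything in one go. A minimal sketch, assuming the ECR-style image names above, that the repositories already exist, and that ID, region and image_tag are set in the environment:

# log Docker in to the ECR registry
aws ecr get-login-password --region "$region" | docker login --username AWS --password-stdin "${ID}.dkr.ecr.${region}.amazonaws.com"
# build every service that has a build: section and push every service with an image: entry
docker-compose build
docker-compose push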
How can I modify the volumes of the docker-compose to work on ECS?
Unfortunately I can't answer this question myself, but I found the following answer:
https://devops.stackexchange.com/questions/6228/using-volumes-on-aws-fargate