Is there a way to run one (or more) commands before the python buildpack (or any buildpack) runs when doing a cf push? I have a Sphinx user guide I want to ship with my Django app. Since the python buildpack runs collectstatic to collect static files for the Django app, it fails to find the documentation files because the docs have not been built yet.
My manifest.yml file looks like this, but it fails because the buildpack's commands run before my command does.
---
applications:
- name: myapp
  command: make -C docs html && gunicorn -w 9 --pythonpath src/myapp myapp.wsgi --log-file -
  instances: 3
  memory: 2048M
  disk_quota: 1024M
  buildpacks:
  - https://github.com/cloudfoundry/python-buildpack.git
  stack: cflinuxfs3
  env:
    DJANGO_MODE: Production
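One workaround that is sometimes used is to skip collectstatic during staging and run it yourself at startup, after the docs are built. This is only a sketch: it assumes the Python buildpack honours the DISABLE_COLLECTSTATIC flag the way its Heroku counterpart does, and the manage.py path is illustrative and must match your project layout.
---
applications:
- name: myapp
  # build the docs, collect static files, then start the app
  command: make -C docs html && python src/myapp/manage.py collectstatic --noinput && gunicorn -w 9 --pythonpath src/myapp myapp.wsgi --log-file -
  buildpacks:
  - https://github.com/cloudfoundry/python-buildpack.git
  stack: cflinuxfs3
  env:
    DJANGO_MODE: Production
    DISABLE_COLLECTSTATIC: "1"   # assumption: tells the buildpack to skip its own collectstatic step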
I have the following heroku.yml file for container deployment:
build:
  docker:
    release:
      dockerfile: Dockerfile
      target: release_image
    web: Dockerfile
  config:
    PROD: "True"
release:
  image: web
  command:
    - python manage.py collectstatic --noinput && python manage.py migrate users && python manage.py migrate
run:
  # web: python manage.py runserver 0.0.0.0:$PORT
  web: daphne config.asgi:application --port $PORT --bind 0.0.0.0 -v2
  celery:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  celery_beat:
    command:
      - celery --app=my_app beat -l info
    image: web
When I deploy I get the following warning, which does not make any sense to me:
Warning: You have declared both a release process type and a release section. Your release process type will be overridden.
My Dockerfile is composed of two stages and I want to keep only the release_image stage:
FROM python:3.8 as builder
...
FROM python:3.8-slim as release_image
...
According to the docs, the proper way to choose release_image is to use the target key within the build step.
But it also mentions that I can run my migrations within a release section.
So what am I supposed to do to get rid of this warning? I could live with it only if it were clear that both my migrations and my target are being taken into account during deploy. Thanks in advance!
I want to keep only the release_image stage
Assuming this is true for your web process as well, update your build section accordingly:
build:
  docker:
    web:
      dockerfile: Dockerfile
      target: release_image
  config:
    PROD: "True"
Now you only have one process type defined under build, and it targets the build stage you want to use.
Since you can run your migrations from the web container, there's no need to build a whole separate container just for your Heroku release process. (And since your release section uses the web image, the release process type defined under build wouldn't have been used for anything anyway.)
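Putting it together, the resulting heroku.yml would look roughly like the sketch below (assembled from the fragments above, not a verified configuration); it keeps the release_image target for the web image and keeps the migrations in the release section, with no duplicate release process type under build:
build:
  docker:
    web:
      dockerfile: Dockerfile
      target: release_image
  config:
    PROD: "True"
release:
  image: web
  command:
    - python manage.py collectstatic --noinput && python manage.py migrate users && python manage.py migrate
run:
  web: daphne config.asgi:application --port $PORT --bind 0.0.0.0 -v2
  celery:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  celery_beat:
    command:
      - celery --app=my_app beat -l info
    image: web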
I am in the final steps of deploying a Django website. It runs in Docker, and I'm deploying it through Heroku. I run into an error when running "git push heroku master": "Your app does not include a heroku.yml build manifest. To deploy your app, either create a heroku.yml: https://devcenter.heroku.com/articles/build-docker-images-heroku-yml". This is odd, as I do in fact have a heroku.yml file.
heroku.yml
setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py collectstatic --noinput
run:
  web: gunicorn books.wsgi
The tutorial I am following uses "gunicorn bookstore_project.wsgi", but I used books.wsgi since that is the directory my website is in. Neither worked.
This happened to me when I pushed the wrong branch to Heroku. I was testing in the develop branch but pushing master, which did not have a heroku.yml.
Previous .gitlab-ci.yml:
stages:
  - staging
staging:
  stage: staging
  image: ruby:latest
  script:
    - git remote add heroku https://heroku:$HEROKU_API_KEY@git.heroku.com/$PROJECT.git
    - git push -f heroku master
  only:
    - develop
Actual .gitlab-ci.yml:
stages:
  - staging
staging:
  stage: staging
  image: ruby:latest
  script:
    - git remote add heroku https://heroku:$HEROKU_API_KEY@git.heroku.com/$PROJECT.git
    - git push -f heroku develop:master
  only:
    - develop
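The key change is the refspec develop:master, which pushes the local develop branch (the one that actually contains the heroku.yml) to the remote master branch that Heroku builds from. The same refspec works from a local shell, assuming a remote named heroku is configured as above:
# push the local "develop" branch to Heroku's "master" branch
git push -f heroku develop:master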
I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images, I see this:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mellon_app latest XXXXXXXXXXXX 27 seconds ago 1.14GB
postgres latest XXXXXXXXXXXX 27 seconds ago 1.14GB
The IMAGE ID is exactly the same for both images, and so is the size. This makes me think I've definitely done some unnecessary duplication, as both services are just building the same Dockerfile. I don't want to take up unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or, instead of latest, whatever tag you want, such as a datetime in integer format)
3. docker push {desired-remote-image-name}:latest
and clean up:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
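For example, with concrete names (the local image name mellon_app comes from the docker images output above; the Docker Hub repository myuser/mellon_app is made up for illustration):
# build the image from the Dockerfile in the current directory
docker build -f Dockerfile -t mellon_app .
# tag it for the remote repository
docker tag mellon_app:latest myuser/mellon_app:latest
# push it to the registry
docker push myuser/mellon_app:latest
# clean up the local tags afterwards
docker rmi mellon_app:latest myuser/mellon_app:latest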
The whole purpose of docker-compose is to help with local development: it makes it easier to start several containers and wire them together on a local docker-compose network, etc.
I am trying to deploy my app to App Engine but for some reason I can't do so. My logs say:
Shutting down master
Reason: worker failed to boot
I think it has something to do with gunicorn. How do I go about this?
My app.yaml is
runtime: Python
env: flex
entrypoint: gunicorn -b :$PORT .wsgi
beta_settings:
  cloud_sql_instance: /*mysqlinstance*/
runtime_config:
  python_version: 3
So my solution:
I started afresh!
I created a new project in my cloud, because the one I was using had accumulated a lot of complications. Then I included django==1.10 and gunicorn==19.7.1 in the requirements.txt file and ran
pip install -r requirements.txt
in the virtual environment, and then deployed to the new project.
Everything worked fine.
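For reference, an App Engine flexible app.yaml for a Django app typically looks something like the sketch below. The project package name mysite is hypothetical (use the package that contains your wsgi.py), the Cloud SQL connection name is a placeholder, and note that the runtime name is lowercase python:
runtime: python
env: flex
# "mysite" is a placeholder for your Django project package
entrypoint: gunicorn -b :$PORT mysite.wsgi
runtime_config:
  python_version: 3
beta_settings:
  # placeholder connection name in PROJECT:REGION:INSTANCE form
  cloud_sql_instances: PROJECT:REGION:INSTANCE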
I am currently trying to dockerize one of my Django API projects. It uses postgres as the database. I am using Docker Cloud as a CI so that I can build, lint and run tests.
I started with the following Dockerfile:
# Start with a python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD xxx
ENV DB_HOST db
RUN mkdir /code
ADD . /code/
WORKDIR /code
RUN pip install -r requirements.txt
RUN pylint **/*.py
# First tried running tests from here.
RUN python3 src/manage.py test
But this Dockerfile always fails, as Django can't connect to a database when running the unit tests; since no postgres instance is running during the image build, it just fails with the following error:
django.db.utils.OperationalError: could not translate host name "db"
to address: Name or service not known
Then I discovered something called "Autotest" in Docker Cloud that allows you to use a docker-compose.test.yml file to describe a stack and then run some commands with each build. This seemed like what I needed to run the tests, as it would allow me to build my Django image, reference an already existing postgres image and run the tests.
I removed the
RUN python3 src/manage.py test
from the Dockerfile and created the following docker-compose.test.yml file.
version: '3.2'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  sut:
    build: .
    command: python src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db
Then when I run
docker-compose -f docker-compose.test.yml build
and
docker-compose -f docker-compose.test.yml run sut
locally, the tests all run and all pass.
Then I push my changes to GitHub and Docker Cloud builds it. The build itself succeeds, but the autotest, which uses the docker-compose.test.yml file, fails with the following error:
django.db.utils.OperationalError: could not connect to server:
Connection refused
Is the server running on host "db" (172.18.0.2) and accepting
TCP/IP connections on port 5432?
So it seems like the db service isn't being started, or is too slow to start on Docker Cloud compared to my local machine?
After Googling around a bit I found https://docs.docker.com/compose/startup-order/, which says that the containers don't really wait for each other to be 100% ready. It recommends writing a wrapper script to wait for postgres if that is really needed.
I followed their instructions and used the wait-for-postgres.sh script.
Juicy part:
until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
and replaced the command in my docker-compose.test.yml from
command: python src/manage.py test
to
command: ["./wait-for-postgres.sh", "db", "python", "src/manage.py",
"test"]
I then pushed to GitHub and Docker Cloud started building. Building the image works, but now the autotest just waits for postgres forever (I waited for 10 minutes before manually shutting down the build process in Docker Cloud).
I have Googled around a fair bit today, and it seems like most "Dockerize Django" tutorials don't really mention unit testing at all.
Am I running Django unit tests completely wrong using Docker?
It seems strange to me that it runs perfectly fine locally, but fails when Docker Cloud runs it!
I seem to have fixed it by downgrading the docker-compose version in the file from 3.2 to 2.1 and using healthcheck.
With version 3.2, the healthcheck condition gives me a syntax error in the depends_on clause, since there you have to pass an array into it. No idea why this is not supported in version 3.2.
But here is my new docker-compose.test.yml that works
version: '2.1'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    healthcheck:
      test: ["CMD-SHELL", "psql -h 'localhost' -U 'postgres' -c '\\l'"]
      interval: 30s
      timeout: 30s
      retries: 3
  sut:
    build: .
    command: python3 src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      # this form does not work in compose file version 3.2
      db:
        condition: service_healthy
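With this file the same local workflow applies, and the sut service should only start once the db container reports healthy (the commands assume the file is still named docker-compose.test.yml):
docker-compose -f docker-compose.test.yml build
docker-compose -f docker-compose.test.yml run sut
# the db container's State column should show "Up (healthy)"
docker-compose -f docker-compose.test.yml ps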