How to publish changes to Docker images using GitHub Actions - Django

I am working on a CI/CD pipeline using Docker and GitHub Packages/Actions. I have 2 workflows: build.yml and deploy.yml.
The build.yml workflow is supposed to pull the Docker images from GitHub Packages, build them, run automated tests, then push the new images to GitHub Packages.
The deploy.yml workflow pulls the images to the server and runs them.
The issue I am having is that my local changes are not being updated on the server.
build.yml:
name: Build and Test
on:
push:
branches:
- development
env:
BACKEND_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/backend
FRONTEND_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/frontend
NGINX_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx
jobs:
test:
name: Build Images and Run Automated Tests
runs-on: ubuntu-latest
steps:
- name: Checkout master
uses: actions/checkout@v1
- name: Add environment variables to .env
run: |
echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=${{ secrets.SECRET_KEY }} >> .env
echo SQL_DATABASE=${{ secrets.SQL_DATABASE }} >> .env
echo SQL_USER=${{ secrets.SQL_USER }} >> .env
echo SQL_PASSWORD=${{ secrets.SQL_PASSWORD }} >> .env
echo SQL_HOST=${{ secrets.SQL_HOST }} >> .env
echo SQL_PORT=${{ secrets.SQL_PORT }} >> .env
- name: Set environment variables
run: |
echo "BACKEND_IMAGE=$(echo ${{env.BACKEND_IMAGE}} )" >> $GITHUB_ENV
echo "FRONTEND_IMAGE=$(echo ${{env.FRONTEND_IMAGE}} )" >> $GITHUB_ENV
echo "NGINX_IMAGE=$(echo ${{env.NGINX_IMAGE}} )" >> $GITHUB_ENV
- name: Log in to GitHub Packages
run: echo ${PERSONAL_ACCESS_TOKEN} | docker login ghcr.io -u ${{ secrets.NAMESPACE }} --password-stdin
env:
PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
- name: Pull images
run: |
docker pull ${{ env.BACKEND_IMAGE }} || true
docker pull ${{ env.FRONTEND_IMAGE }} || true
docker pull ${{ env.NGINX_IMAGE }} || true
- name: Build images
run: |
docker-compose -f docker-compose.ci.yml build
- name: Run Backend Tests
run: |
docker-compose -f docker-compose.ci.yml run backend python manage.py test
- name: Push images
run: |
docker push ${{ env.BACKEND_IMAGE }}
docker push ${{ env.FRONTEND_IMAGE }}
docker push ${{ env.NGINX_IMAGE }}
docker-compose.ci.yml:
version: "3.8"
services:
backend:
build:
context: ./backend
dockerfile: Dockerfile.prod
command: gunicorn backend.wsgi:application --bind 0.0.0.0:8000
volumes:
- ./backend:/backend
- static_volume:/static
- media_volume:/media
expose:
- 8000
env_file: .env
frontend:
build:
context: ./frontend
volumes:
- frontend_build:/frontend/build
nginx:
build:
context: ./nginx
ports:
- 80:80
volumes:
- frontend_build:/var/www/frontend
depends_on:
- backend
- frontend
volumes:
frontend_build:
static_volume:
media_volume:
backend/Dockerfile.prod:
FROM python:3.9.5-alpine
WORKDIR /backend
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
&& pip install psycopg2 \
&& apk del build-deps
RUN pip install --upgrade pip
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./entrypoint.prod.sh /entrypoint.prod.sh
COPY . /backend/
ENTRYPOINT ["/entrypoint.prod.sh"]
I have tried a few different things to no avail. Any help understanding why my changes are not updating would be appreciated!

It seems like you are re-pushing the same images you pulled rather than the ones you just built. To verify that, temporarily remove the Pull images step and check whether your changes show up.
If that is the case, you can either change the way the built images are tagged in docker-compose.ci.yml or change which images you push in the Push images step.
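For example, giving each service an image: key that matches what the Push images step pushes makes docker-compose build tag the freshly built images under those names. A sketch, assuming BACKEND_IMAGE, FRONTEND_IMAGE and NGINX_IMAGE are exported to the environment as in build.yml so Compose can substitute them:
# docker-compose.ci.yml (sketch): tag the built images with the registry names
version: "3.8"
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
      cache_from:
        - ${BACKEND_IMAGE}   # reuse the layers pulled in the "Pull images" step
    image: ${BACKEND_IMAGE}  # the exact tag the "Push images" step pushes
  frontend:
    build:
      context: ./frontend
      cache_from:
        - ${FRONTEND_IMAGE}
    image: ${FRONTEND_IMAGE}
  nginx:
    build:
      context: ./nginx
      cache_from:
        - ${NGINX_IMAGE}
    image: ${NGINX_IMAGE}
The remaining keys (command, volumes, env_file, ports, depends_on) stay as you already have them; with the image: tags in place, the images pushed at the end of the job are the ones that were just built and tested, not the ones that were pulled.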
Alternatively, you can use the flow documented here with minor changes:
name: Create and publish a Docker image
on:
push:
branches: ['release']
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Log in to the Container registry
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push backend Docker image
uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
with:
context: ./backend # <<<=== Notice this
file: Dockerfile.prod # <<<=== Notice this
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
Add another Dockerfile for the frontend and duplicate the last step for it.
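For instance, the extra step could look like this (a sketch; the /frontend image name suffix is an assumption, adjust it to how you want the frontend package named):
- name: Build and push frontend Docker image
  uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
  with:
    context: ./frontend
    push: true
    # metadata-action tags are tied to a single image name, so give the frontend its own tag
    tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/frontend:latest
    labels: ${{ steps.meta.outputs.labels }}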

Related

Permission denied on entrypoint when trying to update Elastic Beanstalk via GitHub Actions

I feel this might be an IAM question, but I don't really know where to begin. I have a Docker-based EBS environment that works great when I update it manually. However, when I update it with GitHub Actions, the container fails with the following message.
unable to start container process: exec: "./docker/entrypoint.sh": permission denied: unknown.
My CD pipeline authenticates, pushes a new Docker image to the registry, and then updates Dockerrun.aws.json by editing the image name. The workflow runs fine: the image is pushed and the Dockerrun.aws.json is correct... and yet the environment fails to launch.
name: Release
on:
push:
tags:
- 'v*'
jobs:
deploy-to-aws-ebs:
runs-on: ubuntu-latest
environment: staging
permissions:
id-token: write
contents: read
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
steps:
- name: Check out the repository
uses: actions/checkout@v3
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/ServiceRoleForEBSDeploy
aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v1
- name: Get tag name
run: echo "tag=`echo ${{ github.ref }} | sed -e 's/\./-/g' | cut -c11-`-`echo ${{ github.sha }} | cut -c1-8`" >> $GITHUB_ENV
- name: Build, tag, and push docker image to Amazon ECR
env:
REGISTRY: ${{ steps.login-ecr.outputs.registry }}
REPOSITORY: docker_repository
IMAGE_TAG: ${{ env.tag }}
run: |
docker build -t $REGISTRY/$REPOSITORY:$IMAGE_TAG .
docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG
echo "IMAGE_NAME=$REGISTRY/$REPOSITORY:$IMAGE_TAG" >> $GITHUB_ENV
- name: Create deployment package
run: |
sed -e "s|<IMAGE_NAME>|${{ env.IMAGE_NAME }}|g" \
docker/Dockerrun.aws.template.json > Dockerrun.aws.json
cat Dockerrun.aws.json
- name: Deploy to AWS Elastic Beanstalk
env:
AWS_EBS_APP_NAME: app_name
AWS_EBS_ENV_NAME: env_name
run: |
aws s3 cp Dockerrun.aws.json s3://${{ secrets.AWS_S3_BUCKET_NAME }}/versions/${{ env.tag }}-Dockerrun.aws.json
aws elasticbeanstalk create-application-version \
--application-name $AWS_EBS_APP_NAME \
--source-bundle S3Bucket=${{ secrets.AWS_S3_BUCKET_NAME }},S3Key=versions/${{ env.tag }}-Dockerrun.aws.json \
--version-label ${{ env.tag }}
aws elasticbeanstalk update-environment \
--application-name $AWS_EBS_APP_NAME \
--environment-name $AWS_EBS_ENV_NAME \
--version-label ${{ env.tag }}
Meanwhile, the Dockerfile is your basic Django stuff.
FROM python:3.10-slim-buster
ARG APP_HOME=/code \
USERNAME=user101
WORKDIR ${APP_HOME}
RUN addgroup --system ${USERNAME} \
&& adduser --system --ingroup ${USERNAME} ${USERNAME}
RUN apt-get update --yes --quiet && apt-get install --no-install-recommends --yes --quiet \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev \
# dev utils
git \
# cleanup
&& rm -rf /var/lib/apt/lists/*
COPY --chown=${USERNAME}:${USERNAME} . ${APP_HOME}
RUN pip install --upgrade pip
RUN pip install poetry
RUN poetry install --no-interaction --no-ansi
EXPOSE 80
USER ${USERNAME}
ENTRYPOINT ["./docker/entrypoint.sh" ]
CMD ["gunicorn", "config.wsgi:application", "--bind", ":80"]
My guess is that EBS is trying to build the environment with the GitHub Actions service user? Does that make sense? Should it be using the user defined in the Dockerfile?
This has nothing to do with IAM permissions.
You just need to make your script executable:
$ chmod +x ./docker/entrypoint.sh
You can also run it inside the Dockerfile before the ENTRYPOINT command:
RUN chmod +x ./docker/entrypoint.sh
ENTRYPOINT ["./docker/entrypoint.sh" ]
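Note that chmod on your machine only helps once the executable bit is also committed, because actions/checkout restores file modes from Git. For example:
$ chmod +x ./docker/entrypoint.sh
$ git add --chmod=+x ./docker/entrypoint.sh   # or: git update-index --chmod=+x docker/entrypoint.sh
$ git commit -m "Make entrypoint executable"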

How can I solve a syntax error in a YAML file when pushing to GitHub?

I'm using PostgreSQL with Django. I set up a GitHub Action that verifies my code whenever I push or open a pull request, and I get the following error:
You have an error in your yaml syntax on line 19
Here is my yaml:
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python application
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:14
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: github_actions
ports:
- 5433:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9.7
uses: actions/setup-python@v2
with:
python-version: "3.9.7"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Test with Unittest
env:
SECRET_KEY: ${{secrets.SECRET_KEY}}
EMAIL_FROM_USER: ${{secrets.EMAIL_FROM_USER}}
EMAIL_HOST_PASSWORD: ${{secrets.EMAIL_HOST_PASSWORD}}
DB_NAME: ${{secrets.DB_NAME}}
DB_USER: ${{secrets.DB_USER}}
DB_PASSWORD: ${{secrets.DB_PASSWORD}}
DB_HOST: ${{secrets.DB_HOST}}
DB_ENGINE: ${{secrets.DB_ENGINE}}
DB_PORT: ${{secrets.DB_PORT}}
run: |
python3 manage.py test
Line 19 corresponds to image: postgres:14, but I can't see any syntax error there. I've looked at some examples and mine looks exactly the same.
For GitHub Actions, configuring a Postgres service container for a Django web app from the Docker Hub image works fine with just this:
image: postgres
Check whether that is enough for your particular case.
To answer my own question, I followed these two posts, which are up to date:
https://www.hacksoft.io/blog/github-actions-in-action-setting-up-django-and-postgres
https://www.digitalocean.com/community/tutorials/how-to-use-postgresql-with-your-django-application-on-ubuntu-14-04
Make sure you install all the dependencies.
I also set the port to 5432 and the image to postgres:14.2.
(To find out your PostgreSQL version, you can run /usr/lib/postgresql/14/bin/postgres -V.)
See the final yml file:
name: Python application
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:14.2
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: github_action
ports:
- 5432:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.10
uses: actions/setup-python@v2
with:
python-version: "3.10"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Test with Unittest
env:
SECRET_KEY: ${{secrets.SECRET_KEY}}
EMAIL_FROM_USER: ${{secrets.EMAIL_FROM_USER}}
EMAIL_HOST_PASSWORD: ${{secrets.EMAIL_HOST_PASSWORD}}
DB_NAME: ${{secrets.DB_NAME}}
DB_USER: ${{secrets.DB_USER}}
DB_PASSWORD: ${{secrets.DB_PASSWORD}}
DB_HOST: ${{secrets.DB_HOST}}
DB_ENGINE: ${{secrets.DB_ENGINE}}
DB_PORT: ${{secrets.DB_PORT}}
run: |
python3 manage.py test
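Because the tests run directly on the runner (not inside a container), Django has to reach the service container through localhost and the host side of the ports mapping. As a minimal sketch, assuming plain values instead of secrets for the non-sensitive settings, the test step's env block has to line up with the service definition above:
- name: Test with Unittest
  env:
    DB_ENGINE: django.db.backends.postgresql
    DB_HOST: localhost       # service containers are published on the runner's localhost
    DB_PORT: "5432"          # must match the host side of the 5432:5432 mapping
    DB_NAME: github_action   # must match POSTGRES_DB in the service env
    DB_USER: postgres
    DB_PASSWORD: postgres
  run: |
    python3 manage.py test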

How to use the Serverless Framework in GitHub Actions using the GitHub Actions OIDC feature

I have followed this question: How can I connect GitHub actions with AWS deployments without using a secret key?.
However, I am trying to go one step further by deploying a Lambda function using the Serverless Framework.
Here is what I have tried so far:
name: For Production
on:
push:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
strategy:
matrix:
node-version: [16.x]
# See supported Node.js release schedule at https://nodejs.org/en/about/releases/
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v2
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
cache-dependency-path: ./backend-operations/package-lock.json
- name: Create env file
run: |
touch ./backend-operations/.env
echo JWKS_URI=${{secrets.JWKS_URI}} >> ./backend-operations/.env
echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
- run: npm ci
working-directory: ./backend-operations
- run: npm run build --if-present
working-directory: ./backend-operations
- run: npm test
working-directory: ./backend-operations
- name: Install Serverless Framework
run: npm install -g serverless
- name: Configure AWS
run: |
sleep 5 # Need to have a delay to acquire this
export AWS_ROLE_ARN=arn:aws:iam::xxxxxxx:role/my-role
export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
export AWS_DEFAULT_REGION=ap-southeast-1
echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
curl -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
"$ACTIONS_ID_TOKEN_REQUEST_URL&audience=githubactions" \
| jq -r '.value' > $AWS_WEB_IDENTITY_TOKEN_FILE
sls deploy --stage prod --verbose
working-directory: './backend-operations'
# - name: Deploy to AWS
# run: serverless deploy --stage prod --verbose
# working-directory: './backend-operations'
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
token: ${{secrets.CODECOV_SECRET_TOKEN}}
I solved it by using the aws-actions/configure-aws-credentials GitHub Action, as it sets a temporary access key ID and secret in the environment.
Hence there is no need to create AWS programmatic access keys at all.
Note: the latest update of GitHub OIDC changed its domain name to https://token.actions.githubusercontent.com
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Production-Deployment
on:
push:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
strategy:
matrix:
node-version: [16.x]
# See supported Node.js release schedule at https://nodejs.org/en/about/releases/
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v2
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
cache-dependency-path: ./backend-operations/package-lock.json
- name: Create env file
run: |
touch ./backend-operations/.env
echo JWKS_URI=${{secrets.JWKS_URI}} >> ./backend-operations/.env
echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@master
with:
aws-region: ap-southeast-1
role-to-assume: ${{secrets.ROLE_ARN}}
- run: npm ci
working-directory: ./backend-operations
- run: npm run build --if-present
working-directory: ./backend-operations
- run: npm test
working-directory: ./backend-operations
- name: Install Serverless Framework
run: npm install -g serverless
- name: Serverless Authentication
run: sls config credentials --provider aws --key ${{ env.AWS_ACCESS_KEY_ID }} --secret ${{ env.AWS_SECRET_ACCESS_KEY }}
- name: Deploy to AWS
run: serverless deploy --stage prod --verbose
working-directory: './backend-operations'
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
token: ${{secrets.CODECOV_SECRET_TOKEN}}

Selenium tests fail on CircleCI

I have a Django app whose Selenium tests I am trying to run on CircleCI, but even though they run fine locally in my test environment, they keep failing on CircleCI with a NoSuchElementException from Selenium.
At the beginning of most of my browser tests, I run the following method, which is what is making the tests fail:
def login(self):
driver.get(self.live_server_url + reverse("login"))
# FAILURE HAPPENS HERE: Not able to find the `id_email` element
driver.find_element_by_id("id_email").send_keys(u.email)
driver.find_element_by_id("id_password").send_keys("12345678")
driver.find_element_by_id("submit-login").click()
config.yml
version: 2
jobs:
build:
docker:
- image: circleci/python:3.6.5-node-browsers
environment:
CI_TESTING: 1
- image: redis
working_directory: ~/repo
steps:
- checkout
# Selenium setup
- run: mkdir test-reports
- run:
name: Download Selenium
command: |
curl -O http://selenium-release.storage.googleapis.com/3.5/selenium-server-standalone-3.5.3.jar
- run:
name: Start Selenium
command: |
java -jar selenium-server-standalone-3.5.3.jar -log test-reports/selenium.log
background: true
- restore_cache:
name: Restore Pip Package Cache
keys:
- v1-dependencies-{{ checksum "requirements.txt" }}
- v1-dependencies-
- run:
name: Install Pip Dependencies
command: |
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
- save_cache:
name: Save Pip Package Cache
key: v1-dependencies-{{ checksum "requirements.txt" }}
paths:
- ./venv
- restore_cache:
name: Restore Yarn Package Cache
keys:
- yarn-packages-{{ .Branch }}-{{ checksum "yarn.lock" }}
- yarn-packages-{{ .Branch }}
- yarn-packages-master
- yarn-packages-
- run:
name: Install Yarn Dependencies
command: |
yarn install
- save_cache:
name: Save Yarn Package Cache
key: yarn-packages-{{ .Branch }}-{{ checksum "yarn.lock" }}
paths:
- node_modules/
- run:
name: Run Django Tests
command: |
. venv/bin/activate
./test.sh
- store_artifacts:
path: test-reports
destination: test-reports
Driver definition:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("headless")
driver = webdriver.Chrome(chrome_options=chrome_options)
Is my CircleCI setup wrong? I have looked at multiple pages of the documentation and it all seems right to me.
https://circleci.com/docs/2.0/project-walkthrough/#install-and-run-selenium-to-automate-browser-testing
https://github.com/CircleCI-Public/circleci-demo-python-flask/blob/master/.circleci/config.yml#L16:7
https://circleci.com/docs/2.0/browser-testing/

How can I build a Docker image and push it to ECR with CircleCI 2.0?

I'm trying to upgrade from CircleCI 1.0 to 2.0 and I'm having trouble getting the Docker images to build. I've got the following job:
... There is another Job here which runs some tests
deploy-aws:
# machine: true
docker:
- image: ecrurl/backend
aws_auth:
aws_access_key_id: ID1
aws_secret_access_key: $ECR_AWS_SECRET_ACCESS_KEY # or project UI envar reference
environment:
TAG: $CIRCLE_BRANCH-$CIRCLE_SHA1
ECR_URL: ecrurl/backend
DOCKER_IMAGE: $ECR_URL:$TAG
STAGING_BUCKET: staging
TESTING_BUCKET: testing
PRODUCTION_BUCKET: production
NPM_TOKEN: $NPM_TOKEN
working_directory: ~/backend
steps:
- run:
name: Install awscli
command: sudo apt-get -y -qq install awscli
- checkout
- run:
name: Build Docker image
command: |
if [ "${CIRCLE_BRANCH}" == "master" ]; then
docker pull $ECR_URL:latest
docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
docker tag backend $DOCKER_IMAGE
docker push $DOCKER_IMAGE
docker tag -f $DOCKER_IMAGE $ECR_URL:latest
docker push $ECR_URL:latest
fi
workflows:
version: 2
build-deploy:
jobs:
- build # This one simply runs test
- deploy-aws:
requires:
- build
Running this throws the following error:
#!/bin/bash -eo pipefail
sudo apt-get -y -qq install awscli
/bin/bash: sudo: command not found
Exited with code 127
All I had to do before was this:
dependencies:
pre:
- $(aws ecr get-login --region us-west-2)
deployment:
staging:
branch: staging
- docker pull $ECR_URL:latest
- docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
- docker tag backend $DOCKER_IMAGE
- docker push $DOCKER_IMAGE
- docker tag -f $DOCKER_IMAGE $ECR_URL:latest
- docker push $ECR_URL:latest
Here is the config I've changed to make this work:
deploy-aws:
docker:
- image: docker:17.05.0-ce-git
steps:
- checkout
- setup_remote_docker
- run:
name: Install dependencies
command: |
apk add --no-cache \
py-pip=9.0.0-r1
pip install \
docker-compose==1.12.0 \
awscli==1.11.76
- restore_cache:
keys:
- v1-{{ .Branch }}
paths:
- /caches/app.tar
- run:
name: Load Docker image layer cache
command: |
set +o pipefail
docker load -i /caches/app.tar | true
- run:
name: Build Docker image
command: |
if [ "${CIRCLE_BRANCH}" == "master" ]; then
docker build -t backend --build-arg NPM_TOKEN=$NPM_TOKEN .
fi
- run:
name: Save Docker image layer cache
command: |
mkdir -p /caches
docker save -o /caches/app.tar app
- save_cache:
key: v1-{{ .Branch }}-{{ epoch }}
paths:
- /caches/app.tar
- run:
name: Tag and push to ECR
command: |
if [ "${CIRCLE_BRANCH}" == "master" ]; then
docker tag backend $DOCKER_IMAGE
docker push $DOCKER_IMAGE
docker tag -f $DOCKER_IMAGE $ECR_URL:latest
docker push $ECR_URL:latest
fi
Check out this link: https://github.com/builtinnya/circleci-2.0-beta-docker-example/blob/master/.circleci/config.yml#L39
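One more thing worth double-checking in that job: with setup_remote_docker, the Docker engine still has to be authenticated against ECR before the push. A minimal sketch, assuming the same us-west-2 region as the old 1.0 config:
- run:
    name: Log in to Amazon ECR
    command: |
      # awscli 1.11.x still ships "get-login"; it prints a docker login command to execute
      $(aws ecr get-login --region us-west-2)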