Failed to run dockerfile -> unknown instruction: APT-GET

Alright, so I'm trying to create a pipeline to put my application online on Amazon AWS.
I am using the following Dockerfile:
FROM maven:3.8.5-openjdk-17 AS MAVEN_BUILD
RUN apt-get update && \
apt-get upgrade -y && |
apt-get install git -y
WORKDIR /repositorio/
RUN git clone https://gitlab.com/booking_hub/grupo_04_bookinghub.git
COPY /repositorio/grupo_04_bookinghub/backend/APIrest/APIrest/pom.xml /build/
COPY /repositorio/grupo_04_bookinghub/backend/APIrest/APIrest/src /build/src/
WORKDIR /build/
RUN mvn clean install "-Dmaven.test.skip=true" && mvn package
FROM openjdk:17-slim
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/APIrest-*.jar /app/APIrest.jar
ENTRYPOINT ["java","-Dspring.profiles.active=dev", "-jar", "APIrest.jar"]
I'm having the following error:
error
My docker-compose file:
version: "3.7"
services:
#another container --- backend
springweb:
container_name: bookinghub
build: . #the Dockerfile is needed to build the image
ports:
- "8080:8080"
networks:
- servers
environment:
SPRING_PROFILES_ACTIVE: dev
networks:
servers:
driver: bridge
This docker-compose file is called by an Ansible playbook (from a management machine):
- hosts: backend
become: yes
tasks:
# 1
### task unnecessary when using docker, mvn is already pulled from the image that docker will run
# - name: "System : installing mvn"
# shell:
# cmd: "sudo mvn install"
# chdir: /home/ubuntu/backend
#1
- name: "System: Updating the system"
become: true
shell:
cmd: "apt-get update"
#2
- name: "System: Installing Docker dependencies"
become: true
apt:
name: "{{item}}"
state: present
update_cache: yes
loop:
- ca-certificates
- curl
- gnupg
- gnupg-agent
- lsb-release
#3
- name: "Docker : add GPG key"
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present
#4
- name: "Docker : add the installer repository"
apt_repository:
repo: deb https://download.docker.com/linux/ubuntu bionic stable
state: present
#5
- name: "Docker : installation"
apt:
name: "{{item}}"
state: latest
update_cache: yes
loop:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-compose
#6
- name: "Docker : install python and pip, Ansible needs them to start and run Docker"
apt:
name: "{{item}}"
state: latest
update_cache: yes
loop:
- python3
- pip
#7
- name: " Docker : Install the module Ansible needs for Docker"
pip:
name: docker
#End of Docker installation
handlers:
- name: "System : restart Docker"
service:
name: docker
state: restarted
- name: Create the servers network
docker_network:
name: servers
driver: bridge
state: present
force: yes
- hosts: backend
tasks:
- name: "System: Running the Backend application's docker compose"
become: true
shell:
cmd: "docker-compose -f docker-compose-java.yml up"
chdir: /home/ubuntu/Backend
I am stuck on this error and don't know what to do. The official Maven image page says it runs on an Ubuntu-based image, so apt-get should have worked.
I also tried changing the Maven version to 3.8.1, which I understood to still use an Ubuntu base where apt-get works, but the same error occurred.
I expect the Dockerfile to download a Maven image, run a git clone of my project, and build it inside the Docker image (which will be called by a docker-compose file).

Looks like https://hub.docker.com/_/maven is built on Oracle Linux, which is Red Hat based.
See https://github.com/carlossg/docker-maven/blob/ac292f26884bf2be9fe69f6e397da3b124c1e35c/openjdk-18/Dockerfile#L1
I think you should try https://oracle-base.com/articles/linux/installing-software-packages#rpm and https://oracle-base.com/articles/linux/installing-software-packages#yum as your package managers.
Good luck!
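If the base really is Oracle Linux, the apt-get lines in the first stage need a Red Hat family package manager instead. A minimal sketch of that fix, assuming the image ships microdnf as recent Oracle Linux based openjdk images do (fall back to yum or dnf if it does not):

```dockerfile
FROM maven:3.8.5-openjdk-17 AS MAVEN_BUILD
# Oracle Linux images ship microdnf/yum rather than apt-get; also note the
# line continuations must be backslashes, not pipes.
RUN microdnf install -y git || yum install -y git
```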

Related

New Angular project in a devcontainer: Access Denied

I am developing in Visual Studio Code (Windows) using a .devcontainer, doing some testing with the Django and Angular frameworks. Everything works perfectly, but when I run the command ng new angular-14-crud-example inside the container I hit a problem: if I then restart Visual Studio Code for any reason, the devcontainer no longer starts and returns the following error:
failed to solve: error from sender: open C:\Users\user\project\angular-14-crud-example\node_modules\make-fetch-happen\node_modules\mkdirp: Access denied.
Below are the details:
Django_Dockerfile:
FROM mcr.microsoft.com/devcontainers/anaconda:0-3
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then umask 0002 && /opt/conda/bin/conda env update -n base -f /tmp/conda-tmp/environment.yml; fi \
&& rm -rf /tmp/conda-tmp
RUN pip install --upgrade pip
RUN mkdir /workspace
WORKDIR /workspace
COPY requirements.txt /workspace/
RUN pip install -r requirements.txt
COPY . /workspace/
docker-compose.yml
version: '3.8'
services:
app:
build:
context: ..
dockerfile: .devcontainer/Django_Dockerfile
env_file:
- .env
volumes:
- ../..:/workspaces:cached
# Overrides default command so things don't shut down after the process ends.
command: sleep infinity
# Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
network_mode: service:db
db:
image: postgres:latest
restart: unless-stopped
volumes:
- postgres-data:/var/lib/postgresql/data
env_file:
- .env
volumes:
postgres-data:
devcontainer.json:
{
"name": "Anaconda (Python 3) & PostgreSQL",
"dockerComposeFile": "docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"features": {
"ghcr.io/devcontainers-contrib/features/angular-cli:1": {}
},
}
Some detailed error log
[2023-01-02T13:39:52.218Z] Dev Containers 0.266.1 in VS Code 1.74.2 (e8a3071ea4344d9d48ef8a4df2c097372b0c5161).
[2023-01-02T13:39:52.218Z] Start: Resolving Remote
[2023-01-02T13:39:52.268Z] Setting up container for folder or workspace: c:\Users\user\project
[2023-01-02T13:39:52.288Z] Start: Check Docker is running
[2023-01-02T13:39:52.290Z] Start: Run: docker version --format {{.Server.APIVersion}}
[2023-01-02T13:39:53.819Z] Stop (1529 ms): Run: docker version --format {{.Server.APIVersion}}
[2023-01-02T13:39:53.820Z] Server API version: 1.41
[2023-01-02T13:39:53.823Z] Stop (1535 ms): Check Docker is running
[2023-01-02T13:39:53.825Z] Start: Run: docker volume ls -q
[2023-01-02T13:39:54.907Z] Stop (1082 ms): Run: docker volume ls -q
[2023-01-02T13:39:55.293Z] Start: Run: docker ps -q -a --filter label=vsch.local.folder=c:\Users\user\project --filter label=vsch.quality=stable
[2023-01-02T13:39:56.247Z] Stop (954 ms): Run: docker ps -q -a --filter label=vsch.local.folder=c:\Users\user\project --filter label=vsch.quality=stable
[2023-01-02T13:39:56.248Z] Start: Run: C:\Users\user\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Users\user\project --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Users\user\project --log-level debug --log-format json --config c:\Users\user\project\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2023-01-02T13:39:57.189Z] (node:18596) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
[2023-01-02T13:39:57.190Z] (Use `Code --trace-deprecation ...` to show where the warning was created)
[2023-01-02T13:39:57.194Z] #devcontainers/cli 0.25.2. Node.js v16.14.2. win32 10.0.19044 x64.
[2023-01-02T13:39:57.194Z] Start: Run: docker buildx version
[2023-01-02T13:39:59.057Z] Stop (1863 ms): Run: docker buildx version
[2023-01-02T13:39:59.058Z] github.com/docker/buildx v0.9.1 ed00243a0ce2a0aee75311b06e32d33b44729689
[2023-01-02T13:39:59.058Z]
[2023-01-02T13:39:59.059Z] Start: Resolving Remote
[2023-01-02T13:39:59.072Z] Start: Run: docker-compose version --short
[2023-01-02T13:40:01.002Z] Stop (1930 ms): Run: docker-compose version --short
[2023-01-02T13:40:01.003Z] Docker Compose version: 2.12.0
[2023-01-02T13:40:01.006Z] Start: Run: docker ps -q -a --filter label=com.docker.compose.project=mydemo_devcontainer --filter label=com.docker.compose.service=app
[2023-01-02T13:40:02.023Z] Stop (1017 ms): Run: docker ps -q -a --filter label=com.docker.compose.project=mydemo_devcontainer --filter label=com.docker.compose.service=app
[2023-01-02T13:40:02.026Z] Start: Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:03.955Z] Stop (1929 ms): Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:03.955Z] name: devcontainer
services:
app:
build:
context: c:\Users\user\project
dockerfile: .devcontainer/Django_Dockerfile
command:
- sleep
- infinity
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
network_mode: service:db
volumes:
- type: bind
source: c:\Users\user\project
target: /workspaces
bind:
create_host_path: true
db:
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
image: postgres:latest
networks:
default: null
restart: unless-stopped
volumes:
- type: volume
source: postgres-data
target: /var/lib/postgresql/data
volume: {}
networks:
default:
name: devcontainer_default
volumes:
postgres-data:
name: devcontainer_postgres-data
[2023-01-02T13:40:03.969Z] Start: Run: docker events --format {{json .}} --filter event=start
[2023-01-02T13:40:04.315Z] PersistedPath=c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data, ContainerHasLabels=false
[2023-01-02T13:40:04.320Z] Start: Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:06.248Z] Stop (1928 ms): Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:06.248Z] name: devcontainer
services:
app:
build:
context: c:\Users\user\project
dockerfile: .devcontainer/Django_Dockerfile
command:
- sleep
- infinity
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
network_mode: service:db
volumes:
- type: bind
source: c:\Users\user\project
target: /workspaces
bind:
create_host_path: true
db:
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
image: postgres:latest
networks:
default: null
restart: unless-stopped
volumes:
- type: volume
source: postgres-data
target: /var/lib/postgresql/data
volume: {}
networks:
default:
name: devcontainer_default
volumes:
postgres-data:
name: devcontainer_postgres-data
[2023-01-02T13:40:06.257Z] Start: Run: docker inspect --type image mcr.microsoft.com/devcontainers/anaconda:0-3
[2023-01-02T13:40:07.211Z] Stop (954 ms): Run: docker inspect --type image mcr.microsoft.com/devcontainers/anaconda:0-3
[2023-01-02T13:40:07.881Z] local container features stored at: c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\node_modules\vscode-dev-containers\container-features
[2023-01-02T13:40:07.887Z] Start: Run: tar --no-same-owner -x -f -
[2023-01-02T13:40:08.644Z] Stop (757 ms): Run: tar --no-same-owner -x -f -
[2023-01-02T13:40:08.661Z] * Processing feature: ghcr.io/devcontainers-contrib/features/angular-cli:1
[2023-01-02T13:40:09.393Z] * Fetching feature: angular-cli_1_oci
[2023-01-02T13:40:10.457Z] Start: Run: docker build -t dev_container_feature_content_temp -f C:\Users\D525C~1.SAN\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1672666807879\Dockerfile.buildContent C:\Users\D525C~1.SAN\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1672666807879
[2023-01-02T13:40:11.323Z]
[2023-01-02T13:40:12.053Z]
[...]
[2023-01-02T13:41:36.506Z]
[+] Building 78.3s (7/22)
=> [internal] load build definition from Dockerfile-with-features 0.1s
=> => transferring dockerfile: 3.36kB 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/dev_container_feature_ 0.0s
=> [internal] load metadata for mcr.microsoft.com/devcontainers/anaconda 0.4s
=> [dev_containers_feature_content_source 1/1] FROM docker.io/library/de 0.0s
=> [dev_container_auto_added_stage_label 1/11] FROM mcr.microsoft.com/d 0.0s
=> ERROR [internal] load build context 77.6s
=> => transferring context: 265.86MB 77.5s
------
> [internal] load build context:
------
failed to solve: error from sender: open C:\Users\user\project\angular-14-crud-example\node_modules\make-fetch-happen\node_modules\mkdirp: Accesso negato. ("Access denied" in the Italian locale.)
[2023-01-02T13:41:36.669Z] Stop (81610 ms): Run: docker-compose --project-name mydemo_devcontainer -f c:\Users\user\project\.devcontainer\docker-compose.yml -f c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data\docker-compose\docker-compose.devcontainer.build-1672666815049.yml build
[2023-01-02T13:41:36.671Z] Error: Command failed: docker-compose --project-name mydemo_devcontainer -f c:\Users\user\project\.devcontainer\docker-compose.yml -f c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data\docker-compose\docker-compose.devcontainer.build-1672666815049.yml build
[2023-01-02T13:41:36.671Z] at pF (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1850:431)
[2023-01-02T13:41:36.671Z] at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-01-02T13:41:36.672Z] at async foe (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1850:2457)
[2023-01-02T13:41:36.672Z] at async loe (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1832:2396)
[2023-01-02T13:41:36.672Z] at async Poe (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1899:2301)
[2023-01-02T13:41:36.672Z] at async Zf (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1899:3278)
[2023-01-02T13:41:36.673Z] at async aue (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:2020:15276)
[2023-01-02T13:41:36.673Z] at async oue (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:2020:15030)
[2023-01-02T13:41:36.707Z] Stop (100459 ms): Run: C:\Users\user\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Users\user\project --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Users\user\project --log-level debug --log-format json --config c:\Users\user\project\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2023-01-02T13:41:36.708Z] Exit code 1
[2023-01-02T13:41:36.716Z] Command failed: C:\Users\user\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Users\user\project --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Users\user\project --log-level debug --log-format json --config c:\Users\user\project\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2023-01-02T13:41:36.716Z] Exit code 1
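One detail the log makes visible: the build fails while Docker is transferring a 265.86MB build context, i.e. while it is reading the node_modules tree that ng new generated on the Windows side. A hedged sketch of one way to keep that tree out of the transfer entirely: a .dockerignore at the build context root (c:\Users\user\project in this log — an assumption about the layout):

```
# .dockerignore (sketch) - placed at the build context root so Docker never
# tries to read the npm-generated files that Windows refuses to open.
**/node_modules
.git
```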

whitelist AWS RDS on CircleCI

I have a CircleCI configuration to run my tests before merging to master. I start my server for the tests and then need to connect to my RDS database, which is protected with security groups. I tried to whitelist the CircleCI IP to allow this, but with no luck.
version: 2.1
orbs:
aws-white-list-circleci-ip: configure/aws-white-list-circleci-ip@1.0.0
aws-cli: circleci/aws-cli@0.1.13
jobs:
aws_setup:
docker:
- image: cimg/python:3.11.0
steps:
- aws-cli/install
- aws-white-list-circleci-ip/add
build:
docker:
- image: cimg/node:18.4
steps:
- checkout
- run: node --version
- restore_cache:
name: Restore Npm Package Cache
keys:
# Find a cache corresponding to this specific package-lock.json checksum
# when this file is changed, this key will fail
- v1-npm-deps-{{ checksum "package-lock.json" }}
# Find the most recently generated cache used from any branch
- v1-npm-deps-
- run: npm install
- run:
name: start the server
command: npm start
background: true
- save_cache:
name: Save Npm Package Cache
key: v1-npm-deps-{{ checksum "package-lock.json" }}
paths:
- ./node_modules
- run:
name: run tests
command: npm run test
- aws-white-list-circleci-ip/remove
workflows:
build-workflow:
jobs:
- aws_setup:
context: aws_context
- build:
requires:
- aws_setup
context: aws_context
my context environment
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_SECRET_ACCESS_KEY
GROUPID
the error
the orbs I am using
https://circleci.com/developer/orbs/orb/configure/aws-white-list-circleci-ip
I figured it out:
version: 2.1
orbs:
aws-cli: circleci/aws-cli@0.1.13
jobs:
build:
docker:
- image: cimg/python:3.11.0-node
steps:
- checkout
- run: node --version
- restore_cache:
name: Restore Npm Package Cache
keys:
# Find a cache corresponding to this specific package-lock.json checksum
# when this file is changed, this key will fail
- v1-npm-deps-{{ checksum "package-lock.json" }}
# Find the most recently generated cache used from any branch
- v1-npm-deps-
- run: npm install
- aws-cli/install
- run:
command: |
public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
echo "this computers public ip address is $public_ip_address"
aws ec2 authorize-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
- save_cache:
name: Save Npm Package Cache
key: v1-npm-deps-{{ checksum "package-lock.json" }}
paths:
- ./node_modules
- run:
name: run tests
command: npm run test
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
build-workflow:
jobs:
- build:
context: aws_context
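One thing this inline version drops from the earlier orb-based attempt is the removal step: the ingress rule opened for the runner's IP stays in the security group after the job finishes. A hedged cleanup step (same assumed $GROUPID and $AWS_DEFAULT_REGION context; when: always so it runs even if the tests fail) could be appended to the steps list:

```yaml
- run:
    name: remove CircleCI IP from the security group
    when: always   # run even when the tests fail, so the rule does not linger
    command: |
      public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
      aws ec2 revoke-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\"}]}]"
```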

How can I solve a syntax error in a YAML file when pushing to GitHub?

I'm using PostgreSQL with Django. I set up a GitHub Action that verifies my code whenever I push or pull, and I get the following error:
You have an error in your yaml syntax on line 19
Here is my yaml:
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python application
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:14
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: github_actions
ports:
- 5433:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9.7
uses: actions/setup-python@v2
with:
python-version: "3.9.7"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Test with Unittest
env:
SECRET_KEY: ${{secrets.SECRET_KEY}}
EMAIL_FROM_USER: ${{secrets.EMAIL_FROM_USER}}
EMAIL_HOST_PASSWORD: ${{secrets.EMAIL_HOST_PASSWORD}}
DB_NAME: ${{secrets.DB_NAME}}
DB_USER: ${{secrets.DB_USER}}
DB_PASSWORD: ${{secrets.DB_PASSWORD}}
DB_HOST: ${{secrets.DB_HOST}}
DB_ENGINE: ${{secrets.DB_ENGINE}}
DB_PORT: ${{secrets.DB_PORT}}
run: |
python3 manage.py test
line 19 corresponds to image: postgres:14 but I can't see any syntax error here. I've looked at some examples and it looks exactly the same.
For GitHub Actions, configuring a Django web app service container from the Docker Hub postgres image works fine with just this:
image: postgres
You can check whether it works for your particular case.
To answer my question, I followed these two posts that are up to date:
https://www.hacksoft.io/blog/github-actions-in-action-setting-up-django-and-postgres
https://www.digitalocean.com/community/tutorials/how-to-use-postgresql-with-your-django-application-on-ubuntu-14-04
Make sure you install all the dependencies.
I also set the port to 5432 and image to postgres:14.2
(To find your PostgreSQL version you can run /usr/lib/postgresql/14/bin/postgres -V)
See final yml file:
name: Python application
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:14.2
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: github_action
ports:
- 5432:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.10
uses: actions/setup-python@v2
with:
python-version: "3.10"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Test with Unittest
env:
SECRET_KEY: ${{secrets.SECRET_KEY}}
EMAIL_FROM_USER: ${{secrets.EMAIL_FROM_USER}}
EMAIL_HOST_PASSWORD: ${{secrets.EMAIL_HOST_PASSWORD}}
DB_NAME: ${{secrets.DB_NAME}}
DB_USER: ${{secrets.DB_USER}}
DB_PASSWORD: ${{secrets.DB_PASSWORD}}
DB_HOST: ${{secrets.DB_HOST}}
DB_ENGINE: ${{secrets.DB_ENGINE}}
DB_PORT: ${{secrets.DB_PORT}}
run: |
python3 manage.py test
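The workflow parser's line numbers can point at an innocent-looking line near the real problem, and one frequent cause is a tab character used for indentation, which YAML forbids. A quick local check, sketched here against a scratch file (assuming a POSIX shell with grep):

```shell
# YAML forbids tabs for indentation; a stray tab often produces an opaque
# "error in your yaml syntax" message pointing at a nearby line.
# Demo: write a workflow file with a tab on line 2, then scan for tabs.
mkdir -p /tmp/wf-demo
printf 'jobs:\n\tbuild:\n' > /tmp/wf-demo/ci.yml
TAB=$(printf '\t')
grep -n "$TAB" /tmp/wf-demo/ci.yml   # prints file line numbers containing a tab
```

Run the same grep over .github/workflows/*.yml in your repository to spot tabs before pushing.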

How to publish changes to Docker images using GitHub Actions

I am working on a CI/CD pipeline using Docker and GitHub Packages/Actions. I have 2 workflows: build.yml and deploy.yml.
The build.yml workflow is supposed to pull the Docker images from GitHub Packages, build them, run automated tests, then push the new images to GitHub Packages.
The deploy.yml workflow pulls the images to the server and runs them.
The issue I am having is that my local changes are not being updated on the server.
build.yml:
name: Build and Test
on:
push:
branches:
- development
env:
BACKEND_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/backend
FRONTEND_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/frontend
NGINX_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx
jobs:
test:
name: Build Images and Run Automated Tests
runs-on: ubuntu-latest
steps:
- name: Checkout master
uses: actions/checkout@v1
- name: Add environment variables to .env
run: |
echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=${{ secrets.SECRET_KEY }} >> .env
echo SQL_DATABASE=${{ secrets.SQL_DATABASE }} >> .env
echo SQL_USER=${{ secrets.SQL_USER }} >> .env
echo SQL_PASSWORD=${{ secrets.SQL_PASSWORD }} >> .env
echo SQL_HOST=${{ secrets.SQL_HOST }} >> .env
echo SQL_PORT=${{ secrets.SQL_PORT }} >> .env
- name: Set environment variables
run: |
echo "BACKEND_IMAGE=$(echo ${{env.BACKEND_IMAGE}} )" >> $GITHUB_ENV
echo "FRONTEND_IMAGE=$(echo ${{env.FRONTEND_IMAGE}} )" >> $GITHUB_ENV
echo "NGINX_IMAGE=$(echo ${{env.NGINX_IMAGE}} )" >> $GITHUB_ENV
- name: Log in to GitHub Packages
run: echo ${PERSONAL_ACCESS_TOKEN} | docker login ghcr.io -u ${{ secrets.NAMESPACE }} --password-stdin
env:
PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
- name: Pull images
run: |
docker pull ${{ env.BACKEND_IMAGE }} || true
docker pull ${{ env.FRONTEND_IMAGE }} || true
docker pull ${{ env.NGINX_IMAGE }} || true
- name: Build images
run: |
docker-compose -f docker-compose.ci.yml build
- name: Run Backend Tests
run: |
docker-compose -f docker-compose.ci.yml run backend python manage.py test
- name: Push images
run: |
docker push ${{ env.BACKEND_IMAGE }}
docker push ${{ env.FRONTEND_IMAGE }}
docker push ${{ env.NGINX_IMAGE }}
docker-compose.ci.yml:
version: "3.8"
services:
backend:
build:
context: ./backend
dockerfile: Dockerfile.prod
command: gunicorn backend.wsgi:application --bind 0.0.0.0:8000
volumes:
- ./backend:/backend
- static_volume:/static
- media_volume:/media
expose:
- 8000
env_file: .env
frontend:
build:
context: ./frontend
volumes:
- frontend_build:/frontend/build
nginx:
build:
context: ./nginx
ports:
- 80:80
volumes:
- frontend_build:/var/www/frontend
depends_on:
- backend
- frontend
volumes:
frontend_build:
static_volume:
media_volume:
backend/Dockerfile.prod:
FROM python:3.9.5-alpine
WORKDIR /backend
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
&& pip install psycopg2 \
&& apk del build-deps
RUN pip install --upgrade pip
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./entrypoint.prod.sh /entrypoint.prod.sh
COPY . /backend/
ENTRYPOINT ["/entrypoint.prod.sh"]
I have tried a few different things to no avail. Any help understanding why my changes are not updating would be appreciated!
It seems like you are re-pushing the same images you pulled instead of the images you built; removing the Pull images step is a quick way to confirm that.
If that is the case, you can either change the way you tag the images in docker-compose.ci.yml or change the images you push in the Push images step.
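A hedged sketch of the tagging fix (the variable name mirrors the workflow's BACKEND_IMAGE and is assumed to be exported into the compose environment): giving the service an explicit image: name makes docker-compose build tag the freshly built image under the exact name the Push images step later pushes.

```yaml
# docker-compose.ci.yml (sketch)
services:
  backend:
    image: "${BACKEND_IMAGE}"   # build now tags the new image under this name
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
```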
Alternatively, you can use the flow documented here with minor changes:
name: Create and publish a Docker image
on:
push:
branches: ['release']
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Log in to the Container registry
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push backend Docker image
uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
with:
context: ./backend # <<<=== Notice this
file: Dockerfile.prod # <<<=== Notice this
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
Add another Dockerfile for the frontend and duplicate the last step for it.

Selenium tests fail on CircleCI

I have a Django app whose Selenium tests I am trying to run on CircleCI, but even though they run fine locally in my test environment, they keep failing on CircleCI with a NoSuchElementException from Selenium.
At the beginning of most of my browser tests, I run the following method, which is what is making the tests fail:
def login(self):
driver.get(self.live_server_url + reverse("login"))
# FAILURE HAPPENS HERE: Not able to find the `id_email` element
driver.find_element_by_id("id_email").send_keys(u.email)
driver.find_element_by_id("id_password").send_keys("12345678")
driver.find_element_by_id("submit-login").click()
config.yml
version: 2
jobs:
build:
docker:
- image: circleci/python:3.6.5-node-browsers
environment:
CI_TESTING: 1
- image: redis
working_directory: ~/repo
steps:
- checkout
# Selenium setup
- run: mkdir test-reports
- run:
name: Download Selenium
command: |
curl -O http://selenium-release.storage.googleapis.com/3.5/selenium-server-standalone-3.5.3.jar
- run:
name: Start Selenium
command: |
java -jar selenium-server-standalone-3.5.3.jar -log test-reports/selenium.log
background: true
- restore_cache:
name: Restore Pip Package Cache
keys:
- v1-dependencies-{{ checksum "requirements.txt" }}
- v1-dependencies-
- run:
name: Install Pip Dependencies
command: |
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
- save_cache:
name: Save Pip Package Cache
key: v1-dependencies-{{ checksum "requirements.txt" }}
paths:
- ./venv
- restore_cache:
name: Restore Yarn Package Cache
keys:
- yarn-packages-{{ .Branch }}-{{ checksum "yarn.lock" }}
- yarn-packages-{{ .Branch }}
- yarn-packages-master
- yarn-packages-
- run:
name: Install Yarn Dependencies
command: |
yarn install
- save_cache:
name: Save Yarn Package Cache
key: yarn-packages-{{ .Branch }}-{{ checksum "yarn.lock" }}
paths:
- node_modules/
- run:
name: Run Django Tests
command: |
. venv/bin/activate
./test.sh
- store_artifacts:
path: test-reports
destination: test-reports
Driver definition:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("headless")
driver = webdriver.Chrome(chrome_options=chrome_options)
Is my CircleCI setup wrong? I have looked at multiple documentation pages and it all seems right to me.
https://circleci.com/docs/2.0/project-walkthrough/#install-and-run-selenium-to-automate-browser-testing
https://github.com/CircleCI-Public/circleci-demo-python-flask/blob/master/.circleci/config.yml#L16:7
https://circleci.com/docs/2.0/browser-testing/
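A common cause of NoSuchElementException only on CI is timing: the headless browser on the CircleCI executor renders the login page more slowly than a local machine, so find_element_by_id fires before the form exists. Selenium's WebDriverWait with expected_conditions is the usual remedy; its retry semantics boil down to something like this stdlib-only sketch (an illustration, not Selenium's own code):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Re-evaluate `condition` every `poll` seconds until it returns a truthy
    value or `timeout` seconds elapse, mirroring WebDriverWait.until()."""
    deadline = time.monotonic() + timeout
    last_exc = None
    while time.monotonic() < deadline:
        try:
            value = condition()
            if value:
                return value
        except Exception as exc:  # e.g. NoSuchElementException while the page loads
            last_exc = exc
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s") from last_exc
```

With Selenium itself the idiomatic form would be WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "id_email"))) before the send_keys calls.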