I've been working on setting up a GitHub Actions workflow to build a Docker image. I need to pass environment variables into the image so that my Django project will run correctly. Unfortunately, when I build the image, it doesn't receive the values of the variables.
The relevant part of my workflow file:
- name: Build, tag, and push image to AWS ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    IMAGE_TAG: ${{ github.sha }}
    aws_ses_access_key_id: ${{ secrets.AWS_SES_ACCESS_KEY_ID }}
    aws_ses_secret_access_key: ${{ secrets.AWS_SES_SECRET_ACCESS_KEY }}
    DATABASE_ENGINE: ${{ secrets.DATABASE_ENGINE }}
    db_host: ${{ secrets.DB_HOST }}
    db_password: ${{ secrets.DB_PASSWORD }}
    db_port: ${{ secrets.DB_PORT }}
    db_username: ${{ secrets.DB_USERNAME }}
    django_secret_key: ${{ secrets.DJANGO_SECRET_KEY }}
    fcm_server_key: ${{ secrets.FCM_SERVER_KEY }}
  run: |
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
    echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_ENV
In my Dockerfile, I've put the following:
ENV aws_ses_access_key_id=$aws_ses_access_key_id aws_ses_secret_access_key=$aws_ses_secret_access_key DATABASE_ENGINE=$DATABASE_ENGINE db_host=$db_host db_password=$db_password db_port=$db_port db_username=$db_username django_secret_key=$django_secret_key fcm_server_key=$fcm_server_key
None of the variables are passing. I've tried using $variable_name and ${variable_name} with no luck. What am I doing wrong?
Dollar substitution in the value of an ENV instruction in a Dockerfile does not expand to an environment variable of the host on which docker build is run. Instead, it is replaced with the value of a Docker ARG, which you declare with an ARG instruction and pass via the --build-arg ARG_NAME=ARG_VALUE command-line option of docker build; $ARG_NAME in your ENV instruction then expands to ARG_VALUE.
See: https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build time using the --build-arg flag.
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate or final images like ENV values do. You must add --build-arg for each build argument.
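In practice, that means declaring each variable with ARG in the Dockerfile and passing it at build time. A minimal sketch using just one of the variables from the question (the others follow the same pattern):

# Dockerfile: declare the build argument, then bake it into the image
ARG django_secret_key
ENV django_secret_key=$django_secret_key

and in the workflow's run step:

docker build \
  --build-arg django_secret_key="$django_secret_key" \
  -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .

Keep in mind that values baked in with ENV persist in the final image, so anyone who can pull the image can read these secrets with docker history or docker inspect.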
Related
Currently my ROS package doesn't have the functionality to install our own ros_driver, our own ros_common, and node_registry. So I have to update the CI/CD to use one ROS package (repository), which I call "system_integration", instead of those independent repos (own ros_driver, own ros_common, and node_registry). For example, here it shows all the places where own_ros_driver is used in the repo; the same can be done to look for system_integration. This is the cd.yml file:
- name: Clone ROS Driver
  uses: actions/checkout@v2
  with:
    repository: /own_ros_driver
    path: own_ros_driver
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    clean: true
    ref: 'v1.0.1'
and the ci.yml
    repository: own_ros_driver
    path: deepx_ros_driver
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
- name: Downloading with VCS in deepx_ros_driver
  uses: ./actions/vcstool-checkout
So, any help on how I can update the CI/CD to use one ROS package (repository) instead of the independent repos?
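For illustration, the swapped-in checkout step might look like the following (the owner prefix is elided as in the snippet above, and the tag is a placeholder):

- name: Clone system_integration
  uses: actions/checkout@v2
  with:
    repository: /system_integration
    path: system_integration
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    clean: true
    ref: 'v1.0.1'  # placeholder tag

Every place that currently checks out or references own_ros_driver, own_ros_common, or node_registry would then point at the system_integration path instead.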
I have what I would consider a relatively simple GitHub workflow file:
create_manifest_docker_files:
  # needs: [build_amd64_dockerfile, build_arm64_dockerfile]
  env:
    IMAGE_REPO: ${{ secrets.aws_ecr_image_repo }}
    AWS_ACCOUNT_ID: ${{ secrets.aws_account_id }}
    AWS_REGION: ${{ secrets.aws_region }}
    AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
  runs-on: self-hosted
  steps:
    - uses: actions/checkout@v2
    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1
    - name: Create docker manifest
      run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-amd64 --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-arm64
    - name: Push the new manifest file to Amazon ECR
      run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
Whenever this workflow runs via GitHub Actions, I see the following error:
Run docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
  docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
  shell: /usr/bin/bash -e {0}
  env:
    IMAGE_REPO: ***
    AWS_ACCOUNT_ID: ***
    AWS_REGION: ***
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
failed to put manifest ***.dkr.ecr.***.amazonaws.com/***:latest: manifest blob unknown: Images with digests '[sha256:a1a4efe0c3d0e7e26398e522e14037acb659be47059cb92be119d176750d3b56, sha256:5d1b00451c1cbf910910d951896f45b69a9186c16e5a92aab74dcc5dc6944c60]' required for pushing image into repository with name '***' in the registry with id '***' do not exist
Error: Process completed with exit code 1.
I'm not quite sure I actually understand the problem here. The previous step, "Create docker manifest" completes successfully with no problem, but the "Push the new manifest file to AWS ECR" step fails with the error above.
When looking in AWS ECR, I only have two images -- latest-amd64 and latest-arm64. Neither of their digests matches the values in the error message above.
When exporting those same environment variables to my CLI session and running those commands manually, everything works fine:
root@github-runner:/home/ubuntu/docker-runner# docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-amd64 --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-arm64
Created manifest list [obfuscated-from-stackoverflow].dkr.ecr.us-east-1.amazonaws.com/[obfuscated-from-stackoverflow]:latest
root@github-runner:/home/ubuntu/docker-runner# docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
sha256:e4b5cc4cfafca560724fa6c6a5f41a2720a4ccfd3a9d18f90c3091866061a88d
My question is -- why would this work from the CLI itself but not from the GitHub Actions workflow? I have some previous runs that show this working perfectly fine with the workflow contents above, but now it's failing for some reason. Not quite sure if the issue here is within my ECR repository or if it's something locally messed up on the GitHub runner.
In a GitHub Workflow I can define a strategy key and loop through all combinations of a matrix. Here is an example for a CI pipeline of a Node.js app.
name: CI
on:
  pull_request:
jobs:
  test:
    strategy:
      matrix:
        node: [16, 14]
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    name: Test node@${{ matrix.node }} on ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test
Can I achieve the same thing in a cloudbuild.yaml file? I haven't found any mention of this looping functionality in the documentation regarding the Build configuration file schema.
I guess I could achieve what I want using user-defined substitutions and calling the same Cloud Build config file multiple times, passing different substitutions each time... but I was wondering if this is the only possible approach. I would rather have all configuration defined in that single cloudbuild.yaml.
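For reference, a minimal sketch of that substitutions-based workaround (the substitution name and values are illustrative; Cloud Build requires user-defined substitutions to start with an underscore):

# cloudbuild.yaml -- one build per invocation; the "matrix" lives outside the file
substitutions:
  _NODE_VERSION: '16'  # default, overridden per invocation
steps:
  - name: node:${_NODE_VERSION}
    entrypoint: npm
    args: ['ci']
  - name: node:${_NODE_VERSION}
    entrypoint: npm
    args: ['test']

invoked once per combination:

gcloud builds submit --config cloudbuild.yaml --substitutions _NODE_VERSION=16
gcloud builds submit --config cloudbuild.yaml --substitutions _NODE_VERSION=14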
Please help this learner out: I get frequent GitHub Dependabot alerts for "bumping" software versions to a more current one. My issue is I have to go into each (in my case, Django) app to pull or merge files. It's tedious and time-consuming even with my limited number of apps. How do professionals manage the process?
Is there a way to let GitHub just bump whatever needs to be bumped (assuming one doesn't mind apps being broken)?
Yes. You can use GitHub Actions to do this. See the following blog post: Setting up Dependabot with GitHub actions to approve and merge
The code, as currently written, will only automatically merge minor and patch version changes. It will not merge major version changes, which are potentially breaking. You could remove that check, but it is generally not recommended.
You also need to change the following settings on your repo:
Settings -> Actions -> General -> check "Allow GitHub Actions to create and approve pull requests".
Settings -> General -> Pull Requests -> check "Allow auto-merge".
The contents of the GitHub workflow file, "dependabot-approve-and-auto-merge.yml", are:
name: Dependabot Pull Request Approve and Merge
on: pull_request_target
permissions:
  pull-requests: write
  contents: write
jobs:
  dependabot:
    runs-on: ubuntu-latest
    # Checking the actor will prevent your Action run failing on non-Dependabot
    # PRs but also ensures that it only does work for Dependabot PRs.
    if: ${{ github.actor == 'dependabot[bot]' }}
    steps:
      # This first step will fail if there's no metadata and so the approval
      # will not occur.
      - name: Dependabot metadata
        id: dependabot-metadata
        uses: dependabot/fetch-metadata@v1.1.1
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
      # Here the PR gets approved.
      - name: Approve a PR
        run: gh pr review --approve "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # Finally, this sets the PR to allow auto-merging for patch and minor
      # updates if all checks pass.
      - name: Enable auto-merge for Dependabot PRs
        if: ${{ steps.dependabot-metadata.outputs.update-type != 'version-update:semver-major' }}
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
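Note that the workflow above only reacts to pull requests Dependabot has already opened; the version bumps themselves are driven by a .github/dependabot.yml in the repo. A minimal sketch, assuming a pip-based Django project:

version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"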
I want to find a way to avoid specifying aws_access_key and aws_secret_key when using the AWS modules.
Do the AWS modules try to use the credentials in ~/.aws by default when running playbooks?
If yes, how do I instruct Ansible to use AWS credentials under whatever folder I want, e.g. ~/my_ansible_folder?
I ask because I really want to use Ansible Vault for this: cd ~/my_ansible_folder; ansible-vault create aws_keys.yml under ~/my_ansible_folder, then run the playbook with ansible-playbook -i ./inventory --ask-vault-pass site.yml so that the tasks that need AWS credentials use those from the vault and I don't have to specify aws_access_key and aws_secret_key in each task.
The list of boto3 configuration options will interest you, most notably the $AWS_SHARED_CREDENTIALS_FILE environment variable.
I would expect you can create that shared credentials file using a traditional copy: content="[default]\naws_access_key_id=whatever\netc\netc\n", and then set the ansible_python_interpreter fact to env AWS_SHARED_CREDENTIALS_FILE=/path/to/that/credential-file /the/original/ansible_python_interpreter, so that the actual python invocation carries that environment variable with it. For non-boto modules, doing so just costs you running env in addition to python, but honestly, given the bizarre module serialization and deserialization that Ansible does anyway, that extra binary's runtime will be invisible in the scheme of things.
You may have to override $AWS_CONFIG_FILE and $BOTO_CONFIG in the same manner, even pointing them at /dev/null, in order to force boto not to look in your $HOME/.aws directory.
So, for clarity:
- name: create our boto config
  copy:
    content: |
      [default]
      aws_access_key_id={{ access_key_from_vault }}
      aws_secret_access_key={{ secret_key_from_vault }}
    dest: /somewhere/sekrit
    mode: '0600'
  no_log: yes
  register: my_aws_config

- name: grab existing python interp
  set_fact:
    backup_a_py_i: '{{ ansible_python_interpreter | default(ansible_playbook_python) }}'

- name: patch in our env-vars
  set_fact:
    ansible_python_interpreter: >-
      env AWS_SHARED_CREDENTIALS_FILE={{ my_aws_config.dest }}
      {{ backup_a_py_i }}

# and away you go!
- ec2_instance_facts:

# optionally put this in a "rescue:" or whatever you think is reasonable
- file: path={{ my_aws_config.dest }} state=absent
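If you do need to neutralize $AWS_CONFIG_FILE and $BOTO_CONFIG as mentioned above, the same set_fact can carry them (a sketch extending the task above, pointing both at /dev/null):

- name: patch in our env-vars, with boto config files neutralized
  set_fact:
    ansible_python_interpreter: >-
      env AWS_SHARED_CREDENTIALS_FILE={{ my_aws_config.dest }}
      AWS_CONFIG_FILE=/dev/null
      BOTO_CONFIG=/dev/null
      {{ backup_a_py_i }}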