How to update CI/CD to use one ROS package (repository) instead of independent repos?

Currently my ROS package doesn't have the functionality to install our own ros_driver, our own ros_common, and node_registry. So I have to update the CI/CD to use one ROS package (repository), which I call "system_integration", instead of those independent repos (own ros_driver, own ros_common, and node_registry). For example, the snippets below show the places where own_ros_driver is used in the repo; the same locations would need to reference system_integration. This is the cd.yml file:
- name: Clone ROS Driver
  uses: actions/checkout@v2
  with:
    repository: /own_ros_driver
    path: own_ros_driver
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    clean: true
    ref: 'v1.0.1'
and the ci.yml
    repository: own_ros_driver
    path: deepx_ros_driver
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
- name: Downloading with VCS in deepx_ros_driver
  uses: ./actions/vcstool-checkout
So, any help on how to update the CI/CD to use one ROS package (repository) instead of the independent repos would be appreciated.
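For illustration, this is roughly what I imagine the consolidated checkout step would look like (a sketch only; the repository path, tag, and secret name are my guesses carried over from the own_ros_driver step):
- name: Clone system_integration
  uses: actions/checkout@v2
  with:
    repository: /system_integration # assumed: same organization as own_ros_driver
    path: system_integration
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    clean: true
    ref: 'v1.0.1' # assumed: the tag naming scheme carries over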

Related

Can't push Docker manifest file to AWS ECR via GitHub actions, but works via CLI

I have what I would consider a relatively simple GitHub workflow file:
create_manifest_docker_files:
  # needs: [build_amd64_dockerfile, build_arm64_dockerfile]
  env:
    IMAGE_REPO: ${{ secrets.aws_ecr_image_repo }}
    AWS_ACCOUNT_ID: ${{ secrets.aws_account_id }}
    AWS_REGION: ${{ secrets.aws_region }}
    AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
  runs-on: self-hosted
  steps:
    - uses: actions/checkout@v2
    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1
    - name: Create docker manifest
      run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-amd64 --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-arm64
    - name: Push the new manifest file to Amazon ECR
      run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
Whenever this workflow runs via GitHub Actions, I see the following error:
Run docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
  docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
  shell: /usr/bin/bash -e {0}
  env:
    IMAGE_REPO: ***
    AWS_ACCOUNT_ID: ***
    AWS_REGION: ***
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
failed to put manifest ***.dkr.ecr.***.amazonaws.com/***:latest: manifest blob unknown: Images with digests '[sha256:a1a4efe0c3d0e7e26398e522e14037acb659be47059cb92be119d176750d3b56, sha256:5d1b00451c1cbf910910d951896f45b69a9186c16e5a92aab74dcc5dc6944c60]' required for pushing image into repository with name '***' in the registry with id '***' do not exist
Error: Process completed with exit code 1.
I'm not quite sure I actually understand the problem here. The previous step, "Create docker manifest", completes successfully, but the "Push the new manifest file to Amazon ECR" step fails with the error above.
When looking in AWS ECR, I only have two images -- latest-amd64 and latest-arm64. Neither of their digests matches the values in the error message above.
When exporting those same environment variables to my CLI session and running those commands manually, everything works fine:
root@github-runner:/home/ubuntu/docker-runner# docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-amd64 --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-arm64
Created manifest list [obfuscated-from-stackoverflow].dkr.ecr.us-east-1.amazonaws.com/[obfuscated-from-stackoverflow]:latest
root@github-runner:/home/ubuntu/docker-runner# docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
sha256:e4b5cc4cfafca560724fa6c6a5f41a2720a4ccfd3a9d18f90c3091866061a88d
My question is: why would this work from the CLI itself but not from the GitHub Actions workflow? I have some previous runs that show this working perfectly fine with the workflow contents above, but now it's failing for some reason. I'm not quite sure whether the issue here is within my ECR repository or something locally messed up on the GitHub runner.
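One theory I have (an assumption on my part, not something the logs confirm): docker manifest create stores its state locally under ~/.docker/manifests, so on a long-lived self-hosted runner a manifest list left over from an earlier run could still reference digests that have since been overwritten in ECR. If that is the cause, a sketch like this, using only documented docker manifest commands, should clear it:
REGISTRY="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"
# Remove any stale local manifest list left over from a previous run.
docker manifest rm "$REGISTRY/$IMAGE_REPO:latest" || true
docker manifest create "$REGISTRY/$IMAGE_REPO:latest" \
  --amend "$REGISTRY/$IMAGE_REPO:latest-amd64" \
  --amend "$REGISTRY/$IMAGE_REPO:latest-arm64"
# --purge deletes the local manifest list after a successful push,
# so the next run starts from a clean state.
docker manifest push --purge "$REGISTRY/$IMAGE_REPO:latest"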

Can I define a combination of steps in a cloudbuild.yaml?

In a GitHub Workflow I can define a strategy key and loop through all combinations of a matrix. Here is an example for a CI pipeline of a Node.js app.
name: CI
on:
  pull_request:
jobs:
  test:
    strategy:
      matrix:
        node: [16, 14]
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    name: Test node@${{ matrix.node }} on ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test
Can I achieve the same thing in a cloudbuild.yaml file? I haven't found any mention of this looping functionality in the documentation regarding the Build configuration file schema.
I guess I could achieve what I want using user-defined substitutions and calling the same Cloud Build config file multiple times, passing different substitutions each time... but I was wondering if this is the only possible approach. I would rather have all configuration defined in that single cloudbuild.yaml.
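For illustration, here is roughly what I mean by the substitution approach (a sketch; _NODE_VERSION is a name I made up, and user-defined substitutions must start with an underscore):
substitutions:
  _NODE_VERSION: '16' # default, overridden per invocation
steps:
  - name: node:${_NODE_VERSION}
    entrypoint: npm
    args: ['ci']
  - name: node:${_NODE_VERSION}
    entrypoint: npm
    args: ['test']
I would then have to run it once per combination, e.g. gcloud builds submit --config=cloudbuild.yaml --substitutions=_NODE_VERSION=14, which is exactly the duplication I would like to avoid.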

Is it possible to allow dependabot on GitHub to automatically "bump" software to new version?

Please help this learner out: I get frequent GitHub Dependabot alerts for "bumping" software versions to a more current one. My issue is that I have to go into each (in my case, Django) app to pull or merge files. It's tedious and time-consuming even with my limited number of apps. How do professionals manage the process?
Is there a way to let GitHub just bump whatever needs to be bumped (assuming one doesn't mind apps breaking)?
Yes. You can use GitHub Actions to do this. See the following blog post: Setting up Dependabot with GitHub actions to approve and merge.
The code, as it is currently written, will only automatically merge minor and patch version changes. It will not merge major version changes, which are potentially breaking changes. You could remove that check, but it is not normally recommended.
You also need to change the following settings on your repo:
Settings -> Actions -> General -> check "Allow GitHub Actions to create and approve pull requests".
Settings -> General -> Pull Requests -> check "Allow auto-merge".
The contents of the GitHub workflow file, "dependabot-approve-and-auto-merge.yml", are:
name: Dependabot Pull Request Approve and Merge
on: pull_request_target
permissions:
  pull-requests: write
  contents: write
jobs:
  dependabot:
    runs-on: ubuntu-latest
    # Checking the actor will prevent your Action run failing on non-Dependabot
    # PRs but also ensures that it only does work for Dependabot PRs.
    if: ${{ github.actor == 'dependabot[bot]' }}
    steps:
      # This first step will fail if there's no metadata and so the approval
      # will not occur.
      - name: Dependabot metadata
        id: dependabot-metadata
        uses: dependabot/fetch-metadata@v1.1.1
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
      # Here the PR gets approved.
      - name: Approve a PR
        run: gh pr review --approve "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # Finally, this sets the PR to allow auto-merging for patch and minor
      # updates if all checks pass.
      - name: Enable auto-merge for Dependabot PRs
        if: ${{ steps.dependabot-metadata.outputs.update-type != 'version-update:semver-major' }}
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
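Note that this workflow only reacts to the pull requests Dependabot opens; Dependabot itself is configured separately in .github/dependabot.yml. A minimal sketch for a pip-based Django project (the ecosystem, directory, and schedule here are assumptions about your setup):
version: 2
updates:
  - package-ecosystem: "pip" # assumed: dependencies in requirements.txt
    directory: "/" # folder containing the requirements file
    schedule:
      interval: "weekly"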

github pages issue when using github actions and github-pages-deploy-action?

I have a simple GitHub repo where I host the content of my CV. I use hackmyresume to generate the index.html. I'm using GitHub Actions to run the npm build, and it should publish the generated content to the gh-pages branch.
My workflow file has:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
And the build command is:
"build": "hackmyresume BUILD ./src/main/resources/json/fresh/resume.json target/index.html -t compact",
I can see the generated HTML file getting committed to the gh-pages branch
https://github.com/emeraldjava/emeraldjava/blob/gh-pages/index.html
but GitHub Pages doesn't pick it up; I get a 404 error when I hit
https://emeraldjava.github.io/emeraldjava/
I believe my repo settings and secrets are correct, but I must be missing something small. Any help would be appreciated.
This is happening because of your use of the GITHUB_TOKEN variable. There's an open issue with GitHub because the built-in token doesn't trigger the GitHub Pages deploy job. This means you'll see the files get committed correctly, but they won't be visible.
To get around this you can use a GitHub access token. You can learn how to generate one here. It needs to be correctly scoped so that it has permission to push to a public repository. You'd store this token in your repository's Settings > Secrets menu (call it something like ACCESS_TOKEN), and then reference it in your configuration like so:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
You can find an outline of these variables here. Using an access token will allow the GitHub Pages job to trigger when a new deployment is made. I hope that helps!
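As a side note (based on later releases of the action, so double-check against its README): newer versions of github-pages-deploy-action take their configuration as with: inputs rather than env: variables, and the build runs in its own step, roughly like this:
- run: npm install && npm run-script build
- name: Deploy with github-pages
  uses: JamesIves/github-pages-deploy-action@v4
  with:
    token: ${{ secrets.ACCESS_TOKEN }}
    branch: gh-pages # The branch the action should deploy to.
    folder: target # The folder the action should deploy.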

Serverless Python package- dlib dependency

I am building a Python deployment package for AWS Lambda that relies on dlib. dlib has OS dependencies and relies on cmake to build its binaries. I am wondering how to do this given that I have a Mac and do my development in that environment. I am aware of Docker, but I am not sure how to set up an image to compile the binaries for AWS. Any help in doing this would be appreciated.
The easiest way is to use the serverless-package-python-functions plugin. Simply define in serverless.yml:
package:
  individually: true
custom:
  pkgPyFuncs:
    buildDir: _build
    requirementsFile: requirements.txt
    cleanup: true
    useDocker: true
The important part is useDocker: true - this spins up a Docker container locally, based on the AWS AMI, so you get the right dependencies.
After that, create your function in serverless.yml:
functions:
  test:
    name: ${opt:stage, self:provider.stage}-${self:service}-test
    handler: lambda_function.lambda_handler
    package:
      include:
        - ./test
      artifact: ${self:custom.pkgPyFuncs.buildDir}/${self:functions.test.name}.zip
Place the requirements.txt inside your test folder. This file will be used to deploy the service with the right packages.
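For reference, a rough manual equivalent of what useDocker: true does (a sketch, assuming a Python 3.8 runtime; the lambci/lambda build images mirror the Lambda environment) would be to compile dlib inside such a container yourself:
# Run from the project root; the build image is Amazon-Linux-based,
# so the compiled binaries match the Lambda runtime.
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.8 \
  pip install dlib -t ./package
# ./package now holds dlib built against Amazon Linux; zip it together with
# your handler. (dlib needs cmake to compile -- if the build image you use
# lacks it, install it in the container first.)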
Let me know if you have further questions.