How can I build a GitHub Action using a public Docker registry container?

I am trying to create a simple Docker action, but instead of using a local Dockerfile in the repository I want to use a public one on Docker Hub. I have created two repositories:
docker-creator: which has the docker action
docker-user: which is a job that uses docker-creator
This is my action.yml in docker-creator:
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
  who-to-greet:  # id of input
    description: 'Who to greet'
    required: true
    default: 'World'
outputs:
  time:  # id of output
    description: 'The time we greeted you'
runs:
  using: 'docker'
  image: 'docker://alpine:3.10'
  args:
    - ${{ inputs.who-to-greet }}
In the image param you would typically see 'Dockerfile', but in my example I have replaced it with a public Docker registry container.
When I use docker-creator from docker-user I get an error (screenshot: docker-creator error).
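One thing worth checking: alpine:3.10 has no entrypoint that would consume the argument, so the container may have nothing to execute. Below is a sketch combining the docs' public-registry example with the separately documented runs.entrypoint key; the image and the echo entrypoint are illustrative values from the documentation, not from your repo:

```yaml
runs:
  using: 'docker'
  # Image pulled from a public registry, as in the docs' example
  image: 'docker://debian:stretch-slim'
  # runs.entrypoint overrides the image's Docker ENTRYPOINT,
  # giving the args something to run
  entrypoint: 'echo'
  args:
    - ${{ inputs.who-to-greet }}
```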
This is the main.yml workflow in docker-user:
hello_world_job:
  runs-on: ubuntu-latest
  name: A job to say hello
  steps:
    - name: Hello world action step
      id: hello
      uses: bryanaexp/docker-creator@main
      with:
        who-to-greet: 'Cat'
    # Use the output from the `hello` step
    - name: Get the output time
      run: echo "The time was ${{ steps.hello.outputs.time }}"
I have tried looking for help online, but a lot of it only covers using a Dockerfile. I have also referred to this documentation, https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#example-using-public-docker-registry-container, but to no avail.
LINKS:
Docker Action : https://github.com/bryanaexp/docker-creator/blob/main/action.yml
Github Action Using Docker Action: https://github.com/bryanaexp/docker-user
These are the steps I have taken to resolve my problem:
1 - Read through a lot of GitHub Actions documentation
https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#example-using-public-docker-registry-container
2 - Done various tests on my code using different parameters
3 - Researched Stack Overflow, but only found a similar example that uses a private Docker registry.

Related

Filter build steps by branch on Google Cloud Platform

I have the steps below:
steps:
  # This step shows the version of Gradle
  - id: Gradle Install
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["--version"]
  # This step builds the Gradle application
  - id: Build
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["build"]
  # This step publishes the artifacts
  - id: Publish
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["publish"]
I want to run the last step only on the MASTER branch.
I found one link related to this: https://github.com/GoogleCloudPlatform/cloud-builders/issues/138
It uses a bash command; how can I put the gradle command inside that bash invocation?
Update
After the suggested answer I updated the step as follows:
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: "bash"
  args:
    - "-c"
    - |
      [[ "$BRANCH_NAME" == "develop" ]] && gradle publish
The build pipeline failed with the exception below:
Starting Step #2 - "Publish"
Step #2 - "Publish": Already have image: gradle:7.4.2-jdk17-alpine
Finished Step #2 - "Publish"
ERROR
ERROR: build step 2 "gradle:7.4.2-jdk17-alpine" failed: starting step container failed: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
The suggested solution didn't work for me; I had to write the code below instead:
# This step publishes on master only
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: "sh"
  args:
    - -c
    - |
      if [ $BRANCH_NAME == 'master' ]
      then
        echo "Branch is = $BRANCH_NAME"
        gradle publish
      fi
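The root cause of the bash failure above is that the gradle alpine-based image ships BusyBox sh but no bash, so the [[ ]] bashism must give way to the portable [ ] test. A minimal POSIX-sh sketch of the same branch gate (BRANCH_NAME is hard-coded here purely for illustration; Cloud Build provides it at run time):

```shell
#!/bin/sh
# Portable branch gate: uses POSIX [ ] and =, not the bash-only [[ ]] and ==.
BRANCH_NAME="master"   # illustrative; injected by Cloud Build in a real step
if [ "$BRANCH_NAME" = "master" ]; then
  echo "publishing"
else
  echo "skipping"
fi
```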
The current workarounds are the following:
Using different cloudbuild.yaml files for each branch
Overriding the entrypoint and injecting bash, as mentioned in the link:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "Here's a convenient pattern to use for embedding shell scripts in cloudbuild.yaml."
        echo "This step only pushes an image if this build was triggered by a push to master."
        [[ "$BRANCH_NAME" == "master" ]] && docker push gcr.io/$PROJECT_ID/image
This tutorial outlines an alternative where you check in different cloudbuild.yaml files on different development branches.
You can try the following for the Gradle command, as mentioned by bhito (note the POSIX [ ] test here, since the alpine image's sh does not support [[ ]]):
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: sh
  args:
    - -c
    - |
      [ "$BRANCH_NAME" = "master" ] && gradle publish
Cloud Build supports configuring triggers by branch, tag, and PR. This lets you define different build configs for different repo events, e.g. one for PRs, another for deploying to prod, etc. You can refer to the documentation on how to create and manage triggers.
You can check this blog for updates on additional features, and go through the release notes for more Cloud Build updates.
To gain some more insight into Gradle, you can refer to the link.

What represents a STEP in Cloud Build

I am working on a pipeline, and all has been good so far because it works. But when I have to explain it, it is not clear to me what each step represents physically; for example, could a step be a node within a cluster? Please, if someone has a clear explanation of this, share it with us.
Example 1 of a step
Cloud Build config file:
steps:
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    args: ["bin/deploy-dags-composer.sh"]
    env:
      - 'COMPOSER_BUCKET=${_COMPOSER_BUCKET}'
      - 'ARTIFACTS_BUCKET=${_ARTIFACTS_BUCKET}'
    id: 'DEPLOY-DAGS-PLUGINS-DEPENDS-STEP'
Bash File
#! /bin/bash
gsutil cp bin/plugins/* gs://$COMPOSER_BUCKET/plugins/
gsutil cp bin/dependencies/* gs://$ARTIFACTS_BUCKET/dags/dependencies/
gsutil cp bin/dags/* gs://$COMPOSER_BUCKET/dags/
gsutil cp bin/sql-scripts/* gs://$ARTIFACTS_BUCKET/path/bin/sql-scripts/composer/
Example 2: several steps
Cloud Build config file:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_COMPOSER_BUCKET}/plugins/']
    dir: 'bin/plugins/'
    id: 'deploy-plugins-step'
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_ARTIFACTS_BUCKET}/dags/dependencies/']
    dir: 'bin/dependencies/'
    id: 'deploy-dependencies-step'
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_COMPOSER_BUCKET}/dags/']
    dir: 'bin/dags/'
    id: 'deploy-dags-step'
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_ARTIFACTS_BUCKET}/projects/path/bin/sql-scripts/composer/']
    dir: 'bin/sql-scripts/'
    id: 'deploy-scripts-step'
In Cloud Build, a step is a stage of processing. The stage is described by a container to load, containing the binaries required for the processing performed in that stage.
For this container, you can define an entrypoint, the binary to run inside the container, and args to pass to it.
There are also several other options that you can see here.
An important concept to understand is that ONLY the /workspace directory is kept from one step to the next. At the end of each step, the container is offloaded and you lose all the data in memory (like environment variables) and the data stored outside of the /workspace directory (such as system dependency installations). Keep this in mind; many issues come from there.
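As an illustration of that /workspace rule, here is a sketch of a two-step config (file name and contents are arbitrary) where the second container still sees a file written by the first:

```yaml
steps:
  # Step 1: write into /workspace, the only directory that persists.
  - name: 'ubuntu'
    entrypoint: 'bash'
    args: ['-c', 'echo "hello from step 1" > /workspace/handoff.txt']
  # Step 2: a fresh container, but the same /workspace volume.
  - name: 'ubuntu'
    entrypoint: 'bash'
    args: ['-c', 'cat /workspace/handoff.txt']
```

Had the first step written to, say, /tmp instead, the second step would not find the file.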
EDIT 1:
In a step you can, out of the box, run one command on one container: gsutil, gcloud, mvn, node, .... It all depends on your container and your entrypoint.
But there is a useful advanced capability for when you need to run many commands on the same container, which can be necessary for many reasons. You can do it like this:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      MY_ENV_VAR=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance)
      echo $${MY_ENV_VAR} > env-var-save.file
      # Continue with the commands that you need/want ...

How can I call gcloud commands from a shell script during a build step?

I have automatic builds set up in Google Cloud, so that each time I push to the master branch of my repository, a new image is built and pushed to Google Container Registry.
These images pile up quickly, and I don't need all the old ones. So I would like to add a build step that runs a bash script which calls gcloud container images list-tags, loops the results, and deletes the old ones with gcloud container images delete.
I have the script written and it works locally. I am having trouble figuring out how to run it as a step in Cloud Build.
It seems there are two options:
- name: 'ubuntu'
  args: ['bash', './container-registry-cleanup.sh']
In the above step in cloudbuild.yml I try to run the bash command in the ubuntu image. This doesn't work, because the gcloud command does not exist in that image.
- name: 'gcr.io/cloud-builders/gcloud'
  args: [what goes here???]
In the above step in cloudbuild.yml I try to use the gcloud image, but since "Arguments passed to this builder will be passed to gcloud directly", I don't know how to call my bash script here.
What can I do?
You can customize the entrypoint of your build step. If you need gcloud installed, use the gcloud cloud builder and do this:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        echo "enter 1 bash command per line"
        ls -la
        gcloud version
        ...
As the official documentation Creating custom build steps indicates, to execute a shell script from your source you need a custom build step, and the step's container image must contain a tool capable of running the script.
The example below shows how to configure your args for the execution to perform correctly.
steps:
  - name: 'ubuntu'
    args: ['bash', './myscript.bash']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/custom-script-test', '.']
images: ['gcr.io/$PROJECT_ID/custom-script-test']
I would recommend that you take a look at the documentation above and the example as well, to test and confirm whether it helps you achieve the execution of the script.
For your case specifically, there is this other answer here, which indicates that you need to override the entrypoint of the build step with bash so the script runs. It is shown as follows:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['-c', 'gcloud compute instances list > gce-list.txt']
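Applied to the cleanup script from the question, the same entrypoint override might look like this (a sketch assuming the script sits at the repository root and is executable):

```yaml
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['./container-registry-cleanup.sh']
```

The gcloud builder image provides both bash and the gcloud CLI, so the script's list-tags and delete calls can resolve.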
Besides that, the two articles below include more information and examples on how to configure customized scripts to run in Cloud Build, which I would also recommend you take a look at.
CI/CD: Google Cloud Build — Custom Scripts
Mastering Google Cloud Build Config Syntax
Let me know if the information helped you!

Cannot run Azure CLI task on yaml build

I'm starting to lose my sanity over a yaml build. This is the very first yaml build I've ever tried to configure, so it's likely I'm doing some basic mistake.
This is my yaml build definition:
name: ops-tools-delete-failed-containers-$(Date:yyyyMMdd)$(Rev:.rrrr)
trigger:
  branches:
    include:
      - master
      - features/120414-delete-failed-container-instances
schedules:
  - cron: '20,50 * * * *'
    displayName: At minutes 20 and 50
    branches:
      include:
        - features/120414-delete-failed-container-instances
    always: 'true'
pool:
  name: Standard-Windows
variables:
  - name: ResourceGroup
    value: myResourceGroup
stages:
  - stage: Delete
    displayName: Delete containers
    jobs:
      - job: Job1
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
              scriptType: 'pscore'
              scriptLocation: 'scriptPath'
              scriptPath: 'General/Automation/ACI/Delete-FailedContainerInstances.ps1'
              arguments: '-ResourceGroup $(ResourceGroup)'
So in short, I want to run a script using an Azure CLI task. When I queue a new build it stays like this forever:
I've tried running the same task with an inline script, without success. The same thing happens if I try to run a PowerShell task instead of an Azure CLI task.
What am I missing here?
TL;DR: the issue was caused by (a lack of) permissions.
More details
After enabling the following feature I could see more details about the problem:
The following warning was shown after enabling the feature:
Clicking on View shows the Azure subscription used in the Azure CLI task. After clicking on Permit, everything works as expected.
Cannot run Azure CLI task on yaml build
Your YAML file should be correct. I have tested your YAML on my side, and it works fine.
The only place I modified is the agent pool, changed to my private agent:
pool:
  name: MyPrivateAgent
Besides, judging by the state shown in your image, it seems the private agent in the agent queue which you specified for the build definition is not running:
Get the agent running, and then the build will start.
As a test, you could use a hosted agent instead of your private agent, like:
pool:
  vmImage: 'ubuntu-latest'
Hope this helps.

Ansible pull multiple repos

I want to pull multiple repositories via an Ansible playbook, but only if a condition matches.
tasks:
  - name: pull from git abc/123
    git:
      repo: git@gitlab.com:xyz.git
      dest: var/www/abc/123
      update: yes
      version: $sprint_name
  - name: pull from git abc/234
    git:
      repo: git@gitlab.com:xyz.git
      dest: /var/www/234
      update: yes
      version: $sprint_name
Here I want to pass "123" or "234" as a variable, and if the user wants to pull only "123" or only "234", they should be able to do so.
If you want the user to make choices at playbook runtime, typing in information that will alter the playbook's execution, you can use the vars_prompt section.
You will get the response in a variable, and with a when condition on your tasks you can control which tasks run.
Documentation here
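Putting vars_prompt and when together, a minimal sketch for the playbook above (the variable name repo_choice is illustrative, and version is written in the usual Jinja2 form rather than $sprint_name):

```yaml
- hosts: all
  vars_prompt:
    # Asks the user once, at playbook start, and stores the answer.
    - name: repo_choice
      prompt: "Which repo do you want to pull (123 or 234)?"
      private: no
  tasks:
    - name: pull from git abc/123
      git:
        repo: git@gitlab.com:xyz.git
        dest: /var/www/abc/123
        update: yes
        version: "{{ sprint_name }}"
      when: repo_choice == "123"
    - name: pull from git abc/234
      git:
        repo: git@gitlab.com:xyz.git
        dest: /var/www/234
        update: yes
        version: "{{ sprint_name }}"
      when: repo_choice == "234"
```

Running the playbook then prompts once, and whichever task's when condition does not match is skipped.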