Local packages not loading to GCP python functions with github actions - google-cloud-platform

I am trying to deploy a GCP function. My code uses a package that's in a private repository. I create a local copy of that package in the function's folder, and then run gcloud functions deploy from that folder to deploy the function.
This works well. I can see a function that is deployed, with the localpackage.
The problem is with using github actions to deploy the function.
The function is part of a repository that has multiple functions, so when I deploy, I run GitHub Actions from outside the function's folder. The function gets deployed, but the dependencies do not get picked up.
For example, this is my folder structure:
my_repo
- .github/
  - workflows/
    - function_deploy.yaml
- function_1_folder/
  - main.py
  - requirements.txt
  - .gcloudignore
  - localpackages/ --> These are the packages I need uploaded to GCP
My function_deploy.yaml looks like :
name: Build and Deploy to GCP functions
on:
  push:
    paths:
      - function_1_folder/**.py
env:
  PROJECT_ID: <project_id>
jobs:
  job_id:
    runs-on: ubuntu-latest
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
      - uses: 'actions/checkout@v3'
      - id: 'auth'
        uses: 'google-github-actions/auth@v0'
        with:
          credentials_json: <credentials>
      - id: 'deploy'
        uses: 'google-github-actions/deploy-cloud-functions@v0'
        with:
          name: <function_name>
          runtime: 'python38'
          region: <region>
          event_trigger_resource: <trigger_resource>
          entry_point: 'main'
          event_trigger_type: <pubsub>
          memory_mb: <size>
          source_dir: function_1_folder/
The google function does get deployed, but it fails with:
google-github-actions/deploy-cloud-functions failed with: operation failed: Function failed on loading user code. This is likely due to a bug in the user code. Error message: please examine your function logs to see the error cause...
When I look at the google function, I see that the localpackages folder hasn't been uploaded to GCP.
When I deploy from my local machine however, it does upload the localpackages.
Any suggestions on what I may be doing incorrectly? And how to upload the localpackages?
I looked at this question:
Github action deploy-cloud-functions not building in dependencies?
But I didn't quite understand what was done there.
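For what it's worth, my understanding is that the Cloud Functions build runs pip against requirements.txt from the uploaded source root, so a local package can be declared there by relative path. A sketch of what I mean, where mypackage is a placeholder for the actual package directory name:

```
# function_1_folder/requirements.txt (sketch; 'mypackage' is a placeholder)
./localpackages/mypackage
```

If the deploy action uploads localpackages but the function still can't import it, a relative-path entry like this should make the build install it explicitly; it is also worth checking that .gcloudignore doesn't exclude the localpackages folder.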

Related

How to setup terraform cicd with gcp and github actions in a multidirectory repository

Introduction
I have a repository with all the infrastructure defined using IaC, separated in folders. For instance, all terraform configuration is in /terraform/. I want to apply all terraform files inside that directory from the CI/CD.
Configuration
The used github action is shown below:
name: 'Terraform'
on: [push]
permissions:
  contents: read
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production
    # Use the Bash shell regardless whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash
        #working-directory: terraform
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3
      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
      - id: 'auth'
        uses: 'google-github-actions/auth@v1'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'
      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init
      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check
      # On push to "master", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
Problem
If I log in and then change directory before running terraform, it doesn't find the credentials:
storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
On the other hand, if I don't change the directory, then it doesn't find the configuration files, as expected:
Error: No configuration files
I tried moving the terraform configuration files to the root of the repository, and that works. How could I implement this in a multidirectory repository?
This feature was requested before. As explained in the issue, the auth file is named gha-creds-*.json.
Therefore, I added a step just before the terraform commands to update the environment variable and move the file itself:
- name: 'Setup google auth in multidirectory repo'
  run: |
    echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
    mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/
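To make the path surgery concrete, here is a minimal sketch (with hypothetical paths) of what the echo line composes: the credentials file name is extracted with basename and re-rooted under the terraform/ directory.

```shell
#!/bin/sh
# Sketch: recompute the credentials path under terraform/ (paths are hypothetical)
GITHUB_WORKSPACE=/tmp/workspace
GOOGLE_APPLICATION_CREDENTIALS="$GITHUB_WORKSPACE/gha-creds-abc123.json"

# Same composition as the workflow step above: workspace + terraform/ + original file name
NEW_CREDS="$GITHUB_WORKSPACE/terraform/$(basename "$GOOGLE_APPLICATION_CREDENTIALS")"
echo "$NEW_CREDS"   # /tmp/workspace/terraform/gha-creds-abc123.json
```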

Gitlab Cloud run deploy successfully but Job failed

I'm having an issue with my CI/CD pipeline:
it successfully deploys to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the images with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by adding the following to cloudbuild.yaml:
options:
  logging: CLOUD_LOGGING_ONLY
Alternatively, there is this workaround:
fix it by giving the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and grant the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the parameter --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps, and so on) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue where, in spite of the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I add the logging option as Kevin suggested.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by @Kevin worked for me too. Just add the parameter to the cloudbuild.yml file as mentioned before.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Cannot run Azure CLI task on yaml build

I'm starting to lose my sanity over a YAML build. This is the very first YAML build I've ever tried to configure, so it's likely I'm making some basic mistake.
This is my yaml build definition:
name: ops-tools-delete-failed-containers-$(Date:yyyyMMdd)$(Rev:.rrrr)
trigger:
  branches:
    include:
      - master
      - features/120414-delete-failed-container-instances
schedules:
  - cron: '20,50 * * * *'
    displayName: At minutes 20 and 50
    branches:
      include:
        - features/120414-delete-failed-container-instances
    always: 'true'
pool:
  name: Standard-Windows
variables:
  - name: ResourceGroup
    value: myResourceGroup
stages:
  - stage: Delete
    displayName: Delete containers
    jobs:
      - job: Job1
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
              scriptType: 'pscore'
              scriptLocation: 'scriptPath'
              scriptPath: 'General/Automation/ACI/Delete-FailedContainerInstances.ps1'
              arguments: '-ResourceGroup $(ResourceGroup)'
So in short, I want to run a script using an Azure CLI task. When I queue a new build it stays like this forever:
I've tried running the same task with an inline script without success. The same thing happens if I try to run a Powershell task instead of an Azure CLI task.
What am I missing here?
TL;DR issue was caused by (lack of) permissions.
More details
After enabling the following feature I could see more details about the problem:
The following warning was shown after enabling the feature:
Clicking on View shows the Azure subscription used in the Azure CLI task. After clicking on Permit, everything works as expected.
Cannot run Azure CLI task on yaml build
Your YAML file should be correct. I have tested your YAML on my side, and it works fine.
The only place I modified is the agent pool, changed to my private agent:
pool:
  name: MyPrivateAgent
Besides, according to the state in your image, it seems the private agent under the agent queue which you specified for the build definition is not running.
Get the agent running, then the build will start.
As a test, you could use a hosted agent instead of your private agent, like:
pool:
  vmImage: 'ubuntu-latest'
Hope this helps.

How to access a GCP Cloud Source Repository from another project?

I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder, to project A.
Now, I want to run the same build on project B. Maybe the same branch, maybe a different branch (i.e. 'release-*'). In the end want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B, and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is, that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin and mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A and have a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Builds seem to always run with a Cloud Build service account, of which you can manage the roles, but I can't see how I could give it access to another project. Also in this case both builds would appear indistinguishable in a single build history, which is not ideal.
I faced a similar problem and solved it by having multiple Cloud Build files.
One Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the new project's source repo, which also has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
  - name: gcr.io/cloud-builders/git
    args: ['checkout', '--orphan', 'temp']
  - name: gcr.io/cloud-builders/git
    args: ['add', '-A']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.name', 'Your Name']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.email', 'Your Email']
  - name: gcr.io/cloud-builders/git
    args: ['commit', '-am', 'latest production commit']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-D', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-m', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushed all of the source code into the new project.
Note: you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
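For reference, that grant can be made with a one-off gcloud command; a sketch, where project-prod and the service account address are placeholders you would replace with your own values (roles/source.writer allows pushing to Cloud Source Repositories):

```shell
# Sketch: allow project A's Cloud Build service account to push to project B's repos.
# 'project-prod' and PROJECT_NUMBER are placeholders.
gcloud projects add-iam-policy-binding project-prod \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/source.writer"
```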
As you have already said, you can host your repos outside in BitBucket/GitHub and sync them to each project, but you need to pay extra for each build.
Otherwise, you could use third-party services to build your repos outside and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could give the build permissions so that it could refer to resources from another project, but I would keep them separated to minimize complexity.
My solution:
From project A, create a new Cloud Build trigger on branch release-* whose build configuration specifies $_PROJECT_ID as project B's id.
On the GCP Cloud Build trigger definition, add a new substitution variable named _PROJECT_ID set to project B's id.
NOTE: Remember to grant permissions on project B to the service account of project A (...@cloudbuild.gserviceaccount.com).
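The exact roles for that NOTE depend on what the build deploys; for a Cloud Run deployment like the one below, a typical pair is run.admin plus iam.serviceAccountUser. A sketch with placeholder ids, not a definitive recipe:

```shell
# Sketch: grants project A's Cloud Build account typically needs on project B
# to deploy to Cloud Run there. 'project-b' and PROJECT_NUMBER_A are placeholders.
gcloud projects add-iam-policy-binding project-b \
  --member="serviceAccount:PROJECT_NUMBER_A@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"
gcloud projects add-iam-policy-binding project-b \
  --member="serviceAccount:PROJECT_NUMBER_A@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```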
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gcloud
    args:
      - beta
      - run
      - deploy
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
      - '--project=$_PROJECT_ID'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - driveit-hp-agreement-mngt-api
Unfortunately Google doesn't seem to provide that functionality within Source Repositories (would rock if you could).
An alternative option you could consider (though involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync. (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A)
EDIT
To illustrate my alternative solution, here is a simple diagram

github pages issue when using github actions and github-pages-deploy-action?

I have a simple GitHub repo where I host the content of my CV. I use hackmyresume to generate the index.html. I'm using GitHub Actions to run the npm build, and it should publish the generated content to the gh-pages branch.
My workflow file has:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
And the build command is
"build": "hackmyresume BUILD ./src/main/resources/json/fresh/resume.json target/index.html -t compact",
I can see the generated html file getting committed to the github branch
https://github.com/emeraldjava/emeraldjava/blob/gh-pages/index.html
but the GitHub Pages site doesn't pick this up; I get a 404 error when I hit
https://emeraldjava.github.io/emeraldjava/
I believe my repo setting and secrets are correct but I must be missing something small. Any help would be appreciated.
This is happening because of your use of the GITHUB_TOKEN variable. There's an open issue with GitHub due to the fact that the built-in token doesn't trigger the GitHub Pages deploy job. This means you'll see the files get committed correctly, but they won't be visible.
To get around this you can use a GitHub access token. You can learn how to generate one here. It needs to be correctly scoped so it has permission to push to a public repository. You'd store this token in your repository's Settings > Secrets menu (Call it something like ACCESS_TOKEN), and then reference it in your configuration like so:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
You can find an outline of these variables here. Using an access token will allow the GitHub Pages job to trigger when a new deployment is made. I hope that helps!