Cloud Build Failed to trigger build: generic::permission_denied: Permission denied

I'm trying to use Cloud Build for my Cloud Run project. I have this cloudbuild.json:
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/docker",
      "args": ["build", "-t", "eu.gcr.io/$PROJECT_ID/keysafe", "."]
    },
    {
      "name": "gcr.io/cloud-builders/docker",
      "args": [
        "push",
        "us-central1-docker.pkg.dev/${PROJECT_ID}/my-docker-repo/myimage"
      ]
    }
  ],
  "options": {
    "logging": "CLOUD_LOGGING_ONLY"
  }
}
And I keep getting a permission denied error. I've tried running it without a service account and using my permissions (I'm the owner), and with a service account even with the owner role.
It was originally working, but after my project transitioned from Container Registry to Artifact Registry I started getting this error:
generic::invalid_argument: generic::invalid_argument: if 'build.service_account' is specified, the build must either (a) specify 'build.logs_bucket' (b) use the CLOUD_LOGGING_ONLY logging option, or (c) use the NONE logging option
That error persisted through both my account and the service account, which is why I switched to building from a cloudbuild.json file, not just my Dockerfile alone.
All the other Stack Overflow answers I've found suggest permissions to assign, but both the service account and I have the Owner role, and adding the suggested permissions on top of Owner did not help.
Here are the permissions of the service account:
Here is the trigger configuration:

If anyone ends up in my position, this is how I fixed it.
I ended up deleting the Cloud Run service and the Cloud Build trigger and then recreating them. This gave me a pre-made cloudbuild.yaml, to which I added the option logging: CLOUD_LOGGING_ONLY, still using the same service account. I'm not sure why this fixed it, but it does seem to be working.
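For reference, a minimal cloudbuild.yaml with that option set might look like the following sketch; the Artifact Registry path is a placeholder, not the exact value from my project:
steps:
  # Build the container image (the Artifact Registry path is a placeholder)
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "us-central1-docker.pkg.dev/$PROJECT_ID/my-docker-repo/myimage", "."]
  # Push the image to Artifact Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "us-central1-docker.pkg.dev/$PROJECT_ID/my-docker-repo/myimage"]
options:
  # Required when the build runs with a user-specified service account
  # and no logs bucket is configured
  logging: CLOUD_LOGGING_ONLY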

Related

Azure ML Online Endpoint deployment DriverFileNotFound Error

Running the Azure ML online endpoint commands works locally, but when I try to deploy to Azure I get this error.
Command - az ml online-deployment create --name blue --endpoint "unique-name" -f endpoints/online/managed/sample/blue-deployment.yml --all-traffic
{
  "status": "Failed",
  "error": {
    "code": "DriverFileNotFound",
    "message": "Driver file with name score.py not found in provided dependencies. Please check the name of your file.",
    "details": [
      {
        "code": "DriverFileNotFound",
        "message": "Driver file with name score.py not found in provided dependencies. Please check the name of your file.\nThe build log is available in the workspace blob store \"coloraiamlsa\" under the path \"/azureml/ImageLogs/1673692e-e30b-4306-ab81-2eed9dfd4020/build.log\"",
        "details": [],
        "additionalInfo": []
      }
    ]
  }
}
This is the deployment YAML, taken straight from the azureml-examples repo:
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model:
  local_path: ../../model-1/model/sklearn_regression_model.pkl
code_configuration:
  code:
    local_path: ../../model-1/onlinescoring/
  scoring_script: score.py
environment:
  conda_file: ../../model-1/environment/conda.yml
  image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
instance_type: Standard_F2s_v2
instance_count: 1
Finally, after a lot of head banging, I have been able to consistently reproduce this bug in another Azure ML workspace.
I tried deploying the same sample in a brand new Azure ML workspace and it went smoothly.
At this point I remembered that I had upgraded the Storage Account of my previous AML Workspace to DataLake Gen2.
So I did the same upgrade in this new workspace’s storage account. After the upgrade, when I try to deploy the same endpoint, I get the same DriverFileNotFoundError!
It seems Azure ML does not support storage accounts with Data Lake Gen2 capabilities, although the support page says otherwise: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-data#supported-data-storage-service-types.
At this point my only option is to recreate a new workspace and deploy my code there. I hope the Azure team fixes this soon.

gcloud builds submit of Django website results in error "does not have storage.objects.get access"

I'm trying to deploy my Django website with Cloud Run, as described in Google Cloud Platform's documentation, but I get the error Error 403: 934957811880@cloudbuild.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object., forbidden when running the command gcloud builds submit --config cloudmigrate.yaml --substitutions _INSTANCE_NAME=trouwfeestwebsite-db,_REGION=europe-west6.
The full output of the command is: (the error is at the bottom)
Creating temporary tarball archive of 119 file(s) totalling 23.2 MiB before compression.
Some files were not included in the source upload.
Check the gcloud log [C:\Users\Sander\AppData\Roaming\gcloud\logs\2021.10.23\20.53.18.638301.log] to see which files and the contents of the default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn more).
Uploading tarball of [.] to [gs://trouwfeestwebsite_cloudbuild/source/1635015198.74424-eca822c138ec48878f292b9403f99e83.tgz]
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not resolve source: googleapi: Error 403: 934957811880@cloudbuild.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object., forbidden
On the level of my storage bucket, I granted 934957811880@cloudbuild.gserviceaccount.com the Storage Object Viewer role, as I see on https://cloud.google.com/storage/docs/access-control/iam-roles that this covers storage.objects.get access.
I also tried granting Storage Object Admin and Storage Admin.
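(For reference, the equivalent grant from the command line would look roughly like this; I'm assuming the bucket in question is the Cloud Build source bucket from the log above.)
gsutil iam ch \
  serviceAccount:934957811880@cloudbuild.gserviceaccount.com:roles/storage.objectViewer \
  gs://trouwfeestwebsite_cloudbuild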
I also added the "Viewer" role on IAM level (https://console.cloud.google.com/iam-admin/iam) for 934957811880@cloudbuild.gserviceaccount.com, as suggested in https://stackoverflow.com/a/68303613/5433896 and https://github.com/google-github-actions/setup-gcloud/issues/105, but it seems fishy to me to give the account such a broad role.
I enabled Cloud Run in the Cloud Build permissions tab: https://console.cloud.google.com/cloud-build/settings/service-account?project=trouwfeestwebsite
With these changes, I still get the same error when running the gcloud builds submit command.
I don't understand what I could be doing wrong in terms of credentials/authentication (https://stackoverflow.com/a/68293734/5433896). I didn't change my Google account password or revoke that account's permissions for the Google Cloud SDK since I initialized the SDK.
Do you see what I'm missing?
The content of my cloudmigrate.yaml is:
steps:
  - id: "build image"
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/${PROJECT_ID}/${_SERVICE_NAME}", "."]

  - id: "push image"
    name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/${PROJECT_ID}/${_SERVICE_NAME}"]

  - id: "apply migrations"
    name: "gcr.io/google-appengine/exec-wrapper"
    args:
      [
        "-i",
        "gcr.io/$PROJECT_ID/${_SERVICE_NAME}",
        "-s",
        "${PROJECT_ID}:${_REGION}:${_INSTANCE_NAME}",
        "-e",
        "SETTINGS_NAME=${_SECRET_SETTINGS_NAME}",
        "--",
        "python",
        "manage.py",
        "migrate",
      ]

  - id: "collect static"
    name: "gcr.io/google-appengine/exec-wrapper"
    args:
      [
        "-i",
        "gcr.io/$PROJECT_ID/${_SERVICE_NAME}",
        "-s",
        "${PROJECT_ID}:${_REGION}:${_INSTANCE_NAME}",
        "-e",
        "SETTINGS_NAME=${_SECRET_SETTINGS_NAME}",
        "--",
        "python",
        "manage.py",
        "collectstatic",
        "--verbosity",
        "2",
        "--no-input",
      ]

substitutions:
  _INSTANCE_NAME: trouwfeestwebsite-db
  _REGION: europe-west6
  _SERVICE_NAME: invites-service
  _SECRET_SETTINGS_NAME: django_settings

images:
  - "gcr.io/${PROJECT_ID}/${_SERVICE_NAME}"
Thank you very much for any help.
The following solved my problem.
DazWilkin was right in saying:
it's incorrectly|unable to reference the bucket
(comment upvote for that, thanks!). In my secret (configured in Secret Manager; alternatively, you can put this in a .env file at the project root, making sure you don't exclude that file from deployment in a .gcloudignore file), I now have set:
GS_BUCKET_NAME=trouwfeestwebsite_sasa-trouw-bucket (project ID + underscore + storage bucket ID)
instead of
GS_BUCKET_NAME=sasa-trouw-bucket
The tutorial in fact stated I had to use the first form, but I had set the latter: the underscore splitting looked weird to me, I had seen nothing similar anywhere in the tutorial, and I assumed it was an error in the tutorial.
Adapting the GS_BUCKET_NAME changed the error of gcloud builds submit to:
Creating temporary tarball archive of 412 file(s) totalling 41.6 MiB before compression.
Uploading tarball of [.] to [gs://trouwfeestwebsite_cloudbuild/source/1635063996.982304-d33fef2af77a4744a3bb45f02da8476b.tgz]
ERROR: (gcloud.builds.submit) PERMISSION_DENIED: service account "934957811880@cloudbuild.gserviceaccount.com" has insufficient permission to execute the build on project "trouwfeestwebsite"
That would mean that at least the bucket is now found, and only a permission is missing.
Edit (a few hours later): I noticed this GS_BUCKET_NAME=trouwfeestwebsite_sasa-trouw-bucket (project ID + underscore + storage bucket ID) setting then caused trouble in a later stage of the deployment, when deploying the static files (the last step of cloudmigrate.yaml). The following seemed to work for both (notice that the project ID is no longer in GS_BUCKET_NAME, but in its own environment variable):
DATABASE_URL=postgres://myuser:mypassword@//cloudsql/mywebsite:europe-west6:mywebsite-db/mydb
GS_PROJECT_ID=trouwfeestwebsite
GS_BUCKET_NAME=sasa-trouw-bucket
SECRET_KEY=my123Very456Long789Secret0Key
Then, it seemed that there also really was a permissions problem:
For the sake of completeness: afterwards I tried adding the permissions as stated in https://stackoverflow.com/a/55635575/5433896, but it didn't prevent the error I reported in my question.
This answer, however, helped me: https://stackoverflow.com/a/33923292/5433896.
Setting the Editor role on the Cloud Build service account helped the gcloud builds submit command continue further without throwing the permissions error.
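For reference, granting that role from the command line would look roughly like this (project and service account are the ones from my question; note that Editor is a broad role, so a narrower set of roles may be preferable if you can find one that works):
gcloud projects add-iam-policy-binding trouwfeestwebsite \
  --member="serviceAccount:934957811880@cloudbuild.gserviceaccount.com" \
  --role="roles/editor"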
If you have the same problem: I think a few things mentioned in my question can also help you. For example, I think doing this may also have been important:
I enabled Cloud Run in the Cloud Build permissions tab:
https://console.cloud.google.com/cloud-build/settings/service-account?project=trouwfeestwebsite

Google Cloud Liens not protecting from Project Deletion

I have set up GCP liens as described here.
Unfortunately, when I try to delete the project using an owner account, the project is deleted.
Does it take some time to take effect or is there some other kind of extra configuration?
To set this up, I used the command specified in the documentation:
gcloud alpha resource-manager liens create --restrictions=resourcemanager.projects.delete --reason="Super important production system" --project projectId
Then I check the lien:
> gcloud alpha resource-manager liens list --project projectId --format json
[
  {
    "createTime": "2020-01-23T07:53:19.938621Z",
    "name": "liens/p111111111111-420a1a11-8dee-4b07-a7fe-5112b00e898d",
    "origin": "john@doe.com",
    "parent": "projects/111111111111",
    "reason": "Super important production system",
    "restrictions": [
      "resourcemanager.projects.delete"
    ]
  }
]
You need to have the “Project Lien Modifier” role for your user at the Organization level.
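(Granting that role would look roughly like this; the organization ID and user are placeholders, and roles/resourcemanager.lienModifier is assumed to be the role ID behind the "Project Lien Modifier" display name.)
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="user:john@doe.com" \
  --role="roles/resourcemanager.lienModifier"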
Then you can open the cloud shell and run this command
gcloud alpha resource-manager liens create --restrictions=resourcemanager.projects.delete --reason="Important PJ" --project=[YOUR-PJ-NAME] --verbosity=debug
EDIT:
I tested it in a no-organization project and the lien doesn't work. This feature is in alpha, and it looks like it does not currently support standalone projects; it was designed with large organisations with hundreds of projects in mind.

AWS CodePipeline build lacks Git history

Context:
I have a CodePipeline set up that uses CodeCommit and CodeBuild as its source and build phases.
My build includes a plugin (com.zoltu.git-versioning) that uses the Git commit history to dynamically create a build version number.
Issue:
This fails in the AWS pipeline because it cannot find any Git information in the source used to perform the build.
Clearly the action used to check out the source does an export which omits the Git metadata and history.
Question:
How do I configure CodeCommit or CodePipeline to do a proper git clone? I've looked in the settings for both these components (as well as CodeBuild) and cannot find any configuration to set the command used by the checkout action.
Has anyone got CodePipeline builds working with a checkout containing full Git metadata?
This is currently not possible with the CodeCommit action in CodePipeline.
https://forums.aws.amazon.com/thread.jspa?threadID=248267
CodePipeline supports git full clone as of October 2020:
https://aws.amazon.com/about-aws/whats-new/2020/09/aws-codepipeline-now-supports-git-clone-for-source-actions/
In your console, go to the source stage and edit.
You will have a new option to fully clone your git history.
[Screenshot: full clone option]
In Terraform you will have to add it to the source action's configuration:
configuration = {
  RepositoryName       = var.repository_name
  BranchName           = "master"
  OutputArtifactFormat = "CODEBUILD_CLONE_REF"
}
More info:
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-codecommit-gitclone.html
Yes, CodePipeline now supports a Git full clone.
You just need to do some extra steps: https://docs.aws.amazon.com/codepipeline/latest/userguide/troubleshooting.html#codebuild-role-connections
However, CodePipeline does not currently support dynamic branches or pull requests. See Dynamically change branches on AWS CodePipeline.
Therefore, if you need to extend your pipeline for Pull Requests, I'd recommend the approach posted by Timothy Jones above.
There's one more related thing that's worth mentioning. CodeBuild has the Full Clone option as well.
As long as you do not use the Local Source cache option, the Git history is there.
When I tried to use the above-mentioned cache option, I noticed that .git is not a directory. It's a file containing one line of text, e.g.:
gitdir: /codebuild/local-cache/workspace/9475b907226283405f08daf5401aba99ec6111f966ae2b921e23aa256f52f0aa/.git
I don't know why it's currently implemented like this, but it's confusing (at least for me) and I don't consider it to be the expected behavior.
Although CodePipeline doesn't natively support this, you can get the information by cloning the repository in CodeBuild.
To do this, you need to set the permissions correctly, then carefully clone the repository.
Permissions
To give the permissions to clone the repository you need to:
Give your CodeBuild role the codecommit:GitPull permission, with the resource ARN of your CodeCommit repository (a sketch of such a policy statement follows)
Put git-credential-helper: yes in the env part of your buildspec file
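A minimal policy statement of that shape might look like this sketch; the region, account ID, and repository name are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codecommit:GitPull",
      "Resource": "arn:aws:codecommit:us-east-1:123456789012:my-repo"
    }
  ]
}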
Cloning the repo
To clone the repo, you'll need to:
know the clone URL and branch (CodeBuild doesn't know this information)
git reset back to the commit that CodeBuild is building (otherwise you'll have a race condition between commits and builds), as in the sketch below:
git reset "$CODEBUILD_RESOLVED_SOURCE_VERSION"
If you'd like examples, I've made a detailed writeup of the process, and published an example CodePipeline stack showing it in action.
I spent so much time on this poorly documented process that I decided to create some documentation for myself and future developers. I hope it helps.
CodeBuild + CodePipeline
This will connect CodeBuild and CodePipeline so that changes to your GitHub repository trigger CodePipeline to do a full clone of your repository, which is then passed to CodeBuild, which just adjusts the local .git folder metadata to point to the correct branch; then all of the source code plus the Git metadata is deployed to Elastic Beanstalk.
More information about this process can be found here.
Start creating a CodePipeline pipeline. In the middle of its creation, you will be prompted to create a CodeBuild project; do it.
Feel free to select a specific location for the Artifact store (custom S3 bucket).
Select GitHub (Version 2) as the source provider, check "Start the pipeline on source code change", and select Full clone as the output artifact format.
Select AWS CodeBuild as the Build provider.
For the Project Name, click on the "Create project" button and select the options below:
a. Environment image: Managed image
b. Operating system: Amazon Linux 2
c. Runtime(s): Standard
d. For the Buildspec, select "Insert build commands" and click on "Switch to editor". Then paste the below Buildspec code.
e. Enable CloudWatch logs.
In the Environment variables, insert:
BranchName: #{SourceVariables.BranchName} as Plaintext
CommitId: #{SourceVariables.CommitId} as Plaintext
Select Single build as the Build type.
Select AWS Elastic Beanstalk as the Deploy provider.
Review operation and create the pipeline.
Create and add a new policy to the newly created CodeBuildServiceRole role. Choose a name, like projectName-connection-permission, and attach the following JSON to it (tutorial):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codestar-connections:UseConnection",
      "Resource": "arn:aws:codestar-connections:eu-central-1:123456789123:connection/sample-1908-4932-9ecc-2ddacee15095"
    }
  ]
}
PS: Change the Resource value arn:aws:codestar-connections:eu-central-1:123456789123:connection/sample-1908-4932-9ecc-2ddacee15095 from the JSON to your connection ARN. To find the connection ARN for your pipeline, open your pipeline and click the (i) icon on your source action.
Create and add a new policy to the newly created CodeBuildServiceRole role. Choose a name, like projectName-s3-access and attach the following JSON to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-s3-bucket-codepipeline",
        "arn:aws:s3:::my-s3-bucket-codepipeline/*"
      ],
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
      ]
    }
  ]
}
PS: Change the Resource values my-s3-bucket-codepipeline in the JSON to match your CodePipeline's S3 bucket name.
Edit the inline policy for your CodePipelineServiceRole role by adding the following object to your Statement array:
{
  "Effect": "Allow",
  "Action": [
    "logs:*"
  ],
  "Resource": "*"
}
Done.
Buildspec code
version: 0.2
#env:
  #variables:
    # key: "value"
    # key: "value"
  #parameter-store:
    # key: "value"
    # key: "value"
  #secrets-manager:
    # key: secret-id:json-key:version-stage:version-id
    # key: secret-id:json-key:version-stage:version-id
  #exported-variables:
    # - variable
    # - variable
  #git-credential-helper: yes
#batch:
  #fast-fail: true
  #build-list:
  #build-matrix:
  #build-graph:
phases:
  #install:
    #If you use the Ubuntu standard image 2.0 or later, you must specify runtime-versions.
    #If you specify runtime-versions and use an image other than Ubuntu standard image 2.0, the build fails.
    #runtime-versions:
      # name: version
      # name: version
    #commands:
      # - command
      # - command
  #pre_build:
    #commands:
      # - command
      # - command
  build:
    commands:
      - echo Branch - $BranchName
      - echo Commit - $CommitId
      - echo Checking out branch - $BranchName
      - git checkout $BranchName
      # - command
      # - command
  #post_build:
    #commands:
      # - command
      # - command
#reports:
  #report-name-or-arn:
    #files:
      # - location
      # - location
    #base-directory: location
    #discard-paths: yes
    #file-format: JunitXml | CucumberJson
#artifacts:
  #files:
    # - location
    # - location
  #name: $(date +%Y-%m-%d)
  #discard-paths: yes
  #base-directory: location
artifacts:
  files:
    - '**/*'
#cache:
  #paths:
    # - paths
Additional Info
Never edit the inline policy that was created by CodePipeline! Only create and add new policies to a role. See this issue.
The Environment Variables for CodeBuild must be set from CodePipeline -> Edit: Build -> Environment variables - optional. If you set these variables in CodeBuild -> Edit -> Environment -> Additional configuration -> Environment variables it WON'T WORK!
For a bigger list of Environment variables during CodeBuild, see Variables List, Action Variables, and CodeBuild variables.
The Git Full clone option on CodePipeline is not available without CodeBuild. This is a known annoying limitation.
You can include the buildspec.yml in your root (top level) project directory. See this.
The full clone that CodePipeline does leaves the local repository .git in a detached HEAD state, meaning that in order to get the branch name you will have to either retrieve it from CodePipeline with the help of CodeBuild environment variables, or execute the following command (see this):
git branch -a --contains HEAD | sed -n 2p | awk '{ printf $1 }'

How do I specify a source in build requests using gcloud container builds submit?

I am trying to specify a build request with the source specified as a repoSource:
{
  "source": {
    "repoSource": {
      "repoName": "myRepo",
      "branchName": "master"
    }
  },
  "steps": [
    {
      "name": "gcr.io/cloud-builders/docker",
      "args": ["build", "-t", "gcr.io/$PROJECT_ID/zookeeper", "."]
    }
  ],
  "images": [
    "gcr.io/$PROJECT_ID/zookeeper"
  ]
}
However, when I attempt to submit it with gcloud, I get an error:
$ gcloud container builds submit --no-source --config cloudbuild.json
ERROR: (gcloud.container.builds.submit) cloudbuild.json: config cannot specify source
In "Writing Custom Build Requests" it says:
When you submit build requests using the gcloud command-line tool, the source field may not be necessary if you specify the source as a command-line argument. You may also specify a source in your build request file. See the gcloud documentation for more information.
Note: The storageSource and repoSource fields described in the Build resource documentation differ from the source field. storageSource instructs Container Builder to find the source files in a Cloud Storage bucket, and repoSource refers to a Cloud Source Repository containing the source files.
So how, then, do you specify repoSource with gcloud? I am only seeing the gs:// URL prefix documented.