Google Deployment Manager - Project creation permission denied - google-cloud-platform

I am getting a 403 PERMISSION_DENIED response from GCP when running Deployment Manager to create a deployment that creates a project, creates two service accounts, and sets the IAM policy for it using the Cloud Resource Manager API.
- code: RESOURCE_ERROR
location: /deployments/test-deployment/resources/dm-test-project
message: '{"ResourceType":"cloudresourcemanager.v1.project","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/dm-test-project","httpMethod":"GET"}}'
Beforehand, I created a project 'DM Project Creation', enabled some APIs, assigned the Billing Account to it, and then created a Service Account.
I already had an Organization node created, so I added the Service Account to the org node and granted it the following IAM roles (grant commands sketched after the list):
- Project Creator
- Billing Account User
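For reference, a minimal sketch of what those grants look like with gcloud (the service-account email matches the one used below; <MY_ORG_ID> is the same placeholder as in config.yaml, and the role IDs are the standard ones behind "Project Creator" and "Billing Account User"):
# Grant Project Creator on the organization node
gcloud organizations add-iam-policy-binding <MY_ORG_ID> \
  --member="serviceAccount:dm-project-creation@dm-creation-project-0.iam.gserviceaccount.com" \
  --role="roles/resourcemanager.projectCreator"
# Grant Billing Account User on the organization node
gcloud organizations add-iam-policy-binding <MY_ORG_ID> \
  --member="serviceAccount:dm-project-creation@dm-creation-project-0.iam.gserviceaccount.com" \
  --role="roles/billing.user"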
I was actually following these examples from Google Cloud Platform:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/project_creation
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/blob/master/community/cloud-foundation/templates/project/README.md
I ran the following command to authenticate with the Service Account:
gcloud auth activate-service-account dm-project-creation@dm-creation-project-0.iam.gserviceaccount.com --key-file=/Users/famedina/Downloads/dm-creation-project-0-f1f92dd070ce.json
Then I ran Deployment Manager, passing the configuration file:
gcloud deployment-manager deployments create test-deployment --config config.yaml
config.yaml:
imports:
- path: project.py

resources:
# The "name" property below will be the ID of the new project.
# If you want your project to have a different name, use the "project-name"
# property.
- name: dm-test-project
  type: project.py
  properties:
    # Change this to your organization ID.
    organization-id: "<MY_ORG_ID>"
    # You can also create the project in a folder.
    # If both organization-id and parent-folder-id are provided,
    # the project will be created in parent-folder-id.
    #parent-folder-id: "FOLDER_ID"
    # Change the following to your organization's billing account.
    billing-account-name: billingAccounts/<MY_BILLING_ACC_ID>
    # The APIs to enable in the new project.
    # To see the possible APIs, use: gcloud services list --available
    apis:
    - compute.googleapis.com
    - deploymentmanager.googleapis.com
    - pubsub.googleapis.com
    - storage-component.googleapis.com
    - monitoring.googleapis.com
    - logging.googleapis.com
    # The service accounts you want to create in the project.
    service-accounts:
    - my-service-account-1
    - my-service-account-2
    bucket-export-settings:
      create-bucket: true
      # If using an already existing bucket, specify this:
      # bucket: <my bucket name>
    # Makes the service account that Deployment Manager would use in the
    # generated project when making deployments in this new project a
    # project owner.
    set-dm-service-account-as-owner: true
    # The patches to apply to the project's IAM policy. Note that these are
    # always applied as a patch to the project's current IAM policy, not as a
    # diff with the existing properties stored in DM. This means that removing
    # a binding from the 'add' section will not remove the binding on the
    # project during the next update. Instead it must be added to the 'remove'
    # section.
    iam-policy-patch:
      # These are the bindings to add.
      add:
      - role: roles/owner
        members:
        # NOTE: The DM service account that is creating this project will
        # automatically be added as an owner.
        - serviceAccount:98765432100@cloudservices.gserviceaccount.com
      - role: roles/viewer
        members:
        - user:iamtester@deployment-manager.net
      # The bindings to remove. Note that these are idempotent, in the sense
      # that any binding here that is not actually on the project is considered
      # to have been removed successfully.
      remove:
      - role: roles/owner
        members:
        # This is already not on the project, but in case it shows up, let's
        # remove it.
        - serviceAccount:1234567890@cloudservices.gserviceaccount.com

I ran into this as well, and the error message is not actually explaining the underlying problem.
The key thing is that this is a GET operation, not an attempt to create the project. This is to verify global uniqueness of the project-id requested, and if not unique, PERMISSION_DENIED is thrown.
- code: RESOURCE_ERROR
location: /deployments/test-deployment/resources/dm-test-project
message: '{"ResourceType":"cloudresourcemanager.v1.project","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/dm-test-project","httpMethod":"**GET**"}}'
A lot of room for improvement in the resulting error message for the end user.
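You can reproduce the uniqueness check yourself (a sketch; any project ID that some other organization already owns behaves the same way):
# Deployment Manager does the equivalent of this GET first; if the ID is
# taken by a project you cannot see, it returns the same 403.
gcloud projects describe dm-test-project
# The fix is simply to choose a globally unique project ID in config.yaml.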


Why doesn't my Cloud Function (2nd gen) trigger?

After getting some help deploying the sample code for a Cloud Function with a storage trigger, the function now deploys just fine, but it won't trigger :-(
The function is deployed.
The deployed code (per the tutorial):
'use strict';
// [START functions_cloudevent_storage]
const functions = require('@google-cloud/functions-framework');
// Register a CloudEvent callback with the Functions Framework that will
// be triggered by Cloud Storage.
functions.cloudEvent('imageAdded', cloudEvent => {
console.log(`Event ID: ${cloudEvent.id}`);
console.log(`Event Type: ${cloudEvent.type}`);
const file = cloudEvent.data;
console.log(`Bucket: ${file.bucket}`);
console.log(`File: ${file.name}`);
console.log(`Metageneration: ${file.metageneration}`);
console.log(`Created: ${file.timeCreated}`);
console.log(`Updated: ${file.updated}`);
});
// [END functions_cloudevent_storage]
The deploy command I used:
gcloud functions deploy xxx-image-handler --gen2 --runtime=nodejs16 --project myproject --region=europe-west3 --source=. --entry-point=imageAdded --trigger-event-filters='type=google.cloud.storage.object.v1.finalized' --trigger-event-filters='bucket=xxx_report_images'
To test I just uploaded a text file from command line:
gsutil cp test-finalize.txt gs://xxx_report_images/test-finalize.txt
This worked fine and the file was uploaded to the xxx_report_images bucket (which is also in the europe-west3 region), but the function is never triggered, which I can see from the 0 items in the log:
gcloud beta functions logs read xxx-image-handler --gen2 --limit=100 --region europe-west3
Listed 0 items.
What is going on here? This seems very straightforward, and I fail to see what I'm missing, so hopefully I can get some guidance.
**** EDIT 1 ****
Regarding comment 1 below: I can see in the Eventarc trigger list that the service account used by Eventarc is xxx-compute@developer.gserviceaccount.com.
And I can see in the IAM principals list that this is indeed the default compute service account, which has the Editor role. I also explicitly added the Eventarc Connection Publisher role to this service account, but without success (I assume the Editor role already contains it, but just to be sure...). So I guess the issue is not related to your suggestion (?).
BTW, I tried the suggested gcloud compute project-info describe --format="value(defaultServiceAccount)" but just got Could not fetch resource: - Required 'compute.projects.get' permission for 'projects/myproject', and I couldn't figure out which role to add to which service account to enable this. However, as seen above, I found the info in the Eventarc section in GCP instead.
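For anyone hitting the same "Could not fetch resource" error, a minimal sketch of a grant that covers it (the member email is a placeholder; roles/compute.viewer is one predefined role that contains compute.projects.get):
gcloud projects add-iam-policy-binding myproject \
  --member="user:you@example.com" \
  --role="roles/compute.viewer"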
**** EDIT 2 ****
Indeed, after more testing I think I have nailed it down to a permission issue on Eventarc, just as suggested in the comment. I've posted a new question which is more specific.
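For readers landing here: for 2nd-gen storage triggers, Google's Eventarc documentation points at the Cloud Storage service agent needing permission to publish to Pub/Sub. A sketch of that grant (the project ID is this question's placeholder; use the email printed by the first command):
# Print the Cloud Storage service agent for the project
gsutil kms serviceaccount -p myproject
# Grant it Pub/Sub Publisher so storage events can be delivered via Eventarc
gcloud projects add-iam-policy-binding myproject \
  --member="serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"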

GCP Vertex AI Training Custom Job: User does not have bigquery.jobs.create permission

I'm struggling to execute a query with the BigQuery Python client from inside a Vertex AI custom training job on Google Cloud Platform.
I have built a Docker image which contains this Python code, then pushed it to Container Registry (eu.gcr.io).
I am using this command to deploy:
gcloud beta ai custom-jobs create --region=europe-west1 --display-name="$job_name" \
--config=config_custom_container.yaml \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri="$docker_img_path" \
--args="${model_type},${env},${now}"
I have even tried to use the --service-account option to specify a service account with the BigQuery Admin role; it did not work.
According to this link:
https://cloud.google.com/vertex-ai/docs/general/access-control?hl=th#granting_service_agents_access_to_other_resources
the Google-managed service accounts for the AI Platform Custom Code Service Agent (Vertex AI) already have the right to access BigQuery, so I do not understand why my job fails with this error:
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/*******/jobs?prettyPrint=false:
Access Denied: Project *******:
User does not have bigquery.jobs.create permission in project *******.
I have replaced the id with *******
Edit:
I have tried several configurations; my last config YAML file only contains this:
baseOutputDirectory:
  outputUriPrefix:
Using the serviceAccount field does not seem to edit the actual configuration, unlike the --service-account option.
Edit 14-06-2021: Quick Fix
As @Ricco.D said:
try explicitly defining the project_id in your bigquery code if you
have not done this yet.
bigquery.Client(project=[your-project])
This fixed my problem. I still do not know the cause.
To fix the issue, explicitly specify the project ID in the BigQuery client code.
Example:
bigquery.Client(project=[your-project], credentials=credentials)
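A quick way to check whether a given identity actually has bigquery.jobs.create in a project is to run a trivial query against it (a sketch, assuming the bq CLI is installed and authenticated as the account under test):
# Fails with the same 403 if the account lacks bigquery.jobs.create there
bq --project_id=[your-project] query --use_legacy_sql=false 'SELECT 1'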

“Create new version” ignores custom service account

I'm trying to deploy a new version of a model to AI Platform; it's a custom prediction routine. I've managed to deploy just fine when all the resources are in the same GCP project, but when I point the GCS files to a bucket in a different project, the deployment fails. So I'm trying to pass which service account to use when creating the version, but it keeps being ignored.
That's the message I get:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json returned "Field: version.deployment_uri Error: The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-*****@cloud-ml.google.com.iam.gserviceaccount.com.". Details: "[{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'version.deployment_uri', 'description': 'The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-******@cloud-ml.google.com.iam.gserviceaccount.com.'}]}]
My request looks like
POST https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json
{
  "name": "v1",
  "deploymentUri": "gs://[bucket-gcp-project-2]",
  "pythonVersion": "3.5",
  "runtimeVersion": "1.13",
  "package_uris": "gs://[bucket-gcp-project-2]/model.tar.gz",
  "predictionClass": "predictor.Predictor",
  "serviceAccount": "my-service-account@[gcp-project-1].iam.gserviceaccount.com"
}
The service account has access in both projects.
Specifying a service account is documented as a beta feature. Try using the gcloud SDK, e.g.:
gcloud components install beta
gcloud beta ai-platform versions create v1 \
  --service-account my-service-account@[gcp-project-1].iam.gserviceaccount.com ...
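Once the version is created, you can inspect what was actually deployed (a sketch; the exact fields shown depend on the API version):
gcloud ai-platform versions describe v1 --model=[model_name]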

GCP project creation via deploymentmanager

So I'm trying to create a project with Google Cloud Deployment Manager.
I've structured the setup roughly as below:
# Structure
Org -> Folder1 -> Seed-Project (location where I am running Deployment Manager from)

Organization:
  IAM:
    -> {Seed-Project-Number}@cloudservices.gserviceaccount.com:
      - Compute Network Admin
      - Compute Shared VPC Admin
      - Organization Viewer
      - Project Creator
# DeploymentManager resource:
type: cloudresourcemanager.v1.project
name: MyNewProject
parent:
  id: '{folder1-id}'
  type: folder
projectId: MyNewProject
The desired result is that MyNewProject should be created under Folder1.
However, it appears as if the Deployment Manager service account does not have sufficient permissions:
$ CLOUDSDK_CORE_PROJECT=Seed-Project gcloud deployment-manager deployments \
create MyNewDeployment \
--config config.yaml \
--verbosity=debug
Error message:
- code: RESOURCE_ERROR
location: /deployments/MyNewDeployment/resources/MyNewProject
message: '{"ResourceType":"cloudresourcemanager.v1.project",
"ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/MyNewProject","httpMethod":"GET"}}'
I've done some digging, and it appears to be calling the resourcemanager.projects.get method. The Compute Shared VPC Admin (roles/compute.xpnAdmin) role should provide this permission, as documented here: https://cloud.google.com/iam/docs/understanding-roles
Except that doesn't seem to be the case. What's going on?
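One way to verify such a claim directly is to list the permissions the role actually contains (a sketch; the grep target is the permission in question):
gcloud iam roles describe roles/compute.xpnAdmin | grep resourcemanager.projects.get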
Edit
I'd like to add some additional information gathered from debugging efforts.
These are the API requests from Deployment Manager (from the seed project).
You can see that the caller is an anonymous service account; this isn't what I'd expect to see. (I'd expect {Seed-Project-Number}@cloudservices.gserviceaccount.com as the calling account here.)
Edit-2
config.yaml
imports:
- path: composite_types/project/project.py
  name: project.py

resources:
- name: MyNewProject
  type: project.py
  properties:
    parent:
      type: folder
      id: "{folder1-id}"
    billingAccountId: billingAccounts/REDACTED
    activateApis:
    - compute.googleapis.com
    - deploymentmanager.googleapis.com
    - pubsub.googleapis.com
    serviceAccounts: []
composite_types/project/* is an exact copy of the templates found here:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/project
The key thing is that this is a GET operation, not an attempt to create the project. This is to verify global uniqueness of the project-id requested, and if not unique, PERMISSION_DENIED is thrown.
Lousy error message, lots of wasted developer hours!
Probably late, but just to share that I ran into a similar issue today. I double-checked every permission mentioned in the README for the service account under which the Deployment Manager job runs ({Seed-Project-Number}@cloudservices.gserviceaccount.com in the question); it turned out that the Billing Account User role was not assigned, contrary to what I thought earlier. Granting that and running it again worked.
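For reference, a sketch of that grant on the billing account itself (the billing account ID is a placeholder, and roles/billing.user is the ID behind "Billing Account User"):
gcloud beta billing accounts add-iam-policy-binding <BILLING_ACCOUNT_ID> \
  --member="serviceAccount:{Seed-Project-Number}@cloudservices.gserviceaccount.com" \
  --role="roles/billing.user"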

AWS - Errors - Updating Auto Scaling group, Amazon CloudFormation, Failed to deploy configuration

In the AWS console, under Elastic Beanstalk (dashboard) -> Configuration -> Software Configuration -> Environment Properties, I try to add and configure my "Environment Properties" from the ".env.default" config file of my Node.js application, which is as follows:
#.env.default
# General settings
TIMEZONE=Europe/Amsterdam
# --------
# Debug-related settings
LOG_LEVEL_CONSOLE=info
LOG_LEVEL_FILE=info
ENABLE_FILE_LOGGING=true
# Whether the local log directory (./logs/) should be preferred over /var/log/
LOG_FILE_PREFER_LOCAL=false
# Override the default logging location (/var/log/ or ./logs/)
# FORCE_LOG_LOCATION=./some-other-directory/
# /../../log/nodejs/
# --------
# Crash-related settings
MAX_CONSECUTIVE_CRASHES=5
CONSECUTIVE_CRASH_RESET_MS=5000
# --------
# Settings relating to remote API access
ENABLE_REMOTE_ACCESS=true
ENABLE_WHITELIST=true
HOST_API=true
HOST_WEB_INTERFACE=true
LISTEN_PORT=8081
JWT_SECRET=ItsASecretToEverybodyMAHBOI
# LISTEN_PORT=1903 backup
#INTERNAL_LISTEN_PORT=1939 backup
# --------
# Settings relating to internal access
INTERNAL_LISTEN_PORT=8083
# --------
# Database-related settings
DATABASE_HOST=acc-sc-3.crmhqy2lzjw4.eu-west-1.rds.amazonaws.com
DATABASE_NAME=acc_schedule_center_3
DATABASE_USER=sc_3
DATABASE_PASS=yCFKIqzLcBIBt1wYj4Qn
MAX_IDLE_TIME=28800
Environment Properties - first side (screenshot)
Environment Properties - second side (screenshot)
Ignore the data listed inside "Property Name" & "Property Value"; it is from the previous configuration.
The core errors I'm facing at the moment are as follows:
ERROR #1
Service:AmazonCloudFormation, Message:Stack named
'awseb-e-4e98c2gukw-stack' aborted operation. Current state:
'UPDATE_ROLLBACK_IN_PROGRESS' Reason: null
ERROR #2
Updating Auto Scaling group named:
awseb-e-4e98c2gukw-stack-AWSEBAutoScalingGroup-1GR8E4SU6QZGJ failed
Reason: Template error: DBInstance aa153clv2zourf2 doesn't exist
ERROR #3
Failed to deploy configuration.
I'm fairly new (you can call me a novice coder and a novice at DevOps in general), but I would like to know if anyone knows the solution to these errors.
Thanks in advance everyone!
Kind regards,
Doga
I was able to fix my problem by adding some IAM policies to the Elastic Beanstalk user.
I had only the AdministratorAccess-AWSElasticBeanstalk policy; after I added the AWSElasticBeanstalkRoleRDS policy, it worked.
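A sketch of attaching that managed policy with the AWS CLI (the user name is a placeholder; verify the exact policy ARN with aws iam list-policies, as some Elastic Beanstalk policies live under a service-role path):
aws iam attach-user-policy \
  --user-name <YOUR_EB_USER> \
  --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkRoleRDS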
I had pretty much the same issue. For me it was the permission level of the IAM user that I was using for EB: it had EB full permissions, but I needed to give it permissions to access other services as well.