Error Code 413 encountered in Admin SDK Directory API (Users: patch) - google-admin-sdk

I am a G Suite Super Admin. I have set up Google Single Sign-On (SSO) for our AWS accounts inside G Suite. As we have several AWS accounts, we need to call "Users: patch" (https://developers.google.com/admin-sdk/directory/v1/reference/users/patch#try-it) to include the other AWS accounts in Google SSO.
While provisioning additional AWS accounts for Google SSO, we encountered a "Code: 413" error after running the above-mentioned patch. Details below:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "uploadTooLarge",
        "message": "Profile quota is exceeded.: Data is too large for "
      }
    ],
    "code": 413,
    "message": "Profile quota is exceeded.: Data is too large for "
  }
}
What could be the possible cause of this error? Is there any workaround for it? If not, are there other ways to provision multiple AWS accounts using Google Single Sign-On?
Thank you in advance for your patience and assistance in this.

Notice what the error is saying: you're probably going way beyond the designated limit. Try the solution from this SO post, which is to:
choose a different SAML IdP. The new limit was said to be somewhere
between 2087 and 2315 characters.
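For context, here is a minimal sketch (in Python, using the google-api-python-client) of the kind of Users: patch call involved. The schema name "SSO", the field name "role", and the account values are all hypothetical placeholders for whatever your AWS SAML setup uses; the point is that every role ARN pair stored this way counts against the user's overall profile quota, which is what the 413 is complaining about:

# A minimal sketch, not the exact call from the question.
from googleapiclient.discovery import build

service = build('admin', 'directory_v1')  # assumes default admin credentials

body = {
    'customSchemas': {
        'SSO': {  # hypothetical custom schema name
            'role': [  # hypothetical multi-valued field holding AWS roles
                {
                    'customType': 'role',
                    # one entry per AWS account; long ARN pairs like this
                    # are what eventually exhaust the profile quota
                    'value': 'arn:aws:iam::111111111111:role/GoogleSSO,'
                             'arn:aws:iam::111111111111:saml-provider/Google',
                },
            ]
        }
    }
}

service.users().patch(userKey='user@example.com', body=body).execute()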

Related

Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'

I'm pretty new to Google Cloud. I'm trying to use the gcloud command line, and I ran into the following problem:
Error: Forbidden access to resources.
Raw response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "forbidden",
        "message": "Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'"
      }
    ],
    "code": 403,
    "message": "Required 'compute.regions.get' permission for 'projects/$project_id/regions/us-central1'"
  }
}
Can someone help?
Much appreciated
To troubleshoot your issue, please try the following:
Where are you running the command: Cloud Shell or a local environment?
If it is a local environment, try Cloud Shell instead.
Check that you are using the latest version of the gcloud SDK (262).
Did you properly initialize gcloud?
Can you confirm that you have an appropriate role to run the command, like Editor/Owner?
Check that you are using the same location for your products.
If the above steps don't work, can you share your complete gcloud command for more context?
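As a quick way to check the role question above, here is a minimal sketch (the project ID is a placeholder) that asks the Resource Manager API which of the required permissions the active credentials actually hold:

# A minimal sketch: testIamPermissions returns the subset of the listed
# permissions that the current credentials actually have on the project.
from googleapiclient.discovery import build

crm = build('cloudresourcemanager', 'v1')  # assumes default credentials
response = crm.projects().testIamPermissions(
    resource='my-project-id',  # placeholder project ID
    body={'permissions': ['compute.regions.get']},
).execute()

# If 'compute.regions.get' is absent from the result, the 403 is a real
# IAM problem rather than a region or configuration mix-up.
print(response.get('permissions', []))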
Oh, I see where the problem is! When I created the storage, I put the region as "Asia", but when I configured it via gcloud init, I put it as "us-central1-a". In this context, "permission denied" means I have no permission to access the other server region. It is misleading when you are thinking through the cloud scope. However, Pawel's answer is more comprehensive, and it is a very good start to lead you in the correct direction.

Permissions Issue with Google Cloud Data Fusion

I'm following the instructions in the Cloud Data Fusion sample tutorial and everything seems to work fine, until I try to run the pipeline right at the end. Cloud Data Fusion Service API permissions are set for the Google managed Service account as per the instructions. The pipeline preview function works without any issues.
However, when I deploy and run the pipeline it fails after a couple of minutes. Shortly after the status changes from provisioning to running the pipeline stops with the following permissions error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "xxxxxxxxxxx-compute#developer.gserviceaccount.com does not have storage.buckets.create access to project X.",
    "reason" : "forbidden"
  } ],
  "message" : "xxxxxxxxxxx-compute#developer.gserviceaccount.com does not have storage.buckets.create access to project X."
}
xxxxxxxxxxx-compute#developer.gserviceaccount.com is the default Compute Engine service account for my project.
"Project X" is not one of mine, though, and I've no idea why the pipeline startup code is trying to create a bucket there. It does successfully create temporary buckets (one called df-xxx and one called dataproc-xxx) in my project before it fails.
I've tried this with two separate accounts and got the same error in both places. I had tried adding Storage Admin roles to the various service accounts to no avail, but that was before I realized it was attempting to access a different project entirely.
I believe I was able to reproduce this. What's happening is that the BigQuery Source plugin first creates a temporary working GCS bucket to export the data to, and I suspect it is attempting to create it in the dataset's project ID by default, instead of your own project as it should.
As a workaround, create a GCS bucket in your own project, and then in the BigQuery Source configuration of your pipeline, set the "Temporary Bucket Name" configuration to "gs://<your-bucket-name>".
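For illustration, a minimal sketch of creating that workaround bucket with the google-cloud-storage client (the bucket name is a placeholder):

# A minimal sketch: create the temporary bucket in your own project,
# then point the BigQuery Source's "Temporary Bucket Name" at it.
from google.cloud import storage

client = storage.Client()  # assumes default credentials and project
bucket = client.create_bucket('your-bucket-name')  # placeholder name
print('Use gs://' + bucket.name + ' in the pipeline configuration')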
You are missing the permission setup steps that come after you create an instance. The instructions for granting your service account the right permissions are on this page: https://cloud.google.com/data-fusion/docs/how-to/create-instance

"Request payload size exceeds the limit" in google cloud json prediction request

I am trying to serve a prediction using Google Cloud ML Engine. I generated my model using fast-style-transfer and saved it in my Google Cloud ML Engine models section. For input it uses float32, so I had to convert my image to that format.
image = tf.image.convert_image_dtype(im, dtypes.float32)
matrix_test = image.eval()
Then I generated my JSON file for the request:
js = json.dumps({"image": matrix_test.tolist()})
Then I sent the request with the following command:
gcloud ml-engine predict --model {model-name} --json-instances request.json
The following error is returned:
ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
  "error": {
    "code": 400,
    "message": "Request payload size exceeds the limit: 1572864 bytes.",
    "status": "INVALID_ARGUMENT"
  }
}
I would like to know if I can increase this limit and, if not, whether there is a workaround for it... thanks in advance!
This is a hard limit for the Cloud Machine Learning Engine API. There's a feature request to increase this limit; you could post a comment there asking for an update. In the meantime, you could try a workaround along the following lines.
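For illustration, a hedged sketch of one such mitigation: downscale the image before serializing it, so the JSON body stays under the limit (the file path and target size are assumptions, and your model must of course accept the smaller input):

# A minimal sketch: shrink the input so the serialized JSON request
# stays below the 1572864-byte online prediction limit.
import json

import numpy as np
from PIL import Image

im = Image.open('input.jpg')  # placeholder path
im.thumbnail((256, 256))      # arbitrary target size (assumption)
matrix_test = np.asarray(im, dtype=np.float32) / 255.0

payload = json.dumps({'image': matrix_test.tolist()})
print(len(payload.encode('utf-8')), 'bytes')  # verify it is under the limit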
Hope it helps
If you use batch prediction, you can make predictions on images that exceed that limit.
Here is the official documentation on that: https://cloud.google.com/ml-engine/docs/tensorflow/batch-predict
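For illustration, a minimal sketch (all project, model, bucket, and job names are placeholders) of submitting such a batch prediction job through the API rather than gcloud:

# A minimal sketch of a batch prediction job; batch inputs are read from
# and written to Cloud Storage, so the online payload limit does not apply.
from googleapiclient.discovery import build

ml = build('ml', 'v1')  # assumes default credentials
body = {
    'jobId': 'style_transfer_batch_001',  # hypothetical job ID
    'predictionInput': {
        'dataFormat': 'JSON',
        'inputPaths': ['gs://your-bucket/inputs/*'],    # placeholder
        'outputPath': 'gs://your-bucket/outputs/',      # placeholder
        'region': 'us-central1',                        # placeholder region
        'modelName': 'projects/your-project/models/your-model',
    },
}
ml.projects().jobs().create(parent='projects/your-project', body=body).execute()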
I hope this helps you in some way!

Deploy Google Cloud Function from Cloud Function

Solved/invalid - see below
I'm trying to deploy a Google Cloud Function from a Google Cloud Function on demand.
However, whatever I try, I get a 403 Forbidden:
HttpError 403 when requesting https://cloudfunctions.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/functions?alt=json returned "The caller does not have permission"
I ended up granting the Cloud Function's service account the Project Owner role to make sure it could do anything, yet I still get the same error.
Is this limited intentionally (for example to avoid fork bombs or something) or am I doing something wrong?
Has anyone been able to make this work?
For the record: when I run the same (Python) function locally with Flask using my own account, it deploys the new cloud function perfectly, so the code itself seems to be OK.
Update
Code snippet of how I'm trying to deploy the cloud function:
from googleapiclient import discovery

cf_client = discovery.build('cloudfunctions', 'v1')
location = "projects/{MYPROJECT}/locations/europe-west1"
request = {
    "name": "projects/{MYPROJECT}/locations/europe-west1/functions/hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "description": "Hopper hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "entryPoint": "pubsub_trigger",
    "runtime": "python37",
    "availableMemoryMb": 256,
    "timeout": "60s",
    "sourceArchiveUrl": "gs://staging.{MYPROJECT}.appspot.com/deployment/hopper.zip",
    "eventTrigger": {
        "eventType": "providers/cloud.pubsub/eventTypes/topic.publish",
        "resource": "projects/{MYPROJECT}/topics/hopper-test-input"
    },
    "environmentVariables": {
        "HOPPER_ID": "hopper--2376cd24d318cd2d42f000f4f1c31a8f"
    }
}
response = cf_client.projects() \
    .locations() \
    .functions() \
    .create(location=location, body=request) \
    .execute()
Update
I feel like such an idiot... it turns out that for some reason I deployed the master function in a different project than the project I gave permissions on. No wonder it didn't work.
The correct answer should be: check that everything is indeed running how and where you expect it to be. Everything was configured correctly, and deploying a CF from a CF is not a problem. The project was incorrect due to a different default project being set on the gcloud utility.
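A minimal sketch of that sanity check, verifying which project the runtime credentials actually resolve to before calling the deploy API:

# A minimal sketch: google.auth.default() returns the active credentials
# and the project they resolve to, which is where the create() call will
# deploy unless the request body says otherwise.
import google.auth

credentials, project = google.auth.default()
print('Deploying with project:', project)  # confirm this is the one you granted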

EXPO - 'exp fetch:ios:certs' && 'exp build:ios'

I'm currently trying to deploy a new build with "sdkVersion": "25.0.0"; however, I'm having many issues.
I have an Admin account on the Apple Enterprise Program.
I installed exp globally (-g) correctly, did 'exp login', and my app.json file is configured this way:
{
  "expo": {
    "name": "AppName",
    "version": "1.0.0",
    "icon": "./app/assets/AppName.png",
    "slug": "AppName",
    "sdkVersion": "25.0.0",
    "privacy": "unlisted",
    "orientation": "portrait",
    "splash": {
      "image": "./app/assets/AppName.png",
      "resizeMode": "cover"
    },
    "ios": {
      "bundleIdentifier": "com.group.AppName"
    },
    "android": {
      "package": "com.group.AppName"
    }
  }
}
When I run exp build:ios, I let Expo handle all credentials, but in the end I get the following error:
[exp] Error while gathering & validating credentials
[exp] {}
[exp] Reason: You are not allowed to perform this operation. Please check with one of your Team Admins, or, if you need further assistance, please contact Apple Developer Program Support. https://developer.apple.com/support
I am an Admin, so I really don't know what this could refer to.
If I try specifying my own p12 distribution and push certificates, then I get this kind of error:
[exp] Error while gathering & validating credentials
[exp] {}
[exp] Reason:No cert available to make provision profile against, raw:"Make sure you were able to make a certificate prior to this step"
And if I try running the command exp fetch:ios:certs, I get the following error:
[exp] Retreiving iOS credentials for #community/AppName
[exp] Unable to fetch credentials for this project. Are you sure they exist?
I would greatly appreciate some guidance; I think I am doing something wrong but don't know what it is.
Even though exp was correctly creating the missing distribution and push certificates for me, it was somehow having a hard time with the provisioning profile. After numerous trials, what worked was to create my certificates and the provisioning profile myself, and then choose the 'I will provide all the credentials and files needed, Expo does limited validation' option in 'exp build:ios'.
Guides used to create the certificates:
Here for the distribution certificate p12
Here for the push notification certificate p12
Here for the mobileprovision file
'exp build:ios' has a flag, --apple-enterprise-account, that will treat the build as enterprise, and you won't get that "Reason: You are not allowed to perform this operation." message.