Cloud Build fails with resource location constraint - google-cloud-platform

We have a policy in place which restricts resources to EU regions.
When I try to execute a cloud build, gcloud wants to create a bucket (gs://[PROJECT_ID]_cloudbuild) to store staging sources. This step fails, because the default bucket location ('us') is used:
"code": 412,
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’"
As a workaround I tried to pass an existing bucket located in a valid region (using --gcs-source-staging-dir), but I got the same error.
How can this be solved?
Here the HTTP logs:
$ gcloud --log-http builds submit --gcs-source-staging-dir gs://my-custom-bucket/staging \
--tag gcr.io/xxxxxxxxxx/quickstart-image .
=======================
==== request start ====
uri: https://www.googleapis.com/storage/v1/b?project=xxxxxxxxxx&alt=json
method: POST
== headers start ==
accept: application/json
content-type: application/json
== headers end ==
== body start ==
{"name": "my-custom-bucket"}
== body end ==
==== request end ====
---- response start ----
-- headers start --
server: UploadServer
status: 412
-- headers end --
-- body start --
{
"error": {
"errors": [
{
"domain": "global",
"reason": "conditionNotMet",
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’",
"locationType": "header",
"location": "If-Match"
}
],
"code": 412,
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’"
}
}
-- body end --
---- response end ----
----------------------
ERROR: (gcloud.builds.submit) HTTPError 412: 'us' violates constraint ‘constraints/gcp.resourceLocations’

I found the solution to this problem. After you create the project (with the resource location restriction enabled), you should create a new bucket named [PROJECT_ID]_cloudbuild in your preferred location.
If you don't, gcloud builds submit will automatically try to create that bucket in the US, and this is not configurable. Because of the resource restriction, Cloud Build cannot create the bucket in the US, and it fails with the following error:
ERROR: (gcloud.builds.submit) HTTPError 412: 'us' violates constraint 'constraints/gcp.resourceLocations'
When you create the bucket by hand with that exact name, Cloud Build will use it as the default staging bucket. The solution was not immediately obvious, because projects that already had the cloudbuild bucket in place when the resource restrictions were applied never ran into this problem.
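If you want to script that step, here is a minimal sketch using the google-cloud-storage Python client (the project ID is a placeholder; the bucket name follows the [PROJECT_ID]_cloudbuild convention described above, and EU is assumed to be the location you want):
from google.cloud import storage

PROJECT_ID = "my-project-id"  # placeholder: your project ID

client = storage.Client(project=PROJECT_ID)
# Cloud Build looks for a bucket named [PROJECT_ID]_cloudbuild by default,
# so pre-creating it in an allowed location avoids the 'us' default.
bucket = client.create_bucket(f"{PROJECT_ID}_cloudbuild", location="EU")
print(f"Created {bucket.name} in {bucket.location}")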

Cloud Build will use a default bucket to store logs. You can try adding a logsBucket field to the build config file with a specific bucket, as in this snippet:
steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
logsBucket: 'gs://mybucket'
You can find detailed information about the build configuration file in the Cloud Build documentation.

I had this issue investigated by Google Support. They came up with the following answer:
After investigating with the different teams involved in this issue, it seems that the restrictions affecting Cloud Build (in this case, preventing resource allocation in the US) are intended. The only workaround that came up from the investigation was the creation of a new project, without the restrictions in place, that allows the use of Cloud Build.
They also referred to a feature request (already mentioned in Christopher P's comment - thanks for that).

I faced the same issue with the location constraint; follow the steps below to fix it:
After logging in to the GCP console, select the project where you are facing the issue.
Then choose "Organization policies" from the IAM console. Make sure you have the necessary permissions for the policies to be listed.
Look for the "Google Cloud Platform - Resource Location Restriction" policy in the list. Typically it shows up under the custom policies.
Refer to this screenshot: location constraint
Click on the policy, click Edit, scroll down to the custom values, and add the location you want to allow.
Using the in: prefix means the value is a group that includes everything under it; to allow one specific value, use the location name without a prefix.
Here the error is "'us' violates constraint 'constraints/gcp.resourceLocations'", so enter the value us as a custom value, add it, and save. If you need all zones in us-east4, use the in: prefix like this:
in:us-east4-locations
Hope this works.
(Screenshot: allowed values)
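If you would rather change the policy programmatically than click through the console, here is a rough, untested sketch against the Cloud Resource Manager v1 API in Python (the project ID is a placeholder, and the allowed values should be whichever locations you actually need):
from googleapiclient import discovery

PROJECT_ID = "my-project-id"  # placeholder

# Uses Application Default Credentials; the caller needs permission to set
# org policies on the project.
crm = discovery.build("cloudresourcemanager", "v1")
policy = {
    "constraint": "constraints/gcp.resourceLocations",
    "listPolicy": {
        # Bare values name a single location ("us"); "in:" values are groups,
        # e.g. "in:us-east4-locations" for everything in us-east4.
        "allowedValues": ["us"],
    },
}
crm.projects().setOrgPolicy(
    resource=f"projects/{PROJECT_ID}", body={"policy": policy}
).execute()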

Related

Missing required GCS remote state configuration location

After a Google Cloud quota update, I can't run my terragrunt/terraform code due to a strange error. The same code worked before with another project on the same account. After I tried to recreate the project (to get a new, clean project) there was a "Billing Quota" popup, and I asked support to change the quota.
I got the following message from support:
Dear Developer,
We have approved your request for additional quota. Your new quota should take effect within one hour of receiving this message.
And now (1 day later) terragrunt is not working due to the error:
Missing required GCS remote state configuration location
What I actually have:
a service account for pipelines with Project Editor and Service Networking Admin;
a bucket without public access (europe-west3)
the following terragrunt config:
remote_state {
  backend = "gcs"
  config = {
    project = get_env("TF_VAR_project")
    bucket  = "bucket name"
    prefix  = "${path_relative_to_include()}"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
I'm also running the following pipeline:
- terragrunt run-all init
- terragrunt run-all validate
- terragrunt run-all plan
- terragrunt run-all apply --terragrunt-non-interactive -auto-approve
and it fails on init with that error.
The project and credentials are correct (the credentials are stored in the GOOGLE_CREDENTIALS env variable as JSON, without newlines or whitespace).
I also tried to specify "location" in "config", but got an error saying the bucket was not found in the project.
Does anybody know how to fix this, or where the problem could be?
It worked before I got the quota increase.

Correct naming convention for Cloud Run: gcloud builds submit media bucket names (The specified bucket does not exist)

I am following this tutorial to upload my existing Django project running locally to Google Cloud Run. I believe I have followed all the steps correctly to create the bucket and grant it the necessary permissions. But when I try to run:
gcloud builds submit \
--config cloudmigrate.yaml \
--substitutions=_INSTANCE_NAME=cgps-reg-2-postgre-sql,_REGION=us-central1
I get the error:
Step #3 - "collect static": google.api_core.exceptions.NotFound: 404 POST https://storage.googleapis.com/upload/storage/v1/b/cgps-registration-2_cgps-reg-2-static-files-bucket/o?uploadType=multipart&predefinedAcl=publicRead:
I was a little confused by this line that seems to tell you to put the bucket name in the location field, but I think it's perhaps just a typo in the tutorial. I was not sure if I should leave the location at the default "Multi-Region" or change it to "us-central1", where everything else in the project is.
I interpreted the instructions for telling the project the name of the bucket as PROJECT_ID + "_" + BUCKET_NAME:
or in my case
cgps-registration-2_cgps-reg-2-static-files-bucket
But this naming convention is clearly not correct, as the error says it cannot find a bucket with this name. So what am I missing here?
Credit for this answer really goes to dazwilken. The answer he gave in the comment is the correct one:
Your bucket name is cgps-reg-2-static-files-bucket. This is its
globally unique name. You should not prefix it (again) with the
Project name when referencing it. The error is telling you (correctly)
that the bucket (called
cgps-registration-2_cgps-reg-2-static-files-bucket) does not exist. It
does not. The bucket is called cgps-reg-2-static-files-bucket
Because bucket names must be unique, one way to create them is to combine
them with another unique name, i.e. the Google Cloud project ID. The
tutorial likely confused you by using this approach without explaining it.
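If in doubt, a quick way to check which name is right is to look both candidates up directly; a small sketch using the google-cloud-storage Python client (project and bucket names are taken from the question):
from google.cloud import storage

client = storage.Client(project="cgps-registration-2")
for name in ("cgps-reg-2-static-files-bucket",
             "cgps-registration-2_cgps-reg-2-static-files-bucket"):
    # lookup_bucket() returns None instead of raising when the bucket is absent.
    print(name, "->", "exists" if client.lookup_bucket(name) else "not found")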

AWS Api Gateway Custom Domain Name with undefined stage

I'm trying to set up a Custom Domain Name in AWS API Gateway where callers have to specify the stage name explicitly after any base path name. It is something I did in the past, but now it seems that, since AWS updated the console interface, it is no longer possible.
The final url should be like:
https://example.com/{basePath}/{stage}/function
I tried using the console, but stage is now a mandatory field (chosen from a drop-down).
I tried using the AWS CLI, but stage is again a mandatory field:
aws: error: the following arguments are required: --stage
I tried using Boto3, following the documentation (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/apigateway.html#APIGateway.Client.create_base_path_mapping); even though the docs say stage can be specified as '(none)' ("The name of the API's stage that you want to use for this mapping. Specify '(none)' if you want callers to explicitly specify the stage name after any base path name."), doing this returns an error:
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the CreateBasePathMapping operation: Invalid stage identifier specified
What is funny (or frustrating) is that I have some custom domain names created with the old console that work perfectly, without any stage defined.
It is still possible to specify only the "API ID" and "Path" and leave out the "stage" parameter. I have tried this both from the console and the CLI:
From the console: The "Stage" setting is a drop-down as you mentioned, but it can be left blank (don't select anything). If you did select a stage, delete the API mapping and add it again.
From the CLI: I just tried this as well and it works fine for me on CLI version aws-cli/1.18.69 Python/3.7.7 Darwin/18.7.0 botocore/1.16.19:
$ aws apigateway create-base-path-mapping --domain-name **** --rest-api-id *** --base-path test
{
"basePath": "test",
"restApiId": "***"
}
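For completeness, the same thing should work from Boto3 by simply omitting the stage argument; a rough sketch (domain, base path and API ID are placeholders, and I haven't verified this against every API Gateway version):
import boto3

apigw = boto3.client("apigateway")
# Omitting `stage` leaves the mapping stage-less, so callers append the stage
# themselves: https://example.com/test/{stage}/function
resp = apigw.create_base_path_mapping(
    domainName="example.com",  # placeholder custom domain
    basePath="test",           # placeholder base path
    restApiId="abc123def4",    # placeholder REST API ID
)
print(resp["basePath"], resp.get("stage"))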

Permissions Issue with Google Cloud Data Fusion

I'm following the instructions in the Cloud Data Fusion sample tutorial and everything seems to work fine, until I try to run the pipeline right at the end. Cloud Data Fusion Service API permissions are set for the Google-managed service account as per the instructions. The pipeline preview function works without any issues.
However, when I deploy and run the pipeline it fails after a couple of minutes. Shortly after the status changes from provisioning to running, the pipeline stops with the following permissions error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X.",
    "reason" : "forbidden"
  } ],
  "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X."
}
xxxxxxxxxxx-compute@developer.gserviceaccount.com is the default Compute Engine service account for my project.
"Project X" is not one of mine, though; I have no idea why the pipeline startup code is trying to create a bucket there. It does successfully create temporary buckets (one called df-xxx and one called dataproc-xxx) in my project before it fails.
I have tried this with two separate accounts and get the same error in both places. I had tried adding Storage Admin roles to the various service accounts, to no avail, but that was before I realized it was attempting to access a different project entirely.
I believe I was able to reproduce this. What's happening is that the BigQuery Source plugin first creates a temporary working GCS bucket to export the data to, and I suspect it is attempting to create it in the Dataset Project ID by default, instead of your own project as it should.
As a workaround, create a GCS bucket in your account, and then in the BigQuery Source configuration of your pipeline, set the "Temporary Bucket Name" configuration to "gs://<your-bucket-name>"
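A minimal sketch of the bucket-creation half of that workaround, assuming the google-cloud-storage Python client; the project ID, bucket name and location are placeholders, and the "Temporary Bucket Name" field itself still has to be set in the plugin configuration:
from google.cloud import storage

MY_PROJECT = "my-project-id"       # your project, not the dataset's project
BUCKET_NAME = "my-df-temp-bucket"  # placeholder bucket name

client = storage.Client(project=MY_PROJECT)
bucket = client.create_bucket(BUCKET_NAME, location="us-central1")
# Then set "Temporary Bucket Name" in the BigQuery Source plugin to:
print(f"gs://{bucket.name}")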
You are missing the permission setup steps that come after you create an instance. The instructions for giving your service account the right permissions are on this page: https://cloud.google.com/data-fusion/docs/how-to/create-instance

Force CloudFront distribution/file update

I'm using Amazon's CloudFront to serve static files of my web apps.
Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed?
Amazon recommends that you version your files, like logo_1.gif, logo_2.gif and so on, as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
Good news. Amazon finally added an Invalidation Feature. See the API Reference.
This is a sample request from the API Reference:
POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml
<InvalidationBatch>
  <Path>/image1.jpg</Path>
  <Path>/image2.jpg</Path>
  <Path>/videos/movie.flv</Path>
  <CallerReference>my-batch</CallerReference>
</InvalidationBatch>
As of March 19, Amazon now allows Cloudfront's cache TTL to be 0 seconds, thus you (theoretically) should never see stale objects. So if you have your assets in S3, you could simply go to AWS Web Panel => S3 => Edit Properties => Metadata, then set your "Cache-Control" value to "max-age=0".
This is straight from the API documentation:
To control whether CloudFront caches an object and for how long, we recommend that you use the Cache-Control header with the max-age= directive. CloudFront caches the object for the specified number of seconds. (The minimum value is 0 seconds.)
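If you upload objects from code rather than through the console, the same header can be set with Boto3; a small sketch (bucket and key are placeholders):
import boto3

s3 = boto3.client("s3")
with open("logo.gif", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",         # placeholder bucket
        Key="assets/logo.gif",      # placeholder key
        Body=f,
        ContentType="image/gif",
        # max-age=0 forces CloudFront to revalidate against S3 on every request.
        CacheControl="max-age=0",
    )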
With the Invalidation API, it does get updated in a few minutes.
Check out PHP Invalidator.
Bucket Explorer has a UI that makes this pretty easy now. Here's how:
Right click your bucket. Select "Manage Distributions."
Right click your distribution. Select "Get Cloudfront invalidation list"
Then select "Create" to create a new invalidation list.
Select the files to invalidate, and click "Invalidate." Wait 5-15 minutes.
Automated update setup in 5 mins
OK, guys. The best possible way right now to perform an automatic CloudFront update (invalidation) is to create a Lambda function that is triggered every time a file is uploaded to the S3 bucket (a new one or an overwritten one).
Even if you have never used Lambda functions before, it is really easy -- just follow my step-by-step instructions and it will take just 5 minutes:
Step 1
Go to https://console.aws.amazon.com/lambda/home and click Create a lambda function
Step 2
Click on Blank Function (custom)
Step 3
Click on the empty (dashed) box and select S3 from the combo box
Step 4
Select your Bucket (same as for CloudFront distribution)
Step 5
Set an Event Type to "Object Created (All)"
Step 6
Set Prefix and Suffix or leave it empty if you don't know what it is.
Step 7
Check Enable trigger checkbox and click Next
Step 8
Name your function (something like: YourBucketNameS3ToCloudFrontOnCreateAll)
Step 9
Select Python 2.7 (or later) as Runtime
Step 10
Paste the following code in place of the default Python code:
from __future__ import print_function
import boto3
import time

def lambda_handler(event, context):
    # Create a CloudFront invalidation for every object that was just uploaded.
    for items in event["Records"]:
        path = "/" + items["s3"]["object"]["key"]
        print(path)
        client = boto3.client('cloudfront')
        invalidation = client.create_invalidation(
            DistributionId='_YOUR_DISTRIBUTION_ID_',
            InvalidationBatch={
                'Paths': {
                    'Quantity': 1,
                    'Items': [path]
                },
                'CallerReference': str(time.time())
            })
Step 11
Open https://console.aws.amazon.com/cloudfront/home in a new browser tab and copy your CloudFront distribution ID for use in next step.
Step 12
Return to the Lambda tab and paste your distribution ID in place of _YOUR_DISTRIBUTION_ID_ in the Python code. Keep the surrounding quotes.
Step 13
Set handler: lambda_function.lambda_handler
Step 14
Click on the role combo box and select Create a custom role. A new tab will open in your browser.
Step 15
Click View policy document, click Edit, click OK and replace the role definition with the following (as is):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Step 16
Click Allow. This will return you to the Lambda tab. Double-check that the role you just created is selected in the Existing role combo box.
Step 17
Set Memory (MB) to 128 and Timeout to 5 sec.
Step 18
Click Next, then click Create function
Step 19
You are good to go! From now on, each time you upload or re-upload any file to S3, it will be invalidated at all CloudFront edge locations.
PS - When you are testing, make sure that your browser is loading images from CloudFront, not from its local cache.
PPS - Please note that only the first 1,000 invalidation paths per month are free; each invalidation path over the limit costs $0.005 USD. Additional charges for the Lambda function may apply, but it is extremely cheap.
If you have boto installed (which is not just for python, but also installs a bunch of useful command line utilities), it offers a command line util specifically called cfadmin or 'cloud front admin' which offers the following functionality:
Usage: cfadmin [command]
cmd - Print help message, optionally about a specific function
help - Print help message, optionally about a specific function
invalidate - Create a cloudfront invalidation request
ls - List all distributions and streaming distributions
You invalidate things by running:
$sam# cfadmin invalidate <distribution> <path>
One very easy way to do it is FOLDER versioning.
So if your static files number in the hundreds, for example, simply put all of them into a folder named by year + version.
For example, I use a folder called 2014_v1 where I keep all my static files...
So inside my HTML I always put the reference to the folder (of course I have a PHP include where I set the name of the folder), so by changing one file it actually changes in all my PHP files.
If I want a complete refresh, I simply rename the folder to 2014_v2 in my source and change the PHP include to 2014_v2.
All the HTML automatically changes and asks for the new path, CloudFront misses the cache and requests it from the source.
Example:
SOURCE.mydomain.com is my source,
cloudfront.mydomain.com is a CNAME to the CloudFront distribution.
So the PHP calls this file:
cloudfront.mydomain.com/2014_v1/javascript.js
and when I want a full refresh, I simply rename the folder in the source to "2014_v2" and change the PHP include to set the folder to "2014_v2".
Like this there is no invalidation delay and NO COST!
This is my first post on Stack Overflow; I hope I did it well!
In Ruby, using the fog gem:
AWS_ACCESS_KEY = ENV['AWS_ACCESS_KEY_ID']
AWS_SECRET_KEY = ENV['AWS_SECRET_ACCESS_KEY']
AWS_DISTRIBUTION_ID = ENV['AWS_DISTRIBUTION_ID']

conn = Fog::CDN.new(
  :provider => 'AWS',
  :aws_access_key_id => AWS_ACCESS_KEY,
  :aws_secret_access_key => AWS_SECRET_KEY
)

images = ['/path/to/image1.jpg', '/path/to/another/image2.jpg']
conn.post_invalidation AWS_DISTRIBUTION_ID, images
Even with invalidation, it still takes 5-10 minutes for the invalidation to process and refresh on all Amazon edge servers.
The current AWS CLI supports invalidation in preview mode. Run the following in your console once:
aws configure set preview.cloudfront true
I deploy my web project using npm. I have the following scripts in my package.json:
"scripts": {
  "build.prod": "ng build --prod --aot",
  "aws.deploy": "aws s3 sync dist/ s3://www.mywebsite.com --delete --region us-east-1",
  "aws.invalidate": "aws cloudfront create-invalidation --distribution-id [MY_DISTRIBUTION_ID] --paths /*",
  "deploy": "npm run build.prod && npm run aws.deploy && npm run aws.invalidate"
}
Having the scripts above in place you can deploy your site with:
npm run deploy
Set TTL=1 hour and replace
http://developer.amazonwebservices.com/connect/ann.jspa?annID=655
Just posting to inform anyone visiting this page (the first result for 'Cloudfront File Refresh')
that there is an easy-to-use, easily accessible online invalidator available at swook.net.
This new invalidator is:
Fully online (no installation)
Available 24x7 (hosted by Google) and does not require any memberships.
There is history support, and path checking to let you invalidate your files with ease. (Often with just a few clicks after invalidating for the first time!)
It's also very secure, as you'll find out when reading its release post.
Full disclosure: I made this. Have fun!
Go to CloudFront.
Click on your ID/Distributions.
Click on Invalidations.
Click Create Invalidation.
In the giant example box, type * and click Invalidate.
Done
If you are using AWS, you probably also use its official CLI tool (sooner or later). AWS CLI version 1.9.12 or above supports invalidating a list of file names.
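For reference, the Boto3 equivalent of invalidating a list of paths in one request looks roughly like this (distribution ID and paths are placeholders):
import time
import boto3

cf = boto3.client("cloudfront")
paths = ["/index.html", "/css/app.css", "/js/app.js"]  # placeholder paths
cf.create_invalidation(
    DistributionId="EXXXXXXXXXXXXX",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": len(paths), "Items": paths},
        # CallerReference must be unique per request.
        "CallerReference": str(time.time()),
    },
)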