After a Google Cloud quota update, I can't run my terragrunt/terraform code due to a strange error. The same code worked before with another project on the same account. After I tried to recreate the project (to get a new, clean project), a "Billing Quota" popup appeared and I asked support to change the quota.
I got the following message from support:
Dear Developer,
We have approved your request for additional quota. Your new quota should take effect within one hour of receiving this message.
And now (one day later) terragrunt is not working, failing with this error:
Missing required GCS remote state configuration location
Here is what I have:
a service account for pipelines with the Project Editor and Service Networking Admin roles;
a bucket without public access (europe-west3);
the following terragrunt config:
remote_state {
  backend = "gcs"
  config = {
    project = get_env("TF_VAR_project")
    bucket  = "bucket name"
    prefix  = "${path_relative_to_include()}"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
I'm also running the following pipeline:
- terragrunt run-all init
- terragrunt run-all validate
- terragrunt run-all plan
- terragrunt run-all apply --terragrunt-non-interactive -auto-approve
and it's failing on init with the error above.
The project and credentials are correct (the credentials are stored in the GOOGLE_CREDENTIALS env variable as JSON without newlines or whitespace).
I also tried specifying "location" in "config", but then got an error that the bucket was not found in the project.
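For reference, a minimal sketch of how the bucket's visibility could be double-checked with the same credentials (this assumes the google-cloud-storage Python client; the project ID and bucket name are placeholders):

from google.cloud import storage

# Assumes the service account key is available via GOOGLE_APPLICATION_CREDENTIALS.
client = storage.Client(project="my-project-id")   # placeholder project ID
bucket = client.lookup_bucket("bucket name")        # returns None if not found
print(bucket.location if bucket else "bucket not visible to these credentials")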
Does anybody know how to fix this, or where the problem could be?
It worked before I got the quota change.
I am following this tutorial to upload my existing Django project running locally to Google Cloud Run. I believe I have followed all the steps correctly to create the bucket and grant it the necessary permissions. But when I try to run:
gcloud builds submit \
    --config cloudmigrate.yaml \
    --substitutions=_INSTANCE_NAME=cgps-reg-2-postgre-sql,_REGION=us-central1
I get the error:
Step #3 - "collect static": google.api_core.exceptions.NotFound: 404 POST https://storage.googleapis.com/upload/storage/v1/b/cgps-registration-2_cgps-reg-2-static-files-bucket/o?uploadType=multipart&predefinedAcl=publicRead:
I was a little confused by this line that seems to tell you to put the bucket name in the location field, but I think it's perhaps just a typo in the tutorial. I was not sure if I should leave the location at the default "Multi-Region" or change it to "us-central1", where everything else in the project is.
I interpreted the instructions for telling the project the name of the bucket as PROJECT_ID + "_" + BUCKET_NAME:
or in my case
cgps-registration-2_cgps-reg-2-static-files-bucket
But this naming convention is evidently not correct, as the error says it cannot find a bucket with this name. So what am I missing here?
Credit for this answer really goes to dazwilken. The answer he gave in the comment is the correct one:
Your bucket name is cgps-reg-2-static-files-bucket. This is its globally unique name. You should not prefix it (again) with the project name when referencing it. The error is telling you (correctly) that the bucket (called cgps-registration-2_cgps-reg-2-static-files-bucket) does not exist. It does not. The bucket is called cgps-reg-2-static-files-bucket.
Because bucket names must be unique, one way to create them is to combine another unique name, i.e. the Google Cloud Project ID, in their naming. The tutorial likely confused you by using this approach without explaining it.
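If in doubt, a short listing like the following should confirm the exact bucket name the project can see (a sketch assuming the google-cloud-storage Python client and default credentials; it is not part of the tutorial):

from google.cloud import storage

# List the buckets visible to this project to confirm the exact name
# (expecting "cgps-reg-2-static-files-bucket", with no project prefix).
client = storage.Client(project="cgps-registration-2")
for bucket in client.list_buckets():
    print(bucket.name)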
I'm trying to set up a Custom Domain Name in AWS API Gateway where callers have to explicitly specify the stage name after any base path name. It is something I did in the past, but now it seems that, since AWS updated the console interface, it is no longer possible.
The final url should be like:
https://example.com/{basePath}/{stage}/function
I tried using the console, but stage is now a mandatory field (chosen from a drop-down).
I tried using the AWS CLI, but stage is again a mandatory field:
aws: error: the following arguments are required: --stage
I tried using Boto3, following the documentation (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/apigateway.html#APIGateway.Client.create_base_path_mapping), but even though stage can supposedly be specified as '(none)' ("The name of the API's stage that you want to use for this mapping. Specify '(none)' if you want callers to explicitly specify the stage name after any base path name."), doing so returns an error:
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the CreateBasePathMapping operation: Invalid stage identifier specified
What is funny (or frustrating) is that I have some custom domain names created with the old console and that are perfectly working, without any stage defined.
It is still possible to specify only the "API ID" and "Path" and leave out the "stage" parameter. I have tried this both from the console and the CLI:
From the console: The "Stage" setting is a drop-down as you mentioned, but it can be left blank (don't select anything). If you did select a stage, delete the API mapping and add it again.
From the CLI: I just tried this as well, and it works fine for me on CLI version aws-cli/1.18.69 Python/3.7.7 Darwin/18.7.0 botocore/1.16.19:
$ aws apigateway create-base-path-mapping --domain-name **** --rest-api-id *** --base-path test
{
    "basePath": "test",
    "restApiId": "***"
}
I'm following the instructions in the Cloud Data Fusion sample tutorial and everything seems to work fine, until I try to run the pipeline right at the end. Cloud Data Fusion Service API permissions are set for the Google managed Service account as per the instructions. The pipeline preview function works without any issues.
However, when I deploy and run the pipeline it fails after a couple of minutes. Shortly after the status changes from provisioning to running the pipeline stops with the following permissions error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X.",
    "reason" : "forbidden"
  } ],
  "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X."
}
xxxxxxxxxxx-compute@developer.gserviceaccount.com is the default Compute Engine service account for my project.
"Project X" is not one of mine, though; I've no idea why the pipeline startup code is trying to create a bucket there. It does successfully create temporary buckets (one called df-xxx and one called dataproc-xxx) in my project before it fails.
I've tried this with two separate accounts and get the same error in both places. I had tried adding Storage Admin roles to the various service accounts to no avail, but that was before I realized it was attempting to access a different project entirely.
I believe I was able to reproduce this. What's happening is that the BigQuery Source plugin first creates a temporary working GCS bucket to export the data to, and I suspect it is attempting to create it in the Dataset Project ID by default, instead of your own project as it should.
As a workaround, create a GCS bucket in your own project, and then in the BigQuery Source configuration of your pipeline, set the "Temporary Bucket Name" configuration to "gs://<your-bucket-name>".
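For completeness, a minimal sketch of creating such a bucket with the google-cloud-storage Python client (the project ID, bucket name, and location below are placeholders):

from google.cloud import storage

# Create the temporary bucket in *your* project, then reference it as
# gs://<your-bucket-name> in the BigQuery Source plugin configuration.
client = storage.Client(project="your-project-id")
bucket = client.create_bucket("your-df-temp-bucket", location="us-central1")
print("created gs://" + bucket.name)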
You are missing the permission setup steps after creating an instance. The instructions for giving your service account the right permissions are on this page: https://cloud.google.com/data-fusion/docs/how-to/create-instance
I'm using Amazon's CloudFront to serve static files of my web apps.
Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed?
Amazon recommends that you version your files like logo_1.gif, logo_2.gif and so on as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
Good news. Amazon finally added an Invalidation Feature. See the API Reference.
This is a sample request from the API Reference:
POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml

<InvalidationBatch>
   <Path>/image1.jpg</Path>
   <Path>/image2.jpg</Path>
   <Path>/videos/movie.flv</Path>
   <CallerReference>my-batch</CallerReference>
</InvalidationBatch>
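If you are scripting this today, the same request can also be issued through an SDK; here is a sketch using boto3's create_invalidation (the distribution ID is a placeholder):

import time
import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {
            "Quantity": 3,
            "Items": ["/image1.jpg", "/image2.jpg", "/videos/movie.flv"],
        },
        # CallerReference must be unique per request
        "CallerReference": "my-batch-" + str(time.time()),
    },
)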
As of March 19, Amazon now allows CloudFront's cache TTL to be 0 seconds, so you (theoretically) should never see stale objects. So if you have your assets in S3, you can simply go to the AWS Web Panel => S3 => Edit Properties => Metadata, then set your "Cache-Control" value to "max-age=0".
This is straight from the API documentation:
To control whether CloudFront caches an object and for how long, we recommend that you use the Cache-Control header with the max-age= directive. CloudFront caches the object for the specified number of seconds. (The minimum value is 0 seconds.)
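If you prefer to script it, the same header can be set on an existing S3 object with boto3 by copying the object over itself with replaced metadata. A sketch with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")
s3.copy_object(
    Bucket="my-assets-bucket",
    Key="logo.gif",
    CopySource={"Bucket": "my-assets-bucket", "Key": "logo.gif"},
    CacheControl="max-age=0",
    ContentType="image/gif",      # re-specify, since REPLACE resets metadata
    MetadataDirective="REPLACE",  # required so the new Cache-Control is stored
)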
With the Invalidation API, it does get updated in a few minutes.
Check out PHP Invalidator.
Bucket Explorer has a UI that makes this pretty easy now. Here's how:
Right click your bucket. Select "Manage Distributions."
Right click your distribution. Select "Get Cloudfront invalidation list."
Then select "Create" to create a new invalidation list.
Select the files to invalidate, and click "Invalidate." Wait 5-15 minutes.
Automated update setup in 5 mins
OK guys, the best way for now to perform an automatic CloudFront update (invalidation) is to create a Lambda function that is triggered every time a file is uploaded to the S3 bucket (new or overwritten).
Even if you have never used Lambda functions before, it is really easy -- just follow these step-by-step instructions; it will take about 5 minutes:
Step 1
Go to https://console.aws.amazon.com/lambda/home and click Create a lambda function
Step 2
Click on Blank Function (custom)
Step 3
Click on the empty (dashed) box and select S3 from the dropdown
Step 4
Select your Bucket (same as for CloudFront distribution)
Step 5
Set an Event Type to "Object Created (All)"
Step 6
Set Prefix and Suffix, or leave them empty if you are not sure what they are.
Step 7
Check the Enable trigger checkbox and click Next
Step 8
Name your function (something like: YourBucketNameS3ToCloudFrontOnCreateAll)
Step 9
Select Python 2.7 (or later) as Runtime
Step 10
Paste the following code in place of the default Python code:
from __future__ import print_function

import boto3
import time

def lambda_handler(event, context):
    # Creates one invalidation per uploaded object; S3 usually sends one record per event.
    client = boto3.client('cloudfront')
    for items in event["Records"]:
        # Note: object keys arrive URL-encoded in the event and are used as-is here.
        path = "/" + items["s3"]["object"]["key"]
        print(path)
        invalidation = client.create_invalidation(
            DistributionId='_YOUR_DISTRIBUTION_ID_',  # replaced in Step 12
            InvalidationBatch={
                'Paths': {
                    'Quantity': 1,
                    'Items': [path]
                },
                'CallerReference': str(time.time())
            })
Step 11
Open https://console.aws.amazon.com/cloudfront/home in a new browser tab and copy your CloudFront distribution ID for use in the next step.
Step 12
Return to the Lambda tab and paste your distribution ID in place of _YOUR_DISTRIBUTION_ID_ in the Python code. Keep the surrounding quotes.
Step 13
Set handler: lambda_function.lambda_handler
Step 14
Click on the role combobox and select Create a custom role. A new browser tab will open.
Step 15
Click View policy document, click Edit, click OK, and replace the role definition with the following (as is):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Step 16
Click Allow. This will return you to the Lambda. Double-check that the role you just created is selected in the Existing role combobox.
Step 17
Set Memory (MB) to 128 and Timeout to 5 sec.
Step 18
Click Next, then click Create function
Step 19
You are good to go! From now on, each time you upload or re-upload any file to S3, it will be invalidated in all CloudFront edge locations.
PS - When you are testing, make sure that your browser is loading images from CloudFront, not from the local cache.
PPS - Please note that only the first 1,000 invalidation paths per month are free; each invalidation path over the limit costs $0.005 USD. Additional charges for the Lambda function may apply, but they are extremely small.
If you have boto installed (which is not just a Python library, but also installs a bunch of useful command line utilities), it offers a command line utility called cfadmin or 'cloud front admin', which offers the following functionality:
Usage: cfadmin [command]
cmd - Print help message, optionally about a specific function
help - Print help message, optionally about a specific function
invalidate - Create a cloudfront invalidation request
ls - List all distributions and streaming distributions
You invalidate things by running:
$sam# cfadmin invalidate <distribution> <path>
One very easy way to do it is folder versioning.
If you have hundreds of static files, for example, simply put all of them into a folder named with the year plus a version.
For example, I use a folder called 2014_v1 where I keep all my static files.
Inside my HTML I always reference that folder. (Of course I have a PHP include where I set the name of the folder, so changing one file changes it in all my PHP files.)
If I want a complete refresh, I simply rename the folder to 2014_v2 in my source and change the PHP include to 2014_v2.
All the HTML automatically changes and requests the new path; CloudFront gets a cache MISS and requests it from the source.
Example:
SOURCE.mydomain.com is my source,
cloudfront.mydomain.com is a CNAME to the CloudFront distribution.
So the PHP calls this file:
cloudfront.mydomain.com/2014_v1/javascript.js
When I want a full refresh, I simply rename the folder in the source to "2014_v2" and change the PHP include to set the folder to "2014_v2".
This way there is no invalidation delay and NO COST!
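Purely as an illustration of the same idea outside PHP, here is a tiny Python sketch that keeps the versioned folder name in one place (the host and folder names mirror the example above):

# Bump ASSET_VERSION (e.g. to "2014_v2") to force CloudFront cache misses.
ASSET_VERSION = "2014_v1"
CDN_HOST = "cloudfront.mydomain.com"

def asset_url(filename):
    return "https://" + CDN_HOST + "/" + ASSET_VERSION + "/" + filename

print(asset_url("javascript.js"))  # https://cloudfront.mydomain.com/2014_v1/javascript.js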
This is my first post on Stack Overflow, I hope I did it well!
In Ruby, using the fog gem:
AWS_ACCESS_KEY = ENV['AWS_ACCESS_KEY_ID']
AWS_SECRET_KEY = ENV['AWS_SECRET_ACCESS_KEY']
AWS_DISTRIBUTION_ID = ENV['AWS_DISTRIBUTION_ID']
conn = Fog::CDN.new(
  :provider => 'AWS',
  :aws_access_key_id => AWS_ACCESS_KEY,
  :aws_secret_access_key => AWS_SECRET_KEY
)
images = ['/path/to/image1.jpg', '/path/to/another/image2.jpg']
conn.post_invalidation AWS_DISTRIBUTION_ID, images
Even with invalidation, it still takes 5-10 minutes for the invalidation to process and refresh on all Amazon edge servers.
The current AWS CLI supports invalidation in preview mode. Run the following in your terminal once:
aws configure set preview.cloudfront true
I deploy my web project using npm. I have the following scripts in my package.json:
{
  "scripts": {
    "build.prod": "ng build --prod --aot",
    "aws.deploy": "aws s3 sync dist/ s3://www.mywebsite.com --delete --region us-east-1",
    "aws.invalidate": "aws cloudfront create-invalidation --distribution-id [MY_DISTRIBUTION_ID] --paths \"/*\"",
    "deploy": "npm run build.prod && npm run aws.deploy && npm run aws.invalidate"
  }
}
Having the scripts above in place you can deploy your site with:
npm run deploy
Set TTL=1 hour and replace the file. See also:
http://developer.amazonwebservices.com/connect/ann.jspa?annID=655
Just posting to inform anyone visiting this page (the first result for 'Cloudfront File Refresh') that there is an easy-to-use and accessible online invalidator available at swook.net.
This new invalidator is:
Fully online (no installation)
Available 24x7 (hosted by Google) and does not require any memberships.
There is history support, and path checking to let you invalidate your files with ease. (Often with just a few clicks after invalidating for the first time!)
It's also very secure, as you'll find out when reading its release post.
Full disclosure: I made this. Have fun!
Go to CloudFront.
Click on your distribution ID.
Click on Invalidations.
Click Create Invalidation.
In the giant example box, type * and click Invalidate.
Done
If you are using AWS, you probably also use its official CLI tool (sooner or later). AWS CLI version 1.9.12 or above supports invalidating a list of file names.