Get CloudFront usage report via AWS CLI

I have a bunch of CloudFront distributions scattered across a number of AWS accounts. I'd like to get the Usage Reports for all CloudFront distros across all AWS accounts.
Now, I have the change-account bit already automated, but I'm not sure how to get the CSV report via the AWS CLI.
I know I can do some ClickOps and download the report via the CloudFront Console, but I can't find the command to get the report with the AWS CLI.
I know I can get CloudFront metrics via the CloudWatch API, but the documentation doesn't mention which API endpoint I should be querying.
Also, there's aws cloudwatch get-metric-statistics, but I'm not sure how to use that to download the CloudFront Usage CSV report.
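For example, I can already pull per-distribution metrics with something like the call below (CloudFront metrics are published to CloudWatch in us-east-1 under the AWS/CloudFront namespace, with DistributionId and Region=Global dimensions; the distribution ID here is a placeholder), but that still isn't the Usage CSV:
# Per-distribution request count, one data point per hour (not the Usage CSV).
aws cloudwatch get-metric-statistics \
  --region us-east-1 \
  --namespace AWS/CloudFront \
  --metric-name Requests \
  --dimensions Name=DistributionId,Value=EDFDVBD6EXAMPLE Name=Region,Value=Global \
  --start-time 2022-01-28T00:00:00Z \
  --end-time 2022-01-29T00:00:00Z \
  --period 3600 \
  --statistics Sum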
Question: How can I get the CloudFront Usage Report for all distributions in an AWS account using the AWS CLI?

I can't find a CloudFront API to fetch the Usage Report. I know such a report can be constructed from CloudWatch logs, but I'm lazy and I'd like to download the report directly from CloudFront.
There is no such command in the AWS CLI, nor such a function in Boto3 (the AWS SDK for Python), yet. However, there are a couple of workarounds you can use:
Use Selenium to open the CloudFront page of the AWS Console and click the Download CSV button. You can script that in Python.
Use the curl command that the CloudFront Console itself issues to fetch the results in XML format, then convert them to CSV with Python or any CLI tool. To capture that curl command, open your browser's developer tools (for example the Network tab of Chrome's Inspect view), click the Download CSV button, find the request named cloudfrontreporting in the Network tab, right-click it, and choose Copy as cURL.
The curl command is as follows:
curl 'https://console.aws.amazon.com/cloudfront/v3/api/cloudfrontreporting' \
-H 'authority: console.aws.amazon.com' \
-H 'sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"' \
-H 'content-type: application/json' \
-H 'x-csrf-token: ${CSRF_TOKEN}' \
-H 'accept: */*' \
-H 'origin: https://console.aws.amazon.com' \
-H 'sec-fetch-site: same-origin' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-dest: empty' \
-H 'referer: https://console.aws.amazon.com/cloudfront/v3/home?region=eu-central-1' \
-H 'accept-language: en-US,en;q=0.9' \
-H 'cookie: ${COOKIE}' \
--data-raw '{"headers":{"X-Amz-User-Agent":"aws-sdk-js/2.849.0 promise"},"path":"/2014-01-01/reports/series","method":"POST","region":"us-east-1","params":{},"contentString":"<DataPointSeriesRequestFilters xmlns=\"http://cloudfront.amazonaws.com/doc/2014-01-01/\"><Report>Usage</Report><StartTime>2022-01-28T11:23:35Z</StartTime><EndTime>2022-02-04T11:23:35Z</EndTime><TimeBucketSizeMinutes>ONE_DAY</TimeBucketSizeMinutes><ResourceId>All Web Distributions (excludes deleted)</ResourceId><Region>ALL</Region><Series><DataKey><Name>HTTP</Name><Description></Description></DataKey><DataKey><Name>HTTPS</Name><Description></Description></DataKey><DataKey><Name>HTTP-BYTES</Name><Description></Description></DataKey><DataKey><Name>HTTPS-BYTES</Name><Description></Description></DataKey><DataKey><Name>BYTES-OUT</Name><Description></Description></DataKey><DataKey><Name>BYTES-IN</Name><Description></Description></DataKey><DataKey><Name>FLE</Name><Description></Description></DataKey></Series></DataPointSeriesRequestFilters>","operation":"listDataPointSeries"}' \
--compressed > report.xml
where ${CSRF_TOKEN} and ${COOKIE} need to be provided by you; they can be copied from the browser or prepared programmatically.
Use the logs generated by CloudFront, as described in the answer (and the code in the question) here: Boto3 CloudFront Object Usage Count

You'll need to use the Cost Explorer API for that.
aws ce get-cost-and-usage \
--time-period Start=2022-01-01,End=2022-01-03 \
--granularity MONTHLY \
--metrics "BlendedCost" "UnblendedCost" "UsageQuantity" \
--group-by Type=DIMENSION,Key=SERVICE Type=TAG,Key=Environment
https://docs.aws.amazon.com/cli/latest/reference/ce/get-cost-and-usage.html#examples
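If you only want CloudFront, you can narrow the same call down with a filter and group by usage type. A sketch: the service name below is how CloudFront usually appears in Cost Explorer; verify it for your account with aws ce get-dimension-values.
# CloudFront-only usage and cost, grouped by usage type.
aws ce get-cost-and-usage \
  --time-period Start=2022-01-01,End=2022-02-01 \
  --granularity DAILY \
  --metrics "UsageQuantity" "UnblendedCost" \
  --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon CloudFront"]}}' \
  --group-by Type=DIMENSION,Key=USAGE_TYPE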

Related

GCP Impersonation not working with BQ command

I am trying to use impersonation with the bq command but am getting the error below.
This is the command I am trying to run:
gcloud config set auth/impersonate_service_account sa-account ;\
gcloud config list ; \
bq query --use_legacy_sql=false "SELECT * from prj-name.dataset-name.table-name ORDER BY Version" ;\
This is the error I am getting:
Your active configuration is: [default]
+ bq query --use_legacy_sql=false SELECT * from xxx-prj.dataset-name.table-name ORDER BY Version
ERROR: (bq) gcloud is configured to impersonate service account [XXXXXX.iam.gserviceaccount.com] but impersonation support is not available.
What change is needed here?
Here is how you can use service account impersonation with the BigQuery API from the gcloud CLI:
Impersonate the relevant service account:
gcloud config set auth/impersonate_service_account SERVICE_ACCOUNT
Run the following curl command, specifying your PROJECT_ID and SQL_QUERY:
curl --request POST \
'https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries' \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{"query":"SQL_QUERY"}' \
--compressed
P.S. gcloud auth print-access-token makes the call use the access token of the impersonated service account, which is what allows you to run the queries.
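Alternatively, you can skip the config change and impersonate per command, since --impersonate-service-account is a global gcloud flag; a minimal sketch:
# Mint a token as the service account for this one call only.
TOKEN=$(gcloud auth print-access-token --impersonate-service-account=SERVICE_ACCOUNT)
# Then pass "Authorization: Bearer $TOKEN" to the curl call above.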

Cannot create new AWS IAM role. Getting "Rate exceeded" error [duplicate]

This question already has answers here:
AWS Create Role Rate exceeded [closed]
(5 answers)
Closed 2 years ago.
I started exploring AWS and am stuck with a problem. I'm trying to create a new IAM role using the AWS Console and keep getting the same error:
"An error occurred
Your request has a problem. Please see the following details.
Rate exceeded"
Here is the short cURL request for creating the new IAM role.
curl 'https://console.aws.amazon.com/iam/api/roles' \
-H 'Connection: keep-alive' \
-H 'Accept: application/json, text/plain, */*' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36' \
-H 'Content-Type: application/json;charset=UTF-8' \
-H 'Origin: https://console.aws.amazon.com' \
-H 'Referer: https://console.aws.amazon.com/iam/home?region=eu-central-1' \
--data-binary '{"name":"AWSServiceRoleForAmazonElasticsearchService","description":"Allows
EC2 instances to call AWS services on your behalf.","trustPolicyDocument":"
{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":
[\"sts:AssumeRole\"],\"Principal\":{\"Service\":
[\"ec2.amazonaws.com\"]}}]}","scopeArn":null,"tags":[]}' \
Response
{"errors":[{"message":"Rate exceeded","code":"Throttling","httpStatus":400,"__type__":"ErrorMessage"}]}
What kind of rate was exceeded? Where can I find the limits for this rate?
Thank you a lot.
This is related to an ongoing incident.
We have identified the root cause of the increased error rates and latencies on the AWS IAM CreateRole and CreateServiceLinkedRole APIs and are working towards resolution. Other AWS services such as AWS CloudFormation whose features require these actions may also be impacted. User authentications and authorizations are not impacted.
You can view the progress here.
The suggestion is to try again.
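If you'd rather retry outside the console, the same role can be created from the CLI; this is just a sketch (the role name is a placeholder, and the trust policy is copied from the request in the question):
# Placeholder role name; trust policy taken from the console request above.
aws iam create-role \
  --role-name my-ec2-role \
  --description "Allows EC2 instances to call AWS services on your behalf." \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["sts:AssumeRole"],"Principal":{"Service":["ec2.amazonaws.com"]}}]}'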

how to add custom header upload to Google Cloud Storage?

I use Flask to create an API, but I am having trouble uploading to my Google Cloud Storage bucket when I add custom headers. FYI, the permission details on my server are the same as on my local machine, which I used to test image uploads to GCS (Storage Admin and Storage Object Admin); there are no problems on my local machine. But when I curl or test an upload from my server to my Google Cloud Storage bucket, the response is always the same:
"rc": 500,
"rm": "403 POST https://storage.googleapis.com/upload/storage/v1/b/konxxxxxx/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', )"
I'm testing in Postman using a custom header:
upload_key=asjaisjdaozmzlaljaxxxxx
and I curl like this:
curl --location --request POST 'http://14.210.211.xxx:9001/koxxx/upload_img?img_type=img_x' --header 'upload_key: asjaisjdaozmzlaljaxxxxx' --form 'img_file=@/home/user/image.png'
and I have confirmed with "gcloud auth list" that the login data I use on the server is correct and the same as on my local machine.
You have a permission error. To fix it, use the service account method; it's easy and straightforward.
create a service account
gcloud iam service-accounts create \
$SERVICE_ACCOUNT_NAME \
--display-name $SERVICE_ACCOUNT_NAME
add permissions to your service account
gcloud projects add-iam-policy-binding $PROJECT_NAME \
--role roles/storage.objectAdmin \
--member serviceAccount:$SA_EMAIL
$SA_EMAIL is the service account here. you can get it using:
SA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
download a key for the service account to a destination $SERVICE_ACCOUNT_DEST, activate it, and export an access token as $TOKEN (the key file itself is not a bearer token):
gcloud iam service-accounts keys create $SERVICE_ACCOUNT_DEST --iam-account $SA_EMAIL
gcloud auth activate-service-account --key-file=$SERVICE_ACCOUNT_DEST
export TOKEN=$(gcloud auth print-access-token)
upload to the Cloud Storage bucket using the REST API:
curl -X POST --data-binary @[OBJECT_LOCATION] \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: [OBJECT_CONTENT_TYPE]" \
"https://storage.googleapis.com/upload/storage/v1/b/[BUCKET_NAME]/o?uploadType=media&name=[OBJECT_NAME]"

How to enable server side encryption on DynamoDB via CLI?

I want to enable encryption on my production tables in DynamoDB. According to their docs at https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html#encryption.tutorial-cli I just use the --sse-specification flag; however, it's not working via the CLI.
I copied their exact command from the docs, below
aws dynamodb create-table \
--table-name Music \
--attribute-definitions \
AttributeName=Artist,AttributeType=S \
AttributeName=SongTitle,AttributeType=S \
--key-schema \
AttributeName=Artist,KeyType=HASH \
AttributeName=SongTitle,KeyType=RANGE \
--provisioned-throughput \
ReadCapacityUnits=10,WriteCapacityUnits=5 \
--sse-specification Enabled=true
Using their exact example, or any other contrived setup, I keep getting the same error message when run from the CLI:
Unknown options: --sse-specification, Enabled=true
Is it possible to turn this on from the CLI? The only other way I see is to create each table manually from the console and tick the encryption box during creation there.
My AWS CLI version is:
aws-cli/1.14.1 Python/2.7.10 Darwin/17.5.0 botocore/1.8.32
You just need to update your version of the CLI. Version 1.14.1 was released on 11/29/2017, SSE on DynamoDB wasn't released until 2/8/2018.
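After upgrading the CLI (one common path shown below; adjust for however you installed it), the create-table call from the docs accepts the flag, and encryption can also be enabled on an existing table, roughly like this:
# Upgrade the CLI (pip shown; use your install method of choice).
pip install --upgrade awscli

# SSE can also be switched on for an existing table:
aws dynamodb update-table \
  --table-name Music \
  --sse-specification Enabled=true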

InvalidSignatureException: Credential should be scoped to correct service: 'lex'

I am trying to call the Amazon Lex APIs through curl, and I am stuck with this error:
<InvalidSignatureException>
<Message>InvalidSignatureException: Credential should be scoped to correct service: 'lex'. </Message>
</InvalidSignatureException>
My curl request:
curl -X GET \
'https://runtime.lex.us-east-1.amazonaws.com/bots/botname/versions/versionoralias' \
-H 'authorization: AWS4-HMAC-SHA256 Credential=xxxxxxxxxxxx/20171228/us-east-1/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=xxxxxxxxxxxxxxxx' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'x-amz-date: 20171228T114646Z'
You should probably use the AWS CLI instead of cURL, since request signing is then handled for you. If you try to sign your AWS calls yourself, you're going to end up in a world of pain and 403 errors. (The immediate cause of the error here is that the credential scope in your Authorization header names execute-api rather than lex.)
The Lex API call you're looking for is here.
See this documentation to get started with the AWS CLI.
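For the request in the question (GET /bots/NAME/versions/VERSION), the CLI equivalent is roughly the following; the CLI takes care of SigV4 signing with the correct lex scope:
# CLI equivalent of GET /bots/botname/versions/versionoralias.
aws lex-models get-bot \
  --name botname \
  --version-or-alias versionoralias \
  --region us-east-1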