GCP Impersonation not working with BQ command - google-cloud-platform

I am trying to use impersonation with the bq command but am getting the error below.
This is the command I am trying to run:
gcloud config set auth/impersonate_service_account sa-account ;\
gcloud config list ; \
bq query --use_legacy_sql=false "SELECT * from prj-name.dataset-name.table-name ORDER BY Version" ;\
This is the error I am getting:
Your active configuration is: [default]
+ bq query --use_legacy_sql=false SELECT * from xxx-prj.dataset-name.table-name ORDER BY Version
ERROR: (bq) gcloud is configured to impersonate service account [XXXXXX.iam.gserviceaccount.com] but impersonation support is not available.
What change is needed here?

Here is how you can use service account impersonation with the BigQuery API via the gcloud CLI:
Impersonate the relevant service account:
gcloud config set auth/impersonate_service_account SERVICE_ACCOUNT
Run the following CURL command, specifying your PROJECT_ID and SQL_QUERY:
curl --request POST \
'https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries' \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{"query":"SQL_QUERY"}' \
--compressed
P.S. With impersonation configured, gcloud auth print-access-token returns an access token for the impersonated service account, which is what allows you to run queries.
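For example, a minimal sketch combining the two steps (PROJECT_ID, the dataset/table and the service account email are placeholders); using the global --impersonate-service-account flag ties the token explicitly to the service account:
TOKEN=$(gcloud auth print-access-token --impersonate-service-account=sa-account@PROJECT_ID.iam.gserviceaccount.com)
curl --request POST \
'https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries' \
-H "Authorization: Bearer ${TOKEN}" \
-H 'Content-Type: application/json' \
-d '{"query":"SELECT * FROM `PROJECT_ID.dataset_name.table_name` ORDER BY Version","useLegacySql":false}'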

Related

'stateInfo.state' filter not working for GCP IAM Recommender API

I'm trying to query the GCP IAM Recommender API (API documentation here) and fetch role revision recommendations for my project. I'm looking for ACTIVE recommendations only. However, the stateInfo.state input filter (listed in the documentation above) is not working for me. It returns the error Invalid Filter. Can someone please let me know what I am doing wrong here? Thanks.
Here's my API query: https://recommender.googleapis.com/v1/projects/my-demo-project/locations/global/recommenders/google.iam.policy.Recommender/recommendations?filter=stateInfo.state:ACTIVE
Please include a minimal reproducible example in questions:
PROJECT=[YOUR-PROJECT-ID]
LOCATION="global"
SERVICES=(
  "cloudresourcemanager"
  "recommender"
)
for SERVICE in "${SERVICES[@]}"
do
  gcloud services enable ${SERVICE}.googleapis.com \
  --project=${PROJECT}
done
ACCOUNT="tester"
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Minimal role for Recommender for IAM
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/recommender.iamViewer
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Be careful: this overwrites the default gcloud auth account
# Remember to revert to your own account (e.g. me@gmail.com) afterwards
gcloud auth activate-service-account --key-file=${PWD}/${ACCOUNT}.json
TOKEN=$(gcloud auth print-access-token ${EMAIL})
RECOMMENDER="google.iam.policy.Recommender"
PARENT="projects/${PROJECT}/locations/${LOCATION}/recommenders/${RECOMMENDER}"
FILTER="stateInfo.state=ACTIVE"
curl \
--header "Authorization: Bearer ${TOKEN}" \
"https://recommender.googleapis.com/v1/${PARENT}/recommendations?filter=${FILTER}"
Yields (HTTP 200):
{}
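As a cross-check, a sketch using the gcloud CLI instead of raw curl (assuming the recommender commands are available in your gcloud release and the caller holds roles/recommender.iamViewer); note that gcloud's --filter is applied client-side to the listed results:
gcloud recommender recommendations list \
--project=${PROJECT} \
--location=${LOCATION} \
--recommender=${RECOMMENDER} \
--filter="stateInfo.state:ACTIVE" \
--format=json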

get cloudfront usage report via aws cli

I have a bunch of CloudFront distributions scattered across a number of AWS accounts. I'd like to get the Usage Reports for all CloudFront distributions across all AWS accounts.
Now, I have the change-account bit already automated, but I'm not sure how to get the CSV report via the AWS CLI.
I know I can do some ClickOps and download the report via the CloudFront console, but I can't find the command to get the report with the AWS CLI.
I know I can get the CloudFront metrics via the CloudWatch API, but the documentation doesn't mention the API endpoint I should be querying.
Also, there's aws cloudwatch get-metric-statistics, but I'm not sure how to use that to download the CloudFront Usage CSV report.
Question: How can I get the CloudFront Usage Report for all distributions in an AWS account using the AWS CLI?
I can't find a CloudFront API to fetch the Usage Report. I know such a report can be constructed from CloudWatch logs, but I'm lazy and I'd like to download the report directly from CloudFront.
There is no such command in the AWS CLI or function in Boto3 (the AWS SDK for Python) yet, but there are a couple of workarounds you can use:
Use Selenium to access the AWS console for CloudFront and click the Download CSV button. You can write a script for that in Python.
You can use the curl request that the CloudFront console itself makes, fetch the results in XML format, and then convert them to CSV using Python or any CLI tool. To capture that request in Google Chrome (or any other browser of your choice), click the Download CSV button, open the Network tab of the Inspect console, find the item named cloudfrontreporting, right-click it and choose Copy as cURL.
The curl command looks like this:
curl 'https://console.aws.amazon.com/cloudfront/v3/api/cloudfrontreporting' \
-H 'authority: console.aws.amazon.com' \
-H 'sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"' \
-H 'content-type: application/json' \
-H 'x-csrf-token: ${CSRF_TOKEN}' \
-H 'accept: */*' \
-H 'origin: https://console.aws.amazon.com' \
-H 'sec-fetch-site: same-origin' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-dest: empty' \
-H 'referer: https://console.aws.amazon.com/cloudfront/v3/home?region=eu-central-1' \
-H 'accept-language: en-US,en;q=0.9' \
-H 'cookie: ${COOKIE}' \
--data-raw '{"headers":{"X-Amz-User-Agent":"aws-sdk-js/2.849.0 promise"},"path":"/2014-01-01/reports/series","method":"POST","region":"us-east-1","params":{},"contentString":"<DataPointSeriesRequestFilters xmlns=\"http://cloudfront.amazonaws.com/doc/2014-01-01/\"><Report>Usage</Report><StartTime>2022-01-28T11:23:35Z</StartTime><EndTime>2022-02-04T11:23:35Z</EndTime><TimeBucketSizeMinutes>ONE_DAY</TimeBucketSizeMinutes><ResourceId>All Web Distributions (excludes deleted)</ResourceId><Region>ALL</Region><Series><DataKey><Name>HTTP</Name><Description></Description></DataKey><DataKey><Name>HTTPS</Name><Description></Description></DataKey><DataKey><Name>HTTP-BYTES</Name><Description></Description></DataKey><DataKey><Name>HTTPS-BYTES</Name><Description></Description></DataKey><DataKey><Name>BYTES-OUT</Name><Description></Description></DataKey><DataKey><Name>BYTES-IN</Name><Description></Description></DataKey><DataKey><Name>FLE</Name><Description></Description></DataKey></Series></DataPointSeriesRequestFilters>","operation":"listDataPointSeries"}' \
--compressed > report.xml
where ${CSRF_TOKEN} and ${COOKIE} need to be provided by you; they can be copied from the browser or prepared programmatically.
Use the logs generated by CloudFront, as mentioned in the answer (and the code in the question) here: Boto3 CloudFront Object Usage Count
You'll need to use the Cost Explorer API for that.
aws ce get-cost-and-usage \
--time-period Start=2022-01-01,End=2022-01-03 \
--granularity MONTHLY \
--metrics "BlendedCost" "UnblendedCost" "UsageQuantity" \
--group-by Type=DIMENSION,Key=SERVICE Type=TAG,Key=Environment
https://docs.aws.amazon.com/cli/latest/reference/ce/get-cost-and-usage.html#examples
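If you only want CloudFront usage (rather than every service), a sketch assuming the SERVICE dimension value "Amazon CloudFront" in your accounts, grouped by usage type:
aws ce get-cost-and-usage \
--time-period Start=2022-01-01,End=2022-02-01 \
--granularity DAILY \
--metrics "UsageQuantity" \
--filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon CloudFront"]}}' \
--group-by Type=DIMENSION,Key=USAGE_TYPE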

SAML2AWS connecting to k8s issues

I use saml2aws with Okta authentication to access AWS from my local machine. I have also added the k8s cluster config to my machine.
While trying to connect to k8s, for example to list pods, a simple kubectl get pods returns the error [Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token' Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
But if I do saml2aws exec kubectl get pods, I am able to fetch pods.
I don't understand whether the problem is with how the credentials are stored, or where I should even begin to understand the problem.
Any kind of help will be appreciated.
To integrate saml2aws with Okta, you need to create a profile in saml2aws first.
Configure Profile
saml2aws configure \
--skip-prompt \
--mfa Auto \
--region <region, ex us-east-2> \
--profile <awscli_profile> \
--idp-account <saml2aws_profile_name> \
--idp-provider Okta \
--username <your email> \
--role arn:aws:iam::<account_id>:role/<aws_role_initial_assume> \
--session-duration 28800 \
--url "https://<company>.okta.com/home/amazon_aws/......."
The URL, region, etc. can be obtained from the Okta integration UI.
Login
saml2aws login --idp-account <saml2aws_profile_name>
That should prompt you for your password and MFA, if enabled.
Verification
aws --profile=<awscli_profile> s3 ls
Then finally, export AWS_PROFILE:
export AWS_PROFILE=<awscli_profile>
and use awscli directly
aws sts get-caller-identity
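For the original kubectl issue, a minimal sketch assuming your kubeconfig invokes the aws CLI as an exec credential plugin: once AWS_PROFILE points at the profile saml2aws populates, kubectl should pick up the same credentials without wrapping it in saml2aws exec:
export AWS_PROFILE=<awscli_profile>
aws sts get-caller-identity   # confirm the assumed role is active
kubectl get pods              # the kubeconfig exec plugin now uses the same profile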

how to add custom header upload to Google Cloud Storage?

I use Flask to create an API, but I am having trouble when I add a custom header to uploads to my Google Cloud Storage bucket. FYI, the permissions on my server are the same as on my local machine (Storage Admin and Storage Object Admin), and test uploads of images to GCS from my local machine work fine. But when I curl or test the upload from my server to my Google Cloud Storage bucket, the response is always the same:
"rc": 500,
"rm": "403 POST https://storage.googleapis.com/upload/storage/v1/b/konxxxxxx/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', )"
I'm testing in Postman using a custom header:
upload_key=asjaisjdaozmzlaljaxxxxx
and I curl like this:
curl --location --request POST 'http://14.210.211.xxx:9001/koxxx/upload_img?img_type=img_x' --header 'upload_key: asjaisjdaozmzlaljaxxxxx' --form 'img_file=@/home/user/image.png'
and I have confirmed with gcloud auth list that the login identity I use on the server is correct and the same as on my local machine.
You have a permission error. To fix it, use the service account method; it's easy and straightforward.
create a service account
gcloud iam service-accounts create \
$SERVICE_ACCOUNT_NAME \
--display-name $SERVICE_ACCOUNT_NAME
add permissions to your service account
gcloud projects add-iam-policy-binding $PROJECT_NAME \
--role roles/storage.objectAdmin \
--member serviceAccount:$SA_EMAIL
$SA_EMAIL is the service account email here. You can get it using:
SA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
download a key for the service account to a destination $SERVICE_ACCOUNT_DEST, activate it, and save an access token to the variable $TOKEN:
gcloud iam service-accounts keys create $SERVICE_ACCOUNT_DEST --iam-account $SA_EMAIL
gcloud auth activate-service-account --key-file=$SERVICE_ACCOUNT_DEST
export TOKEN=$(gcloud auth print-access-token)
upload to the Cloud Storage bucket using the REST API:
curl -X POST --data-binary @[OBJECT_LOCATION] \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: [OBJECT_CONTENT_TYPE]" \
"https://storage.googleapis.com/upload/storage/v1/b/[BUCKET_NAME]/o?uploadType=media&name=[OBJECT_NAME]"

InvalidSignatureException: Credential should be scoped to correct service: 'lex'

I am trying to call the Amazon Lex APIs through curl, and I am stuck with this error:
<InvalidSignatureException>
<Message>InvalidSignatureException: Credential should be scoped to correct service: 'lex'. </Message>
</InvalidSignatureException>
My curl request:
curl -X GET \
'https://runtime.lex.us-east-1.amazonaws.com/bots/botname/versions/versionoralias' \
-H 'authorization: AWS4-HMAC-SHA256 Credential=xxxxxxxxxxxx/20171228/us-east-1/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=xxxxxxxxxxxxxxxx' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'x-amz-date: 20171228T114646Z'
The error occurs because the credential scope in your Authorization header names execute-api as the service (.../us-east-1/execute-api/aws4_request), while this endpoint requires it to be scoped to lex. In practice, you should probably use the AWS CLI instead of cURL: signatures will be managed for you. Trying to sign your AWS calls yourself, you're going to end up in a world of pain and 403 errors.
The Lex API call you're looking for is here.
See this documentation to get started with the AWS CLI.
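For reference, a sketch of the equivalent CLI calls (the bot name, alias, user id and input text are placeholders taken from the question's URL or invented): the GET /bots/... request above corresponds to the model-building API, while conversations go through the runtime API:
aws lex-models get-bot --name botname --version-or-alias versionoralias --region us-east-1
aws lex-runtime post-text --bot-name botname --bot-alias aliasname \
--user-id some-user-id --input-text "hello" --region us-east-1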