I use Flask to create an API, but I am having trouble uploading to my Google Cloud Storage bucket when I add custom headers. For context, the permissions on my server are the same as on my local machine (Storage Admin and Storage Object Admin), and test uploads of images to GCS from my local machine work without problems. But when I curl or test an upload from my server to my Google Cloud Storage bucket, the response is always the same:
"rc": 500,
"rm": "403 POST https://storage.googleapis.com/upload/storage/v1/b/konxxxxxx/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', )"
I'm testing in Postman using a custom header:
upload_key=asjaisjdaozmzlaljaxxxxx
and I curl like this:
curl --location --request POST 'http://14.210.211.xxx:9001/koxxx/upload_img?img_type=img_x' --header 'upload_key: asjaisjdaozmzlaljaxxxxx' --form 'img_file=@/home/user/image.png'
I have also confirmed with "gcloud auth list" that the account I use on the server is correct and the same as on my local machine.
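For reference, a minimal sketch of that check as run on the server; note that the account gcloud reports interactively can differ from the credentials the Flask process itself loads (for example via GOOGLE_APPLICATION_CREDENTIALS):
# Account used by interactive gcloud commands
gcloud auth list
# Explicit key file, if any, that Google client libraries in the Flask process would pick up
echo "$GOOGLE_APPLICATION_CREDENTIALS"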
You have a permission error. To fix it, use the service account method; it's easy and straightforward.
Create a service account:
gcloud iam service-accounts create \
$SERVICE_ACCOUNT_NAME \
--display-name $SERVICE_ACCOUNT_NAME
Add permissions to your service account. For uploading to Cloud Storage, a storage role such as roles/storage.objectAdmin is appropriate:
gcloud projects add-iam-policy-binding $PROJECT_NAME \
--role roles/storage.objectAdmin \
--member serviceAccount:$SA_EMAIL
$SA_EMAIL is the service account's email address. You can get it using:
SA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
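To confirm the binding took effect, you can list the roles granted to the service account (a sketch using standard gcloud output-filtering flags):
gcloud projects get-iam-policy $PROJECT_NAME \
--flatten="bindings[].members" \
--filter="bindings.members:serviceAccount:$SA_EMAIL" \
--format="table(bindings.role)"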
Download a key for the service account to a destination $SERVICE_ACCOUNT_DEST, activate it, and save an access token in the variable $KEY (the key file itself is not a bearer token):
gcloud iam service-accounts keys create $SERVICE_ACCOUNT_DEST --iam-account $SA_EMAIL && gcloud auth activate-service-account --key-file=$SERVICE_ACCOUNT_DEST
export KEY=$(gcloud auth print-access-token)
Upload to the Cloud Storage bucket using the REST API:
curl -X POST --data-binary @[OBJECT_LOCATION] \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: [OBJECT_CONTENT_TYPE]" \
"https://storage.googleapis.com/upload/storage/v1/b/[BUCKET_NAME]/o?uploadType=media&name=[OBJECT_NAME]"
I am trying to create a signed URL for accessing a private Cloud Storage bucket which contains static files.
I have configured an HTTP load balancer and enabled Cloud CDN on it. I have generated a key and attached it to the backend bucket.
I have followed each and every step mentioned in this document:
https://cloud.google.com/cdn/docs/using-signed-urls#gcloud
But the generated signed URL is not able to access the content of the bucket; I am getting a 403 Forbidden error.
For creating the key:
head -c 16 /dev/urandom | base64 | tr +/ -_ > KEY_FILE_NAME
For adding the key to the backend:
gcloud compute backend-buckets \
add-signed-url-key BACKEND_NAME \
--key-name KEY_NAME \
--key-file KEY_FILE_NAME
I am using this command to generate the URL:
gcloud compute sign-url \
"http://IP/" \
--key-name key \
--key-file keyfile \
--expires-in 30m \
--validate
But I am getting a 403 error when accessing the generated URL:
Your client does not have permission to get URL / from this server.
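For reference, a URL signed with gcloud compute sign-url has this general shape (the values below are purely illustrative):
http://IP/?Expires=1634000000&KeyName=key&Signature=BASE64URL_SIGNATURE
The Expires, KeyName, and Signature query parameters must reach the load balancer unmodified for the signature check to pass.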
I'm trying to query the GCP IAM Recommender API (API documentation here) and fetch role revision recommendations for my project. I'm looking for ACTIVE recommendations only. However, the stateInfo.state input filter (listed in the above documentation) is not working for me; it returns the error "Invalid Filter". Can someone please let me know what I am doing wrong here? Thanks.
Here's my API query: https://recommender.googleapis.com/v1/projects/my-demo-project/locations/global/recommenders/google.iam.policy.Recommender/recommendations?filter=stateInfo.state:ACTIVE
Please include a minimal reproducible example in questions:
PROJECT=[YOUR-PROJECT-ID]
LOCATION="global"
SERVICES=(
"cloudresourcemanager"
"recommender"
)
for SERVICE in ${SERVICES[@]}
do
gcloud services enable ${SERVICE}.googleapis.com \
--project=${PROJECT}
done
ACCOUNT="tester"
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Minimal role for Recommender for IAM
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/recommender.iamViewer
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Be careful this overwrites the default gcloud auth account
# Remember to revert this to your e.g. me#gmail.com afterwards
gcloud auth activate-service-account --key-file=${PWD}/${ACCOUNT}.json
TOKEN=$(gcloud auth print-access-token ${EMAIL})
RECOMMENDER="google.iam.policy.Recommender"
PARENT="projects/${PROJECT}/locations/${LOCATION}/recommenders/${RECOMMENDER}"
FILTER="stateInfo.state=ACTIVE"  # note: '=' here, rather than ':' as in the question's query
curl \
--header "Authorization: Bearer ${TOKEN}" \
https://recommender.googleapis.com/v1/${PARENT}/recommendations?filter=${FILTER}
Yields (HTTP 200):
{}
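For the asker's case, one thing worth ruling out is the shell or HTTP layer mangling the filter; a sketch that lets curl build and URL-encode the query string itself, reusing the variables above:
curl -G \
--header "Authorization: Bearer ${TOKEN}" \
--data-urlencode "filter=stateInfo.state=ACTIVE" \
"https://recommender.googleapis.com/v1/${PARENT}/recommendations"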
I use saml2aws with Okta authentication to access AWS from my local machine. I have also added the k8s cluster config to my machine.
While trying to connect to k8s, for example to list pods, a simple kubectl get pods returns an error:
[Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token' Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
But if I do saml2aws exec kubectl get pods, I am able to fetch pods.
I don't understand whether the problem is with how the credentials are stored, or where to even begin diagnosing it.
Any kind of help will be appreciated.
To integrate saml2aws with Okta, you need to create a profile in saml2aws first.
Configure Profile
saml2aws configure \
--skip-prompt \
--mfa Auto \
--region <region, ex us-east-2> \
--profile <awscli_profile> \
--idp-account <saml2aws_profile_name> \
--idp-provider Okta \
--username <your email> \
--role arn:aws:iam::<account_id>:role/<aws_role_initial_assume> \
--session-duration 28800 \
--url "https://<company>.okta.com/home/amazon_aws/......."
The URL, region, etc. can be obtained from the Okta integration UI.
Login
saml2aws login --idp-account <saml2aws_profile_name>
That should prompt you for your password, and MFA if it is configured.
Verification
aws --profile=<awscli_profile> s3 ls
Then finally, just export AWS_PROFILE:
export AWS_PROFILE=<awscli_profile>
and use awscli directly
aws sts get-caller-identity
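For kubectl specifically, if the kubeconfig entry was generated without a profile, regenerating it with the profile flag ties the aws exec credential plugin to the saml2aws-backed profile (a sketch; the cluster name and region are placeholders):
# Rewrites the kubeconfig user entry so plain `kubectl get pods` uses this profile without saml2aws exec
aws eks update-kubeconfig --name <cluster_name> --region <region> --profile <awscli_profile>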
We have a Vertex AI model that was created using a custom image.
We are trying to access a bucket on startup but we are getting the following error:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/...?projection=noAcl&prettyPrint=false: {service account name} does not have storage.buckets.get access to the Google Cloud Storage bucket.
The problem is that I can't find the service account that is mentioned in the error, so I can't give it the right access permissions.
Solved it by giving the endpoint the right service account.
I am dealing with the same general issue. Vertex AI can't access my Cloud Storage bucket from a custom prediction container.
After digging through a complex tree of GCP Vertex docs, I found this:
https://cloud.google.com/vertex-ai/docs/general/custom-service-account#setup
And
https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#artifacts
UPDATE
After working through this, I was able to get GCS access to work.
Create a new service account and give it proper roles/permissions. You need your PROJECT_ID and PROJECT_NUMBER.
NEW_SA_NAME=new-service-account
NEW_SA_EMAIL=$NEW_SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
AI_PLATFORM_SERVICE_AGENT=service-$PROJECT_NUMBER@gcp-sa-aiplatform.iam.gserviceaccount.com
gcloud iam service-accounts create $NEW_SA_NAME \
--display-name="New Vertex AI Service Account" \
--quiet
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$NEW_SA_EMAIL" \
--role="roles/storage.admin" \
--quiet
gcloud iam service-accounts add-iam-policy-binding \
--role=roles/iam.serviceAccountAdmin \
--member=serviceAccount:$AI_PLATFORM_SERVICE_AGENT \
--quiet \
$NEW_SA_EMAIL
Upload your model and create the endpoint using the gcloud ai commands, for example as sketched below.
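A hedged sketch of those two steps (the container image URI and display names below are placeholders, not values from the original setup):
# Upload the custom-container model
gcloud ai models upload \
--region=$GCP_REGION \
--display-name=$DEPLOYED_MODEL_NAME \
--container-image-uri=gcr.io/$PROJECT_ID/my-custom-container:latest
# Create the endpoint the model will be deployed to
gcloud ai endpoints create \
--region=$GCP_REGION \
--display-name=my-endpoint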
Associate your new SA with the Vertex AI Endpoint when you deploy the model. You will need the GCP_REGION, ENDPOINT_ID, MODEL_ID, DEPLOYED_MODEL_NAME, and the NEW_SA_EMAIL from the first step.
gcloud ai endpoints deploy-model $ENDPOINT_ID \
--region=$GCP_REGION \
--model=$MODEL_ID \
--display-name=$DEPLOYED_MODEL_NAME \
--machine-type=n1-standard-4 \
--service-account=$NEW_SA_EMAIL
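Once deployed this way, the prediction container runs as the new service account, so the storage client inside it can read the bucket. A quick smoke test of the endpoint (a sketch; request.json is a placeholder payload file):
gcloud ai endpoints predict $ENDPOINT_ID \
--region=$GCP_REGION \
--json-request=request.json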
I am trying to use impersonation while running a BQ command, but I am getting the error below.
This is the command I am trying to run:
gcloud config set auth/impersonate_service_account sa-account ;\
gcloud config list ; \
bq query --use_legacy_sql=false "SELECT * from prj-name.dataset-name.table-name ORDER BY Version" ;\
This is the error I am getting:
Your active configuration is: [default]
+ bq query --use_legacy_sql=false SELECT * from xxx-prj.dataset-name.table-name ORDER BY Version
ERROR: (bq) gcloud is configured to impersonate service account [XXXXXX.iam.gserviceaccount.com] but impersonation support is not available.
What change is needed here?
Here is how you can use service account impersonation with the BigQuery API from the gcloud CLI:
Impersonate the relevant service account:
gcloud config set auth/impersonate_service_account SERVICE_ACCOUNT
Run the following curl command, specifying your PROJECT_ID and SQL_QUERY:
curl --request POST \
'https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries' \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{"query":"SQL_QUERY"}' \
--compressed
P.S. gcloud auth print-access-token will return an access token for the impersonated service account, which allows you to run queries as that account.
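An equivalent that does not depend on the gcloud config property is to mint the token with the explicit impersonation flag (a sketch; SERVICE_ACCOUNT, PROJECT_ID, and SQL_QUERY are placeholders as above):
# --impersonate-service-account is a global gcloud flag, so no config change is needed
curl --request POST \
'https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries' \
-H "Authorization: Bearer $(gcloud auth print-access-token --impersonate-service-account=SERVICE_ACCOUNT)" \
-H 'Content-Type: application/json' \
-d '{"query":"SQL_QUERY", "useLegacySql": false}'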