I am using Cloud Storage to upload a file with a KMS key. Here is my code:
await storage.bucket(config.bucket).upload(file, {
  kmsKeyName: `projects/${process.env.PROJECT_ID}/locations/global/keyRings/test/cryptoKeys/nodejs-gcp`,
  destination: 'mmczblsq.kms.encrypted.doc'
});
I have a cloud-storage-admin.json service account key with the Cloud Storage Admin role, and I initialize the storage client with it:
const storage: Storage = new Storage({
  projectId: process.env.PROJECT_ID,
  keyFilename: path.resolve(__dirname, '../.gcp/cloud-storage-admin.json')
});
I also ran gcloud kms keys add-iam-policy-binding to grant roles/cloudkms.cryptoKeyEncrypterDecrypter to the cloud-storage-admin service account.
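For reference, the binding I ran was roughly the following (a sketch using the key ring, key, and service-account names from this question; your values may differ):
# Sketch of the add-iam-policy-binding call described above.
gcloud kms keys add-iam-policy-binding nodejs-gcp \
  --keyring=test \
  --location=global \
  --member="serviceAccount:cloud-storage-admin@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"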
When I try to upload a file with the KMS key, I still get this permission error:
Permission denied on Cloud KMS key. Please ensure that your Cloud Storage service account has been authorized to use this key.
Update:
☁ nodejs-gcp [master] ⚡ gcloud kms keys get-iam-policy nodejs-gcp --keyring=test --location=global
bindings:
- members:
  - serviceAccount:cloud-storage-admin@<PROJECT_ID>.iam.gserviceaccount.com
  - serviceAccount:service-16536262744@gs-project-accounts.iam.gserviceaccount.com
  role: roles/cloudkms.cryptoKeyEncrypterDecrypter
etag: BwWJ2Pdc5YM=
version: 1
When you use kmsKeyName, Google Cloud Storage is the entity calling KMS, not your service account. It's a bit confusing:
Your service account has permission to call the Cloud Storage API
The Cloud Storage service account then calls the KMS API in transit
You will need to get the Cloud Storage service account and grant that service account the ability to invoke Cloud KMS:
Option 1: Open the API explorer, authorize, and execute
Option 2: Install gcloud, authenticate to gcloud, install oauth2l, and run this curl command replacing [PROJECT_ID] with your project ID:
curl -X GET -H "$(oauth2l header cloud-platform)" \
"https://www.googleapis.com/storage/v1/projects/[PROJECT_ID]/serviceAccount"
Option 3: Trust me that it's in the format service-[PROJECT_NUMBER]@gs-project-accounts.iam.gserviceaccount.com and get your [PROJECT_NUMBER] from gcloud projects list or the web interface.
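Once you know the Cloud Storage service account, the grant is the same kind of key binding shown in the question, just with the service agent as the member (a sketch; substitute your own project number):
# Sketch: authorize the Cloud Storage service agent to use the key from the question.
gcloud kms keys add-iam-policy-binding nodejs-gcp \
  --keyring=test \
  --location=global \
  --member="serviceAccount:service-[PROJECT_NUMBER]@gs-project-accounts.iam.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"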
Is it possible to encrypt the file using the provided service account instead of the Cloud Storage service account? It's a bit confusing. If I log in to Cloud Storage, I can see all the files decrypted (because the Cloud Storage service account has permission to decrypt them). If I used my own service account instead, anyone who logged in to Cloud Storage would only see encrypted files (assuming that person does not have access to the KMS key).
I tried to encrypt the file on the application side (using KMS), but there is a length limitation (65 KB).
Related
I have impersonated a Service Account in gcloud through the command gcloud config set auth/impersonate_service_account [SA_FULL_EMAIL].
Now all my API calls are impersonating the Service Account. Is there a way to download the Service Account JSON at this point?
Because I do not have the original Service Account JSON that was created earlier, and as a user I do not have permissions to Manage Keys for this Service Account.
Please let me know, if I can download the Service Account JSON from gcloud by impersonating the Service Account.
Thanks in advance!
I do not have permissions to Manage Keys for this Service Account
Without those permissions, you cannot create or download service account JSON keys. If the impersonated service account has those permissions (which it should not, for security reasons), then yes, you can.
The following command will create a new JSON key and download it:
gcloud iam service-accounts keys create my-service-account.json --iam-account <EMAIL ADDRESS>
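Once the key file is downloaded, it can be used like any other service account key, for example when initializing a client library (a sketch; the path is a placeholder):
// Sketch: use the downloaded key with a client library.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage({keyFilename: '/path/to/my-service-account.json'});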
I am trying to sign a URL for GCP Storage from AWS EC2 or Lambda. I have generated a JSON file for permissions, providing my AWS account ID and the role that is given to EC2 or Lambda. When I call the sign-URL code, even with Storage Admin or Owner permission, I get: Error: The caller does not have permission.
I used the code provided in the GCP documentation.
const {Storage} = require('@google-cloud/storage');

const storage = new Storage();

const options = {
  version: 'v4',
  action: 'read',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
};

// Get a v4 signed URL for reading the file
const [url] = await storage
  .bucket(bucketName)
  .file(fileName)
  .getSignedUrl(options);
Can anybody tell me what I missed? What is wrong?
Update:
I am creating a service account, granting it Storage Admin on my project, then creating a pool in Workload Identity Pools, selecting AWS and entering my AWS account ID, granting access to my AWS identities by matching role, downloading the JSON, and setting the environment variables GOOGLE_APPLICATION_CREDENTIALS (the path to my JSON file) and GOOGLE_CLOUD_PROJECT (my project ID). How do I correctly load that clientLibraryConfig.json file to run the functions I need?
Update 2:
My clientLibraryConfig JSON has the following content:
{
  "type": "external_account",
  "audience": "..",
  "subject_token_type": "..",
  "service_account_impersonation_url": "..",
  "token_url": "..",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "..",
    "url": "..",
    "regional_cred_verification_url": ".."
  }
}
How can I generate an access token with the Node.js SDK from this config file to access GCP Storage from AWS EC2?
You have to set up the following permissions for the IAM service account (a gcloud sketch follows this list):
Storage Object Creator: this is needed to create signed URLs.
Service Account Token Creator: this role enables impersonation of service accounts to create OAuth2 access tokens, sign blobs, or sign JWTs.
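A sketch of granting both with gcloud (the project ID and the signer-sa service-account email are placeholders; granting Token Creator to the service account on itself is one common pattern so it can sign blobs for signed URLs):
# Sketch: allow the service account to create objects.
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:signer-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectCreator"

# Sketch: allow the service account to sign blobs/tokens as itself via the IAM Credentials API.
gcloud iam service-accounts add-iam-policy-binding signer-sa@my-project-id.iam.gserviceaccount.com \
  --member="serviceAccount:signer-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"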
Also, you can try running it locally in GCP to sign the URL with the service account.
You can use an existing private key for a service account. The key can be in JSON or PKCS12 format.
Use the command gsutil signurl and pass the path to the private key from the previous step, along with the name of the bucket and object.
For example, if you use a key stored in the folder Desktop, the following command will generate a signed URL for users to view the object cat.jpeg for 10 minutes.
gsutil signurl -d 10m Desktop/private-key.json gs://example-bucket/cat.jpeg
If successful, the response should look like this:
URL HTTP Method Expiration Signed URL
gs://example-bucket/cat.jpeg GET 2018-10-26 15:19:52 https://storage.googleapis.
com/example-bucket/cat.jpeg?x-goog-signature=2d2a6f5055eb004b8690b9479883292ae74
50cdc15f17d7f99bc49b916f9e7429106ed7e5858ae6b4ab0bbbdb1a8ccc364dad3a0da2caebd308
87a70c5b2569d089ceb8afbde3eed4dff5116f0db5483998c175980991fe899fbd2cd8cb813b0016
5e8d56e0a8aa7b3d7a12ee1baa8400611040f05b50a1a8eab5ba223fe5375747748de950ec7a4dc5
0f8382a6ffd49941c42498d7daa703d9a414d4475154d0e7edaa92d4f2507d92c1f7e811a7cab64d
f68b5df4857589259d8d0bdb5dc752bdf07bd162d98ff2924f2e4a26fa6b3cede73ad5333c47d146
a21c2ab2d97115986a12c28ff37346d6c2ca83e5618ec8ad95632710b489b75c35697d781c38e&
x-goog-algorithm=GOOG4-RSA-SHA256&x-goog-credential=example%40example-project.
iam.gserviceaccount.com%2F20181026%2Fus%2Fstorage%2Fgoog4_request&x-goog-date=
20201026T211942Z&x-goog-expires=3600&x-goog-signedheaders=host
The signed URL is the string that starts with https://storage.googleapis.com, and it is likely to span multiple lines. Anyone can use the URL to access the associated resource (in this case, cat.jpeg) during the designated time frame (in this case, 10 minutes).
So if this works locally, then you can start configuring Workload Identity Federation to impersonate your service account. In this link, you will find a guide to deploy it.
To access resources from AWS using Workload Identity Federation, you will need to review whether the following requirements have already been configured:
The workload identity pool has been created.
AWS has been added as an identity provider in the workload identity pool (the Google organization policy needs to allow federation from AWS).
The permissions to impersonate a service account have been granted to the external account.
I will add this guide to configure the Workload Identity Federation.
Once the previous requirements have been completed, you will need to generate the service account credential configuration. This file contains only non-sensitive metadata that instructs the library how to retrieve external subject tokens and exchange them for service account tokens. As you mentioned, the file could be a config.json and can be generated by running the following command:
# Generate an AWS configuration file.
gcloud iam workload-identity-pools create-cred-config \
  projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AWS_PROVIDER_ID \
  --service-account $SERVICE_ACCOUNT_EMAIL \
  --aws \
  --output-file /path/to/generated/config.json
Where the following variables need to be substituted:
$PROJECT_NUMBER: the Google Cloud project number.
$POOL_ID: the workload identity pool ID.
$AWS_PROVIDER_ID: the AWS provider ID.
$SERVICE_ACCOUNT_EMAIL: the email of the service account to impersonate.
Once you generate the JSON credentials configuration file for your external identity, you can store the path at the GOOGLE_APPLICATION_CREDENTIALS environment variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/config.json
So, with this, the library can automatically choose the right type of client and initialize the credentials from the configuration file. Please note that the service account will also need roles/browser when using external identities with Application Default Credentials in Node.js, or you can pass the project ID explicitly to avoid the need to grant roles/browser to the service account, as shown in the code below:
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
    // Pass the project ID explicitly to avoid the need to grant `roles/browser`
    // to the service account or enable the Cloud Resource Manager API on the project.
    projectId: 'CLOUD_RESOURCE_PROJECT_ID',
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  // List all buckets in a project.
  const url = `https://storage.googleapis.com/storage/v1/b?project=${projectId}`;
  const res = await client.request({ url });
  console.log(res.data);
}

main().catch(console.error);
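Tying this back to the original question: once GOOGLE_APPLICATION_CREDENTIALS points at the external-account config, the same Storage code from the question should pick it up. A minimal sketch, assuming the impersonated service account is able to sign blobs through the IAM Credentials API; the bucket and file names are placeholders:
// Sketch: sign a V4 URL using Application Default Credentials loaded from the
// external_account configuration file (GOOGLE_APPLICATION_CREDENTIALS).
const {Storage} = require('@google-cloud/storage');

async function signUrl() {
  const storage = new Storage();
  const [url] = await storage
    .bucket('BUCKET_NAME')
    .file('FILE_NAME')
    .getSignedUrl({
      version: 'v4',
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    });
  console.log(url);
}

signUrl().catch(console.error);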
I have saved BI tool setup files in a folder on Google Cloud Storage. We have a Windows VM created on GCP to which I want to move this folder containing all the setup files (around 60 GB) from Google Cloud Storage using the gsutil command, but it is throwing an error.
I am using the command below:
gsutil cp -r gs://bucket-name/folder-name C:\Users\user-name\
I am getting the error: AccessDeniedException: 403 sa-d-edw-ce-cognosserver@prj-edw-d-edw-7f58.iam.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
Can someone please help me understand where I am making a mistake?
There are two likely problems:
The CLI is using an identity that does not possess the required permissions.
The Compute Engine instance has restricted the permissions via scopes or has disabled scopes preventing all API access.
Modifying IAM permissions/roles requires permissions on your account as well. Otherwise, you will need to contact an administrator for the organization or project.
The CLI gsutil is using an identity (either a user or service account). That identity does not have an IAM role attached that contains the IAM permission storage.objects.list.
There are a number of IAM roles that have that permission. If you only need to list and read Cloud Storage objects, use the role Storage Legacy Bucket Reader aka roles/storage.legacyBucketReader. The following link provides details on the available roles:
IAM roles for Cloud Storage
Your Google Compute Engine Windows VM instance has a service account attached to it. The Google Cloud CLI tools can use that service account or the credentials from gcloud auth login. There are a few more methods.
To complicate this a bit more, each Compute Engine instance has scopes assigned which limit the service account's permissions. The default scopes allow Cloud Storage object read. In the Google Cloud Console GUI you can look up or modify the assigned scopes. The following command will output details on the VM, including the key serviceAccounts.scopes.
gcloud compute instances describe INSTANCE_NAME --project PROJECT_ID --zone ZONE
Figure out which identity your VM is using
gcloud auth list
Add an IAM role to that identity
Windows command syntax.
For a service account:
gcloud projects add-iam-policy-binding PROJECT_ID ^
--member="serviceAccount:REPLACE_WITH_SERVICE_ACCOUNT_EMAIL_ADDRESS" ^
--role="roles/storage.legacyBucketReader"
For a user account:
gcloud projects add-iam-policy-binding PROJECT_ID ^
--member="user:REPLACE_WITH_USER_EMAIL_ADDRESS" ^
--role="roles/storage.legacyBucketReader"
I have a storage bucket that I created on GCP. I created the bucket following the instructions described here (https://cloud.google.com/storage/docs/creating-buckets). Additionally, I created it using uniform bucket-level access control.
However, I want the objects in the bucket to be accessible to instances running under a certain service account, and I do not see how to do that. In the permissions settings, I do not see how I can specify a service account for read-write access.
To create a service account, run the following command in Cloud Shell:
gcloud iam service-accounts create storage-sa --display-name "storage service account"
You can grant roles to a service account so that the service account can perform specific actions on the resources in your GCP project. For example, you might grant the storage.admin role to a service account so that it has control over objects and buckets in Google Cloud Storage.
gcloud projects add-iam-policy-binding <Your Project ID> --member <Service Account ID> --role <Role You want to Grant>
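For example, a concrete invocation for the storage-sa account created above might look like this (the project ID is a placeholder):
# Sketch: grant the Storage Admin role to the service account on the project.
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:storage-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.admin"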
Once the role is granted, you can select this service account while creating the instance.
Alternatively, to do this via Google Cloud Console see Creating and enabling service accounts for instances
Once you have created your service account, you can then change/set the access control list (ACL) permissions on your bucket or objects using the gsutil command.
Specifically:
gsutil acl set [-f] [-r] [-a] file-or-canned_acl_name url...
gsutil acl get url
gsutil acl ch [-f] [-r] <grant>... url...
where each <grant> is one of the following forms:
-u <id|email>:<perm>
-g <id|email|domain|All|AllAuth>:<perm>
-p <viewers|editors|owners>-<project number>:<perm>
-d <id|email|domain|All|AllAuth|<viewers|editors|owners>-<project number>>:<perm>
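For example, to grant a service account read access to a bucket and its existing objects (the email and bucket name are placeholders):
# Sketch: -u grants to a user or service-account email, R means READ, -r recurses over existing objects.
gsutil acl ch -r -u storage-sa@my-project-id.iam.gserviceaccount.com:R gs://example-bucket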
Please review the following article for more depth and description:
acl - Get, set, or change bucket and/or object ACLs
You can also set or change ACLs through the Cloud Console web interface and through the GCS API.
You have to create a service account (see Creating a new service account).
Set up a new instance to run as that service account (see Set instance).
In the Google Cloud Console, go to Storage / bucket / right-corner dots / Edit bucket permissions.
Add Member / service account /
Role / Storage Admin
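A CLI sketch of the first two steps (the instance name, zone, and service-account email are placeholders):
# Sketch: create an instance that runs as the service account, with a storage access scope.
gcloud compute instances create my-instance \
  --zone=us-central1-a \
  --service-account=storage-sa@my-project-id.iam.gserviceaccount.com \
  --scopes=storage-full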
I'm sure I granted all the permissions that I can give:
louchenyao@dev ~> gcloud auth list
         Credentialed Accounts
ACTIVE  ACCOUNT
*       290002171211-compute@developer.gserviceaccount.com
        louchenyao@gmail.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
louchenyao@dev ~> curl -H 'Metadata-Flavor: Google' "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/trace.append
louchenyao@dev ~> gsutil cp pgrc.sh gs://hidden-buckets-name
Copying file://pgrc.sh [Content-Type=text/x-sh]...
AccessDeniedException: 403 Insufficient Permission
And I have granted Storage Admin to the Compute Engine default service account.
If I switch to my personal account, it works. So I'm wondering if I missed some important permissions.
To grant write access to the bucket from the VM instance using the default service account, change the API access scopes for the VM instance by following these steps (a CLI sketch follows the list):
Stop the instance
Enter VM instance details > Edit
Change Cloud API access scopes > Storage: Full
Save changes and start the instance
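The same scope change can also be made from the CLI while the instance is stopped (a sketch; the instance name and zone are placeholders, and the service-account email is the default one shown in the question):
# Sketch: switch the stopped instance to the full Cloud Storage scope.
gcloud compute instances set-service-account INSTANCE_NAME \
  --zone=ZONE \
  --service-account=290002171211-compute@developer.gserviceaccount.com \
  --scopes=storage-full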
It is also possible to set access scopes when creating the VM instance, in the Identity and API access section of the console.
If you do not want to use the default service account, create a new service account for your VM instance and use it to access the bucket.