Let's suppose I have a Google Cloud Storage bucket in project X and want to upload an object into that bucket from code (Python) that is deployed in project Y.
Both projects X and Y are under the same credentials (login ID).
Is this achievable using OAuth 2.0, or is there another approach?
I have tried using a service account, AppAssertionCredentials, and OAuth2DecoratorFromClientSecrets, but failed.
from googleapiclient import discovery
from googleapiclient.http import MediaFileUpload
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('storage', 'v1', credentials=credentials)
media = MediaFileUpload(fileName)  # wrap the local file for upload
req = service.objects().insert(
    bucket=bucket_name,
    name=fileName,
    media_body=media)
resp = req.execute()
This is a very common use case. You don't need to do anything special in your code to access buckets in other projects. Bucket names are globally unique, so your app will refer to an existing bucket in another project in the same way that it refers to buckets in its own project.
In order for that insert call to succeed, though, you'll need to make the account that is running that code an OWNER of the bucket that you're writing to.
Is that App Engine code? App Engine code runs as a particular service account. You'll need to grant permission to that service account. Head over to https://console.developers.google.com/permissions/serviceaccounts?project=_ to find out the name of that service account. It's probably something like my-project-name@appspot.gserviceaccount.com.
Now, using the GCS UI, or via gsutil, give that account full control over the bucket:
gsutil acl ch -u my-project-name@appspot.gserviceaccount.com:FC gs://myBucketName
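If you'd rather grant that ACL from Python instead of gsutil, a rough sketch with the google-cloud-storage client could look like this (the project ID, bucket name, and service account email are placeholders):

from google.cloud import storage

client = storage.Client(project="project-x-id")  # placeholder: project that owns the bucket
bucket = client.bucket("myBucketName")
bucket.acl.reload()  # load the current ACL so existing entries are preserved
entity = bucket.acl.user("my-project-name@appspot.gserviceaccount.com")
entity.grant_owner()  # same effect as the gsutil command above
bucket.acl.save()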
Related
We are working on a requirement where we want to check, from Cloud Composer, which service accounts have what type of access on a particular GCS bucket.
For a BigQuery dataset we can use the code below:
from google.cloud import bigquery
client = bigquery.Client()
dataset = client.get_dataset(dataset_id)  # Make an API request.
entries = list(dataset.access_entries)
We are looking for something similar to this for a GCS bucket.
You can use the Policy Analyzer service, which you can find in the Asset Inventory section (admittedly, it's not obvious).
You can try this query, for instance:
gcloud asset search-all-iam-policies --scope=projects/<ProjectID> --asset-types="storage.googleapis.com/Bucket"
Then filter only on the bucket that you target (use jq, for instance). You can also search at the folder or organization scope to also get the roles inherited from higher levels.
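If you want to run the same query from Python (for example inside a Composer DAG), here is a rough sketch using the google-cloud-asset client; the project ID and bucket name are placeholders, and the asset_types/query request fields assume a reasonably recent client version:

from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()
response = client.search_all_iam_policies(
    request={
        "scope": "projects/my-project-id",                  # placeholder project
        "asset_types": ["storage.googleapis.com/Bucket"],
        "query": "resource:my-bucket-name",                 # placeholder bucket
    }
)
for result in response:
    for binding in result.policy.bindings:
        print(binding.role, list(binding.members))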
I am writing an application where I have to upload media files to GCS. So I created a storage bucket and a service account that the application uses to put and get images from the bucket. To use this service account from the application, I had to generate a private key as a JSON file.
I have tested my code and it is working fine. Now I want to push this code to my GitHub repository, but I don't want this service account key to end up on GitHub.
How do I keep this service account key secret while still letting all my colleagues use it?
I am also going to deploy my application on a GCP container instance, and I want it to work there as well.
As I understand it, if your application runs inside GCP and uses a custom service account, you might not need any private keys (as JSON files) at all.
The custom service account used by your application should be granted the relevant IAM roles/permissions on the corresponding GCS bucket. That's all you might need to do.
You can assign those IAM roles/permissions manually (through the Cloud Console UI), with CLI commands, or as part of your CI/CD deployment pipeline.
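As an illustration, a minimal sketch with the google-cloud-storage client running on GCP needs no key file at all (the bucket and object names are placeholders):

from google.cloud import storage

# On GCE/GKE/Cloud Run/App Engine, Application Default Credentials resolve to
# the attached service account, so no JSON key file is required.
client = storage.Client()
bucket = client.bucket("my-media-bucket")      # placeholder bucket name
blob = bucket.blob("images/photo.jpg")         # placeholder object name
blob.upload_from_filename("photo.jpg")         # local file to upload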
I have one S3 bucket per customer. Customers are external entities and they don't share data with anyone else. I write to S3 and the customer reads from S3. With this architecture I can only scale to 1,000 buckets, as there is a limit on S3 buckets per account. I was hoping to use access points (APs) to create one AP per customer and put all the data in one bucket. Each customer could then read their files from the bucket through their AP.
Bucket000001/prefix01 -> customeraccount1
Bucket000001/prefix02 -> customeraccount2
...
S3 access points require you to set a policy for an IAM user at the access point level as well as at the bucket level. If I have thousands of IAM users, do I need to set a policy for each of them on the bucket? This would result in one giant policy, and there is a maximum policy size for a bucket, so I may not be able to do that.
Is this the right use case where access points can help?
The recommended approach would be:
Do NOT assign IAM Users to your customers. These types of AWS credentials should only be used by your internal staff and your own applications.
You should provide a web application (or an API) where customers can authenticate against your own user database (or you could use Amazon Cognito to manage authentication).
Once authenticated, the application should grant access either to a web interface to access Amazon S3, or the application should provide temporary credentials for accessing Amazon S3 (more details below).
Do not use one bucket per customer. This is not scalable. Instead, store all customer data in ONE bucket, with each user having their own folder. There is no limit on the amount of data you can store in Amazon S3. This also makes it easier for you to manage and maintain, since it is easier to perform functions across all content rather than having to go into separate buckets. (An exception might be if you wish to segment buckets by customer location (region) or customer type. But do not use one bucket per customer. There is no reason to do this.)
When granting access to Amazon S3, assign permissions at the folder-level to ensure customers only see their own data.
Option 1: Access via Web Application
If your customers access Amazon S3 via a web application, then you can code that application to enforce security at the folder level. For example, when they request a list of files, only display files within their folder.
This security can be managed totally within your own code.
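For example, a server-side sketch with boto3 that only ever lists a customer's own folder could look like this (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

def list_customer_files(customer_id):
    # Restrict the listing to the customer's folder by prefix.
    response = s3.list_objects_v2(
        Bucket="storage-bucket",          # placeholder bucket
        Prefix=f"{customer_id}/",
    )
    return [obj["Key"] for obj in response.get("Contents", [])]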
Option 2: Access via Temporary Credentials
If your customers use programmatic access (eg using the AWS CLI or a custom app running on their systems), then:
The customer should authenticate to your application (how this is done will vary depending upon how you are authenticating users)
Once authenticated, the application should generate temporary credentials using the AWS Security Token Service (STS). While generating the credentials, grant access to Amazon S3 but specify the customer's folder in the ARN (eg arn:aws:s3:::storage-bucket/customer1/*) so that they can only access content within their folder.
Return these temporary credentials to the customer. They can then use these credentials to make API calls directly to Amazon S3 (eg from the AWS Command-Line Interface (CLI) or a custom app). They will be limited to their own folder.
This approach is commonly done with mobile applications. The mobile app authenticates against the backend, receives temporary credentials, then uses those credentials to interact directly against S3. Thus, the back-end app is only used for authentication.
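As a rough sketch of generating those temporary credentials with boto3 (the bucket name, role ARN, and account ID are placeholders, and it assumes a pre-created role your application is allowed to assume), a session policy limits the credentials to one customer's folder:

import json
import boto3

sts = boto3.client("sts")

def credentials_for_customer(customer_id):
    # Session policy restricting the temporary credentials to the customer's folder.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Object access only inside the customer's folder
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::storage-bucket/{customer_id}/*",
            },
            {   # Listing restricted to the customer's prefix
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::storage-bucket",
                "Condition": {"StringLike": {"s3:prefix": [f"{customer_id}/*"]}},
            },
        ],
    }
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/customer-s3-access",  # placeholder role
        RoleSessionName=f"customer-{customer_id}",
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,
    )
    return response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration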
Examples on YouTube:
5 Minutes to Amazon Cognito: Federated Identity and Mobile App Demo
Overview Security Token Service STS
AWS: Use the Session Token Service to Securely Upload Files to S3
There are a couple of ways to achieve your goal.
One way is to use an IAM group to grant access to a folder: create a group, add users to the group, and attach a policy to the group that allows access to the folder.
Another way is to grant access to user-specific folders with a policy that uses the ${aws:username} variable (in the Resource or Condition elements). Refer to this link: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
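Combining both suggestions, a rough boto3 sketch (the group and bucket names are placeholders) attaches such a policy to an IAM group so each user only reaches the folder matching their own user name:

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Each user may list only their own prefix
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::storage-bucket",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {   # Each user may read/write only objects in their own folder
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::storage-bucket/${aws:username}/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="customer-users",              # placeholder group
    PolicyName="per-user-folder-access",
    PolicyDocument=json.dumps(policy_document),
)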
I am trying to set permissions on Google Cloud Storage buckets with a service account JSON file using django-storages. But the items in the bucket are only accessible when I grant allUsers the object viewer permission. How can I restrict public access to the bucket?
You can take a look at this link, which contains a useful guide about the process required to connect Django to GCS by using service account JSON files; in this way, you can implement this authentication method to access your buckets instead of making your data public. Additionally, please keep in mind that you need to assign the appropriate Cloud Storage IAM roles to the service account, by using the IAM console or by creating ACLs, in order to grant the access permissions.
Finally, once you have your service account key file ready, you can authenticate your application by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the JSON file, or by passing the path directly in your code, as explained in the official GCS documentation.
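For reference, a minimal django-storages settings sketch under those assumptions might look like the following (the bucket name and key path are placeholders; when running on GCP you can drop GS_CREDENTIALS and rely on the environment variable or the attached service account):

# settings.py
from datetime import timedelta
from google.oauth2 import service_account

DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_BUCKET_NAME = "my-private-bucket"                 # placeholder bucket
GS_CREDENTIALS = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json"                  # placeholder key path
)
GS_DEFAULT_ACL = None                 # keep objects private; no allUsers access
GS_EXPIRATION = timedelta(hours=1)    # lifetime of the signed URLs the backend serves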
We wanted to copy a file from one project's storage to another.
I have credentials for project A and project B in separate service accounts.
The only way we knew to copy files was to add the service account's permissions to the bucket's access control list.
Is there some other way to run commands across accounts using multiple service keys?
You can use Cloud Storage Transfer Service to accomplish this.
The docs should guide you to set up the permissions for the buckets in both projects and do the transfers programmatically or in the console.
You need to get the service account email associated with the Storage Transfer Service by entering your project ID on the Try this API page. You then need to give this service account email the required roles to access the data at the source; Storage Object Viewer should be enough.
At the data destination, you need to get the service account email for the second project ID, then give it the Storage Legacy Bucket Writer role.
You can then do the transfer using the snippets in the docs.
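If you go the programmatic route, a rough sketch with the google-cloud-storage-transfer client that creates a job which runs once, today, might look like this (the project and bucket names are placeholders):

from datetime import date
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()
today = date.today()

transfer_job = {
    "project_id": "destination-project-id",        # placeholder project
    "status": storage_transfer.TransferJob.Status.ENABLED,
    # Start and end on the same day -> the job runs only once.
    "schedule": {
        "schedule_start_date": {"year": today.year, "month": today.month, "day": today.day},
        "schedule_end_date": {"year": today.year, "month": today.month, "day": today.day},
    },
    "transfer_spec": {
        "gcs_data_source": {"bucket_name": "source-bucket"},        # placeholder
        "gcs_data_sink": {"bucket_name": "destination-bucket"},     # placeholder
    },
}
job = client.create_transfer_job({"transfer_job": transfer_job})
print("Created transfer job:", job.name)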