I am currently hosting several cloud storage buckets with archived data for some of my clients.
For one client I would like to transfer the ownership and subsequent billing of multiple storage buckets to that client, but continue to administer them myself.
The buckets in question are already in their own project (named after the client), but they are all hosted within my company domain.
How would I go about that transfer? Does my client need to create their own company domain and I then somehow transfer the project to them? Or do they get user access within my company domain and get a separate billing instance within my company domain?
It's all a bit confusing to me.
Buckets are owned by the Project ID. Objects within the bucket are owned by the IAM Member ID that created the objects.
Billing for buckets is controlled by the Project ID. If the customer already owns the project (which can be changed), all you need to do is change the billing account for the project.
You can continue to have access by granting your IAM Member ID access to the bucket and its objects.
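For example, a bucket-level IAM binding is enough to keep your own member ID on the customer's bucket. A minimal sketch with the google-cloud-storage Python client (the project ID, bucket name, and member address below are placeholders):

```python
from google.cloud import storage

client = storage.Client(project="customer-project-id")  # placeholder project ID
bucket = client.bucket("customer-archive-bucket")       # placeholder bucket name

# Fetch the current bucket IAM policy and add your own member ID
# with Storage Object Admin so you can keep administering the objects.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.objectAdmin",
        "members": {"user:admin@your-company-domain.com"},  # placeholder member
    }
)
bucket.set_iam_policy(policy)
```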
Access to a bucket and its contents via a domain name is not a Cloud Storage issue; it is controlled via the HTTP(S) Load Balancer. You can transfer the domain to the customer via normal registrar transfer procedures. Ownership of the domain will not affect the load balancer, but the project that owns the load balancer will, so you may need to recreate the load balancer in the customer's project to transfer billing responsibility for it.
To detach this project from your organization, you need to check that the project is in compliance with this document, and after that it is necessary to file a case with support.
When the project is detached from an organization, the billing account is unlinked from the project, and your customer needs a billing account of their own to attach to the project that holds the buckets.
A billing account is created when a GCP project is created and upgraded (that is, a non-free-trial project).
Complementing the answer from @JohnHanley: it is also necessary to change the billing account on the customer's project, and this change must be performed by a user with the following permissions.
Project Owner or Project Billing Manager on the project, AND Billing Account Administrator or Billing Account User for the target Cloud Billing Account.
You can find more information at this link.
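As a sketch, the billing switch itself can also be done programmatically with the google-cloud-billing Python client, assuming the caller already holds the roles listed above (the project ID and billing account ID below are placeholders):

```python
from google.cloud import billing_v1

client = billing_v1.CloudBillingClient()

# Attach the customer's billing account to the project that owns the buckets.
client.update_project_billing_info(
    name="projects/customer-project-id",  # placeholder project ID
    project_billing_info=billing_v1.ProjectBillingInfo(
        billing_account_name="billingAccounts/000000-AAAAAA-BBBBBB"  # placeholder
    ),
)
```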
Keep in mind that it is not possible to transfer a bucket from one project to another, or from one domain to another; you must copy the contents of the existing bucket into a new bucket that belongs to your customer's project.
This copy needs to be executed by a user that has access to read and write objects on both buckets (source and target).
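A minimal sketch of such a copy with the google-cloud-storage Python client, run by an account with read access on the source and write access on the target (the bucket names below are placeholders):

```python
from google.cloud import storage

client = storage.Client()
source_bucket = client.bucket("old-bucket-in-your-project")      # placeholder
target_bucket = client.bucket("new-bucket-in-customer-project")  # placeholder

# Copy every object from the source bucket into the target bucket,
# keeping the same object names.
for blob in client.list_blobs(source_bucket):
    source_bucket.copy_blob(blob, target_bucket, new_name=blob.name)
```

For large buckets, `gsutil -m cp -r` or the Storage Transfer Service will be much faster than copying blob by blob.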
Related
I have one S3 bucket per customer. Customers are external entities and they don't share data with anyone else. I write to S3 and the customer reads from S3. With this architecture I can only scale to 1,000 buckets, as there is a limit on S3 buckets per account. I was hoping to use access points (APs), creating one AP per customer and putting all the data in one bucket. Each customer could then read their files from the bucket through their AP.
Bucket000001/prefix01 -> customeraccount1
Bucket000001/prefix02 -> customeraccount2
...
S3 access points require you to set a policy for an IAM user at the access point as well as at the bucket level. If I have thousands of IAM users, do I need to set a policy for each of them in the bucket? This would result in one giant policy, and since there is a maximum policy size on the bucket, I may not be able to do that.
Is this the right use case where access points can help?
The recommended approach would be:
Do NOT assign IAM Users to your customers. These types of AWS credentials should only be used by your internal staff and your own applications.
You should provide a web application (or an API) where customers can authenticate against your own user database (or you could use Amazon Cognito to manage authentication).
Once authenticated, the application should grant access either to a web interface to access Amazon S3, or the application should provide temporary credentials for accessing Amazon S3 (more details below).
Do not use one bucket per customer. This is not scalable. Instead, store all customer data in ONE bucket, with each user having their own folder. There is no limit on the amount of data you can store in Amazon S3. This also makes it easier for you to manage and maintain, since it is easier to perform functions across all content rather than having to go into separate buckets. (An exception might be if you wish to segment buckets by customer location (region) or customer type. But do not use one bucket per customer. There is no reason to do this.)
When granting access to Amazon S3, assign permissions at the folder-level to ensure customers only see their own data.
Option 1: Access via Web Application
If your customers access Amazon S3 via a web application, then you can code that application to enforce security at the folder level. For example, when they request a list of files, only display files within their folder.
This security can be managed totally within your own code.
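For instance, a minimal boto3 sketch of that folder-scoped listing inside your application (the bucket name and prefix layout are illustrative):

```python
import boto3

s3 = boto3.client("s3")

def list_customer_files(customer_id):
    """Return only the object keys under this customer's folder."""
    response = s3.list_objects_v2(
        Bucket="storage-bucket",       # illustrative bucket name
        Prefix=f"{customer_id}/",      # e.g. "customer1/"
    )
    return [obj["Key"] for obj in response.get("Contents", [])]
```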
Option 2: Access via Temporary Credentials
If your customers use programmatic access (eg using the AWS CLI or a custom app running on their systems), then:
The customer should authenticate to your application (how this is done will vary depending upon how you are authenticating users)
Once authenticated, the application should generate temporary credentials using the AWS Security Token Service (STS). While generating the credentials, grant access to Amazon S3 but specify the customer's folder in the ARN (eg arn:aws:s3:::storage-bucket/customer1/*) so that they can only access content within their folder.
Return these temporary credentials to the customer. They can then use these credentials to make API calls directly to Amazon S3 (eg from the AWS Command-Line Interface (CLI) or a custom app). They will be limited to their own folder.
This approach is commonly done with mobile applications. The mobile app authenticates against the backend, receives temporary credentials, then uses those credentials to interact directly against S3. Thus, the back-end app is only used for authentication.
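A minimal boto3 sketch of that credential-vending step, assuming a pre-created IAM role whose broad S3 permissions are narrowed per customer by a session policy (the role ARN and bucket name are placeholders):

```python
import json
import boto3

sts = boto3.client("sts")

def credentials_for_customer(customer_id):
    """Return temporary credentials limited to this customer's folder."""
    # Session policy that narrows the assumed role to one folder.
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::storage-bucket/{customer_id}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::storage-bucket",
                "Condition": {"StringLike": {"s3:prefix": f"{customer_id}/*"}},
            },
        ],
    }
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/customer-s3-access",  # placeholder
        RoleSessionName=customer_id,  # must be a valid session name
        Policy=json.dumps(scoped_policy),
        DurationSeconds=3600,
    )
    return response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```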
Examples on YouTube:
5 Minutes to Amazon Cognito: Federated Identity and Mobile App Demo
Overview Security Token Service STS
AWS: Use the Session Token Service to Securely Upload Files to S3
There are a couple of ways to achieve your goal.
One is to use an IAM group to grant access to a folder: create a group, add the users to it, and attach a policy to the group that grants access to the folder.
Another way is to use a policy with ${aws:username} in the Resource and Condition to grant access to user-specific folders. Refer to this link: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
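As a hedged sketch of that second pattern with boto3, you could attach an inline policy to a group so each IAM user only reaches the folder matching their user name (the group and bucket names below are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# Each user may only list and touch objects under home/<their-user-name>/.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-customer-bucket",
            "Condition": {"StringLike": {"s3:prefix": "home/${aws:username}/*"}},
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-customer-bucket/home/${aws:username}/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="customers",              # placeholder group name
    PolicyName="per-user-s3-folder",
    PolicyDocument=json.dumps(policy_document),
)
```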
I know it might sound like a basic question but I haven't figured out what to do.
We're working on having a testing environment for screening candidates for Cloud Engineer and BigData interviews.
We are looking into creating on-demand AWS environments, probably using the CloudFormation service, and testing whether the candidate is able to perform specific tasks in the environment, like creating S3 buckets, assigning roles, and creating security groups, using boto3.
But once the screening is finished, we want to automatically tear down the entire setup that has been created earlier.
There could be multiple candidates taking the test at the same time. We want to create the environments (which might contain EC2 instances, S3 buckets, etc. that are not visible to other users) and tear them down once the tests are finished.
We thought of creating IAM users for every candidate dynamically using an IAM role and create a stack automatically and delete those users once the test is finished.
However, I think the users will be able to see the resources created by other users which is not what we are expecting.
Is there any other, better approach that we can use for creating these environments or labs and deleting them for users, something like ITversity or Qwiklabs?
The logged in user should have access to and view the resources created only for him.
Please suggest.
Query1:
Let's say I have created 10 IAM roles, and one user using each of those roles. Will the user created from IAM role 1 be able to see the VPCs, EC2 instances, S3 buckets, or any other resources created by the user created from IAM role 2?
Will the resources be completely isolated from one IAM role to another?
Or would a service like AWS Organizations be more helpful in this case?
The Qwiklabs environment works as follows:
A pool of AWS accounts is maintained
When a student starts a lab, one of these accounts is allocated to the lab/student
A CloudFormation template is launched to provision initial resources
A student login (either via IAM User or Federated Login) is provisioned and is assigned a limited set of permissions
At the conclusion of the lab, the student login is removed, a "reaper" deletes resources in the account and the CloudFormation stack is deleted
The "reaper" is a series of scripts that recursively go through each service in each region and deletes resources that were created during the lab. A similar capability can be obtained with rebuy-de/aws-nuke: Nuke a whole AWS account and delete all its resources.
You could attempt to create such an environment yourself.
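As a rough boto3 sketch of the per-candidate lifecycle, assuming you have a prepared CloudFormation template (the template file and stack naming are assumptions, and this is not a full resource reaper):

```python
import boto3

cfn = boto3.client("cloudformation")

def start_lab(candidate_id):
    """Provision an isolated stack for one candidate from a prepared template."""
    with open("lab-template.yaml") as f:        # assumed template file
        template_body = f.read()
    stack_name = f"lab-{candidate_id}"
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM users/roles
    )
    return stack_name

def end_lab(stack_name):
    """Tear everything in the candidate's stack down after the screening."""
    cfn.delete_stack(StackName=stack_name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)
```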
I would recommend looking at Scenario 3 in the following AWS document:
Setting Up Multiuser Environments in the AWS Cloud
(for Classroom Training and Research)
It references a "students" environment; however, it should suit interview-candidate testing needs just as well.
The “Separate AWS Account for Each User” scenario with optional consolidated billing provides an excellent environment for users who need a completely separate account environment, such as researchers or graduate students. It is similar to the “Limited User Access to AWS Management Console” scenario, except that each IAM user is created in a separate AWS account, eliminating the risk of users affecting each other’s services.
As an example, consider a research lab with 10 graduate students. The administrator creates one paying AWS account, 10 linked student AWS accounts, and 1 restricted IAM user per linked account. The administrator provisions separate AWS accounts for each user and links the accounts to the paying AWS account. Within each account, the administrator creates an IAM user and applies access control policies. Users receive access to an IAM user within their AWS account. They can log into the AWS Management Console to launch and access different AWS services, subject to the access control policy applied to their account. Students don’t see resources provisioned by other students.
One key advantage of this scenario is the ability for a student to continue using the account after the completion of the course. For example, if students use AWS resources as part of a startup course, they can continue to use what they have built on AWS after the semester is over.
https://d1.awsstatic.com/whitepapers/aws-setting-up-multiuser-environments-education.pdf
However, I think the users will be able to see the resources created by other users which is not what we are expecting.
AWS resources are visible to their owners and to those with whom the owner shares them.
New IAM users have no permissions by default, so they should not see any AWS resources at all until you grant them access.
I'd like to integrate Filestack with a GCP storage bucket, which requires:
setting up a service account in my GCP project with a set of required roles
providing a JSON key for the service account as well as the bucket ID and the project ID to the Filestack storage config
I've been given the list of required roles from the Filestack support, which is as follows:
Owner
Storage Admin
Storage Object Admin
Storage Object Creator
Storage Object Viewer
The only Owner role I can find, and the one Filestack uses in their YouTube guide for GCP storage integration, is the project Owner role, which seems to give the service account a lot of privileges outside the scope of managing a storage bucket. I don't have a lot of experience with service accounts, but I'm worried about giving a role with these privileges to a third party when the task doesn't seem to require it. Am I right in being skeptical about this, or is there some detail that I'm missing w.r.t. integrating GCP resources with an external third party?
EDIT: There is a button in the Filestack storage config one can use to test the integration, which only succeeds if the Owner role is assigned to the service account. I have also asked their support about this, but haven't received an answer to this yet.
I didn't look at the video, but I would advise against doing this. As far as possible, service accounts should be given only the limited rights on the project that they need for their task.
You are right to be skeptical, and if I were you I would test with only the storage roles to see if the integration works with those alone.
If not, you could contact them, ask why they need ownership of the project, and perhaps add just the missing right without giving them ownership of your project.
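One way to run that test is with the service account's JSON key and the google-cloud-storage Python client: if upload, list, and delete all succeed with only the storage roles assigned, the Owner role is not needed for the integration itself (the key path and bucket name below are placeholders):

```python
from google.cloud import storage

# Authenticate as the Filestack service account using its JSON key.
client = storage.Client.from_service_account_json("filestack-sa-key.json")  # placeholder path
bucket = client.bucket("my-filestack-bucket")                               # placeholder bucket

# Exercise the operations an upload integration typically needs.
blob = bucket.blob("integration-test/hello.txt")
blob.upload_from_string("hello")
print([b.name for b in client.list_blobs(bucket, prefix="integration-test/")])
blob.delete()
```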
We wanted to copy a file from one project's storage to another.
I have credentials for project A and project B in separate service accounts.
The only way we knew how to copy files was to add service key credential permissions to the bucket's access control list.
Is there some other way to run commands across accounts using multiple service keys?
You can use Cloud Storage Transfer Service to accomplish this.
The docs should guide you to setup the permissions for buckets in both projects and do the transfers programmatically or on the console.
You need to get the service account email associated with the Storage Transfer Service by entering your project ID in the Try this API page. You then need to give this service account email the required roles to access the data at the source; Storage Object Viewer should be enough.
At the data destination, you need to get the service account email for the second project ID, then give it the Storage Legacy Bucket Writer role.
You can then do the transfer using the snippets in the docs.
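As a rough sketch, the programmatic route with the google-cloud-storage-transfer Python client looks like this, assuming the permissions above are already in place (the project and bucket names are placeholders):

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

# One-off bucket-to-bucket transfer, billed to the destination project.
transfer_job = client.create_transfer_job(
    {
        "transfer_job": {
            "project_id": "destination-project-id",  # placeholder
            "description": "Copy archive from project A to project B",
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "transfer_spec": {
                "gcs_data_source": {"bucket_name": "project-a-bucket"},  # placeholder
                "gcs_data_sink": {"bucket_name": "project-b-bucket"},    # placeholder
            },
        }
    }
)

# Start the job immediately instead of waiting for a schedule.
client.run_transfer_job(
    {"job_name": transfer_job.name, "project_id": "destination-project-id"}
)
```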
If I create a service account in my project and give it to a third party, can that third party abuse it to create VM instances etc? Or is it only allowed to do things that I give it explicit permission to do?
In the "permissions" section of the Google developers console I can set the service account to "Can edit" or "Can view", but what do those mean?
If you give "edit" or "owner" permissions, the user can create, modify, or delete GCE VM instances (among other resources). If you only give "view" permissions, then they can't create, modify, or delete GCE VM instances.
However, you cannot give fine-grained permissions such as "user can only edit this VM instance, but not this other one".
Per Google Compute Engine docs:
Can View
Provides READ access:
Can see the state of your instances.
Can list and get any resource type.
Can Edit
Provides "Can View" access, plus:
Can modify instances.
On standard images released after March 22, 2012, can ssh into the project's instances.
Is Owner
Provides "Can Edit" access, plus:
Can change membership of the project.
Per Google Cloud Storage docs:
Project team members are given the following permissions based on their roles:
All Project Team Members
All project team members can list buckets within a project.
Project Editors
All project editors can list, create, and delete buckets.
Project Owners
All project owners can list, create, and delete buckets, and can also perform administrative tasks like adding and removing team members and changing billing. The project owners group is the owner of all buckets within a project, regardless of who may be the original bucket creator.
When you create a bucket without specifying an ACL, the project-private ACL is applied to the bucket automatically. This ACL provides additional permissions to team members, as described in default bucket ACLs.
Per Google Cloud SQL docs:
Team members may be authorized to have one of three levels of access:
“can View” (called Viewer in App Engine Console) allows read-only access.
“can Edit” (called Developer in App Engine Console) allows modify and delete access. This allows a developer to deploy the application and modify or configure its resources.
“is Owner” (called Owner in App Engine Console) allows full administrative access. This includes the ability to add members and set the authorization level of team members.