I am trying to load S3 data from a bucket in Account A into an RDS instance in Account B. I cannot use a resource policy; it needs to be done using IAM roles only.
I have a role created in Account A, and it has access to the S3 bucket.
I have a role created in Account B which allows AssumeRole from Account A.
How do I use these to load data from Account A to Account B?
You will need to write a custom loader because you need to assume 2 roles at the same time.
One way would be to create 2 programs that pipe the data.
$ read_bucket | write_rds
The read_bucket script would assume the Account A role and read the contents from the bucket. It would then print the data to stdout.
The write_rds script would assume the Account B role and read the contents from stdin and then write to RDS.
You can do this all inside one program, because when you assume a role you get back a set of temporary credentials that you include with each API request. Here is another answer specifically on temporary credentials.
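For example, here is a rough sketch of the single-program approach using the AWS SDK for Java. The role ARNs, bucket, and object key are placeholders, and the actual write into RDS is left out because it depends on your database engine (for example IAM database authentication plus a normal JDBC load):

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class CrossAccountLoader {
    public static void main(String[] args) {
        // Credentials provider that assumes the Account A role (S3 read access)
        AWSCredentialsProvider accountA = new STSAssumeRoleSessionCredentialsProvider.Builder(
                "arn:aws:iam::111111111111:role/account-a-s3-reader", "read-session").build();

        // Credentials provider that assumes the Account B role (RDS write access)
        AWSCredentialsProvider accountB = new STSAssumeRoleSessionCredentialsProvider.Builder(
                "arn:aws:iam::222222222222:role/account-b-rds-writer", "write-session").build();

        // Read the object from the Account A bucket with the Account A credentials
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().withCredentials(accountA).build();
        String data = s3.getObjectAsString("account-a-bucket", "export/data.csv");
        System.out.println("Read " + data.length() + " characters from Account A");

        // Use the accountB credentials for the load step, e.g. to generate an RDS IAM
        // authentication token before opening the database connection (omitted here).
    }
}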
Say I set up an AWS organization from account 111111111111, and then I create/invite 2 accounts, 222222222222 and 333333333333. As soon as I enabled SCPs, I see a FullAWSAccess policy attached to all members. I am trying to update each account programmatically with the AWS SDK, rather than having to switch roles in the console each time. For example:
AWSOrganizations client = AWSOrganizationsClientBuilder.standard().build();
ListAccountsResult result = client.listAccounts(new ListAccountsRequest().withMaxResults(10));
result.getAccounts()
      .stream()
      .forEach(account -> {
          // I am not sure what to do with the data below
          // account.getArn()
          // account.getId()
      });
Say I want each member account to put an S3 object like so:
s3.putObject(..)
Do I need to assume a role (AWS creates an OrganizationAccountAccessRole role by default) for each member account and then call the AWS service? Or am I missing something?
Your assumption is correct: in order to execute actions in other member accounts, you need to assume a role in that account first. AWS Organizations creates OrganizationAccountAccessRole in each newly created account, and this role has a trust policy that trusts the master account. So as long as you're authenticated to the master account with any role that has the sts:AssumeRole action, you can assume OrganizationAccountAccessRole in the target account and do what you need.
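Here is a rough sketch of that flow with the AWS SDK for Java v1 (matching the SDK used in the question). The member account ID, bucket name, key, and session name are placeholders:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.Credentials;

public class MemberAccountS3Writer {
    public static void putObjectInMemberAccount(String accountId, String bucket) {
        // STS client authenticated as the master (management) account
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.defaultClient();

        // Assume the default role that Organizations creates in each member account
        Credentials creds = sts.assumeRole(new AssumeRoleRequest()
                .withRoleArn("arn:aws:iam::" + accountId + ":role/OrganizationAccountAccessRole")
                .withRoleSessionName("org-automation"))
                .getCredentials();

        // Build an S3 client from the member account's temporary credentials
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicSessionCredentials(
                        creds.getAccessKeyId(), creds.getSecretAccessKey(), creds.getSessionToken())))
                .build();

        // This call now runs in the member account's context
        s3.putObject(bucket, "hello.txt", "hello from the management account");
    }
}

You can call this for each account.getId() returned by listAccounts in your loop above.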
As a best practice, you should create your own automation role in each account and use a dedicated automation account. This automation role, let's say "pipeline-role", will have limited permissions and can be assumed only from your automation account.
This way you reduce the need to use your master account, and you also make the automation role only as powerful as your automation needs, instead of relying on the full AdministratorAccess policy.
Usually there is a Compute Engine default service account that is created automatically by GCP. This account is used, for example, by VM agents to access different resources across GCP, and by default it has the roles/editor role.
Suppose I want to create a GCS bucket that can only be accessed by this default service account and no one else. I've looked into ACLs and tried to add an ACL to the bucket with this default service account's email, but it didn't really work.
I realized that I can still access the bucket and its objects from other accounts that have, for example, storage bucket read and storage object read permissions, and I'm not sure what I did wrong (maybe some default ACLs are present?).
My questions are:
Is it possible to limit access to just that default account? In that case, who will no longer be able to access it?
What would be the best way to do it? (I would really appreciate an example using the Storage API.)
There are still roles such as roles/storage.admin, and no matter what ACLs are put on the bucket, I could still access it if I had this role (or a higher role such as Owner), right?
Thanks!
I recommend that you not use ACLs (and Google recommends the same). It's better to switch the bucket to a uniform IAM policy.
There are 2 downsides to ACLs:
Newly created files don't get the ACL automatically, and you need to set it every time you create a new file.
It's difficult to know who does and doesn't have access with ACLs; the IAM service is better for auditing.
When you switch to uniform IAM access, the primitive Owner, Viewer, and Editor roles no longer have access to buckets (roles/storage.admin isn't included in these primitive roles). This can solve all the unwanted access in one click. Otherwise, as John said, remove all the IAM permissions on the bucket and on the project that grant access to the bucket, except for your service account.
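If it helps, here is a rough sketch with the google-cloud-storage Java client library. The bucket name and service account email are placeholders, and roles/storage.objectAdmin is just one possible role to grant:

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.BucketInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class LockDownBucket {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        String bucket = "my-bucket";

        // Switch the bucket to uniform bucket-level access so ACLs no longer apply
        storage.update(BucketInfo.newBuilder(bucket)
                .setIamConfiguration(BucketInfo.IamConfiguration.newBuilder()
                        .setIsUniformBucketLevelAccessEnabled(true)
                        .build())
                .build());

        // Grant the default Compute Engine service account object access on the bucket
        Policy policy = storage.getIamPolicy(bucket);
        storage.setIamPolicy(bucket, policy.toBuilder()
                .addIdentity(StorageRoles.objectAdmin(),
                        Identity.serviceAccount("PROJECT_NUMBER-compute@developer.gserviceaccount.com"))
                .build());
    }
}

You still have to review the project-level IAM bindings as mentioned above, because project-level roles such as roles/storage.admin keep their access regardless of what you set on the bucket.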
You can control access to buckets and objects using Cloud IAM and ACLs.
For example, to grant the service account WRITE access (R: READ, W: WRITE, O: OWNER) to the bucket using ACLs:
gsutil acl ch -u service-account@project.iam.gserviceaccount.com:W gs://my-bucket
To remove the service account's access to the bucket:
gsutil acl ch -d service-account@project.iam.gserviceaccount.com gs://my-bucket
If there are identities with roles such as roles/storage.admin at the project level in IAM, they will have access to all the GCS resources of the project. You might have to change those permissions to prevent them from having access.
I have created the roles and policies according to the AWS documentation for S3 cross-account access, and I'm able to list all the buckets and work with them, but only using the command line.
I need all the buckets in Account 'A' to be available to Account 'B', and the buckets from Account 'A' should be visible in the Account 'B' S3 console.
Is there a way for the Account 'A' buckets to appear in the Account 'B' console?
You can switch to a role in the AWS console. However, using the console requires more permissions than just listing S3 buckets, so you may find that you need to add more permissions to your cross-account role.
We (Account A) would like to programmatically trigger an Athena query (startQueryExecution) in a different AWS account (Account B); we use an assumed role to achieve this. After the Athena query completes, we expect the result to be written to an S3 bucket in our own AWS account (Account A). We managed to do so by setting IAM policies on both sides to allow B to write to A's S3 bucket.
However, it seems the S3 objects in Account A are still owned by Account B, and users/roles in Account A have no access to those objects.
I was thinking of two ways to fix this, but I cannot find any example of how to do either:
somehow make sure Athena writes to S3 with acl = bucket-owner-full-control
somehow change the S3 object ACL to bucket-owner-full-control after the object is created
Any ideas?
Athena currently does not give you the ability to specify the ACL. However, two workarounds are possible:
Instead of having Account A assume a role in Account B, have Account B grant Account A's account root/role access to Account B's Athena. Then Account A can use its own role/account to query data in Account B, and the results can still be accessed by Account A because Account A is the object owner.
Have an S3-triggered Lambda listening on the output S3 bucket. Once an object comes in, use SetObjectACL to grant bucket-owner-full-control on the object, as sketched below.
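A rough sketch of the second workaround as a Java Lambda handler. It assumes the function runs with credentials from the account that owns the objects (Account B in this setup), since only the object owner can change the object ACL; the bucket trigger and names are placeholders:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;

public class GrantBucketOwnerAcl implements RequestHandler<S3Event, Void> {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(S3Event event, Context context) {
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();
            // Grant the bucket owner full control over the newly written result object
            s3.setObjectAcl(bucket, key, CannedAccessControlList.BucketOwnerFullControl);
        });
        return null;
    }
}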
As of now there is no direct way to give access to the S3 files written in Account A. One way we are solving this problem is by writing the S3 file to an Account B bucket, having a policy that gives Account A access to the Account B S3 bucket, and then reading/processing the files from there. Very hacky, but it works.
I have a Redshift cluster in AWS account "A" and an S3 bucket in account "B". I need to unload data from the Redshift cluster in account A to the S3 bucket in account B.
I've already provided the necessary bucket policy and role policy to unload the data, and the data is getting unloaded successfully. Now the problem is that the owner of the file created by this unload is account A, while the file needs to be used by account B. On trying to access that object I am getting access denied. How do I solve this?
PS: ListBucket and GetObject permissions have been granted by the Redshift IAM policy.
This is what worked for me: chaining IAM roles.
For example, suppose Company A wants to access data in an Amazon S3 bucket that belongs to Company B. Company A creates an AWS service role for Amazon Redshift named RoleA and attaches it to their cluster. Company B creates a role named RoleB that's authorized to access the data in the Company B bucket. To access the data in the Company B bucket, Company A runs an UNLOAD (or COPY) command using an iam_role parameter that chains RoleA and RoleB. For the duration of the UNLOAD operation, RoleA temporarily assumes RoleB to access the Amazon S3 bucket.
More details here: https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html#authorizing-redshift-service-chaining-roles
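For reference, a minimal sketch of what the chained-role UNLOAD might look like when issued over JDBC (with the Redshift JDBC driver on the classpath); the cluster endpoint, credentials, table, bucket, and role ARNs are all placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ChainedRoleUnload {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:redshift://my-cluster.example.us-east-1.redshift.amazonaws.com:5439/dev";
        try (Connection conn = DriverManager.getConnection(url, "awsuser", "password");
             Statement stmt = conn.createStatement()) {
            // The iam_role parameter chains RoleA (attached to the cluster in account A)
            // and RoleB (authorized to access the account B bucket), comma-separated.
            stmt.execute(
                "UNLOAD ('select * from my_table') "
              + "TO 's3://company-b-bucket/unload/' "
              + "IAM_ROLE 'arn:aws:iam::111111111111:role/RoleA,arn:aws:iam::222222222222:role/RoleB'");
        }
    }
}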