I am trying to copy and restore a DB snapshot from one account to the other, but it seems as though I'm running into permission issues. Here's my process:
1. In AccountA, I am copying an automated, encrypted snapshot to a "manual" snapshot.
2. In AccountA, I am sharing this "manual" snapshot with AccountB.
3. In AccountA, I am also sharing the KMS key that was used to create this snapshot with AccountB.
4. In AccountB, I have a user set up with API access and am attempting to run copy-db-snapshot.
In step 4 (from AccountB), I am providing the KMS Key ID that belongs to AccountA. I am getting the following error when trying to run copy-db-snapshot:
An error occurred (KMSKeyNotAccessibleFault) when calling the CopyDBSnapshot operation: The target KMS key [arn:aws:kms:us-east-1::key/<my_key_id>] does not exist, is not enabled or you do not have permissions to access it.
After reviewing the KMS key in AccountA, I noticed that, while I have shared it with AccountB, the grant appears to apply only to AccountB's "root" account, and I am unable to change that for some strange reason.
Is it not possible to restore a shared RDS snapshot from AccountA to a user account in AccountB other than the root account, or am I doing something incorrectly?
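For reference, the step-4 call from AccountB looks roughly like the following (every identifier below is a placeholder). Note that sharing a KMS key with another account only grants that account's root principal; an individual IAM user in AccountB would then also need an IAM policy of its own delegating use of AccountA's key, sketched after the command:

aws rds copy-db-snapshot \
  --region us-east-1 \
  --source-db-snapshot-identifier arn:aws:rds:us-east-1:<accounta-id>:snapshot:<shared-manual-snapshot> \
  --target-db-snapshot-identifier copied-snapshot \
  --kms-key-id <kms-key-arn>

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUseOfAccountAKey",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:Decrypt",
        "kms:CreateGrant"
      ],
      "Resource": "arn:aws:kms:us-east-1:<accounta-id>:key/<my_key_id>"
    }
  ]
}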
I have set up an RDS proxy for an Aurora DB. I am able to connect to the RDS proxy endpoint but am not able to perform any operations.
For example, if I run show processlist; I get the error below:
ERROR 1045 (28000): Database Access denied for user 'admin'@'ip-address' (using password: YES)
Note: I am able to access the RDS endpoint directly and perform all operations.
Thanks in advance!
I encountered this same issue. Turns out it was related to the auto-generated IAM role permissions.
The secrets manager had 2 user accounts added to it (with verified correct credentials), and both were added to the RDS proxy. However, only the first user account worked. The second user account would get a permission denied error.
Checking the CloudWatch logs, I saw a message similar to:
Credentials couldn't be retrieved. The IAM role "arn:aws:iam::ACCOUNT:role/service-role/rds-proxy-role-TIMESTAMP" is not authorized to read the AWS Secrets Manager secret with the ARN "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECRET_NAME"
When I looked at the IAM policy for the rds-proxy-role-TIMESTAMP role, it had only been granted access to the secret for the first user. This appears to be an issue with the creation of the IAM role when the proxy is set up.
To resolve it, I modified the policy for the rds-proxy-role-TIMESTAMP role to give it access to the ARN for the second user's secret as well. After a few minutes, I was able to log in as the second user.
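A rough sketch of the resulting inline policy on the proxy role, with placeholder ARNs for both users' secrets and for the KMS key that Secrets Manager uses (the auto-generated policy typically follows this shape already; the fix is simply listing both secret ARNs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetSecretValue",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": [
        "arn:aws:secretsmanager:REGION:ACCOUNT:secret:FIRST_USER_SECRET_NAME",
        "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECOND_USER_SECRET_NAME"
      ]
    },
    {
      "Sid": "DecryptSecretValue",
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:REGION:ACCOUNT:key/KEY_ID",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "secretsmanager.REGION.amazonaws.com"
        }
      }
    }
  ]
}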
If you are getting a Database access denied error, please check the user's permissions in RDS first.
If you can connect to RDS directly with these credentials, check that the credentials in Secrets Manager are the same.
Then check whether your RDS Proxy role's policy has permission to access all of your Secrets Manager secrets, as I mention here: https://stackoverflow.com/a/73649818/4642536
Stupidly enough, I deleted my default AWS IAM user by mistake!
I used it, for example, to run aws s3 sync ...
Now the error I get is:
$ aws s3 sync build/ s3://mybucket.mydomain.com
fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.
Is there a way to recover?
I think I need instructions on how to create a new user with sufficient permissions so that my local AWS CLI can run aws s3 sync ...
UPDATE: I just created a new user in my AWS console and added a policy (to start with) to list my bucket. The problem is I don't know how to point my AWS CLI at that new user... :-(
If you are the only person using this AWS Account, then add the AdministratorAccess Policy to your IAM User. That will grant complete access.
Then, in the Security credentials tab of the IAM User click Create access key. Copy the Access Key and Secret Access Key.
On the command line, run aws configure and provide those keys to configure the user.
Test with: aws s3 ls
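A rough end-to-end sketch of those steps from the command line (the key values shown are placeholders):

aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-1
# Default output format [None]: json

aws s3 ls
aws s3 sync build/ s3://mybucket.mydomain.com

If you would rather keep the old profile untouched, run aws configure --profile newuser instead and add --profile newuser to the s3 commands.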
Is there a way to decrypt the AWS managed keys?
AWS managed keys have been applied by default to root volumes/EBS and AMIs, which is preventing the AMIs/snapshots from being shared with other AWS accounts and regions.
How can I create an unencrypted AMI or decrypt the AWS managed keys?
It is possible to share encrypted AMIs across accounts, which I'll detail below.
To answer the original question: you can't decrypt an encrypted AMI and you can't decrypt AWS managed keys.
What you can do is create a CMK (Customer Master Key), re-encrypt your image with the new key, and share it with the account(s) you wish.
If you are starting with snapshots encrypted under the default EBS CMK (with the key alias aws/ebs), copy those snapshots and re-encrypt them under a custom CMK you created in KMS. You can then modify the key policy on the custom CMK to grant access to the key to any number of external accounts.
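As a sketch of that copy-and-re-encrypt step with the CLI (snapshot ID, region, and key ARN are placeholders):

aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted \
  --kms-key-id arn:aws:kms:us-east-1:<source-account-id>:key/<custom-cmk-id> \
  --description "Copy re-encrypted under the custom CMK"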
1. Create an AWS KMS customer master key (CMK).
2. Create a policy in the source account with permissions to share the AMI, using the ec2 ModifyImageAttribute operation.
3. Add the target account to the CMK created in step 1 (in the Other AWS accounts subsection).
4. Create a policy in the target account allowing the AWS KMS operations DescribeKey, ReEncrypt*, CreateGrant, and Decrypt on that CMK.
You can then share the AMI using a CLI command like the following:
aws ec2 modify-image-attribute --image-id <ami-12345678> --launch-permission "Add=[{UserId=<target account number>}]"
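For step 3, the Other AWS accounts option in the console amounts to adding a key policy statement on the custom CMK roughly like the following (the account ID is a placeholder). Granting :root delegates access to the target account as a whole; individual users or roles there still need their own IAM policy allowing these actions, which is what step 4 provides.

{
  "Sid": "AllowUseOfTheKeyByTheTargetAccount",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<target account number>:root"
  },
  "Action": [
    "kms:DescribeKey",
    "kms:ReEncrypt*",
    "kms:CreateGrant",
    "kms:Decrypt"
  ],
  "Resource": "*"
}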
The attached references go into much greater detail about this process.
References
How To Share Encrypted AMIs Across Accounts
How To Create a Custom AMI with Encrypted EBS and Share It
I recently enabled default ebs encryption as mentioned here: https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/. Afterwards, when attempting to launch a beanstalk instance, I get a generic 'ClientError' and the instance immediately terminates. If I disabled default encryption it works fine.
Does anyone know what changes are required to get beanstalk to work with a customer managed encryption key? I suspected it was a permissions issue so I temporarily gave the beanstalk roles full admin access but that did not solve the issue. Is there something else I am missing?
I saw this related question but it was before default EBS encryption was released and I was hoping to avoid having to copy and encrypt the AMI manually...
If you are using a custom CMK, you have to update the key policy and assign permissions explicitly. For EBS encryption, a principal usually requires the following permissions:
kms:CreateGrant
kms:Encrypt
kms:Decrypt
kms:ReEncrypt*
kms:GenerateDataKey*
kms:DescribeKey
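As a sketch, the key policy usually needs statements along the following lines for the principal that actually launches the instances. The Auto Scaling service-linked role shown here is an assumption (Elastic Beanstalk launches instances through an Auto Scaling group); replace the ARN with whichever principal CloudTrail shows being denied:

{
  "Sid": "AllowServiceLinkedRoleUseOfTheCMK",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::ACCOUNT_ID:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
},
{
  "Sid": "AllowAttachmentOfPersistentResources",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::ACCOUNT_ID:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
  },
  "Action": "kms:CreateGrant",
  "Resource": "*",
  "Condition": {
    "Bool": {
      "kms:GrantIsForAWSResource": "true"
    }
  }
}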
The best way to troubleshoot key permission issues is to check the CloudTrail event history. Filter the events by event source and check whether there are any "access denied" errors.
Filter: Event source: kms.amazonaws.com
You can see which action is denied there and adjust the key policy accordingly. The "User name" field in the event gives you a hint for determining the ARN of the principal to use in the policy.
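The same filter works from the CLI if that is more convenient (region and start time below are placeholders); look for an AccessDenied errorCode in the returned events:

aws cloudtrail lookup-events \
  --region us-east-1 \
  --start-time 2021-01-01T00:00:00Z \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=kms.amazonaws.com \
  --query 'Events[].CloudTrailEvent'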
In your case, it is very likely that one of the service-linked roles requires permission to access the KMS key. There is a good explanation of the key permissions needed for the Auto Scaling service-linked role here.
I have an HDP cluster on AWS, and I also have an S3 bucket in another account. My Hadoop version is Hadoop 3.1.1.3.0.1.0-187.
Now I want to read from the S3 bucket in the other account, process the data, and then write the result to my own S3 bucket (in the same account as the cluster).
But as the HDP guide here tells, I can configure keys for only one account, either mine or the other one.
In my case I want to configure keys for both accounts, so how do I do that?
For security reasons, the other account cannot change its bucket policy to add the IAM role that is created in my account, hence I tried to access it like below:
Configured the keys of the other account
Added the IAM role of my account (which has an access policy for my bucket)
But I still got the error below when I tried to write to my account's S3 bucket from Spark:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3
What you need is to use the EC2 instance profile role. It is an IAM role that is attached to your instance: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
You first create a role with permissions that allow S3 access. Then you attach that role to your HDP cluster (an EC2 Auto Scaling group and EMR can both do that). No IAM access key configuration is needed on your side, although AWS still does that for you in the background. This is the "outbound" S3 access part.
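A rough sketch of that "outbound" part with the CLI, using placeholder names (EMR and most provisioning tools will create these for you, so treat this purely as an illustration):

# trust policy that lets EC2 assume the role
aws iam create-role --role-name hdp-s3-access \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# broad S3 access to start with; narrow it down once things work
aws iam attach-role-policy --role-name hdp-s3-access \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# wrap the role in an instance profile and attach it to an instance
aws iam create-instance-profile --instance-profile-name hdp-s3-access
aws iam add-role-to-instance-profile --instance-profile-name hdp-s3-access --role-name hdp-s3-access
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=hdp-s3-access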
The 2nd step is to set up the bucket policy to allow cross-account access: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
You will need to do this for each bucket in your different accounts. This is basically the "inbound" s3 access permission part.
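A minimal sketch of such a bucket policy on the other account's bucket, granting the cluster's instance profile role (bucket name, account ID, and role name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountReadFromClusterRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<cluster-account-id>:role/hdp-s3-access"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::external-bucket",
        "arn:aws:s3:::external-bucket/*"
      ]
    }
  ]
}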
You will encounter a 400 error if any part of your access (e.g., your instance profile role's permissions, the S3 bucket ACL, the bucket policy, the public access block settings, etc.) is denied somewhere in the permission chain. There are many more layers on the "inbound" side. So, to start getting things working, if you are not an IAM expert, try a very open policy first (use the '*' wildcard) and then narrow things down.
If I've understood right
you want your EC2 VMs to access an S3 bucket to which the IAM role doesn't have access
you have a set of AWS credentials for the external S3 bucket (access key and secret key)
HDP3 has a default auth chain of, in order:
per-bucket secrets. fs.s3a.bucket.NAME.access.key, fs.s3a.bucket.NAME.secret.key
config-wide secrets fs.s3a.access.key, fs.s3a.secret.key
env vars AWS_ACCESS_KEY and AWS_SECRET_KEY
the IAM Role (it does an HTTP GET to the 169.something server which serves up a new set of IAM role credentials at least once an hour)
What you need to try here is to set up some per-bucket secrets for only the external source (either in a JCEKS file on all nodes, in core-site.xml, or in the Spark defaults). For example, if the external bucket were s3a://external, you'd have:
spark.hadoop.fs.s3a.bucket.external.access.key AKAISOMETHING
spark.hadoop.fs.s3a.bucket.external.secret.key SECRETSOMETHING
HDP3/Hadoop 3 can handle more than one secret in the same JCEKS file without problems (HADOOP-14507; my code). Older versions let you put username:secret in the URI, but that's such a security troublespot (everything logs those URIs as they aren't viewed as sensitive) that the feature has been cut from Hadoop now. Stick to the JCEKS file with a per-bucket secret, falling back to the IAM role for your own data.
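As a sketch, populating that JCEKS file with per-bucket secrets via the hadoop credential command (keystore path and key values are placeholders):

hadoop credential create fs.s3a.bucket.external.access.key \
  -value AKAISOMETHING -provider jceks://hdfs@nn/user/hadoop/s3.jceks
hadoop credential create fs.s3a.bucket.external.secret.key \
  -value SECRETSOMETHING -provider jceks://hdfs@nn/user/hadoop/s3.jceks

# then point Hadoop/Spark at the keystore, e.g. in core-site.xml or the Spark defaults:
# hadoop.security.credential.provider.path = jceks://hdfs@nn/user/hadoop/s3.jceks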
Note you can fiddle with the authentication list for ordering and behaviour: if you add the TemporaryAWSCredentialsProvider then it'll support session keys as well, which is often handy.
<property>
<name>fs.s3a.aws.credentials.provider</name>
<value>
org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
</value>
</property>
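As a further sketch, the per-bucket options can also carry session credentials for just the external bucket on a Spark job, using the provider class from the XML above (the key, secret, token values and my_job.py are placeholders):

spark-submit \
  --conf spark.hadoop.fs.s3a.bucket.external.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider \
  --conf spark.hadoop.fs.s3a.bucket.external.access.key=AKAISOMETHING \
  --conf spark.hadoop.fs.s3a.bucket.external.secret.key=SECRETSOMETHING \
  --conf spark.hadoop.fs.s3a.bucket.external.session.token=TOKENSOMETHING \
  my_job.py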