I have created an AWS Neptune DB.
However, I now want to encrypt it. As per the AWS documentation, I should take a snapshot and enable encryption on the new DB instance while restoring.
However, the "Enable encryption" check-box is disabled (greyed out) in my account.
Am I missing anything? Does it have anything to do with IAM roles/permissions?
I am performing all these steps from the AWS Management Console.
There are a few instance types that do not support encryption; please check https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Availability to see whether you are using one of them.
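Assuming your instance type does support encryption, the snapshot-and-restore flow can also be done from the CLI instead of the console. A sketch of the two steps; the cluster identifiers and key ARN below are placeholders:

```shell
# Snapshot the existing (unencrypted) cluster
aws neptune create-db-cluster-snapshot \
    --db-cluster-identifier my-neptune-cluster \
    --db-cluster-snapshot-identifier my-neptune-snapshot

# Restore it as a new cluster, encrypted with the given KMS key
aws neptune restore-db-cluster-from-snapshot \
    --db-cluster-identifier my-neptune-cluster-encrypted \
    --snapshot-identifier my-neptune-snapshot \
    --engine neptune \
    --kms-key-id arn:aws:kms:us-east-1:111122223333:key/your-key-id
```

After verifying the new cluster, you would repoint your application and delete the old, unencrypted one.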
I was going by this update for EKS https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/ and this blog from AWS https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/.
This line is very cryptic and never confirms whether EKS encrypts secrets by default:
In EKS, we operate the etcd volumes encrypted at disk-level using AWS-managed encryption keys.
I did understand that:
KMS with EKS provides envelope encryption, i.e. encrypting the DEK using a CMK.
But it is never mentioned whether EKS encrypts data by default if I don't use this feature (which of course incurs KMS costs),
because Kubernetes by default does not encrypt data. Source:
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
I think I found it; the blog and the update post by AWS are very cryptic.
According to the docs and the console:
All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS KMS.
Using KMS with EKS adds a further layer of envelope encryption. It allows you to deploy a defense-in-depth strategy for Kubernetes applications by encrypting Kubernetes secrets with a KMS key that you define and manage.
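The envelope-encryption idea - a data encryption key (DEK) encrypts the secret, and the DEK itself is wrapped by the KMS key - can be sketched as follows. The XOR "cipher" is a toy stand-in for illustration only; real systems use AES-GCM, and the CMK never leaves KMS:

```python
import os

def xor_bytes(data, key):
    """Toy cipher for illustration only; real systems use AES-GCM."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# 1. Generate a data encryption key (DEK) and encrypt the secret with it.
dek = os.urandom(32)
secret = b"db-password=hunter2"
encrypted_secret = xor_bytes(secret, dek)

# 2. Wrap (encrypt) the DEK under the KMS key; only the wrapped DEK is stored.
cmk = os.urandom(32)  # stands in for the key material held inside KMS
wrapped_dek = xor_bytes(dek, cmk)

# etcd would store (encrypted_secret, wrapped_dek), never the plaintext DEK.
# 3. To read back: unwrap the DEK via KMS, then decrypt the secret locally.
recovered = xor_bytes(encrypted_secret, xor_bytes(wrapped_dek, cmk))
print(recovered.decode())  # db-password=hunter2
```

The point of the extra layer: even someone who can read the raw etcd data still needs a KMS Decrypt call (which is access-controlled and CloudTrail-logged) to recover the plaintext.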
Short answer: yes, it is encrypted at rest.
The answer is yes, the data stored by etcd is encrypted at rest.
Encrypt secrets at rest in etcd
This encryption is in addition to the EBS volume encryption that is enabled by default for all data (including secrets) stored in etcd as part of an EKS cluster. So when this feature is enabled, secrets are stored within etcd in encrypted form using your KMS key.
The console's info tab contains further information:
AWS EKS is a managed Kubernetes offering. Kubernetes control-plane components such as the API server and etcd are installed, managed, and upgraded by AWS. Hence you can neither see these components nor exec into them.
The article below also shows how to get CloudTrail events when Kubernetes secrets are decrypted using KMS.
eks-encryption
Ensure AWS EKS cluster has secrets encryption enabled
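If you do want the additional KMS layer, it can be enabled at cluster creation time, for example with an eksctl config file like the following sketch (cluster name, region, and key ARN are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
secretsEncryption:
  # KMS key used to envelope-encrypt Kubernetes secrets inside etcd
  keyARN: arn:aws:kms:us-east-1:111122223333:key/your-key-id
```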
I have a general question about the RDS feature within AWS Secrets Manager. When I get the secret, it looks like this:
Does this mean that these credentials will work directly, or is the password encrypted? If I wanted to sign into my database over a connection, which credentials would I use, and do these credentials auto-rotate with the rotation feature?
I assume you mean the RdsDataClient used to access a database such as a Serverless Amazon Aurora instance.
To successfully connect to the database using the RdsDataClient object, you must set up an AWS Secrets Manager secret that is used for authentication. For information, see Rotate Amazon RDS database credentials automatically with AWS Secrets Manager.
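To answer the first part of the question: once Secrets Manager returns the secret, its fields are plaintext JSON - the service decrypts it with KMS server-side before returning it - so a database driver can use the values directly. A sketch with illustrative values, following the field names of the standard RDS secret template:

```python
import json

# Example SecretString as returned by Secrets Manager's GetSecretValue
# (values here are made up; real RDS secrets follow this field layout).
secret_string = json.dumps({
    "username": "admin",
    "password": "s3cr3t",
    "engine": "mysql",
    "host": "mydb.cluster-abc.us-east-1.rds.amazonaws.com",
    "port": 3306,
    "dbname": "mydb",
})

secret = json.loads(secret_string)

# These values can be passed directly to a database driver:
conn_params = {
    "host": secret["host"],
    "port": secret["port"],
    "user": secret["username"],
    "password": secret["password"],
}
print(conn_params["user"])  # admin
```

With rotation enabled, the stored values change on each rotation cycle, so fetch the secret at connect time rather than caching the password in your application.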
To see an AWS tutorial that shows this concept and the corresponding code, see this example that uses the AWS SDK for Kotlin. You will need these values to make a successful connection:
private val secretArnVal = "<Enter the secret manager ARN>"
private val resourceArnVal = "<Enter the database ARN>"
See the full example here:
Creating the Serverless Amazon Aurora item tracker application using the Kotlin RdsDataClient API
I just tested this again (it has been a while since it was developed), and it works perfectly.
We will port this example to other supported programming languages too, such as the AWS SDK for Java.
UPDATE
You only need to use Secrets Manager when using the RdsDataClient. As mentioned in that tutorial, the RdsDataClient object is only supported for an Aurora Serverless DB cluster or an Aurora PostgreSQL database. If you are using MySQL on RDS, you cannot use the RdsDataClient object; you would use a supported JDBC API instead.
I know we can select a KMS key (customer managed or AWS managed) when creating our RDS database.
However, I find the documentation quite vague about the different processes, so I have the following questions:
Does it mean that only one data key will be used to encrypt everything in the whole database?
Where exactly is the encrypted version of the data key stored?
When does RDS decrypt the encrypted data key in order to use it?
How often does RDS need to make an API call to KMS to decrypt the encrypted copy of the data key it keeps?
Does it mean that only one data key will be used for the whole database
The documentation really doesn't specify any details. Based on AWS best practices and other documentation, I'd assume the data key is cached, reused for a certain time, and then regenerated.
However, the details are not publicly available. All the storage encryption happens under the hood and is not visible to the client.
How often does RDS need to make an API call to KMS to decrypt the encrypted copy of the data key it keeps?
AWS KMS calls are logged in CloudTrail, and you will see the calls on your bill as well, at least for a CMK (I'm not sure how it works for the default service key).
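How RDS does this internally is not documented, but the caching pattern assumed above can be sketched like this. All names are illustrative, and the XOR wrapping is a toy stand-in for what KMS does server-side:

```python
import os
import time

class ToyKms:
    """Stand-in for KMS: wraps a data key under a master key.
    XOR here is illustrative only; real KMS keeps the master key server-side."""
    def __init__(self):
        self.master = os.urandom(32)
        self.calls = 0  # counts billable/CloudTrail-visible API calls

    def generate_data_key(self):
        self.calls += 1
        plaintext = os.urandom(32)
        ciphertext = bytes(a ^ b for a, b in zip(plaintext, self.master))
        return plaintext, ciphertext

    def decrypt(self, ciphertext):
        self.calls += 1
        return bytes(a ^ b for a, b in zip(ciphertext, self.master))

class DataKeyCache:
    """Caches the plaintext data key for ttl seconds so that
    every disk I/O does not require a KMS round trip."""
    def __init__(self, kms, encrypted_key, ttl=300.0):
        self.kms, self.encrypted_key, self.ttl = kms, encrypted_key, ttl
        self._plaintext, self._expires = None, 0.0

    def key(self):
        now = time.monotonic()
        if self._plaintext is None or now >= self._expires:
            self._plaintext = self.kms.decrypt(self.encrypted_key)
            self._expires = now + self.ttl
        return self._plaintext

kms = ToyKms()
plaintext, encrypted = kms.generate_data_key()  # 1 KMS call at creation
cache = DataKeyCache(kms, encrypted)
for _ in range(1000):
    cache.key()          # only the first lookup hits KMS
print(kms.calls)         # 2: one GenerateDataKey + one Decrypt
```

This matches what you see in CloudTrail for encrypted volumes generally: a burst of Decrypt calls at attach/startup time rather than one call per read or write.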
Given already deployed AWS resources that use the default AWS managed keys, is it possible to change the default encryption key from AWS managed to a Customer Managed Key (CMK)?
Resources in question:
EFS
FSx
Thanks!
I don't think you can change it; at least the API documentation doesn't offer this option.
EFS:
https://docs.aws.amazon.com/efs/latest/ug/API_UpdateFileSystem.html
FSx:
https://docs.aws.amazon.com/fsx/latest/APIReference/API_UpdateFileSystem.html
We have an EC2 instance that comes up as part of an Auto Scaling configuration. This instance can retrieve AWS credentials using the IAM role assigned to it. However, the instance needs additional configuration to get started, some of which is sensitive (passwords to non-EC2 resources) and some of which is not (configuration parameters).
It seems that the best practice from AWS is to store instance configuration in S3 and retrieve it at run-time. The problem I have with this approach is that the configuration sits unprotected in an S3 bucket - an incorrect policy may expose it to parties who were never meant to see it.
What is the best practice for accomplishing my objective so that the configuration data stored in S3 is also encrypted?
PS: I have read this question but it does not address my needs.
[…] incorrect policy may expose it to parties who were never meant to see it.
Well, then it's important to ensure that the policy is set correctly. :) Your best bet is to automate your deployments to S3 so that there's no room for human error.
Secondly, you can always encrypt the data before pushing it to S3 and then decrypt it on the instance when the machine spins up.
AWS does not provide clear guidance for this situation, which is a shame. This is how I am going to architect the solution:
The developer box encrypts the per-instance configuration blob using the public portion of an asymmetric keypair and places it in an S3 bucket.
Restrict access to the S3 bucket using an IAM policy.
Bake the private portion of the keypair into the AMI.
Apply an IAM role to the EC2 instance and launch it from the AMI.
The EC2 instance can download the configuration from S3 (thanks to the IAM role) and decrypt it (thanks to the private key baked into the AMI).
The public key needs no protection at all, so nothing secret ever has to be shipped from the developer box. If the private key is compromised (e.g. the EC2 instance is rooted), then the attacker can decrypt the contents of the S3 bucket - but at that point they already have root access to the instance and can read the configuration directly from the running service.
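The flow above can be demonstrated end-to-end with OpenSSL. Both keys are generated here for the demo; in the real setup only the public key lives on the developer box, the private key is baked into the AMI, and the encrypted blob travels through S3 (bucket name below is a placeholder). Note that raw RSA can only encrypt a payload smaller than the key size (about 245 bytes for a 2048-bit key), so larger configs need hybrid encryption:

```shell
# One-time setup: generate the keypair
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

# Developer box: encrypt the config with the PUBLIC key, then upload
printf '%s' '{"db_password":"hunter2"}' > config.json
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in config.json -out config.enc
# aws s3 cp config.enc s3://my-config-bucket/config.enc   (placeholder bucket)

# Instance at boot: download, then decrypt with the PRIVATE key from the AMI
# aws s3 cp s3://my-config-bucket/config.enc .
openssl pkeyutl -decrypt -inkey priv.pem -in config.enc
```

The last command prints the recovered JSON; in practice you would redirect it into the service's config directory with tight file permissions.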