I am a beginner with AWS and I have a question about EBS volumes. I know that when we create an EBS volume, there is an option to enable encryption (the default is unencrypted). From a security standpoint it is better to enable encryption, so why is EBS not forced to be encrypted? What are the use cases/reasons for choosing an unencrypted EBS volume?
My guess is that it would be because Amazon EBS encryption was not always available. It was a feature added at some point, so the ability to use a non-encrypted volume remains.
Encrypted volumes also make some tasks more difficult, such as sharing AMIs publicly or between accounts, so there are legitimate reasons to offer unencrypted volumes.
Therefore, it would not be a good idea to "force" encryption.
However, you are welcome to force encryption within your organization, but be aware that there may be times when you do not want it activated.
This likely comes down to the fact that encryption changes the way users interact with their resources (and it is technically a breaking change, since the previous default was unencrypted volumes), so users should understand these changes before they start using encryption.
Encryption in AWS is actively encouraged, but enabling it for services such as EBS changes some workflows. For example, snapshots will be encrypted, so if you need to migrate them between regions (or accounts) for DR, there are additional steps involved.
Regarding security: yes, encryption is better, primarily at the physical level of AWS. It means that should anyone gain physical access to the storage, they will not be able to read the data without access to the key used to encrypt the volume. However, should someone SSH into your server, everything will behave as normal.
AWS has added a feature for users who want this to be the default: you need to opt in to default encryption to enable it.
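For reference, that opt-in is a per-region setting that can be toggled with the AWS CLI; a minimal sketch (the region name is just an example):

```shell
# Enable EBS encryption by default for all new volumes in this region.
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Verify the current setting for the region.
aws ec2 get-ebs-encryption-by-default --region us-east-1
```

Note that this must be repeated in every region where you create volumes; it does not change existing unencrypted volumes.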
We have an EC2 instance running in AWS that holds our ML algorithms and data. We have also hosted a web-based interface on that machine.
Now there is no new development happening on that EC2 instance. We would like to terminate our AWS subscription for a short period of time (to reduce costs and explore other cloud services). Most importantly, we want to be in a position where we can purchase a new EC2 instance under a fresh AWS subscription, use the backup we take now, and resume all operations (web backend, SMS services for our app hosted in AWS, etc.).
What is the best way to do it? Is temporary termination of AWS subscription advisable?
There is no concept of an "AWS Subscription". AWS is charged on-demand, which means you only pay when you use resources.
If you temporarily do not want the Amazon EC2 instance, you could:
Stop the instance, which is like turning off the power. You will not be charged for the instance, but you will still pay for the disk storage attached to the instance. You can simply Start the instance again when you wish to use it. You will only be charged while the instance is running. OR
Create an image of the instance, then terminate the instance. This will create an Amazon Machine Image (AMI), which contains a copy of the disks. You can then launch a new Amazon EC2 instance from the AMI when you wish to use it again. This is a lower-cost option compared to simply stopping the instance, but it takes more effort to stop/start.
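Both options above can be sketched with the AWS CLI (the instance ID and image name are placeholders):

```shell
# Option 1: stop the instance (EBS storage is still billed), start it later.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Option 2: create an AMI from the instance, then terminate it.
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "my-backup-image" --no-reboot
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

With option 2 you would later launch a new instance from the saved AMI when you need it again.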
It is quite common for companies to stop Amazon EC2 instances at night or over the weekend to reduce costs while they are not needed.
EDIT: I thought of a third option, tested it, and it is not worth it. It would involve creating an image from the EC2 instance, converting that image to a VM image, and storing the VM image in S3. There may be some advantages to this, but I do not see them.
I think you have two options, both of them very reasonably priced.

If you can separate the data from the operating system, then your best option would be to use an S3 bucket as a file system within the EC2 instance. Your EC2 instance would use this bucket to store all your "ML algorithms and data" and possibly even your "web-based interface". Whenever you decide that you no longer need the processing capacity of the EC2 instance, you would unmount the S3 bucket file system from the instance and terminate it. After configuring an appropriate lifecycle rule for the S3 bucket, the data would transition to Glacier, or even Glacier Deep Archive [you should consider the different long-term storage options]. In the future, whenever you want to work with your data again, you would move it from Glacier back to S3, create a new EC2 instance, install your applications, mount the S3 bucket as a file system, and you would have access to all your data. I think this is your least expensive option and the one with the shortest recovery time objective. To implement it, look at my answer to this question; everything you need to use an S3 bucket as a regular folder inside an EC2 instance is there.
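As a sketch of that first option, the bucket can be mounted with the s3fs-fuse tool (the bucket name and mount point are placeholders; s3fs must be installed separately):

```shell
# Mount the bucket as a regular folder, using the instance's IAM role
# for credentials.
mkdir -p /mnt/ml-data
s3fs my-ml-bucket /mnt/ml-data -o iam_role=auto -o allow_other

# Unmount before terminating the instance.
fusermount -u /mnt/ml-data
```

Keep in mind that S3 is object storage, so file-system semantics (locking, random writes) are limited compared to EBS.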
The second option provides an integrated solution, meaning the operating system and the data stay together, and allows you to restore everything as it was the day you stopped processing your data. It's made up of the following cycle:
Shut down your EC2 instance and make a note of all its specs [you will need them further down].
Export your instance to a virtual image, VMDK for example, and store it in your S3 bucket. Something like this:
aws ec2 create-instance-export-task --instance-id i-0d54b0682aa3998a0 \
    --target-environment vmware \
    --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=sm-vm-backup,S3Prefix=vms
Configure an appropriate lifecycle rule for the S3 bucket so that it transitions to Glacier, or even Glacier Deep Archive.
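That lifecycle step could look like this with the AWS CLI (the prefix and transition period are assumptions; the bucket name matches the export command above):

```shell
# lifecycle.json: move objects under vms/ to Deep Archive after 30 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-vm-exports",
      "Status": "Enabled",
      "Filter": { "Prefix": "vms/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket sm-vm-backup --lifecycle-configuration file://lifecycle.json
```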
Terminate the EC2 instance.
In the future you will need to do the inverse, so you will first need to restore the archived S3 object [make sure you can live with the time AWS needs to do this].
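The restore request might look like this (the object key and restore window are placeholders; a Deep Archive restore can take many hours, depending on the retrieval tier):

```shell
# Request a temporary (7-day) restore of the archived export.
aws s3api restore-object \
    --bucket sm-vm-backup \
    --key vms/my-export.ova \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'

# Check whether the restore has completed.
aws s3api head-object --bucket sm-vm-backup --key vms/my-export.ova
```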
Import the virtual image as an EC2 AMI, something like this [this is not complete; you will need some of the options you saved above]:
aws ec2 import-image --disk-containers \
    Format=ova,UserBucket="{S3Bucket=sm-vm-backup,S3Key=vmsexport-i-0a1c382e740f8b0ee.ova}"
Create an EC2 instance based on the image and you're back in business.
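The final step is a plain run-instances call (the AMI ID, instance type and key name are placeholders; use the specs you noted in step 1):

```shell
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.large \
    --key-name my-keypair \
    --count 1
```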
Obviously you should do some trial runs, and even automate the entire process if it is something that will be done frequently. I have a feeling, based on what you said, that the first option is better, provided you can easily reinstall whatever applications you use.
I'm assuming that you launched an EC2 instance from a base Amazon Machine Image and then added your own software and models to it, as opposed to launching an instance from an AWS Marketplace offering.
The simplest thing to do is to create an Amazon Machine Image (AMI) from your running EC2 instance. That will capture the current state of the instance and persist it in your AWS account. Then you can terminate the instance. Later, when you want to recreate it, launch a new instance, selecting the saved AMI instead of a standard AMI.
An alternative is to avoid the need to capture machine state at all, by using standard DevOps practices to revision-control everything you need to recreate the state of a running machine.
Note that there are costs associated with an AMI, though they are minimal ($0.05 per GB-month of data stored, for example).
I had contacted AWS customer care regarding this issue. Given below is the response I received. Please add your comments on which option might be good for me.
Note: I acknowledge the AWS customer care team for their help.
I understand that you require some information on cost saving for your Instance since you will not be utilizing the service for a while.
To assist you with this I would recommend checking out the Instance Stop/Start documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
When you stop an Instance, you do not lose any data and you are not charged for the resources any further. However, please keep in mind that you will still be charged for any EBS Storage Volumes attached to the stopped Instance(s).
I also recommend checking out the links below on how you can reduce your costs:
https://aws.amazon.com/premiumsupport/knowledge-center/reduce-aws-bill/
https://aws.amazon.com/blogs/compute/10-things-you-can-do-today-to-reduce-aws-costs/
That being said, please note that as I am in the billing department, for the best assistance with the various plans you will require the assistance of our Sales Team. The Sales Team will be able to assist with ways to save while maintaining your configurations. You will be able to reach the Sales Team here: https://aws.amazon.com/websites/contact-us/. Once you have completed the details in the link, a member of the team will be in touch with you at their soonest.
I am using a shared AWS account (everyone in the account has root access) to deploy servers on an EC2 instance that I created. Is there a way to prevent anyone else that has access to the same AWS account from accessing the content that I put on the EC2 instance?
As far as I know, creating a separate key pair won't work because someone else can snapshot the instance and launch it with another key pair that they own.
I am using a shared AWS account (everyone in the account has root access) to deploy servers on an EC2 instance that I created. Is there a way to prevent anyone else that has access to the same AWS account from accessing the content that I put on the EC2 instance?
No, you cannot achieve your objective.
When it comes to access security, you either have 100% security or none. If other users have root access, you cannot prevent them from doing what they want with your instance: they can delete, modify, clone, and access it. There are methods to make this more difficult, but anyone with solid experience will bypass them.
My recommendation is to create a separate account. This is not always possible, as in your case, but is a standard best practice (separation of access/responsibility). This would isolate your instance from others.
There are third-party tools that support encryption of data. You should not store the keys/passphrase on the instance; you will need to enter them each time you encrypt or decrypt your data.
As far as I know, creating a separate key pair won't work because someone else can snapshot the instance and launch it with another key pair that they own.
With root access, there are many ways to access the data stored on your instance's disk. Cloning the disk and mounting it on another instance is one example.
By default, IAM Users do not have access to any AWS services. They can't launch any Amazon EC2 instances, access Amazon S3 data or snapshot an instance.
However, for them to do their work, it is necessary to assign permissions to IAM Users. It is generally recommended not to grant Admin access to everyone. Rather, people should be assigned sufficient permissions to do their assigned job.
Some companies separate Dev/Test/Prod resources, giving lots of people permission in Dev environments, but locking-down access to Production. This is done to ensure continuity, recoverability and privacy.
Your requirement is to prevent people from accessing information on a specific Amazon EC2 instance. This can be done by using a key pair to which only you hold the private key; thus, nobody else can log in to the instance.
However, as you point out, there can be ways around this, such as copying the disk (EBS Snapshot) and mounting it on another computer, thereby gaining access to the data. This is analogous to security in a traditional data center: if somebody has physical access to a computer, they can extract the disk, attach it to another computer and access the data. This is why traditional data centers have significant physical security to prevent unauthorized access. The AWS equivalent to this physical security is IAM permissions, which grant specific users permission to perform certain actions (such as creating a disk snapshot).
If there are people who have Admin/root access on the AWS account, then they can do whatever they wish. This is by design. If you do not wish people to have such access, then do not assign them these permissions.
Sometimes companies face a trade-off: They want Admins to be able to do anything necessary to provide IT services, but they also want to protect sensitive data. An example of this is an HR system that contains sensitive information that they don't want accessible to general staff. A typical way this is handled is to put the HR system in a separate AWS Account that does not provide general access to IT staff, and probably has additional safeguards such as MFA and additional audit logging.
Bottom line: If people have physical access or Admin-like permissions, they can do whatever they like. You should either restrict the granting of permissions, or use a separate AWS Account.
Hi, I am an AWS newbie. I am moving an AMI from one region to another, and I was wondering if I need to select the encrypt EBS Snapshot option when copying an AMI from, say, Oregon to Virginia.
If I don't encrypt the snapshot, does that mean a hacker could see what is in my AMI en route from one region to another?
Thanks
The option to encrypt an EBS Snapshot provides encryption-at-rest. This is to prevent someone with access to the underlying hardware, like an Amazon employee, from being able to read the information on the disk.
Your concern that someone could see the data as it is transmitted between regions is covered by encryption-in-motion. AWS will automatically use SSL/TLS encryption so that the data being transmitted is not readable by anyone.
When copying data over a public network (including to a cloud) you should always use encryption. Amazon provides encryption for data at rest, data movements within AWS offerings and for any snapshots you create. When moving data they do recommend using a custom CMK, not your standard one, and then allowing individual users access to that key. Their documentation has more details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html.
And since you can't directly change the encryption status of an existing volume, encrypting the snapshot (and creating a new volume from it) is the way to go. Depending on your needs, you may decide to encrypt new volumes, or all snapshots, regardless of region.
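A sketch of that snapshot-copy approach with the AWS CLI (the snapshot ID, key alias and regions are placeholders; the copy runs in the destination region):

```shell
# Copy an unencrypted snapshot into an encrypted one in another region.
aws ec2 copy-snapshot \
    --region us-east-1 \
    --source-region us-west-2 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted \
    --kms-key-id alias/my-ebs-key \
    --description "Encrypted copy for Virginia"
```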
If you'd like more information on managing EBS volumes, NetApp has a good article here.
I'm trying to copy an encrypted EBS volume from one AWS account to another as part of a disaster recovery process.
I'm hoping someone has done this before, I'm basically looking for a clever way to approach it.
The big problem seems to with the encryption keys.
I've been able to create a k8s pod to do the "backup" automatically, but when introducing encryption it breaks because it can't find the key.
One more note: I've familiarized myself with the sharing process, that is, the EBS snapshot is shared with a different account, then from that account I would do the copy and so forth. I found a few posts here, but nothing exactly like what I'm looking for.
Thanks in advance.
When you create an encrypted EBS volume, you will want to specify a custom (customer managed) encryption key. The key can then be shared across regions/accounts.
You must use a custom key if you want to copy the snapshot to another account.
When you start the copy operation you can specify a new key. According to AWS:
Using a new key for the copy provides an additional level of isolation between the two accounts. As part of the copy operation, the data will be re-encrypted using the new key.
Please review https://aws.amazon.com/blogs/aws/new-cross-account-copying-of-encrypted-ebs-snapshots/
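The flow described in that post can be sketched as follows (account IDs, key identifiers and snapshot IDs are placeholders): the source account shares the custom key and the snapshot, then the target account copies the snapshot with its own key:

```shell
# Source account: allow the target account to use the custom CMK.
aws kms create-grant \
    --key-id arn:aws:kms:us-east-1:111111111111:key/example-key-id \
    --grantee-principal arn:aws:iam::222222222222:root \
    --operations Decrypt DescribeKey CreateGrant

# Source account: share the encrypted snapshot with the target account.
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-0123456789abcdef0 \
    --attribute createVolumePermission \
    --operation-type add --user-ids 222222222222

# Target account: copy the shared snapshot, re-encrypting with its own key.
aws ec2 copy-snapshot \
    --region us-east-1 --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted --kms-key-id alias/target-account-key
```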
You can find some additional information about how to copy encrypted EBS snapshot to another account in this link:
https://n2ws.com/how-to-guides/how-to-copy-encrypted-aws-snapshots.html
Another handy solution for AWS disaster recovery that we implemented in my company is to copy EBS snapshots from one region to another. It can be done by using the AWS Management Console or the AWS CLI.
We all know what happened to Code Spaces: it was hacked and its AWS account essentially erased. I'm trying to put together recommendations on a set of tools and best practices for archiving my entire production AWS account into a backup-only account that only I would have access to. The backup account will be purely for DR purposes, storing EBS snapshots, AMIs, RDS, etc.
Thoughts?
Separating the production account from the backup account for DR purposes is an excellent idea.
Setting up a "cross-account" backup solution can be based on the EBS snapshot sharing feature that is currently not available for RDS.
If you want to implement such a solution, please consider the following:
Will the snapshots be stored in both the source and DR accounts? If so, you pay for the storage twice.
How do you protect the credentials of the DR account? You should make sure the credentials used to copy snapshots across accounts are not permitted to delete snapshots.
Consider the way older snapshots get deleted at some point. You may want to deal with snapshot deletion separately using different credentials.
Make sure your snapshots can easily be recovered from the DR account back to the original account.
Think of ways to automate this cross-account process and make it simple and error-free.
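For the credential-protection point above, the identity used to copy snapshots can be given a policy that explicitly denies deletion; a hedged sketch (user and policy names are placeholders):

```shell
# policy.json: allow snapshot copy/describe, explicitly deny deletion.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:CopySnapshot", "ec2:DescribeSnapshots", "ec2:CreateTags"],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "ec2:DeleteSnapshot",
      "Resource": "*"
    }
  ]
}
EOF

aws iam put-user-policy --user-name dr-copy-user \
    --policy-name deny-snapshot-delete --policy-document file://policy.json
```

An explicit Deny wins over any Allow, so even if this user later gains broader permissions, it still cannot delete snapshots.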
The company I work for recently released a product called “Cloud Protection Manager (CPM) v1.8.0” in the AWS Marketplace which supports cross-account backup and recovery in AWS and a process where a special account is used for DR only.
I think you would be able to set up a VPC and then use VPC peering to see the other account and access S3 in that account.
To prevent something like Code Spaces, make sure you use MFA authentication (there is no excuse for not using it; the Google Authenticator app for your phone is free and safer than having just a single password as protection).
Also, don't use the account owner credentials; set up a separate IAM role with just the permissions you need (and enable MFA on this account as well).
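As a sketch, MFA can be enforced for such an IAM user with a deny-when-no-MFA policy (user and policy names are placeholders):

```shell
cat > require-mfa.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllWithoutMFA",
      "Effect": "Deny",
      "NotAction": ["iam:ListMFADevices", "iam:CreateVirtualMFADevice",
                    "iam:EnableMFADevice", "sts:GetSessionToken"],
      "Resource": "*",
      "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } }
    }
  ]
}
EOF

aws iam put-user-policy --user-name backup-admin \
    --policy-name require-mfa --policy-document file://require-mfa.json
```

The NotAction list leaves just enough access for the user to enroll an MFA device in the first place.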
The only issue is that VPC peering doesn't work across regions, which would have been nicer than having the DR in a different AZ in the same region.