AWS EC2 data at rest

My Java servlet web app is hosted on an AWS EC2 instance. Is it safe to store sensitive data (say, DB credentials) in the property (config) file of my Java web app? When the EBS volume is deallocated, will it still contain the data I saved, and could it be read by someone else within the same or a different AWS account? Are there any security risks?

Data stored on the EBS volume is zeroed out after you delete the volume. This is carried out by AWS automatically.

Yes, the blocks on the EBS volume will be zeroed after you delete the volume.
From Amazon EBS volumes - Amazon Elastic Compute Cloud:
The data persists on the volume until the volume is deleted explicitly. The physical block storage used by deleted EBS volumes is overwritten with zeroes before it is allocated to another account. If you are dealing with sensitive data, you should consider encrypting your data manually or storing the data on a volume protected by Amazon EBS encryption.
For more information on EBS encryption, see Amazon EBS encryption - Amazon Elastic Compute Cloud
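If you take the encryption advice above, a minimal sketch of creating an encrypted volume with the AWS SDK for Java v2 might look like the following; the Availability Zone and size are placeholders, and omitting a KMS key falls back to the account's default EBS encryption key:

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.CreateVolumeRequest;
    import software.amazon.awssdk.services.ec2.model.CreateVolumeResponse;

    public class EncryptedVolumeExample {
        public static void main(String[] args) {
            try (Ec2Client ec2 = Ec2Client.create()) {
                // Request an encrypted volume; without kmsKeyId() the
                // account's default EBS encryption key is used.
                CreateVolumeResponse volume = ec2.createVolume(CreateVolumeRequest.builder()
                        .availabilityZone("us-east-1a")   // placeholder AZ
                        .size(8)                          // size in GiB
                        .encrypted(true)
                        .build());
                System.out.println("Created encrypted volume: " + volume.volumeId());
            }
        }
    }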

I went with another approach, since anyone who has access to the file (remotely or otherwise) can read it and pass it along. I used AWS Systems Manager Parameter Store to keep the sensitive values as SecureString parameters. The app retrieves them from Parameter Store and uses them at run time. To avoid repeated lookups, each value is cached for a configurable time. The original question is about the security of EBS, not about alternatives, but I'm sharing my approach so others are aware of this option.
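As a rough sketch of that pattern with the AWS SDK for Java v2; the parameter name and cache TTL are invented for illustration:

    import java.time.Duration;
    import java.time.Instant;
    import software.amazon.awssdk.services.ssm.SsmClient;
    import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

    public class CachedSecureParameter {
        private final SsmClient ssm = SsmClient.create();
        private final String name;   // e.g. "/myapp/db/password" (hypothetical)
        private final Duration ttl;  // how long to cache the decrypted value
        private String cachedValue;
        private Instant fetchedAt;

        public CachedSecureParameter(String name, Duration ttl) {
            this.name = name;
            this.ttl = ttl;
        }

        public synchronized String get() {
            // Hit Parameter Store only when the cached value has expired.
            if (cachedValue == null || Instant.now().isAfter(fetchedAt.plus(ttl))) {
                cachedValue = ssm.getParameter(GetParameterRequest.builder()
                                .name(name)
                                .withDecryption(true)   // decrypt the SecureString
                                .build())
                        .parameter().value();
                fetchedAt = Instant.now();
            }
            return cachedValue;
        }
    }

A servlet would construct one CachedSecureParameter per secret and call get() whenever it needs the value.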

Related

AWS Import-Snapshot with shared S3 Bucket

I am currently looking for a way of easily distributing customised volumes to clients.
An approach I am looking at is creating RAW disk images, saving them to S3 and having clients import them as snapshots using the AWS CLI.
My question is - who pays for the data access requests/data transfer?
...I'm assuming it's the bucket owner, as there is no "requester-pays" option for the Import-Snapshot command. Has anybody done anything similar?
Another approach is directly sharing snapshots to a client's account - but this involves an added charge on our part to create the ideal-sized volumes + generate the snapshots to share.
Is there a better method of generating + sharing data (essentially what would become EBS volumes) of varying sizes and content?
The easiest method would be to create an Amazon Machine Image (AMI), which is a bootable snapshot. You can list it as a Community AMI.
Your clients can select the AMI when launching an Amazon EC2 instance. The boot disk will be exactly as you configured -- with the operating system, your application and all configurations that were saved on the disk.
There is no cost to you when a client uses the AMI.
See: Make an AMI public - Amazon Elastic Compute Cloud
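If you prefer to script the sharing step, here is a hedged sketch with the AWS SDK for Java v2 that adds the public launch permission to an image; the AMI ID is a placeholder:

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.LaunchPermission;
    import software.amazon.awssdk.services.ec2.model.LaunchPermissionModifications;
    import software.amazon.awssdk.services.ec2.model.ModifyImageAttributeRequest;
    import software.amazon.awssdk.services.ec2.model.PermissionGroup;

    public class ShareAmiExample {
        public static void main(String[] args) {
            try (Ec2Client ec2 = Ec2Client.create()) {
                // Adding the "all" launch permission makes the AMI public.
                ec2.modifyImageAttribute(ModifyImageAttributeRequest.builder()
                        .imageId("ami-0123456789abcdef0")   // placeholder AMI ID
                        .launchPermission(LaunchPermissionModifications.builder()
                                .add(LaunchPermission.builder()
                                        .group(PermissionGroup.ALL)
                                        .build())
                                .build())
                        .build());
            }
        }
    }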

AWS Auto-Scaling

I'm trying AWS auto-scaling for the first time. As far as I understand it, it creates instances if, for example, my CPU utilization reaches a critical level that I define.
So I am curious: after I launch my instance, I spend a fair amount of time configuring it and copying the data. If AWS auto-scales my instance, how will it configure the new instances and move the data to them?
You can't store any data that you want to keep on an instance that is part of an autoscaling group (well you can, but you will lose it).
There are (at least) two ways to answer your question:
Create a 'golden image'. In other words, spin up an instance, configure it, install the software etc., and then save it as an AMI (Amazon Machine Image). Then tell the auto-scaling group to use that AMI each time an instance starts - it will be pre-configured when it starts.
Put a script on the instance (in the user data) that tells the instance how to configure itself when it starts up. So basically, each time an instance scales up, it runs the script and performs all the steps it needs to configure itself (see the sketch below).
As for your data, best practice would be to store any data you want to keep in a database or object store that is not on the instance - something like RDS, DynamoDB or even S3 objects.
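To make the user-data option concrete, here is a rough sketch using the AWS SDK for Java v2 that creates a launch template for the auto-scaling group; the template name, AMI ID, instance type and bootstrap script are all made-up placeholders:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.CreateLaunchTemplateRequest;
    import software.amazon.awssdk.services.ec2.model.InstanceType;
    import software.amazon.awssdk.services.ec2.model.RequestLaunchTemplateData;

    public class LaunchTemplateExample {
        public static void main(String[] args) {
            // Hypothetical bootstrap script; it runs on first boot of each new instance.
            String script = "#!/bin/bash\n"
                    + "yum install -y httpd\n"
                    + "systemctl start httpd\n";

            try (Ec2Client ec2 = Ec2Client.create()) {
                ec2.createLaunchTemplate(CreateLaunchTemplateRequest.builder()
                        .launchTemplateName("web-asg-template")    // placeholder name
                        .launchTemplateData(RequestLaunchTemplateData.builder()
                                .imageId("ami-0123456789abcdef0")  // your golden AMI
                                .instanceType(InstanceType.T3_MICRO)
                                // The API requires user data to be Base64-encoded.
                                .userData(Base64.getEncoder().encodeToString(
                                        script.getBytes(StandardCharsets.UTF_8)))
                                .build())
                        .build());
            }
        }
    }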
You could also use AWS EFS: store the data/scripts that the EC2 instances will be sharing there, and automatically mount it via /etc/fstab every time a new EC2 instance is created.
Once you have configured the EFS file system to be mounted on the EC2 instance (/etc/fstab), you should create a new AMI and use this new AMI to create a new Launch Configuration and Auto Scaling group, so that the new instances automatically mount your EFS and are able to consume that shared data.
https://aws.amazon.com/efs/faq/
Q. What use cases is Amazon EFS intended for?
Amazon EFS is designed to provide performance for a broad spectrum of workloads and applications, including Big Data and analytics, media processing workflows, content management, web serving, and home directories.
Q. When should I use Amazon EFS vs. Amazon Simple Storage Service (S3) vs. Amazon Elastic Block Store (EBS)?
Amazon Web Services (AWS) offers cloud storage services to support a wide range of storage workloads. Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.
https://docs.aws.amazon.com/efs/latest/ug/mount-fs-auto-mount-onreboot.html
You can use the file fstab to automatically mount your Amazon EFS file system whenever the Amazon EC2 instance it is mounted on reboots. There are two ways to set up automatic mounting. You can update the /etc/fstab file in your EC2 instance after you connect to the instance for the first time, or you can configure automatic mounting of your EFS file system when you create your EC2 instance.
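For illustration, a typical /etc/fstab entry using the EFS mount helper (assuming the amazon-efs-utils package is installed; the file system ID and mount point are placeholders) looks something like:

    fs-0123456789abcdef0:/ /mnt/efs efs _netdev,tls 0 0

The _netdev option delays mounting until networking is up, and tls enables encryption in transit.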
I recommend using a shared data container if the data is updated and the updated data is needed by all instances that might be spinning up.
If it is database data, or you could store the needed data in a database, I would consider using RDS.
If it is static data only used to configure the instances, like dumps or configuration files which are not updated by running instances, then I would recommend pulling them from CloudFlare or S3 if it is not possible to pull them from a repository.
Good luck

Do I need to encrypt the EBS snapshot when copying an AMI to another region?

Hi, I am an AWS newbie. I am moving an AMI from one region to another, and I was wondering if I need to select the encrypt EBS snapshot option when copying an AMI from, say, Oregon to Virginia.
If I don't encrypt the snapshot, does that mean any hacker can see what is in my AMI en route from one region to another?
Thanks
The option to encrypt an EBS Snapshot provides encryption-at-rest. This is to prevent someone with access to the underlying hardware, like an Amazon employee, from being able to read the information on the disk.
Your concern that someone could see the data as it is transmitted between regions is covered by encryption-in-motion. AWS will automatically use SSL/TLS encryption to ensure that the data being transmitted is not readable by anyone.
When copying data over a public network (including to a cloud) you should always use encryption. Amazon provides encryption for data at rest, for data movement within AWS offerings, and for any snapshots you create. When moving data, they recommend using a custom CMK rather than your default one, and then granting individual users access to that key. Their documentation has more details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html.
And since you can't directly change the encryption status of an existing volume, encrypting your snapshot is the way to go. Depending on your needs, you may decide to encrypt new volumes, or all snapshots -- regardless of region.
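A rough sketch of that encrypted copy with the AWS SDK for Java v2; the regions, AMI ID, name and CMK alias are placeholders:

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.CopyImageRequest;
    import software.amazon.awssdk.services.ec2.model.CopyImageResponse;

    public class CopyEncryptedAmiExample {
        public static void main(String[] args) {
            // Run this client in the destination region (e.g. Virginia / us-east-1).
            try (Ec2Client ec2 = Ec2Client.create()) {
                CopyImageResponse copy = ec2.copyImage(CopyImageRequest.builder()
                        .sourceRegion("us-west-2")               // Oregon (source)
                        .sourceImageId("ami-0123456789abcdef0")  // placeholder AMI ID
                        .name("my-app-encrypted-copy")           // placeholder name
                        .encrypted(true)                         // encrypt the new snapshots
                        .kmsKeyId("alias/my-cmk")                // optional custom CMK (placeholder)
                        .build());
                System.out.println("New AMI: " + copy.imageId());
            }
        }
    }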
If you'd like more information on managing EBS volumes, NetApp has a good article here.

Comparative application: EBS vs S3

I am new to the cloud environment. I understand the definitions and the storage types EBS and S3. I want to understand the applications of EBS as compared to S3.
I understand that EBS looks like a device for heavy-throughput operations, but I cannot find an application where it would be used in preference to S3. I could think of putting server logs on an EBS magnetic-storage volume, since one EBS volume can be attached to one instance.
With S3, you can use the scaling property to add heavy data and expand in real time. We can also deploy our self-managed DBs on this service.
Please correct me if I am wrong, and help me understand what is best suited for what, and how they are applied in comparison with one another.
As you stated, they are primarily different types of storage:
Amazon Elastic Block Store (EBS) is a persistent disk-storage service, which provides storage volumes to a virtual machine (similar to VMDK files in VMware)
Amazon Simple Storage Service (S3) is an object store system that stores files as objects and optionally makes them available across the Internet.
So, how do people choose which to use? It's quite simple... If they need a volume mounted on an Amazon EC2 instance, they need to use Amazon EBS. It gives them a C:, D: drive, etc in Windows and a mountable volume in Linux. Computers traditionally expect to have locally-attached disk storage. Put simply: If the operating system or an application running on an Amazon EC2 instance wants to store data locally, it will use EBS.
EBS volumes are actually stored on two physical devices in case of failure, but an EBS volume appears as a single volume. The volume size must be selected when the volume is created. The volume exists in a single Availability Zone and can only be attached to EC2 instances in the same Availability Zone. EBS volumes persist even when the attached EC2 instance is Stopped; when the instance is Started again, the disk remains attached and all data has been preserved.
Amazon S3, however, is something quite different. It is a storage service that allows files to be uploaded/downloaded (PutObject, GetObject) and files are replicated across at least three data centers. Files can optionally be accessed via the Internet via HTTP/HTTPS without requiring a web server. There are no limits on the amount of data that can be stored. Access can be granted per-object, per-bucket via a Bucket Policy, or via IAM Users and Groups.
Amazon S3 is a good option when data needs to be shared (both locally and across the Internet), retained for long periods, backed up (or even used for storing backups) and made accessible to other systems. However, applications need to be specifically coded to use Amazon S3, and many traditional applications expect to store data on a local drive rather than on a separate storage service.
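As an illustration of that object interface (PutObject/GetObject), here is a minimal sketch with the AWS SDK for Java v2; the bucket and key names are invented for the example:

    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class S3ObjectExample {
        public static void main(String[] args) {
            try (S3Client s3 = S3Client.create()) {
                // Upload (PutObject): the whole object is written in one call.
                s3.putObject(PutObjectRequest.builder()
                                .bucket("my-example-bucket")   // placeholder bucket
                                .key("logs/app.log")           // placeholder key
                                .build(),
                        RequestBody.fromString("hello from S3"));

                // Download (GetObject): read the object back as bytes.
                byte[] data = s3.getObjectAsBytes(GetObjectRequest.builder()
                                .bucket("my-example-bucket")
                                .key("logs/app.log")
                                .build())
                        .asByteArray();
                System.out.println(new String(data));
            }
        }
    }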
While Amazon S3 has many benefits, there are still situations where Amazon EBS is a better storage choice:
When using applications that expect to store data locally
For storing temporary files
When applications want to partially update files, because the smallest storage unit in S3 is a file and updating a part of a file requires re-uploading the whole file
For very High-IO situations, such as databases (EBS Provisioned IOPS can provide volumes up to 20,000 IOPS)
For creating volume snapshots as backups (see the sketch after this list)
For creating Amazon Machine Images (AMIs) that can be used to boot EC2 instances
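As a small sketch of the snapshot-as-backup item above, using the AWS SDK for Java v2; the volume ID is a placeholder:

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.CreateSnapshotRequest;
    import software.amazon.awssdk.services.ec2.model.CreateSnapshotResponse;

    public class SnapshotBackupExample {
        public static void main(String[] args) {
            try (Ec2Client ec2 = Ec2Client.create()) {
                // Snapshot an attached EBS volume as a point-in-time backup.
                CreateSnapshotResponse snap = ec2.createSnapshot(CreateSnapshotRequest.builder()
                        .volumeId("vol-0123456789abcdef0")   // placeholder volume ID
                        .description("nightly backup")
                        .build());
                System.out.println("Snapshot started: " + snap.snapshotId());
            }
        }
    }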
Bottom line: they are primarily different types of storage, and each has its own usage sweet spot, just as a database is a good form of storage depending upon the situation.

Persistent storage on Elastic Beanstalk

How can I attach persistent storage to Elastic Beanstalk?
I know I need a .config file where I set the parameters of the environment, run every time an instance is created.
My goal is to have a volume, let's say 100 GB, such that even if the instances get deleted/terminated, I still have this volume with persistent data that all instances can read from.
I could use S3 to store this data, but it would require changes to the application, and latency could be a problem.
With a volume, I could access the filesystem like on any common server.
AWS now offers a solution called Elastic File System (Amazon EFS) that lets multiple instances access a shared file store.
If your desire is to have a central data repository that all EC2 instances can access, then Amazon S3 would be your best option.
Normal disk volumes are provided via Elastic Block Store (EBS). EBS volumes can only be mounted to one EC2 instance at a time. Therefore, to share data that is contained on an EBS volume, you will need to use normal network sharing methods to mount network volumes.
However, if your goal is to provide shared access without one specific instance sharing a volume to other instances, then it is better to use S3 because it is accessible from all instances. It would likely be worth the effort of modifying your application to take advantage of S3.