Can I use a Multi-Attach EBS volume with multiple EC2 instances in one Availability Zone to read and write shared files? - amazon-web-services

I have multiple microservices, and all of them use some local files. Now I want to run each microservice on a separate EC2 instance and perform file operations.
(I found some hints here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html)
So I want to know: is it possible?
If it is possible, what should the configuration of the EC2 instances be?
If it is not possible, how can I achieve it?

Definitely, yes.
According to the documentation, there are some limitations:
Your EC2 instances must be in one Availability Zone.
EBS Multi-Attach is supported only for the io1/io2 (Provisioned IOPS SSD) volume family.
You must use a file system that's cluster-aware (not ext4, etc.).
For communication between microservices, the best practice is to use EFS, which can be mounted on your EC2 instances. With EFS you can share storage between Availability Zones within a VPC, which increases the availability of your application.
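For illustration, here is a minimal boto3 sketch of that setup: create an io2 volume with Multi-Attach enabled and attach it to two instances in the same Availability Zone (the region, AZ, instance IDs, and device name are all placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Create a Provisioned IOPS volume with Multi-Attach enabled.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # every attached instance must live in this AZ
    Size=100,                        # GiB
    VolumeType="io2",
    Iops=3000,
    MultiAttachEnabled=True,
)
volume_id = volume["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# Attach the same volume to two instances (hypothetical instance IDs).
for instance_id in ["i-0123456789abcdef0", "i-0fedcba9876543210"]:
    ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device="/dev/sdf")
```

Note that Multi-Attach only makes the block device visible to both instances; the cluster-aware file system on top of it is still your responsibility, which is exactly the file corruption the next answer mentions.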

Yes, it's possible. However, multiple writes at a time might result in corrupted files (been there, done that). You can install GlusterFS on top of the instances to prevent that.
On the other hand, it's recommended to use EFS instead of EBS Multi-Attach for this kind of work. Just remember that EFS burst throughput scales with the amount of data stored, so putting a large dummy file on EFS is a common trick to raise the IOPS of a small file system.
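If you do go the GlusterFS route, a two-node replicated volume looks roughly like this (a sketch only; the hostnames, brick paths, and mount point are placeholders, and it assumes glusterfs-server is already installed on both nodes):

```python
import subprocess

def run(cmd):
    """Shell out to the gluster CLI; sketch only, run as root."""
    subprocess.run(cmd, shell=True, check=True)

# On server1:
run("gluster peer probe server2")                         # form the trusted pool
run("gluster volume create gv0 replica 2 "
    "server1:/data/brick1/gv0 server2:/data/brick1/gv0 "
    "force")                                              # suppress interactive warnings
run("gluster volume start gv0")

# On each microservice instance (the clients):
run("mount -t glusterfs server1:/gv0 /mnt/shared")
```

Every instance then reads and writes /mnt/shared, and Gluster handles the replication and locking that raw Multi-Attach does not.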

Related

What service should I use if I want to share a volume across multiple Windows EC2 instances?

I've looked at EFS and EBS Multi-Attach, but neither supports Windows. I've looked at S3, but it's blob storage (I guess?), not a proper volume.
I just want a simple volume that I can mount from multiple instances. I realize I could just set up a share on one of the instances, but I'd prefer it not be instance-dependent (i.e., if I do scaling, etc.).
Ideally I'd be able to access it as a mounted disk, but I'd also be fine with a mapped network drive.
Is there an AWS service that will do this?
I think you can mount an S3 bucket on EC2 instances through an S3 File Gateway, which exposes the bucket as an NFS or SMB share (the SMB option is what makes it usable from Windows).
Reference:
https://aws.amazon.com/blogs/storage/mounting-amazon-s3-to-an-amazon-ec2-instance-using-a-private-connection-to-s3-file-gateway/
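As a rough sketch, once the gateway's file share exists, a Linux instance mounts it like this (the gateway IP, share name, and mount point are placeholders; a Windows instance would map the SMB share instead):

```python
import subprocess

GATEWAY_IP = "10.0.0.5"   # placeholder: private IP of the File Gateway
SHARE = "my-bucket"       # placeholder: NFS file share backed by the S3 bucket

# "nolock,hard" are the NFS options the File Gateway documentation suggests.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "nolock,hard",
     f"{GATEWAY_IP}:/{SHARE}", "/mnt/s3"],
    check=True,
)
```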

How to synchronize data between 2 EBS volumes in AWS?

I have 2 EBS volumes in 2 Availability Zones in the same region; one is the primary and the other is a backup. Generally, I just read and write data on the primary volume. Is it possible to synchronize data from the primary to the backup EBS volume? If yes, how can I do that?
Thanks
A year after this question was posted, but I hope it helps anyone looking into this.
Amazon EFS is a great solution. An alternative for what you require is using snapshots. With AWS Backup you can schedule Amazon EBS snapshots and have them shared across AZs or even different accounts (the sketch below shows the manual equivalent).
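As a minimal boto3 sketch (the region, volume ID, and AZs are placeholders): snapshot the primary volume, then restore the snapshot into the backup AZ. Snapshots are regional, which is what lets the data cross the AZ boundary.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

# Snapshot the primary volume.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                # placeholder primary volume
    Description="periodic backup of primary",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Materialize the backup volume in the other Availability Zone.
backup = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",                   # placeholder backup AZ
)
print("backup volume:", backup["VolumeId"])
```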
As very well proposed in the previous answer, you should first try to understand the performance requirements of your workload, and also your RPO and RTO (recovery point/time objective) requirements.
Comparing EFS and EBS, I could say that:
A. EFS (Elastic File System) is a managed parallel NFS service (based on NFSv4). You mount it as a directory. EFS leverages the same technology as EBS, and the disks are replicated within an AZ and also across AZs. You don't choose or control the disks, just what performance you expect from the managed service.
B. EBS (Elastic Block Store) is also network-attached, but it is block storage, which means your OS sees it as a disk and not a directory. You have to format it with a file system (or group it with other EBS volumes into LVM, RAID, etc.) before you can use it. EBS volumes are replicated within the same AZ but not across AZs. You can, for example, take snapshots of your EBS volumes and copy them to the other AZ.
So you have to take into account not only the performance you require but also what type of storage (block or file) your application needs.
Can you use EFS for this? You might be able to avoid having to replicate the data at all if you can have the primary and backup instances/applications looking at the same data volume.

AWS ELB Autoscaling group with common filesystem (e.g. EBS)

I am using AWS Elastic Beanstalk with an Auto Scaling group.
I wish to log events into files and be able to finish processing the files before instances terminate during a shutdown.
I read that lifecycle hooks can answer my requirement.
My question is: is there an alternative, like using a common EBS file system for all the instances in the group that will always be kept live? If that is possible, are there any cons to that approach? Is IO slower?
An EBS volume cannot normally be attached to several EC2 instances at the same time (the io1/io2 Multi-Attach exception discussed above needs a cluster-aware file system, which doesn't fit this use case).
But shared storage is possible with EFS (Elastic File System). It's pricey, so EFS is not suitable for large amounts of data. But it is as fast as any NFS share and can be mounted on hundreds of servers at the same time.
The only consideration is how you will mount the EFS volume. Since Elastic Beanstalk doesn't give you direct control of cloud-init, you will have to build a custom AMI, use an .ebextensions config, or issue the mount command from your code.
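If you take the mount-from-code route, a minimal sketch looks like this (the EFS DNS name and mount point are placeholders; it uses the plain NFSv4.1 mount options from the EFS documentation, so no extra packages are needed, and it must run as root):

```python
import os
import subprocess

EFS_DNS = "fs-0123456789abcdef0.efs.us-east-1.amazonaws.com"  # placeholder EFS endpoint
MOUNT_POINT = "/mnt/efs"

def ensure_efs_mounted():
    """Mount the shared EFS file system once, at application startup."""
    if os.path.ismount(MOUNT_POINT):
        return                       # already mounted, nothing to do
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.run(
        ["mount", "-t", "nfs4",
         "-o", "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport",
         f"{EFS_DNS}:/", MOUNT_POINT],
        check=True,
    )

ensure_efs_mounted()
```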

AWS - HA NFS - Best practices

Does anyone have a sound strategy for implementing NFS on AWS in such a way that it's not a SPoF (single point of failure), or that, at the very least, allows quick recovery if an instance crashes?
I've read this SO post, relating to the ability to share files with multiple EC2 instances, but it doesn't answer the question of how to ensure HA with NFS on AWS, just that NFS can be used.
A lot of online sources say that AWS EFS is available, but it is still in preview mode and only available in the Oregon region; our primary VPC is located in N. Cali., so we can't use this option.
Other online sources say that GlusterFS is the way to go, but after some research I just don't feel comfortable implementing this solution due to race conditions and performance concerns.
Another option is SoftNAS, but I want to avoid bringing an unknown AMI into a tightly controlled, homogeneous environment.
Which leaves NFS. NFS is what we use in our dev environment and it works fine, but it's dev, so if it crashes we go get a couple of beers while systems fixes the problem; on production, this is obviously a no-go.
The best solution I can come up with at this point is to create an EBS volume and two EC2 instances. Both instances will be updated as normal (via Puppet) to maintain stack alignment (kernel, NFS libs, etc.), but only one instance will mount the EBS volume. We set up a monitor on the active NFS instance, and if it goes down, we are notified and we manually detach the volume and attach it to the backup EC2 instance. I'm thinking we also create a network interface that can be de/re-attached, so we only need to maintain a single IP in DNS.
Although I suppose we could do this automatically with keepalived and an IAM policy that allows the automatic detachment/re-attachment.
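The detach/re-attach step such a monitor would trigger is small. A hedged boto3 sketch (the region, volume, and instance IDs are placeholders, and both instances must be in the same AZ):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")   # placeholder region

VOLUME_ID = "vol-0123456789abcdef0"                  # placeholder NFS data volume
FAILED = "i-0aaaaaaaaaaaaaaaa"                       # placeholder failed instance
STANDBY = "i-0bbbbbbbbbbbbbbbb"                      # placeholder standby instance

def fail_over():
    """Move the NFS data volume from the failed instance to the standby."""
    ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=FAILED, Force=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])
    ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=STANDBY, Device="/dev/sdf")
```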
--UPDATE--
It looks like EBS volumes are tied to specific Availability Zones, so re-attaching to an instance in another AZ is impossible. The only other option I can think of is:
Create an EC2 instance in each AZ, in a public subnet (each with an EIP)
Create a Route 53 health check for TCP:2049
Create Route 53 failover policies for nfs-1 (AZ1) and nfs-2 (AZ2)
The only question here is: what's the best way to keep the two NFS servers in sync? Just cron an rsync script between them, something like the sketch below?
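For concreteness, the kind of thing I mean, as a cron-driven wrapper (the standby host and paths are placeholders):

```python
#!/usr/bin/env python3
"""Cron this on nfs-1 to mirror the export to nfs-2 (placeholder host and paths)."""
import subprocess

SRC = "/export/nfs/"        # trailing slash: sync the contents, not the directory
DST = "nfs-2:/export/nfs/"  # placeholder standby server

# -a preserves permissions/times/links; --delete mirrors removals to the standby.
subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)
```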
Or is there a best practice that I am completely missing?
There are a few options for building a highly available NFS server, though I prefer EFS or GlusterFS because all of the following solutions have their downsides.
a) DRBD
It is possible to synchronize volumes with the help of DRBD. This allows you to mirror your data. Use two EC2 instances in different Availability Zones for high availability. Downside: configuration and operation are complex.
b) EBS Snapshots
If an RPO of more than 30 minutes is acceptable, you can use periodic EBS snapshots to recover from an outage in another Availability Zone. This can be achieved with an Auto Scaling group running a single EC2 instance, a user-data script, and a cronjob for periodic EBS snapshots. Downside: RPO > 30 min.
c) S3 Synchronization
It is possible to synchronize the state of an EC2 instance acting as an NFS server to S3. The standby server then uses S3 to stay up to date. Downside: syncing lots of small files to S3 will take too long.
I recommend watching this talk from AWS re:Invent: https://youtu.be/xbuiIwEOCAs
AWS has reviewed and approved a number of SoftNAS AMIs, which are available on AWS Marketplace. The jointly published SoftNAS Architecture on AWS White Paper provides more details:
Security (pages 4-11)
HA across AZs (pages 13-14)
You can also try a 30 day free trial to see if it meets your needs.
http://softnas.com/tryaws
Full disclosure: I work for SoftNAS.

Persistent storage on Elastic Beanstalk

How can I attach persistent storage to Elastic Beanstalk?
I know I need a .config file where I set the parameters of the environment to run every time an instance is created.
My goal is to have a volume, let's say 100 GB, that survives even if instances get deleted/terminated, and that holds persistent data all instances can read.
I could use S3 to store this data, but it would require changes to the application, and latency could be a problem.
This way I could access the file system like on any common server.
AWS now offers a solution called Elastic File System (Amazon EFS) that lets multiple instances access a shared file store.
If your desire is to have a central data repository that all EC2 instances can access, then Amazon S3 would be your best option.
Normal disk volumes are provided via Elastic Block Store (EBS). An EBS volume can be mounted on only one EC2 instance at a time. Therefore, to share data that is contained on an EBS volume, you will need to share it via normal network methods, such as an NFS export.
However, if your goal is to provide shared access without one specific instance sharing a volume with the others, then it is better to use S3, because it is accessible from all instances. It would likely be worth the effort of modifying your application to take advantage of S3.
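If you do adapt the application to S3, the change is often small. A minimal boto3 sketch (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-shared-data-bucket"   # placeholder bucket name

# Replace local file writes with object puts...
s3.put_object(Bucket=BUCKET, Key="reports/output.txt", Body=b"hello from any instance")

# ...and local reads with object gets; every instance sees the same data.
body = s3.get_object(Bucket=BUCKET, Key="reports/output.txt")["Body"].read()
```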