Does anyone have a sound strategy for implementing NFS on AWS in such a way that it's not a SPoF (single point of failure), or, at the very least, one that can recover quickly if an instance crashes?
I've read this SO post, relating to the ability to share files with multiple EC2 instances, but it doesn't answer the question of how to ensure HA with NFS on AWS, just that NFS can be used.
A lot of online resources say that AWS EFS is an option, but it is still in preview mode and only available in the Oregon region; our primary VPC is located in N. California, so we can't use this option.
Other online resources say that GlusterFS is the way to go, but after some research I just don't feel comfortable implementing this solution due to race conditions and performance concerns.
Another option is SoftNAS, but I want to avoid bringing an unknown AMI into a tightly controlled, homogeneous environment.
Which leaves NFS. NFS is what we use in our dev environment and it works fine, but it's dev, so if it crashes we go get a couple of beers while systems fixes the problem; in production, this is obviously a no-go.
The best solution I can come up with at this point is to create an EBS volume and two EC2 instances. Both instances will be updated as normal (via Puppet) to maintain stack alignment (kernel, NFS libs, etc.), but only one instance will mount the EBS volume. We set up a monitor on the active NFS instance, and if it goes down, we are notified and we manually detach the volume and attach it to the backup EC2 instance. I'm thinking we also create a network interface that can be detached/re-attached in the same way, so we only need to maintain a single IP in DNS.
Although I suppose we could do this automatically with keepalived and an IAM policy that allows the automatic detachment/re-attachment.
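For what it's worth, a minimal sketch of what that detach/re-attach step could look like with boto3 (Python). All the volume/ENI/instance IDs and the region are placeholders, and in practice this would be triggered from keepalived's notify script or a monitoring alarm rather than run by hand:

```python
# Hypothetical failover helper: move the data EBS volume and the "service"
# ENI from the failed NFS instance to the standby. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

VOLUME_ID = "vol-0123456789abcdef0"
ENI_ID = "eni-0123456789abcdef0"
STANDBY_INSTANCE_ID = "i-0123456789abcdef0"

def fail_over_to_standby():
    # Force-detach the data volume from the dead primary, then attach it
    # to the standby.
    ec2.detach_volume(VolumeId=VOLUME_ID, Force=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])
    ec2.attach_volume(VolumeId=VOLUME_ID,
                      InstanceId=STANDBY_INSTANCE_ID,
                      Device="/dev/xvdf")

    # Move the secondary network interface so the NFS IP follows the service.
    eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])
    attachment = eni["NetworkInterfaces"][0].get("Attachment")
    if attachment:
        ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"],
                                     Force=True)
        ec2.get_waiter("network_interface_available").wait(
            NetworkInterfaceIds=[ENI_ID])
    ec2.attach_network_interface(NetworkInterfaceId=ENI_ID,
                                 InstanceId=STANDBY_INSTANCE_ID,
                                 DeviceIndex=1)

if __name__ == "__main__":
    fail_over_to_standby()
```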
--UPDATE--
It looks like EBS volumes are tied to specific availability zones, so re-attaching to an instance in another AZ is impossible. The only other option I can think of is the following (a rough Route 53 sketch follows the list):
Create an EC2 instance in each AZ, in a public subnet (each with an EIP)
Create a Route 53 health check for TCP:2049
Create Route 53 failover policies for nfs-1 (AZ1) and nfs-2 (AZ2)
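A minimal sketch of that Route 53 setup in boto3, assuming a private hosted zone; the zone ID, record name, and EIPs are placeholders:

```python
# Hypothetical sketch: TCP:2049 health checks plus PRIMARY/SECONDARY
# failover records for the two NFS servers.
import uuid
import boto3

r53 = boto3.client("route53")
ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "nfs.example.internal."
SERVERS = [("nfs-1", "PRIMARY", "203.0.113.10"),
           ("nfs-2", "SECONDARY", "203.0.113.20")]

for set_id, role, eip in SERVERS:
    hc = r53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={"IPAddress": eip, "Port": 2049, "Type": "TCP",
                           "RequestInterval": 30, "FailureThreshold": 3})
    r53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME, "Type": "A", "TTL": 30,
                "SetIdentifier": set_id, "Failover": role,
                "ResourceRecords": [{"Value": eip}],
                "HealthCheckId": hc["HealthCheck"]["Id"]}}]})
```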
The only question here is, what's the best way to keep the two NFS servers in-sync? Just cron an rsync script between them?
Or is there a best practice that I am completely missing?
There are a few options for building a highly available NFS server, though I'd prefer EFS or GlusterFS, because all of the following solutions have their downsides.
a) DRBD
It is possible to synchronize volumes with the help of DRBD. This allows you to mirror your data. Use two EC2 instances in different availability zones for high availability. Downside: configuration and operation are complex.
b) EBS Snapshots
If an RPO of more than 30 minutes is acceptable, you can use periodic EBS snapshots to be able to recover from an outage in another availability zone. This can be achieved with an Auto Scaling group running a single EC2 instance, a user-data script, and a cron job for periodic EBS snapshots. Downside: RPO > 30 min.
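A minimal sketch of such a cron job in boto3 (Python); the volume ID and retention count are placeholders:

```python
# Hypothetical cron job (e.g. every 30 minutes) that snapshots the NFS
# data volume and prunes old snapshots.
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"
KEEP = 48  # number of snapshots to retain

def snapshot_and_prune():
    ec2.create_snapshot(VolumeId=VOLUME_ID,
                        Description="periodic NFS data snapshot")
    snaps = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}])["Snapshots"]
    snaps.sort(key=lambda s: s["StartTime"], reverse=True)
    for old in snaps[KEEP:]:
        ec2.delete_snapshot(SnapshotId=old["SnapshotId"])

if __name__ == "__main__":
    snapshot_and_prune()
```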
c) S3 Synchronisation
It is possible to synchronize the state of an EC2 instance acting as an NFS server to S3. The standby server uses S3 to stay up to date. Downside: an S3 sync of lots of small files will take too long.
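A minimal sketch of the push side of that approach, assuming a placeholder bucket and export path; the per-file request overhead is exactly why many small files make this slow:

```python
# Hypothetical sketch of the S3 sync approach: walk the exported
# directory and push files to a bucket (the standby would do the reverse).
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "my-nfs-state-bucket"
EXPORT_DIR = "/export/data"

def push_to_s3():
    for root, _dirs, files in os.walk(EXPORT_DIR):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, EXPORT_DIR)
            s3.upload_file(path, BUCKET, key)  # one request per file

if __name__ == "__main__":
    push_to_s3()
```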
I recommend watching this talk from AWS re:Invent: https://youtu.be/xbuiIwEOCAs
AWS has reviewed and approved a number of SoftNAS AMIs, which are available on AWS Marketplace. The jointly published SoftNAS Architecture on AWS White Paper provides more details:
Security (pages 4-11)
HA across AZs (pages 13-14)
You can also try a 30-day free trial to see if it meets your needs.
http://softnas.com/tryaws
Full disclosure: I work for SoftNAS.
Related
I have multiple microservices, and all of them use some local files. Now I want to run each microservice on a separate EC2 instance and perform file operations.
(I found some hints here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html)
So I want to know: is it possible?
If it is possible, what should the EC2 configuration be?
If it is not possible, how can I achieve it?
Definitely, yes.
According to the documentation, there are some limitations (see the sketch after this list):
Your EC2 instances must be in the same Availability Zone as the volume
EBS Multi-Attach is supported only for the io1/io2 (Provisioned IOPS) volume family
You should use a file system that's cluster-aware (not EXT4, etc.)
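A minimal sketch of what enabling Multi-Attach looks like in boto3, with placeholder instance IDs, region, and sizes; the attached instances would still need a cluster-aware file system on top of the shared volume:

```python
# Hypothetical sketch: create an io2 volume with Multi-Attach enabled
# and attach it to two instances in the same AZ.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="io2",
    Iops=3000,
    MultiAttachEnabled=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

for instance_id in ["i-0aaa1111bbbb2222c", "i-0ddd3333eeee4444f"]:
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId=instance_id,
                      Device="/dev/xvdf")
```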
For communication between microservices, the best practice is to use EFS, which can be mounted on your EC2 instances. With EFS you can share storage across Availability Zones within a VPC, which increases the availability of your application.
Yes, it's possible. However, multiple writes at a time might result in corrupted files (been there, done that). You can install GlusterFS to prevent that.
On the other hand, it's recommended to use EFS instead of EBS Multi-Attach for this kind of work; just remember that EFS throughput scales with the amount of data stored, which is why people sometimes put a large dummy file on the file system to raise its throughput.
Our current setup, consisting of 2x EC2 instances and an RDS (Read/Write) database, is in the Mumbai region. However, I would like to copy everything (2x EC2 & RDS (R/W)) across to Sydney, and also to other regions.
Ideally I would like to replicate the contents in those instances as well.
Does anyone know a quick and easy way of doing this?
Edit 25/01/2019:
However, I would like to copy everything, including whatever is inside the instances (2x EC2s and the RDSs).
Edit 29/01/2019:
The purpose is to "scale/expand out". I want to have the same infrastructure replicated 1-to-1 (exactly/identically) across various regions.
It is simple!
- For EC2 - you need to create an AMI of those instances, then right-click the AMI you've just created and choose "Copy AMI" to the designated region.
- For RDS:
If you just want to copy data to another region, take a snapshot and then copy that snapshot to the destination region (both copy steps are sketched below).
If you want the RDS instance to replicate to another region continuously, you need to create a cross-region read replica from your RDS instance.
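A minimal sketch of the same copy steps via boto3, from Mumbai (ap-south-1) to Sydney (ap-southeast-2); all identifiers are placeholders, and encrypted snapshots would also need a KmsKeyId in the destination region:

```python
# Hypothetical sketch: copy an AMI and an RDS snapshot across regions.
import boto3

SRC, DST = "ap-south-1", "ap-southeast-2"

# Copy the AMI into the destination region.
ec2_dst = boto3.client("ec2", region_name=DST)
ec2_dst.copy_image(Name="web-server-copy",
                   SourceImageId="ami-0123456789abcdef0",
                   SourceRegion=SRC)

# Copy an RDS snapshot into the destination region.
rds_dst = boto3.client("rds", region_name=DST)
rds_dst.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:ap-south-1:123456789012:snapshot:mydb-snap",
    TargetDBSnapshotIdentifier="mydb-snap-sydney",
    SourceRegion=SRC)
```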
The option for replicating the environment depends on how much downtime you can tolerate.
If you are okay with downtime:
1. Copy the AMI of the EC2 instance and the RDS snapshot to the other region.
2. Bring up your new environment.
This is perfect for non-critical workloads.
If this is a critical application:
1. Copy the AMI of the EC2 instance (I am assuming these would be your web/app instances). For real-time replication use rsync or robocopy, or a solution like CloudEndure.
2. Create a new RDS instance in Sydney.
3. Use the DMS migration tool to create a source and target relationship.
4. Once in sync, cut off the relationship and bring up the new environment in Sydney.
As suggested by the previous answers, for EC2 you can create AMIs and then copy the AMI to a different region.
For RDS, you can create read replicas (and read replicas of read replicas, but beware of latency); read replicas are mainly used to improve the read performance of your app.
You can also create a Multi-AZ deployment, which acts as a disaster recovery site. However, note that the Multi-AZ standby is only used in case of a failover. Moreover, Multi-AZ involves synchronous data copy, while read replicas are asynchronous, so read replicas can exhibit eventual-consistency behavior.
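For the cross-region read-replica route, a minimal boto3 sketch; the source must be referenced by ARN, and all identifiers here are placeholders:

```python
# Hypothetical sketch: create a cross-region read replica in Sydney from
# an RDS instance in Mumbai.
import boto3

rds = boto3.client("rds", region_name="ap-southeast-2")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-sydney",
    SourceDBInstanceIdentifier="arn:aws:rds:ap-south-1:123456789012:db:mydb",
    DBInstanceClass="db.m5.large",
    SourceRegion="ap-south-1")
```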
But the real question here is - What are you trying to achieve?
Are you trying to "scale out" your infrastructure to support huge traffic to your application? Or are you simply trying to set up disaster recovery (DR)?
If your answer is DR, then the approach is pretty straightforward with Multi-AZ and EC2 instance snapshots. But if the answer is scaling out and performance, you really need to be thinking of better strategies, such as using CloudFront (CDN) if it is a web app, using an ElastiCache in-memory cache for frequently read data or RDS read replicas, and using Elastic Load Balancers with dynamic/step scale-out/scale-in. Other methods would be to evaluate the type of RDS storage subsystem used, i.e. Provisioned IOPS vs. General Purpose SSD, to check whether there are any NAT "instance" bottlenecks in your VPC, and so on.
It may be tempting to spin up all these redundant copies of EC2 AMIs or RDS read replicas with the click of a button, but you really need to think about the cost you are going to incur on a monthly basis for completely unused resources.
I've been considering moving some databases from self-hosted database instances (e.g. MySQL or PostgreSQL on Linux, either bare-metal or within AWS itself) into Amazon RDS, but it's unclear to me how everything will behave once I've created the database and it's time for maintenance to begin.
For example, I have to choose the type of instance(s) that will be used for the database, which I guess means how responsive everything will be, and there is an option for multi-AZ deployments, but it's not clear how many of those types of instances I'm actually configuring. (Presumably, multi-AZ deployment requires at least two instances).
There are options for Failover, which leads me to believe that I can rely on the service to stay up if there are problems with an instance, but then there is also a section for selecting maintenance windows for automated upgrades, which I find confusing. If I were administering e.g. a two-instance MySQL setup, I'd upgrade one instance and then the other to avoid any downtime. Is that not how RDS behaves?
RDS advertises support for automatic "minor version upgrades" (yes, please), but doesn't say anything about OS upgrades. Presumably, the db engine will be running on Amazon Linux or something similar, and will periodically require updates to those packages. Does that all happen automatically, or do I need to manually perform those upgrades, etc.?
The whole point of using something like RDS is that the service should become something I no longer have to worry about: I don't have to deal with package maintenance, upgrades, failover, or unexpected downtime (as long as I pay enough, of course). But all of the options for the RDS instance are making me skeptical of the advantages provided by RDS over just running everything myself.
Can anyone with experience with AWS RDS comment on their experiences with maintenance, upgrades, and failover?
These were the same concerns we had when we were planning to use RDS. Now that we are effectively using AWS RDS for multiple production workloads, let me try to clarify your queries. Hope this helps.
Your Question 1 : I have to choose the type of instance(s) that will be used for the database, which I guess means how responsive everything will be
Answer: Yes. This is to define what capacity (CPU, RAM, etc.) you will need for your database workload.
Your Question 2 : There is an option for multi-AZ deployments, but it's not clear how many of those types of instances I'm actually configuring.
Answer: Multi-AZ deployments are to ensure high availability. AZs (Availability Zones) are isolated locations within an AWS Region that provide better protection against disaster scenarios. So when we choose a Multi-AZ deployment, RDS places two instances of your database server in two Availability Zones of the region where you are provisioning.
This is done automatically by RDS, and we don't have to set up or maintain two servers separately. (Note: your VPC should have at least one subnet in each of the two different AZs to provision a Multi-AZ setup.)
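A minimal sketch of provisioning a Multi-AZ MySQL instance with boto3; you still specify a single instance class and RDS manages the standby in another AZ. All names and values are placeholders:

```python
# Hypothetical sketch: create a Multi-AZ RDS MySQL instance.
import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="prod-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",      # use a secrets store in practice
    MultiAZ=True,                         # RDS creates the standby replica
    DBSubnetGroupName="prod-db-subnets",  # subnet group spanning at least two AZs
    AutoMinorVersionUpgrade=True)
```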
Your Question 3: If I were administering e.g. a two-instance MySQL setup, I'd upgrade one instance and then the other to avoid any downtime. Is that not how RDS behaves?
Answer: Yes. RDS does it by itself, without manual intervention, if you enable automatic upgrades while setting up RDS (only if you choose the Multi-AZ option).
Your Question 4 : RDS advertises support for automatic "minor version upgrades" (yes, please), but doesn't say anything about OS upgrades.
Answer: RDS doesn't expose or provide any OS access to us. The underlying OS and its upgrades and other activities are all handled without affecting the RDS services hosted on top of it. We don't have to do anything about the OS of RDS, so we can forget about that part.
Your Question 5: Regarding failover of an AWS RDS Multi-AZ database.
I would classify this into two cases.
Case 1: Failovers during maintenance and other automatic activities performed by a Multi-AZ RDS instance.
Here, RDS automatically does the failover one instance at a time. It first moves all ongoing traffic to the second instance, then upgrades/reboots the first instance, and then does the same with the second instance.
Case 2: Failovers during a manual reboot or other manually triggered actions on a Multi-AZ RDS instance.
In this case, during the reboot, AWS RDS provides an option for you to select whether the reboot should be with or without failover.
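For Case 2, a minimal boto3 sketch of a manually triggered reboot that forces a failover to the standby; the identifier is a placeholder:

```python
# Hypothetical sketch: reboot an RDS instance and force failover to the standby.
import boto3

rds = boto3.client("rds")
rds.reboot_db_instance(DBInstanceIdentifier="prod-db", ForceFailover=True)
```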
How do I share S3 storage between multiple EC2 instances? I am a beginner with AWS, and I need to know how to share a drive between multiple EC2 instances.
Currently you can't, and S3 is your best bet, but AWS does have its Elastic File System (EFS) in beta at the moment, and there is the possibility it will reach general availability at any time (I have no inside knowledge, just a guess; maybe even this week, as they often make lots of announcements during their annual conference, which is going on now).
You can sign up for 'preview' access and see if it suits your needs, and then decide if you can wait for it to become fully available.
AWS EFS will allow you to share a drive between instances:
Amazon EFS supports the Network File System version 4 (NFSv4) protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance.
https://aws.amazon.com/efs/
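Once you have access, a minimal boto3 sketch of what creating a shared EFS file system could look like; the subnet and security group IDs are placeholders, and in practice you would create one mount target per AZ:

```python
# Hypothetical sketch: create an EFS file system and one mount target,
# which instances then mount over NFSv4.
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(CreationToken="shared-drive",
                            PerformanceMode="generalPurpose")
efs.create_mount_target(FileSystemId=fs["FileSystemId"],
                        SubnetId="subnet-0123456789abcdef0",
                        SecurityGroups=["sg-0123456789abcdef0"])
# On each EC2 instance (outside this script), mount it like any NFSv4 share:
#   mount -t nfs4 <mount-target-ip-or-dns>:/ /mnt/efs
```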
EFS (still in beta, half a year later) indeed looks like the best option. But as EFS is basically just a managed, highly available NFS server, it should be possible to roll out some other NFS solution first, and replace it with EFS once it's finally available.
One promising candidate seems to be dCache, which is "a system for storing and retrieving huge amounts of data, distributed among a large number of heterogenous server nodes, under a single virtual filesystem tree with a variety of standard access methods."
It is used by research institutions all over the world to store over 100PB of data, and it provides an NFSv4 interface. Not sure how easy setup on AWS would be, or what the performance would be like.
https://www.dcache.org/
I am working on a project and am at a point where the POC is done and I now want to move towards a real product. I am trying to understand the Amazon cloud offerings just to see if I need to be aware of them at development time. I have a bunch of questions that I cannot get answered from the Amazon site. It's probably because I am new to the whole web services thing and have never hosted a site before. I am hoping someone out here will explain this to me like I am a C programmer :)
I see Amazon has a bunch of offerings -
EC2
Elastic Block Store
Simple DB
Auto Scaling
Elastic Load Balancing
I understand EC2 is virtual server instances that I can use, and these could come pre-loaded with what I want (say Apache + Python). I have the following questions -
If I want a custom instance of something (like, say, a custom Apache module I wrote for my project), can I create a server instance using the exact modules and make it the default the next time I create a new instance, or in Auto Scaling?
Do I get an IP address to access this? Can I set my own hostname for it? I mean, do I get a DNS record? Or is that what Elastic IP is?
How do I access it from the outside? SSH? Remote Desktop? Or is it entirely up to how I configure the instance?
What do they mean by Inter-Region or Intra-Region data transfer? What is data transfer to begin with? Is it just people using my instance? So if I go live with it that will be the cost I have to pay for people using it?
What is the difference between AutoScaling and Elastic Load Balancing?
What is Elastic Block Store? Is it storage? If so do I have to worry about backups or do they take care of it?
About the Simple DB -
It looks like the interface to use this is different from my regular SQL calls. Am I correct?
If so, the whole development needs to be tailored specifically for Amazon, which kind of sucks. Is there a better alternative?
Do I get data backups, or do I have to worry about that myself?
Will I be able to connect to the DB using regular tools to inspect it (during or after development)? Or do I get other tools made by Amazon for it?
What about security? The DB is obviously somewhere in the cloud farm, away from the EC2 instance. My DB password is going over the wire, and so is all my data, totally unencrypted. Don't I have to worry about that? The question only comes up because I don't own any of the hardware.
I really hope some one points me in the right direction here.
Thanks for taking the time to read.
P
I just went through the questions, and here I have tried to answer a few of them:
1) AWS EC2 doesn't lock you into pre-configured instances; pre-configured AMIs are built by developers and made publicly available so that others can use them. You can use one of those, or you can just opt for whatever raw OS you want, provision it accordingly, and create an image of it so that you can use it for Auto Scaling. That image becomes the base AMI in your case.
2) Every instance you boot will have a public DNS name attached to it; you can use the public DNS name to connect to that instance using ssh if you are a Linux user, or using PuTTY if you are a Windows user. Apart from that, you can also attach an Elastic IP (which comes at a cost, but it's peanuts) to the instance and access it through the Elastic IP, and you can map either the public DNS name or the Elastic IP to a website by adding a CNAME or an A record respectively.
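A minimal boto3 sketch of points 1 and 2: bake a custom AMI from a configured instance (to reuse for Auto Scaling) and attach an Elastic IP. The instance ID and names are placeholders:

```python
# Hypothetical sketch: create a base AMI and an Elastic IP for an instance.
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"

# Create a reusable image from the instance with your custom Apache module.
image = ec2.create_image(InstanceId=INSTANCE_ID,
                         Name="my-apache-python-base",
                         NoReboot=True)
print("Base AMI:", image["ImageId"])

# Allocate an Elastic IP and point it at the instance; add an A record
# for it in your DNS to get a stable hostname.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId=INSTANCE_ID,
                      AllocationId=eip["AllocationId"])
print("Elastic IP:", eip["PublicIp"])
```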
3) AWS has data centres in different parts of the world, and you deploy your application depending upon your customer base. For example, if your target customers are based in India, the nearest region available is Singapore, which AWS calls ap-southeast-1. Each region has multiple Availability Zones, for example ap-southeast-1a and ap-southeast-1b, which are two different data centres, geographically apart. Intra-region means from ap-southeast-1a to ap-southeast-1b; inter-region means from ap-southeast-1 to us-east-1, which is the Northern Virginia data centre. AWS charges for incoming and outgoing bandwidth, but trust me, it's nothing.
They charge about 1/8th of a cent per GB; it's not even worth thinking about.
4) Elastic Load Balancing (ELB) distributes the load equally across your instances in the Availability Zones of a region (if you are running multi-AZ). The ELB sits in front of the AWS EC2 instances, monitors instance health periodically, and works together with Auto Scaling.
5) To help you understand what Auto Scaling is, please go through this document: http://aws.amazon.com/autoscaling/
6) Elastic Block Store (EBS) volumes are like hard disks: persistent data storage that can be attached to your instance. Regarding backups: yes, it depends upon your use case. I take backups of EBS volumes periodically.
7) SimpleDB (whose role has largely been taken over by DynamoDB) is a NoSQL DB, i.e. a non-RDBMS database system. Please read some documentation to understand what a NoSQL DB is.
8) If you have a MySQL or Oracle DB you can opt for RDS; please read the documentation.
9) I personally feel you are a newbie to the entire cloud ecosystem; you need to understand what exactly the cloud does first.
10) You don't have to make a large number of changes to your development as such; just make sure it works fine on your local box, and it can be deployed to the cloud without much ado.
11) You don't have to use any extra tools for that. Change the database endpoint to RDS (if you use it), or else install MySQL on your EC2 instance and connect to the local DB that resides on the instance, which is as simple as your development setup.
12) You don't have to worry about security issues with AWS; it is secure. Don't follow the myths. I have been using AWS for 3 years, running I don't even remember how many applications (e-commerce, m-commerce, social media apps), and I have never faced any kind of security issue. AWS also allows you to set up your security however you want.
Go ahead, happy coding. Contact me if you have any problem.
The answer above is a good summary of AWS. I just wanted to add:
AWS offers a full data center, so it depends on what you are trying to achieve. For starters you will need:
EC2 - This is your server; it comes with instance storage, which is lost when the instance is stopped or terminated.
EBS - Your mounted storage; the data persists across reboots.
S3 - Provides storage (with RESTful APIs on top; the cost is usage-based rather than "provisioned" as with EBS).
Databases - You can start with Amazon RDS, which provides managed database services; you can choose between various available engines. You can also install your own database using EC2 + EBS, but you will have to take care of managing the database yourself.
Elastic IP - A public-facing IP address; you can point your DNS records to this.
One great tool to calculate pricing:
http://calculator.s3.amazonaws.com/calc5.html
Some other services to take in account are:
VPC (Virtual Private Cloud) - This is your own private network. You can define subnets, route tables, and internet gateways there. I would strongly recommend using a VPC for any serious deployment of more than one instance.
Glacier - This will replace your tape library for storing backups.
CloudFormation - A great tool for deployment and automation of instances.