I need to set up a public-facing web server and a private database server in AWS. To allow for higher availability, is it okay to have a public subnet in one AZ hosting my web server, and a duplicate web server in a public subnet in a different AZ? The same principle would apply to the database server configuration. Essentially, Zone-A would host WebServer and Zone-B would host WebServer-copy, while Zone-B would host DatabaseServer and Zone-A would host Database-copy. Is this architecture good practice?
If yes, does this configuration mean site files and database files are duplicated on each AZ?
Yes, that's basically how you set up high availability on AWS. I would recommend using RDS for the database, which will manage Multi-AZ deployments for you automatically; managing the data replication manually can be a real challenge. And yes, this does mean the data is duplicated: RDS keeps a synchronized standby copy of the database in the second AZ, and each web server typically carries its own copy of the site files (baked into its image or pulled at deploy time).
I would also recommend looking into Elastic Beanstalk which will manage distributing the traffic across multiple zones, deploying and updating your application across multiple zones, and all the details that go along with that. I would not recommend diving straight in and trying to do all this manually in EC2 if you are new to AWS.
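To make that concrete, here is a minimal boto3 sketch of the database side; the identifier, instance class, and credentials are placeholder values, and in practice you would keep the password in a secrets store:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# MultiAZ=True keeps a synchronously replicated standby in a second AZ
# and fails over to it automatically, so you never copy data by hand.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",       # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",       # placeholder size
    AllocatedStorage=20,                 # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me",      # placeholder; use a secrets store
    MultiAZ=True,
)
```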
I'm new to AWS and I am trying to gauge what migrating our existing applications into AWS would look like. I'm trying to host multiple apps as Services under a single ECS cluster, and use one Application Load Balancer with hostname rules to route requests to the correct container.
I was originally thinking I could give each service its own Target Group, but I ran into the RESOURCE:ENI error, which from what I can tell means that I can't just attach as many Target Groups as I want to the same cluster.
I don't want to create a separate cluster for each app, or use separate load balancers for them because these apps are very small and receive little to no traffic so it just wouldn't make sense. Even the minimum of 0.25 vCPU/0.5 GB that Fargate has is overkill for these apps.
What's the best way to host many apps under one ECS cluster and one Load Balancer? Is it best to create my own reverse-proxy server to do the routing to different apps?
You are likely using awsvpc network mode for the task definitions. You could change it to the (default) bridge mode instead. Your services don't seem to be ones that would need the added network performance boost of using the native EC2 networking stack.
As far as I understand, the target groups' target type should then be instance rather than ip.
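As an illustrative sketch of both pieces (the VPC ID and container image are placeholders, and this assumes the EC2 launch type rather than Fargate), bridge mode with a dynamic host port plus an instance target group could look like this in boto3:

```python
import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# Bridge mode: containers share the instance's network stack, so tasks
# don't consume one ENI each and the RESOURCE:ENI limit goes away.
ecs.register_task_definition(
    family="small-app",                   # placeholder family name
    networkMode="bridge",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",          # stand-in image
        "memory": 128,
        "portMappings": [{"containerPort": 80, "hostPort": 0}],  # 0 = dynamic port
    }],
)

# With bridge mode the ALB registers the container instances themselves
# (ECS fills in the dynamic ports), so the target type is "instance".
elbv2.create_target_group(
    Name="small-app-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC ID
    TargetType="instance",
)
```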
I am new to AWS. I already had a GoDaddy VPS, but my application was very slow when I hosted it there.
So I migrated to AWS, and now my application runs very fast, but sometimes the EC2 instance fails and automatically restarts after a while. Since my application is basically an on-demand service app, these instance failures cause me to lose some conversations. Then I heard about Amazon's load balancing service, which automatically sends traffic to another instance if one instance fails.
I used an Ubuntu 16.04 instance with VestaCP to host my application on AWS EC2. Is it possible to share the storage of my current master EC2 instance with a new alternative instance, so that both EC2 instances use the same data and database?
My question might look funny, but I need to know whether this is possible or not. If it is, are there any tutorials? If it isn't, what kind of services do I need in order to use an AWS load balancer to handle high traffic and instance failures?
Thanks
If you are migrating from conventional hosting to a cloud provider but you don't adopt a cloud architecture, you are missing out on many of the benefits of the cloud.
In general, for a highly available, highly scalable web application, having shared data locally is an anti-pattern.
A modern web application separates state (storage) from processing. Ideally your instance would hold only configuration and temporary data. For the database, assuming you are using a relational database, you would start an RDS instance. For the files, if they are mainly things like images and static content, you would probably use the Simple Storage Service (S3).
Your EC2 instance would connect to the RDS database and S3. Since the data is not local to the instance anymore, you can easily have multiple instances all using the same storage.
Your EC2 instances could be configured with autoscaling, so AWS would automatically add or remove instances responding to the real traffic you are seeing.
If you have complex storage needs and S3 is not enough for the file layer (and for most applications S3 should suffice), you can take a look at the Elastic File System.
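As an example, a minimal boto3 sketch (the bucket name is hypothetical) of pushing a static asset into S3 so that every instance serves the same copy instead of keeping one on local disk:

```python
import boto3

s3 = boto3.client("s3")

# Upload the asset once; all web instances then reference the same object.
s3.upload_file(
    Filename="htdocs/images/logo.png",
    Bucket="my-app-assets",              # hypothetical bucket name
    Key="images/logo.png",
    ExtraArgs={"ContentType": "image/png"},
)

# For private content, hand out short-lived pre-signed URLs instead of
# making the bucket public.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-assets", "Key": "images/logo.png"},
    ExpiresIn=3600,  # seconds
)
```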
Yes, this is achievable through AWS ELB. But as for the separate EC2 instance requirement you mention, there is no need for that, since AWS ELB manages all of it for you.
Note: always keep your database on a separate service such as AWS RDS, which provides data backups and rollback, and which remains accessible to the other instance if one instance fails. Likewise, files should be stored on AWS S3; only then can you properly load balance.
For more information: link
We have all of our cloud assets currently inside Azure, which includes a Service Fabric Cluster containing many applications and services which communicate with Azure VM's through Azure Load Balancers. The VM's have both public and private IP's, and the Load Balancers' frontend IP configurations point to the private IP's of the VM's.
What I need to do is move my VM's to AWS. Service Fabric has to stay put on Azure though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VM's through the Load Balancers using the VM's private IP addresses. So the only way I could see achieving this is either:
1. Keep the load balancers in Azure and direct the traffic from them to AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, not sure if this is technologically achievable - can we even have Service Fabric Services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. I know these are different regions within Azure rather than different cloud providers, but the sample can still be useful for seeing how some of the pieces are configured.
Hope this helps.
Based on the details provided in the comments on your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependency on Azure at all.
The cluster you see in your Azure portal is just an Azure Resource Screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure Resource Screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on-premises.
The one detail that is often not made clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside on the SF roadmap for a long time because they are very complex to set up, which is why they avoid recommending them; but it is possible.
If you want to go with the AWS approach, I would point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster.
It is the first of a four-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.
Let me preface this by saying that I am primarily a programmer, though I have a pretty good working knowledge of Linux and "standard" LAMP installations. I have been tasked with setting up a persistent LAMP environment in Amazon Web Services (AWS), which is a good deal more involved than what I'm used to in this regard.
Although AWS is very well documented, I have yet to find a clear, definitive "Best Practices" overview for setting up a persistent LAMP environment. I followed the official Amazon tutorial ( http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html ) to set up a LAMP server on our EC2 instance, but found out later that these instances are "temporary" and that I need an EBS volume to make anything persist. Interestingly, EBS (Elastic Block Store) does not appear in my Management Console, though they offer pricing out on the public side ( https://aws.amazon.com/ebs/pricing/ ). Is it still called EBS?
Of course, that raises the question - what happens to the programs I installed (Apache, MySQL) along with their respective config files? Surely Amazon doesn't expect us to reconfigure our server from scratch every time it boots up?
What I have now
1x EC2 instance running Amazon Linux. I installed and configured Apache and MySQL following the "Install LAMP" tutorial posted by Amazon.
1x Route 53 Hosted Zones (for DNS routing)
1x Elastic IP attached to the EC2 server
Additionally, there appears to be one unencrypted 8GB volume attached to /dev/xvda, although I didn't set it up and nobody has access but myself - it seems to have been generated when I requisitioned the EC2 - no idea if it is persistent or not.
What I think I need
So, here is what I'm thinking I need to do. Please tell me if I'm way off - is there a more sane alternative?
1x EC2 instance running Amazon Linux and Apache
1x RDS for MySQL
1x Route 53 Hosted Zone
1x Elastic IP attached to the EC2 server
1x (EBS? S3? EFS?) for storing htdocs
1x Snapshot of the EC2 to save server configuration
Does that sound right? Is there a better way to do this? Thanks so much for any advice. Amazon docs seem to be very good at giving granular information, but not as great at addressing overall strategy concerns.
Web Application
It is recommended to have two EC2 instances under an Elastic Load Balancer, with the two instances in separate availability zones, for high availability. Going further, it is better to monitor these instances for CPU and bandwidth (CloudWatch) and, once you see they are above some threshold, automatically add more instances behind the ELB; this is auto scaling. Of course, as you said, you will need the AMI (snapshot) with the server software ready to be launched. You also need to take servers down when the load is small (again, driven automatically by the metrics), but you should never go below two machines. And don't forget to update these images when you upgrade the software.
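As a sketch of the scaling piece (the group name and bounds are hypothetical), a target-tracking policy in boto3 keeps average CPU near a threshold while never dropping below two instances:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out when average CPU rises above 60%, scale back in below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",        # hypothetical group name
    PolicyName="keep-cpu-near-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)

# Enforce the "never fewer than two machines" rule from above.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=2,
    MaxSize=6,                             # hypothetical upper bound
)
```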
Route 53
Because you would use an ELB, you don't need Elastic IPs anymore; your web servers can have only private IPs. And in Route 53 you need to point your website to the DNS name of the ELB - here are more details about it.
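A minimal boto3 sketch of that record (all zone IDs and names are placeholders; note the AliasTarget's HostedZoneId is the ELB's own zone ID, not your hosted zone's):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",         # your hosted zone (placeholder)
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            # An alias record to the load balancer; no TTL or IPs needed.
            "AliasTarget": {
                "HostedZoneId": "Z22222WWWWWWW",   # the ELB's zone ID (placeholder)
                "DNSName": "my-elb-123456.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```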
Database
For the database part, go with RDS for MySQL with a Multi-AZ deployment, so you will have the master and one standby replica in different availability zones.
EBS (disks)
For the EBS part, you will need to use that, and the volumes come in three flavours: Magnetic (slowest), General Purpose SSD (faster), and Provisioned IOPS (fastest). These are the disks you mount on your machines, both web servers and databases. For the database you should go with Provisioned IOPS, since it is much harder to change the volume type later, while for the web server we use General Purpose. In the AWS Console you find them under EC2, in the Elastic Block Store section.
The 8 GB disk that appeared is the default when you create a Linux machine; it is a General Purpose SSD, which is good enough for a web server, but I think you should go with a bigger one, at least 50 GB.
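If you do add a bigger disk, here is a boto3 sketch (instance ID and AZ are placeholders) of creating a 50 GiB General Purpose SSD volume and attaching it:

```python
import boto3

ec2 = boto3.client("ec2")

# The volume must be created in the same AZ as the instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",        # placeholder AZ
    Size=50,                              # GiB
    VolumeType="gp2",                     # General Purpose SSD; "io1" = Provisioned IOPS
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",     # placeholder instance ID
    Device="/dev/xvdf",
)
```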
Is it possible to have most of our server hardware outside of EC2, but with some kind of load balancer to divert traffic to EC2 when there's load our servers can't handle, or as a backup in case these servers go down?
For example, have a physical server serving out our service (let's ignore database consistency for the moment), but there's a huge spike due to some coolness - can we spin up some EC2 instances and divert traffic off to it? This is much like Amazon's own auto scaling.
And also, if our server hardware dies for some reason (gremlins eat the power cables for example) - can we route all our traffic over to EC2 instances?
Thanks
Yes, you can, but you will have to write some code. AWS has command line tools for doing EC2/Auto Scaling/S3 tasks with simple commands in bash, as well as other interfaces and SDKs, like Boto for Python.
You can find it here: http://aws.amazon.com/code/
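For example, here is a small sketch using the current boto3 SDK (the AMI and key pair names are hypothetical) that launches two extra instances from a pre-baked image when your own servers are saturated:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch burst capacity from an AMI that already has the app installed.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical pre-configured AMI
    InstanceType="t3.small",
    MinCount=2,
    MaxCount=2,
    KeyName="ops-key",                 # hypothetical key pair
)

instance_ids = [i["InstanceId"] for i in response["Instances"]]
print("Launched burst instances:", instance_ids)
```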
Each EC2 instance has a public network interface associated with it. Use a DNS CNAME record to "switch" your site traffic to the EC2 instance. If you need to load-balance across multiple machines, you can use round-robin DNS, or start an ELB and put any number of EC2 instances behind it.
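For the automatic "backup when our hardware dies" case, Route 53 failover routing can do the switch for you; a boto3 sketch, with every ID and address a placeholder:

```python
import boto3

route53 = boto3.client("route53")

# PRIMARY points at the on-premises server, SECONDARY at the EC2 standby.
# Route 53 answers with the secondary only while the primary's health
# check is failing.
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",                    # placeholder zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "on-prem-primary",
            "Failover": "PRIMARY",
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [{"Value": "203.0.113.10"}],   # datacenter IP
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "ec2-standby",
            "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "198.51.100.20"}],  # Elastic IP of the EC2
        }},
    ]},
)
```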
EC2 infrastructure is extremely easy to scale. Deploying your application on top of EC2 is a whole other matter; it could be trivial, or insanely complicated.