AWS - Scalability for our web site and App

We have a PHP web site with an admin login and an Android app. We use the admin login to add/remove products on the web site, and customers use the app to purchase items.
Currently we are using a single VPS server for both.
We are planning to move the production to AWS for high availability and scalability.
We have decided to use RDS for the MySQL DB, but we are not sure how to host the application behind a load balancer with auto scaling, since we may need to add/remove items from the admin panel.
Please share your thoughts on this.
Thank You.

You can host your PHP application on EC2 this way:
1. Run an EC2 t2.micro instance.
2. Install your PHP app and make sure it runs correctly, even if only when called via the instance's public IP.
3. Create an AMI image of your t2.micro instance.
4. Create a load balancer and add a listener on port 80.
5. Create a target group and assign it to your load balancer.
6. Create a launch configuration with your AMI image.
7. Create an auto scaling group and select your launch configuration.
8. Add a dynamic scaling policy.
If your domain is hosted on Route 53, you can also create an SSL certificate with AWS Certificate Manager. A rough sketch of steps 4-8 follows below.
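As an illustration, here is a minimal boto3 sketch of steps 4-8, assuming you use an Application Load Balancer; every ID, ARN and name (subnets, security group, AMI, resource names) is a placeholder you would replace with your own:

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Step 4: load balancer with an HTTP listener on port 80
lb = elbv2.create_load_balancer(
    Name="php-app-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # two AZs for high availability
    SecurityGroups=["sg-0123456789abcdef0"],
)["LoadBalancers"][0]

# Step 5: target group that the auto-scaled instances will register into
tg = elbv2.create_target_group(
    Name="php-app-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Step 6: launch configuration built from the AMI of the working t2.micro
autoscaling.create_launch_configuration(
    LaunchConfigurationName="php-app-lc",
    ImageId="ami-0123456789abcdef0",                  # the AMI created in step 3
    InstanceType="t2.micro",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Step 7: auto scaling group tied to the launch configuration and target group
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="php-app-asg",
    LaunchConfigurationName="php-app-lc",
    MinSize=2,
    MaxSize=6,
    TargetGroupARNs=[tg["TargetGroupArn"]],
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# Step 8: dynamic (target tracking) scaling policy on average CPU
autoscaling.put_scaling_policy(
    AutoScalingGroupName="php-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```

With this setup the admin panel keeps writing to the shared RDS database, so items added or removed from the admin panel are visible to every instance, regardless of which one serves a given request.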

Related

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data, and serving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in Go) which are both used by the "main" server to perform certain tasks and also expose some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 in front which directs the requests. So a user can reach each server through apache2 via its respective subdomain, but the servers also communicate with each other. When doing so, we currently simply send the request to localhost:{PORT} since they all run on the same machine. They furthermore all use the same mysql-server running on that same machine.
We are now looking to get it more production-ready and are looking to deploy it to AWS. They are currently not containerized so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Setup the routing of the requests. Currently run our own configured apache2 but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created for you in each region when your account is set up).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and without any public access, and put them behind an ELB (Elastic Load Balancer). The ELB will take all the traffic and distribute the load onto the servers, and an Application Load Balancer can also route based on endpoint.
However, the EC2 instances won't have public IPs. A VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also give domain/sub-domain names to these private IP addresses using the Route 53 service.
For example, you launch 2 servers:
- Your Java application on 172.31.1.1 (you name it xyz.myjavaapp.something.com in Route 53)
- Your Angular application on 172.31.1.2
The Angular application can reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
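For the private DNS name, a minimal boto3 sketch might look like the following; the hosted zone ID is a placeholder, and the zone is assumed to be a Route 53 private hosted zone already associated with your VPC:

```python
import boto3

route53 = boto3.client("route53")

# Create (or update) an A record that maps the private name to the Java
# server's private IP inside the VPC.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",   # placeholder private hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "xyz.myjavaapp.something.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "172.31.1.1"}],
            },
        }]
    },
)
```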
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy an SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups so that only your servers can access it, and don't leave it open to the public internet.
For example, a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC uses the 172.31.0.0/16 range (see the sketch below).
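As a sketch of that rule with boto3 (the security group ID is a placeholder for the group attached to your RDS instance, and MySQL's default port 3306 is assumed):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic only from addresses inside the VPC CIDR range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234def567890",   # placeholder: the RDS instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "172.31.0.0/16", "Description": "MySQL from VPC only"}],
    }],
)
```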
Q. Setup the routing of the requests. Currently run our own configured apache2 but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, this is easily managed by the ALB with path-based routing, as sketched below.
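A hypothetical sketch of two such path rules with boto3; the listener and target group ARNs are placeholders for resources you would have created beforehand:

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2"

# Requests matching /getData* go to the target group with 2 servers.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/getData*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/getdata-tg/6d0ecf831eec9f09"}],
)

# Requests matching /doSomethingElse* go to the single-server target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/doSomethingElse*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/other-tg/73e2d6bc24d8a067"}],
)
```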
I would suggest you use at least two servers for critical services and load balance them behind an ALB for the production environment.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. It does have a learning curve, but the benefits outweigh it.
Feel free to ask questions.

how to add a website to aws ec2 instance, no target

I have a nice little EC2 instance: I've logged into the console, run a yum update, and started httpd, but the IP doesn't work in the browser.
my httpd is up on chkconfig: httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
I thought it would be reachable at the public DNS listed under Connect, the same one I use to connect to the console. I've also gone into the S3 properties and enabled static website hosting, just to test it before using PHP. I even created a matching bucket and tried to use my domain name from Route 53, but Route 53 also shows "No Targets Available" for S3 (or anything else).
Alright, found it. It was a security group issue, but here is the process, in two quick URLs.
Tutorial: Installing a LAMP Web Server on Amazon Linux
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html?shortFooter=true
But it says if it doesn't work, check the security groups, and a couple of clicks later you're looking at this.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html?shortFooter=true
Go into the security group settings, add HTTP from the pull-down, and it's done.
Also, for the second part where there isn't a target: be sure to associate an Elastic IP (created in the AWS services console), and sure enough the web server is up and reachable. Once you have the Elastic IP address, simply add it to the Route 53 record set(s).
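A rough boto3 sketch of those two fixes; the security group and instance IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# 1. Open inbound HTTP (port 80) to the world on the instance's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234def567890",          # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# 2. Allocate an Elastic IP and associate it with the instance.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",        # placeholder instance ID
    AllocationId=allocation["AllocationId"],
)
print("Point your Route 53 A record at:", allocation["PublicIp"])
```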

Proper loadbalancing EC2 instances in AWS

I am pretty new to AWS and want to build a simple example of an auto scaling WordPress application with EC2 instances.
I understand how to create a load balancer, Bitnami WordPress EC2 instances, and an auto scaling group, and get it all running, but here is what I don't get and cannot find in any documentation:
Every EC2 WordPress instance that I create obviously has its own WordPress data and database. They are not synchronized. So if the load balancer sends the traffic to EC2 A, the user will see a different application state than on EC2 B.
How do people set this up / solve this so they can add unlimited resources that all hold and serve the same application?
Running WordPress behind a load balancer (ELB) is a little bit tricky, as by default WordPress stores data on the volumes of the individual EC2 instances.
A possible solution:
Use RDS to launch a managed MySQL database and connect WordPress to it.
Outsource the user uploads to S3 with the WordPress plugins amazon-web-services and amazon-s3-and-cloudfront.
But beware: you need to disable auto-update, the WordPress theme gallery, ... and everything else that changes files on a single EC2 instance.
I wrote a blog post covering that topic some time ago: https://cloudonaut.io/wordpress-on-aws-you-are-holding-it-wrong/
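For the RDS part, a minimal boto3 sketch might look like this; the identifier, credentials and security group are placeholders, and the resulting endpoint would go into DB_HOST in wp-config.php on every instance:

```python
import boto3

rds = boto3.client("rds")

# Managed MySQL instance that all WordPress servers share.
db = rds.create_db_instance(
    DBInstanceIdentifier="wordpress-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                      # GiB
    MasterUsername="wpadmin",
    MasterUserPassword="change-me-please",    # placeholder; keep real credentials out of code
    VpcSecurityGroupIds=["sg-0abc1234def567890"],
    MultiAZ=True,                             # standby replica in a second AZ
)
print(db["DBInstance"]["DBInstanceIdentifier"], "is being created")
```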
Alternatives:
Use a distributed file system (e.g. GlusterFS) to store all Wordpress files.
Use CloudFront (CDN) to cache incoming requests and run everything on a single EC2 instance.
There are also official best practices and a blog post; check here:
https://blogs.aws.amazon.com/php/post/Tx1TRYG42UP11ET/WordPress-on-AWS-Whitepapers

Setting up an Amazon Server with Go Daddy

I am trying to set up an Amazon Server to host a dynamic website I'm currently creating. I have the domain bought on GoDaddy.com, and I believe that what I've done so far has linked the domain to my Amazon account.
I followed this tutorial : http://www.mycowsworld.com/blog/2013/07/29/setting-up-a-godaddy-domain-name-with-amazon-web-services/
In short, this walked me through setting up Amazon S3 (Simple Storage Service) and Amazon Route 53. I then configured the DNS servers, and my website now launches properly on the domain.
I'm not sure on the next step from here, but I would like to set up:
-A database server
-Anything else that might be necessary to run a dynamic website.
I am very new to hosting websites, and semi-new to web development in general, so the more in depth the better.
Thanks a lot
You have two options on AWS: run an EC2 server and set up your application yourself, or continue to use AWS managed services like S3.
Flask apps can be hosted on Elastic Beanstalk and your database can be hosted on RDS (Relational Database Service). Then the two can be integrated.
Otherwise, spin up your own t2.micro instance in EC2, log in via SSH, and set up the database server and application like you have locally. This server could also host the (currently S3-hosted) static files.
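If you go that route, launching the instance itself is a one-off call; a hypothetical sketch with boto3, where the AMI, key pair and security group are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single t2.micro that will host both the application and the database.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder, e.g. an Amazon Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                       # needed for the SSH login mentioned above
    SecurityGroupIds=["sg-0abc1234def567890"],   # should allow SSH (22) and HTTP (80)
)
print("Launched", response["Instances"][0]["InstanceId"])
```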
I have no idea what your requirements are; personally, I would start by setting up the EC2 instance and go from there, as integrating AWS services without knowing what you need is probably not the easiest first step.
Heroku might be another option. They host their services on AWS and give you an end-to-end solution for deploying and running your Python code without getting your hands dirty setting up servers.

Load balancer setup on Amazon Web services

I have an application on a Windows Server EC2 instance with SQL Server for our database.
What I would like to do is add a load balancer so the application won't fail due to overload.
I have a couple of questions that I'm not certain about.
I believe that I need to create an image of my current instance and duplicate it. My problem is that my database lives on my current instance, so it would be duplicated as well.
Do I need another instance just for my database?
If yes, then it means that I need a total of 3 instances. 2 for the application and 1 for the database.
In this case I need to change my application to connect to the new instance database instead of the current database.
After all that happens I need to add a load balancer.
I hope I made myself clear.
I would recommend using RDS (http://aws.amazon.com/rds/) for this. This way you don't have to worry about the database server and just host your application server on EC2 instances. Your AMI will then contain just the application server and thus when you scale up you will be launching additional app servers only and not database servers.
Since you are deploying a .NET application, I would also recommend taking a look at Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) since it will really help to make auto scaling much easier and your solution will scale up/down as well as self-heal itself.
As far as the load balancer is concerned, you can either manually update your load balancer with the new instances of your application server, or you can let your auto scaling setup do it for you. If you go with Elastic Beanstalk, it will take care of adding/removing instances to/from your Elastic Load Balancer on its own.
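For completeness, here is a sketch of the "manual" path with boto3, registering newly launched application-server instances with a Classic Load Balancer; the load balancer name and instance IDs are placeholders, and in practice an Auto Scaling group or Elastic Beanstalk does this for you:

```python
import boto3

elb = boto3.client("elb")   # Classic Load Balancer API

# Register two application-server instances with the load balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-app-elb",
    Instances=[
        {"InstanceId": "i-0123456789abcdef0"},
        {"InstanceId": "i-0fedcba9876543210"},
    ],
)
```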