I'm new to AWS. For one project we need to purchase a server on AWS, and I don't know what configuration is required. Our website will be similar to https://www.justdial.com/, and at least 1,000 users will be online at any given time. What configuration will be best at minimum cost? I'm listing below what we want:
> • 1 - Elastic IP
> • 1 - Load Balancer
> • 2 - Webserver + autoscaling
> • 1 - Database SQL
> • 1 - S3 storage backup
> • CDN
If anything else is missing, please guide me.
Go for a microservice architecture and create a Lambda function for every service. You can use a private RDS instance for security. A Lambda-based serverless approach bills you per API request, so during the night, when requests drop to close to zero, you won't be charged for that period. AWS Lambda automatically balances load and keeps the service available to everyone with minimal CPU and memory usage. You won't need your own load balancer, as AWS handles that by default.
Based on your requirements, a VM won't be a good idea: most of the items on your list (load balancer, web server autoscaling) come built in with serverless Lambda, and using RDS will keep your database cost to a minimum compared with owning a VM and scaling its resources yourself.
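As a rough illustration, here is a minimal sketch of one such Lambda-backed endpoint, assuming an API Gateway proxy integration; the route, field names, and lookup are hypothetical:

```python
import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the HTTP request in `event`.
    listing_id = (event.get("pathParameters") or {}).get("id", "unknown")

    # A real service would look the listing up in the private RDS instance here.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": listing_id, "status": "ok"}),
    }
```

Each such function scales out automatically with request volume, which is what removes the need for your own load balancer.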
It really depends on your application. If all you do is return static pages, you might be fine with the smallest instance plus a CDN like CloudFront. If every request is dynamic and requires heavy computation, you need some strong servers.
I suggest you start with some reasonable settings (e.g. t3.medium) and then load-test it to figure out what you really need. There are many tools for that. You basically need something that will generate a lot of requests to your servers and track errors, latency and total response time. If any of those metrics come back insufficient (this also depends on your needs), add more resources. Make sure to leave room for traffic spikes.
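For example, here is a bare-bones load generator in Python, assuming the third-party `requests` library and a hypothetical test URL; purpose-built tools such as JMeter, Locust, or k6 give far better reporting:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # pip install requests

URL = "https://your-test-stack.example.com/"  # hypothetical endpoint

def hit(_):
    # Issue one request and record (status, latency); None marks an error.
    start = time.perf_counter()
    try:
        status = requests.get(URL, timeout=10).status_code
    except requests.RequestException:
        status = None
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(hit, range(2000)))

errors = sum(1 for status, _ in results if status != 200)
latencies = sorted(lat for _, lat in results)
print(f"errors: {errors}/{len(results)}")
print(f"p50: {latencies[len(latencies) // 2]:.3f}s  "
      f"p95: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```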
I have a website with a single simple page that fetches trending videos from YouTube using the YouTube API; the size of the website is just 100 KB. The website is built with HTML, CSS, and PHP, and I want to host it on a good cloud host. Suppose I get 10,000 daily visitors: are 1 GB of RAM and 1 CPU core sufficient for that?
Nobody can answer this question for you because every application is different, and much of it depends on the patterns of your particular user base.
The only way to know the requirements is to deploy the system and then simulate user traffic. Monitor the system to identify stress points, which could be RAM, CPU or Network. You can then adjust the size of the instance accordingly, and even change Instance Type to obtain a different mix of RAM and CPU.
Alternatively, just deploy something and monitor it closely. Then, adjust things based on usage patterns you see. This is "testing in production".
You could also consider using Amazon EC2 Auto Scaling, which can automatically launch new instances to handle an increased load. This way, the resources vary based on usage. However, this design would require a Load Balancer in front of the instances.
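As a sketch of what that could look like with boto3 (the group name, launch template, target group ARN, and subnet IDs below are all hypothetical placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create a group of 1-4 instances behind an existing load balancer target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web/0123456789abcdef"
    ],
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)

# Target tracking: add instances when average CPU rises above ~50%,
# remove them when it falls back.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```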
Then, if you want to get really fancy, you could simply host a static web page from Amazon S3 and have the page make API calls to a backend hosted in AWS Lambda. The Lambda function will automatically scale by running multiple functions in parallel. This way, you do not require any Amazon EC2 instances and you only pay for resources when somebody actually uses the website. It would be the cheapest architecture to run. However, you would need to rewrite your web page and back-end code to fit this architecture.
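If you go that route, enabling static website hosting on the S3 bucket is a single call; a sketch with boto3, where the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Serve index.html at the site root and error.html for missing keys.
s3.put_bucket_website(
    Bucket="my-site-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```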
I'm considering Cloud Run for web hosting rather than a more complex Compute Engine setup.
I just want to make an API with Node.js. I heard that automatic load balancing is also available. If so, will there be any problem with concurrent traffic of 1 million people without any configuration? (The database server is somewhere else, on something serverless like CockroachDB.)
Or do I have to configure various complicated settings, as with AWS EC2 or GCE?
For that much traffic, the out-of-the-box configuration must be fine-tuned.
First, look at the concurrency parameter on Cloud Run. This parameter indicates how many concurrent requests each instance can handle. The default value is 80, and you can set it up to 1000 concurrent requests per instance.
Of course, if you handle 1000 concurrent requests per instance (or even fewer), you will likely need more CPU and memory. You can also tune those parameters.
You also have to raise the max instances limit. By default, you are limited to 1000.
If you set 1000 concurrent requests and 1000 instances, you can handle 1 million concurrent requests.
However, that leaves you very little margin: an instance handling 1000 concurrent requests can struggle even with maximum CPU and memory.
You can request more than 1000 instances with a quota increase request.
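Taken together, and strictly as a sketch (using the `google-cloud-run` Python client; the project, region, service name, and resource limits are assumptions, and your quotas may differ):

```python
from google.cloud import run_v2  # pip install google-cloud-run

client = run_v2.ServicesClient()
name = "projects/my-project/locations/us-central1/services/my-api"

service = client.get_service(name=name)

# Raise per-instance concurrency from the default of 80 to the maximum of 1000.
service.template.max_instance_request_concurrency = 1000
# Allow scaling out to the default limit of 1000 instances.
service.template.scaling.max_instance_count = 1000
# Give each instance more headroom for that many concurrent requests.
service.template.containers[0].resources.limits["cpu"] = "4"
service.template.containers[0].resources.limits["memory"] = "8Gi"

# update_service returns a long-running operation; block until it completes.
client.update_service(service=service).result()
```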
You can also optimize differently, especially if your 1 million users aren't all in the same country/Google Cloud region. If so, you can deploy an HTTPS load balancer in front of your Cloud Run service and deploy the service in every region where your users are. (The Cloud Run services deployed in different regions must have the same name.)
That way, it's not a single service that has to absorb 1 million users, but several, in different regions. In addition, the HTTPS load balancer routes each request to the closest region, so you optimize latency and reduce egress/cross-region traffic.
I want to know the limit of requests per second for a Load Balancer on Google Cloud Platform. I couldn't find this information in the documentation.
My project is a static website hosted in a Cloud Storage bucket behind the Load Balancer, with CDN enabled.
This website will be featured in a campaign on a television channel, and the estimate is 100k requests per second for 5 minutes.
Could anyone help me with this information? Is it necessary to ask Support to pre-warm the load balancer before the campaign starts?
From the front page of GCP Load Balancing:
https://cloud.google.com/load-balancing/
> Cloud Load Balancing is built on the same frontend-serving infrastructure that powers Google. It supports 1 million+ queries per second with consistent high performance and low latency. Traffic enters Cloud Load Balancing through 80+ distinct global load balancing locations, maximizing the distance traveled on Google's fast private network backbone.
This seems to say that 1 million+ requests per second is fully supported.
However, with all that said ... I wouldn't wait for "the day" before testing. See whether you can rehearse with a representative load beforehand. Given that this sounds like a finite event with high visibility (television), I'm sure you don't want to wait for the event only to find out something was wrong in the setup or theory. As for "can a load balancer handle 100K requests per second" ... the answer appears to be yes.
If you are (or are asking on behalf of) a GCP customer, Google has Technical Account Managers associated with accounts who can be brought into the planning loop ... especially if there are questions of "can we do this". One should always be cautious about sudden high-volume needs for GCP resources, and through a Technical Account Manager it does no harm to forewarn Google of large resource requests. For example, if you said you needed an extra 5000 Compute Engine instances, you might be constrained in which regions are available to you, given finite existing capacity. Google, just like other public cloud providers, has to schedule and balance resources in its regions. Timing is also very important: if you need a sudden burst of resources and the time you need them coincides with an event such as Black Friday (US) or Singles Day (China), special preparation may be needed.
We currently run in-house hardware that we would like to move to AWS. Our main application uses MySQL on a Linux machine (200 GB disk, 32 GB RAM, 4 cores) serving content to customers through a hardware load balancer (around 1 million unique users per month).
We also use a 500 GB CDN hosted by a third party that we would like to move to AWS as well. What AWS services would you recommend we look at to achieve comparable functionality, and do you have a rough monthly cost estimate?
The main reasons we would like to move to AWS are cost reduction on hardware and better backup strategies.
Thanks!
1. You can host your application on two or more EC2 instances and use an Elastic Load Balancer to distribute the load amongst them (see the sketch after this list).
2. You could use Amazon Aurora MySQL (serverless), which offers pay-as-you-go pricing that minimises your cost. It is a very good option for your MySQL database, since your user count, and with it the load on the database, is high.
3. For the CDN you could go with AWS CloudFront and S3, which offer higher availability for your application at a lower cost. You just need some proper configuration and you are ready to go.
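For item 1, attaching web instances to a load balancer might look roughly like this with boto3 (the target group ARN and instance IDs are hypothetical placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Register two web instances with an existing load balancer target group.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/web/0123456789abcdef",
    Targets=[{"Id": "i-0aaa1111bbb2222cc"}, {"Id": "i-0ddd3333eee4444ff"}],
)
```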
AWS is a good cloud option for you, as it provides a service for each of your problems, so you can pick services according to your use case and make the most of them.
It also offers flexible pricing options, which makes planning easier.
Please go through the AWS docs and pricing before you choose AWS.
Comparable functionality would be to use AWS RDS to replace your MySQL database, one or more EC2 instances to run your application, and an AWS load balancer to distribute the load amongst those EC2 web instances, plus a combination of S3 and CloudFront to serve as the CDN.
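As a hedged sketch, provisioning an RDS MySQL instance roughly matching that on-prem box (4 cores / 32 GB is close to db.r5.xlarge, with 200 GB of storage) could look like this with boto3; every identifier and credential below is a placeholder:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="main-mysql",
    Engine="mysql",
    DBInstanceClass="db.r5.xlarge",  # 4 vCPUs / 32 GiB, like the current box
    AllocatedStorage=200,            # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder; use Secrets Manager in practice
    MultiAZ=True,                    # standby replica for high availability
    BackupRetentionPeriod=7,         # automated daily backups, kept 7 days
    StorageType="gp3",
)
```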
Cost is going to depend on how many EC2 instances you use, the size and options you choose for your RDS database(s), plus storage and bandwidth, so it's impossible for me to estimate for you.
But you can do your own estimates here: https://awstcocalculator.com/
I am about to launch an iOS app that will be communicating with my custom REST API. Right now I am running a single EC2 t2.micro instance running an Apache web server with MySQLi. Before I go ahead and launch it for the public, I want to hear what proper steps should be taken regarding the following.
Should I run two separate EC2 instances? One only for the web server and the other to handle only the database?
How should I approach setting up the database? Should I still use MySQLi or should I start using Amazon's RDS?
In relation to number two: when the database and/or web server runs out of space, how is this handled so that space is added seamlessly and the database/web server can keep growing? I also read something about auto-scaling.
I will be expecting many requests per minute to my web server and want to take precautions.
The answer to these questions largely depends on the requirements of your application, your budget, and on what you decide to manage vs. what you'd prefer to allow AWS to manage. However, I'll answer these as best I can.
1) Yes. Separating the database from the web server (that is, 2 different EC2 instances) makes sense for a lot of reasons. It allows you to tailor resources like memory and CPU to each layer of your application separately, and you do not want your web and database tiers competing for the same resources. Additionally, an issue that forces you to take down one (web or database) will not force you to also take down the other. If your database lives on one of the web servers and you need to perform maintenance, your app effectively goes offline, since the database goes down while you perform updates. Also, ideally you would protect your database server within a private subnet in your VPC. If you have the web and database on the same server, they will both be in a public subnet, since your web tier requires access to an internet gateway.
2) It depends. If you want to maintain total control of the database server, then use an EC2 instance where you retain operating-system control. If you want to take advantage of features like Multi-AZ for high availability, or to let AWS manage things like updates for you, RDS can be a great option. Cost also plays a role: for things like read replicas and Multi-AZ you will pay more, but you are purchasing performance and high availability. So it depends on your requirements. You can find the features of RDS here: RDS Product Details
3) For anything running on an EC2 instance (database or web) or if you decide to use RDS, you may provision and attach additional storage volumes as necessary. The type of storage you select will depend on the performance requirements, your budget, and the kind of workload you expect your database to face. Amazon provides the storage options available to you as well as a section for adding more storage here: RDS Storage Options
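For instance, growing an RDS volume in place is a single API call; a sketch with boto3, where the instance identifier and new size are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Increase allocated storage; RDS typically grows the volume without downtime.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    AllocatedStorage=100,   # new size in GiB
    ApplyImmediately=True,
)
```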
If you are worried about too many requests overwhelming your EC2 t2.micro instance, consider creating an ELB load balancer and setting up an auto-scaling group which will allow you to expand your capacity as necessary while distributing traffic such that no one server gets overwhelmed.